Mirror of https://gitlab.torproject.org/tpo/core/tor.git (synced 2024-11-20 02:09:24 +01:00)

Merge branch 'bug14001-clang-warning' into bug13111-empty-key-files-fn-empty

Conflicts:
    src/or/router.c
Choose newer comment. Merge changes to comment and function invocation.
commit c200ab46b8
README | 3

@@ -6,6 +6,9 @@ configure it properly.
To build Tor from source:
    ./configure && make && make install

To build Tor from a just-cloned git repository:
    sh autogen.sh && ./configure && make && make install

Home page:
    https://www.torproject.org/
changes/bug13126 (new file) | 10
  o Code simplification and refactoring:
    - Remove our old, non-weighted bandwidth-based node selection code.
      Previously, we used it as a fallback when we couldn't perform
      weighted bandwidth-based node selection. But that would only
      happen in the cases where we had no consensus, or when we had a
      consensus generated by buggy or ancient directory authorities. In
      either case, it's better to use the more modern, better maintained
      algorithm, with reasonable defaults for the weights. Closes
      ticket 13126.

changes/bug13214 (new file) | 7
  o Minor bugfixes (hidden services):
    - When fetching hidden service descriptors, check not only for
      whether we got the hidden service we had in mind, but also
      whether we got the particular descriptors we wanted. This
      prevents a class of inefficient but annoying DoS attacks by
      hidden service directories. Fixes bug 13214; bugfix on
      0.2.1.6-alpha. Reported by "special".

changes/bug13296 (new file) | 5
  o Directory authority changes:
    - Remove turtles as a directory authority.
    - Add longclaw as a new (v3) directory authority. This implements
      ticket 13296. This keeps the directory authority count at 9.

changes/bug13315 (new file) | 5
  o Minor features:
    - Validate hostnames in SOCKS5 requests more strictly. If SafeSocks
      is enabled, reject requests with IP addresses as hostnames. Resolves
      ticket 13315.

changes/bug13399 (new file) | 12
  o Minor bugfixes:
    - Use a full 256 bits of the SHA256 digest of a microdescriptor when
      computing which microdescriptors to download. This keeps us from
      erroneous download behavior if two microdescriptor digests ever have
      the same first 160 bits. Fixes part of bug 13399; bugfix on
      0.2.3.1-alpha.
    - Reset a router's status if its microdescriptor digest changes,
      even if the first 160 bits remain the same. Fixes part of bug
      13399; bugfix on 0.2.3.1-alpha.

changes/bug13447 (new file) | 5
  o Minor feature:
    - When re-enabling the network, don't try to build introduction circuits
      until we have successfully built a circuit. This makes hidden services
      come up faster when the network is re-enabled. Patch from
      "akwizgran". Closes ticket 13447.

changes/bug13644 (new file) | 4
  o Code simplifications and refactoring:
    - Document all members of was_router_added_t enum and rename
      ROUTER_WAS_NOT_NEW to ROUTER_IS_ALREADY_KNOWN to make it less
      confusable with ROUTER_WAS_TOO_OLD. Fixes issue 13644.

changes/bug13678 (new file) | 6
  o Testing:
    - In the unit tests, use 'chgrp' to change the group of the unit test
      temporary directory to the current user, so that the sticky bit doesn't
      interfere with tests that check directory groups. Closes 13678.
changes/bug13698 (new file) | 6
  o Major bugfixes:
    - When closing an introduction circuit that was opened in
      parallel, don't mark the introduction point as
      unreachable. Previously, the first successful connection to an
      introduction point would make the other introduction points get
      marked as having timed out. Fixes bug 13698; bugfix on 0.0.6rc2.
changes/bug13701 (new file) | 4
  o Minor bugfixes (logging):
    - Log the circuit identifier correctly in
      connection_ap_handshake_attach_circuit(). Fixes bug 13701;
      bugfix on 0.0.6.

changes/bug13707 (new file) | 4
  o Documentation:
    - Fix typo in PredictedPortsRelevanceTime option description in
      manpage. Resolves issue 13707.

changes/bug13713 (new file) | 3
  o Documentation:
    - Document the bridge-authority-only 'networkstatus-bridges'
      file. Closes ticket 13713; patch from "tom".

changes/bug13840 (new file) | 3
  o Code simplifications and refactoring:
    - In connection_exit_begin_conn(), use END_CIRC_REASON_TORPROTOCOL
      constant instead of hardcoded value. Fixes issue 13840.

changes/bug13941 (new file) | 6
  o Minor bugfixes (hidden services):
    - When adding a new hidden-service (for example, via SETCONF) Tor
      no longer logs a congratulations for running a relay. Fixes bug
      13941; bugfix on 0.2.6.1-alpha.

changes/bug13942 (new file) | 5
  o Minor bugfixes (hidden services):
    - Pre-check directory permissions for new hidden-services to avoid
      at least one case of "Bug: Acting on config options left us in a
      broken state. Dying." Fixes bug 13942.

changes/bug14001-clang-warning (new file) | 6
  o Minor bugfixes:
    - The address of an array in the middle of a structure will
      always be non-NULL. clang recognises this and complains.
      Disable the tautologous and redundant check to silence
      this warning.
      Fixes bug 14001.

changes/bug7484 (new file) | 4
  o Minor bugfixes:
    - Stop allowing invalid address patterns containing both a wildcard
      address and a bit prefix length. This affects all our
      address-range parsing code. Fixes bug 7484; bugfix on 0.0.2pre14.

changes/bug7803 (new file) | 5
  o Removed features:
    - Tor clients no longer support connecting to hidden services running on
      Tor 0.2.2.x and earlier; the Support022HiddenServices option has been
      removed. (There shouldn't be any hidden services running these
      versions on the network.)

changes/bug9812 (new file) | 6
  o Minor bugfixes (logging):
    - Downgrade warnings about RSA signature failures to info log
      level. Emit a warning when extra info document is found
      incompatible with a corresponding router descriptor. Fixes bug
      9812; bugfix on 0.0.6rc3.

changes/doc13381 (new file) | 5
  o Documentation:
    - Stop suggesting that users specify nodes by nickname: it isn't a
      good idea. Also, properly cross-reference how to specify nodes
      in all parts of the manual for options that take a list of
      nodes. Closes ticket 13381.

changes/feature13212 (new file) | 4
  o Minor features (hidden services):
    - Inform Tor controller about nature of failure to retrieve
      hidden service descriptor by sending reason string with HS_DESC
      FAILED controller event. Implements feature 13212.

changes/feature9503 (new file) | 4
  o Minor features (controller):
    - Add a "SIGNAL HEARTBEAT" Tor controller command that provokes
      writing unscheduled heartbeat message to the log. Implements
      feature 9503.

changes/geoip-november2014 (new file) | 3
  o Minor features:
    - Update geoip to the November 15 2014 Maxmind GeoLite2 Country database.

changes/geoip6-november2014 (new file) | 3
  o Minor features:
    - Update geoip6 to the November 15 2014 Maxmind GeoLite2 Country database.
changes/global_scheduler (new file) | 12
  o Major features (relay, infrastructure):
    - Implement a new inter-cmux comparison API, a global high/low watermark
      mechanism and a global scheduler loop for transmission prioritization
      across all channels as well as among circuits on one channel. This
      schedule is currently tuned to (tolerantly) avoid making changes
      in the current network performance, but it should form the basis for
      major circuit performance increases. Code by Andrea; implements
      ticket 9262.
  o Testing:
    - New tests for many parts of channel, relay, and circuit mux
      functionality. Code by Andrea; part of 9262.
changes/no_global_ccc (new file) | 3
  o Code Simplification and Refactoring:
    - Stop using can_complete_circuits as a global variable; access it with
      a function instead.

changes/ticket-11291 (new file) | 4
  o Minor features (hidden services):
    - New HiddenServiceDirGroupReadable option to cause hidden service
      directories and hostname files to be created group-readable.
      Patch from "anon", David Stainton, and "meejah".

changes/ticket13172 (new file) | 4
  o Code simplification and refactoring:
    - Avoid using operators directly as macro arguments: this lets us
      apply coccinelle transformations to our codebase more
      directly. Closes ticket 13172.

changes/tickets6456 (new file) | 6
  o Code simplification and refactoring:
    - Combine the functions used to parse ClientTransportPlugin and
      ServerTransportPlugin into a single function. Closes ticket 6456.
  o Testing:
    - New tests for parse_transport_line(). Part of ticket 6456.
@@ -550,7 +550,7 @@ GENERAL OPTIONS
    \'info'. (Default: 0)

[[PredictedPortsRelevanceTime]] **PredictedPortsRelevanceTime** __NUM__::
    Set how long, after the client has mad an anonymized connection to a
    Set how long, after the client has made an anonymized connection to a
    given port, we will try to make sure that we build circuits to
    exits that support that port. The maximum value for this option is 1
    hour. (Default: 1 hour)

@@ -711,10 +711,11 @@ The following options are useful only for clients (that is, if
    unless ORPort, ExtORPort, or DirPort are configured.) (Default: 0)

[[ExcludeNodes]] **ExcludeNodes** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, country codes and address
    patterns of nodes to avoid when building a circuit.
    A list of identity fingerprints, country codes, and address
    patterns of nodes to avoid when building a circuit. Country codes must
    be wrapped in braces; fingerprints may be preceded by a dollar sign.
    (Example:
    ExcludeNodes SlowServer, ABCD1234CDEF5678ABCD1234CDEF5678ABCD1234, \{cc}, 255.254.0.0/8) +
    ExcludeNodes ABCD1234CDEF5678ABCD1234CDEF5678ABCD1234, \{cc}, 255.254.0.0/8) +
    +
    By default, this option is treated as a preference that Tor is allowed
    to override in order to keep working.

@@ -734,11 +735,13 @@ The following options are useful only for clients (that is, if

[[ExcludeExitNodes]] **ExcludeExitNodes** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, country codes and address
    A list of identity fingerprints, country codes, and address
    patterns of nodes to never use when picking an exit node---that is, a
    node that delivers traffic for you outside the Tor network. Note that any
    node listed in ExcludeNodes is automatically considered to be part of this
    list too. See also the caveats on the "ExitNodes" option below.
    list too. See
    the **ExcludeNodes** option for more information on how to specify
    nodes. See also the caveats on the "ExitNodes" option below.

[[GeoIPExcludeUnknown]] **GeoIPExcludeUnknown** **0**|**1**|**auto**::
    If this option is set to 'auto', then whenever any country code is set in

@@ -749,9 +752,10 @@ The following options are useful only for clients (that is, if
    configured or can't be found. (Default: auto)

[[ExitNodes]] **ExitNodes** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, country codes and address
    A list of identity fingerprints, country codes, and address
    patterns of nodes to use as exit node---that is, a
    node that delivers traffic for you outside the Tor network. +
    node that delivers traffic for you outside the Tor network. See
    the **ExcludeNodes** option for more information on how to specify nodes. +
    +
    Note that if you list too few nodes here, or if you exclude too many exit
    nodes with ExcludeExitNodes, you can degrade functionality. For example,

@@ -772,7 +776,7 @@ The following options are useful only for clients (that is, if
    this option.

[[EntryNodes]] **EntryNodes** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, and country codes of nodes
    A list of identity fingerprints and country codes of nodes
    to use for the first hop in your normal circuits.
    Normal circuits include all
    circuits except for direct connections to directory servers. The Bridge

@@ -780,7 +784,8 @@ The following options are useful only for clients (that is, if
    UseBridges is 1, the Bridges are used as your entry nodes. +
    +
    The ExcludeNodes option overrides this option: any node listed in both
    EntryNodes and ExcludeNodes is treated as excluded.
    EntryNodes and ExcludeNodes is treated as excluded. See
    the **ExcludeNodes** option for more information on how to specify nodes.

[[StrictNodes]] **StrictNodes** **0**|**1**::
    If StrictNodes is set to 1, Tor will treat the ExcludeNodes option as a

@@ -929,12 +934,14 @@ The following options are useful only for clients (that is, if
    but it has not yet been completely constructed. (Default: 32)

[[NodeFamily]] **NodeFamily** __node__,__node__,__...__::
    The Tor servers, defined by their identity fingerprints or nicknames,
    The Tor servers, defined by their identity fingerprints,
    constitute a "family" of similar or co-administered servers, so never use
    any two of them in the same circuit. Defining a NodeFamily is only needed
    when a server doesn't list the family itself (with MyFamily). This option
    can be used multiple times. In addition to nodes, you can also list
    IP address and ranges and country codes in {curly braces}.
    can be used multiple times; each instance defines a separate family. In
    addition to nodes, you can also list IP address and ranges and country
    codes in {curly braces}. See the **ExcludeNodes** option for more
    information on how to specify nodes.

[[EnforceDistinctSubnets]] **EnforceDistinctSubnets** **0**|**1**::
    If 1, Tor will not put two servers whose IP addresses are "too close" on
@@ -1419,16 +1426,6 @@ The following options are useful only for clients (that is, if
    Tor will use a default value chosen by the directory
    authorities. (Default: -1.)

[[Support022HiddenServices]] **Support022HiddenServices** **0**|**1**|**auto**::
    Tor hidden services running versions before 0.2.3.x required clients to
    send timestamps, which can potentially be used to distinguish clients
    whose view of the current time is skewed. If this option is set to 0, we
    do not send this timestamp, and hidden services on obsolete Tor versions
    will not work. If this option is set to 1, we send the timestamp. If
    this option is "auto", we take a recommendation from the latest consensus
    document. (Default: auto)

SERVER OPTIONS
--------------

@@ -1538,7 +1535,7 @@ is non-zero):
[[MyFamily]] **MyFamily** __node__,__node__,__...__::
    Declare that this Tor server is controlled or administered by a group or
    organization identical or similar to that of the other servers, defined by
    their identity fingerprints or nicknames. When two servers both declare
    their identity fingerprints. When two servers both declare
    that they are in the same \'family', Tor clients will not use them in the
    same circuit. (Each server only needs to list the other servers in its
    family; it doesn't need to list itself, but it won't hurt.) Do not list

@@ -2035,6 +2032,7 @@ The following options are used to configure a hidden service.
    recent HiddenServiceDir. By default, this option maps the virtual port to
    the same port on 127.0.0.1 over TCP. You may override the target port,
    address, or both by specifying a target of addr, port, or addr:port.
    (You can specify an IPv6 target as [addr]:port.)
    You may also have multiple lines with the same VIRTPORT: when a user
    connects to that VIRTPORT, one of the TARGETs from those lines will be
    chosen at random.

@@ -2066,6 +2064,12 @@ The following options are used to configure a hidden service.
    service descriptors to the directory servers. This information is also
    uploaded whenever it changes. (Default: 1 hour)

[[HiddenServiceDirGroupReadable]] **HiddenServiceDirGroupReadable** **0**|**1**::
    If this option is set to 1, allow the filesystem group to read the
    hidden service directory and hostname file. If the option is set to 0,
    only owner is able to read the hidden service directory. (Default: 0)
    Has no effect on Windows.

TESTING NETWORK OPTIONS
-----------------------

@@ -2198,16 +2202,17 @@ The following options are used for running a testing Tor network.
    Changing this requires that **TestingTorNetwork** is set. (Default: 8)

[[TestingDirAuthVoteExit]] **TestingDirAuthVoteExit** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, country codes and
    A list of identity fingerprints, country codes, and
    address patterns of nodes to vote Exit for regardless of their
    uptime, bandwidth, or exit policy. See the **ExcludeNodes**
    option for more information on how to specify nodes.
    +
    In order for this option to have any effect, **TestingTorNetwork**
    has to be set.
    has to be set. See the **ExcludeNodes** option for more
    information on how to specify nodes.

[[TestingDirAuthVoteGuard]] **TestingDirAuthVoteGuard** __node__,__node__,__...__::
    A list of identity fingerprints, nicknames, country codes and
    A list of identity fingerprints and country codes and
    address patterns of nodes to vote Guard for regardless of their
    uptime and bandwidth. See the **ExcludeNodes** option for more
    information on how to specify nodes.

@@ -2395,6 +2400,11 @@ __DataDirectory__**/stats/conn-stats**::
    Only used by servers. This file is used to collect approximate connection
    history (number of active connections over time).

__DataDirectory__**/networkstatus-bridges**::
    Only used by authoritative bridge directories. Contains information
    about bridges that have self-reported themselves to the bridge
    authority.

__HiddenServiceDirectory__**/hostname**::
    The <base32-encoded-fingerprint>.onion domain name for this hidden service.
    If the hidden service is restricted to authorized clients only, this file
@ -1,16 +1,19 @@
|
||||
// Use calloc or realloc as appropriate instead of multiply-and-alloc
|
||||
|
||||
@malloc_to_calloc@
|
||||
expression a,b;
|
||||
identifier f =~ "(tor_malloc|tor_malloc_zero)";
|
||||
expression a;
|
||||
constant b;
|
||||
@@
|
||||
- tor_malloc(a * b)
|
||||
- f(a * b)
|
||||
+ tor_calloc(a, b)
|
||||
|
||||
@malloc_zero_to_calloc@
|
||||
expression a, b;
|
||||
@calloc_arg_order@
|
||||
expression a;
|
||||
type t;
|
||||
@@
|
||||
- tor_malloc_zero(a * b)
|
||||
+ tor_calloc(a, b)
|
||||
- tor_calloc(sizeof(t), a)
|
||||
+ tor_calloc(a, sizeof(t))
|
||||
|
||||
@realloc_to_reallocarray@
|
||||
expression a, b;
|
||||
|
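The semantic patch above rewrites multiply-and-allocate call sites into the two-argument calloc-style helpers and normalizes argument order (count first, element size second). As a rough illustration of the rewrite shape only, the sketch below compiles on its own; the tor_malloc()/tor_calloc() stubs, item_t, and the make_items_* functions are stand-ins, not Tor's real implementations:

    /* Stand-in stubs so the example is self-contained; Tor's real
     * tor_malloc()/tor_calloc() also log and abort on failure. */
    #include <stdlib.h>

    static void *tor_malloc(size_t n) { return malloc(n); }
    static void *tor_calloc(size_t nmemb, size_t size) { return calloc(nmemb, size); }

    typedef struct { int a, b; } item_t;

    /* Before: multiply-and-alloc, and a calloc call with sizeof() first. */
    static item_t *make_items_before(size_t n)
    {
      item_t *items = tor_malloc(n * sizeof(item_t));   /* overflow-prone multiply */
      item_t *more  = tor_calloc(sizeof(item_t), n);    /* arguments in swapped order */
      free(more);
      return items;
    }

    /* After: what the malloc_to_calloc and calloc_arg_order rules produce. */
    static item_t *make_items_after(size_t n)
    {
      item_t *items = tor_calloc(n, sizeof(item_t));    /* checked, zeroing allocation */
      item_t *more  = tor_calloc(n, sizeof(item_t));    /* count first, size second */
      free(more);
      return items;
    }

The same pattern shows up later in this commit, for example in compat.c where tor_calloc(sizeof(gid_t), 64) becomes tor_calloc(64, sizeof(gid_t)).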
@@ -128,7 +128,8 @@ for $fn (@ARGV) {
        if ($1 ne "if" and $1 ne "while" and $1 ne "for" and
            $1 ne "switch" and $1 ne "return" and $1 ne "int" and
            $1 ne "elsif" and $1 ne "WINAPI" and $2 ne "WINAPI" and
            $1 ne "void" and $1 ne "__attribute__" and $1 ne "op") {
            $1 ne "void" and $1 ne "__attribute__" and $1 ne "op" and
            $1 ne "size_t" and $1 ne "double") {
            print " fn ():$fn:$.\n";
        }
    }
@@ -723,6 +723,11 @@ tor_addr_parse_mask_ports(const char *s,
        /* XXXX_IP6 is this really what we want? */
        bits = 96 + bits%32; /* map v4-mapped masks onto 96-128 bits */
      }
      if (any_flag) {
        log_warn(LD_GENERAL,
                 "Found bit prefix with wildcard address; rejecting");
        goto err;
      }
    } else { /* pick an appropriate mask, as none was given */
      if (any_flag)
        bits = 0; /* This is okay whether it's V6 or V4 (FIX V4-mapped V6!) */

@@ -1114,7 +1119,8 @@ fmt_addr32(uint32_t addr)
int
tor_addr_parse(tor_addr_t *addr, const char *src)
{
  char *tmp = NULL; /* Holds substring if we got a dotted quad. */
  /* Holds substring of IPv6 address after removing square brackets */
  char *tmp = NULL;
  int result;
  struct in_addr in_tmp;
  struct in6_addr in6_tmp;
@@ -1698,7 +1698,7 @@ log_credential_status(void)
  /* log supplementary groups */
  sup_gids_size = 64;
  sup_gids = tor_calloc(sizeof(gid_t), 64);
  sup_gids = tor_calloc(64, sizeof(gid_t));
  while ((ngids = getgroups(sup_gids_size, sup_gids)) < 0 &&
         errno == EINVAL &&
         sup_gids_size < NGROUPS_MAX) {
@@ -203,6 +203,15 @@ extern INLINE double U64_TO_DBL(uint64_t x) {
#define STMT_END } while (0)
#endif

/* Some tools (like coccinelle) don't like to see operators as macro
 * arguments. */
#define OP_LT <
#define OP_GT >
#define OP_GE >=
#define OP_LE <=
#define OP_EQ ==
#define OP_NE !=

/* ===== String compatibility */
#ifdef _WIN32
/* Windows names string functions differently from most other platforms. */
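These OP_* macros exist so that a comparison operator can be handed to a comparison macro as an ordinary identifier, which coccinelle can parse, while still expanding to the operator at the call site (ticket 13172). A minimal sketch of the pattern, assuming a hypothetical CHECK_INT_OP macro in the style of tinytest's tt_int_op():

    #include <stdio.h>

    #define OP_EQ ==

    /* Hypothetical comparison macro: the operator arrives as a macro
     * argument and expands back into the expression. */
    #define CHECK_INT_OP(a, op, b) \
      do { \
        if (!((a) op (b))) \
          printf("check failed: %s %s %s\n", #a, #op, #b); \
      } while (0)

    int main(void)
    {
      CHECK_INT_OP(2 + 2, OP_EQ, 4);   /* behaves exactly like writing `==` directly */
      return 0;
    }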
@@ -283,8 +283,8 @@ tor_libevent_initialize(tor_libevent_cfg *torcfg)
}

/** Return the current Libevent event base that we're set up to use. */
struct event_base *
tor_libevent_get_base(void)
MOCK_IMPL(struct event_base *,
tor_libevent_get_base, (void))
{
  return the_event_base;
}

@@ -717,7 +717,7 @@ tor_gettimeofday_cached_monotonic(struct timeval *tv)
  struct timeval last_tv = { 0, 0 };

  tor_gettimeofday_cached(tv);
  if (timercmp(tv, &last_tv, <)) {
  if (timercmp(tv, &last_tv, OP_LT)) {
    memcpy(tv, &last_tv, sizeof(struct timeval));
  } else {
    memcpy(&last_tv, tv, sizeof(struct timeval));
@@ -72,7 +72,7 @@ typedef struct tor_libevent_cfg {
} tor_libevent_cfg;

void tor_libevent_initialize(tor_libevent_cfg *cfg);
struct event_base *tor_libevent_get_base(void);
MOCK_DECL(struct event_base *, tor_libevent_get_base, (void));
const char *tor_libevent_get_method(void);
void tor_check_libevent_version(const char *m, int server,
                                const char **badness_out);
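Several hunks in this commit convert plain declarations into MOCK_DECL/MOCK_IMPL pairs (tor_libevent_get_base(), buf_datalen(), channel_flush_some_cells()) so the unit tests can swap in stub implementations. The following is only a simplified sketch of that mocking idea, not Tor's actual macro definitions:

    /* Simplified illustration: route the call through a replaceable
     * function pointer so a test can substitute its own version. */
    #include <stddef.h>

    struct event_base;                             /* opaque, as in libevent */
    static struct event_base *the_event_base = NULL;

    /* "MOCK_IMPL" part: the real implementation, under an _impl name. */
    static struct event_base *
    tor_libevent_get_base_impl(void)
    {
      return the_event_base;
    }

    /* "MOCK_DECL" part: callers go through this pointer, which defaults
     * to the real implementation but can be repointed in unit tests. */
    static struct event_base *(*tor_libevent_get_base_fn)(void) =
      tor_libevent_get_base_impl;

    struct event_base *
    tor_libevent_get_base(void)
    {
      return tor_libevent_get_base_fn();
    }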
@@ -1012,7 +1012,7 @@ crypto_pk_public_checksig(crypto_pk_t *env, char *to,
                          env->key, RSA_PKCS1_PADDING);

  if (r<0) {
    crypto_log_errors(LOG_WARN, "checking RSA signature");
    crypto_log_errors(LOG_INFO, "checking RSA signature");
    return -1;
  }
  return r;
@@ -451,7 +451,7 @@ MOCK_IMPL(STATIC void,
logv,(int severity, log_domain_mask_t domain, const char *funcname,
      const char *suffix, const char *format, va_list ap))
{
  char buf[10024];
  char buf[10240];
  size_t msg_len = 0;
  int formatted = 0;
  logfile_t *lf;
@@ -97,8 +97,10 @@
#define LD_HEARTBEAT (1u<<20)
/** Abstract channel_t code */
#define LD_CHANNEL (1u<<21)
/** Scheduler */
#define LD_SCHED (1u<<22)
/** Number of logging domains in the code. */
#define N_LOGGING_DOMAINS 22
#define N_LOGGING_DOMAINS 23

/** This log message is not safe to send to a callback-based logger
 * immediately. Used as a flag, not a log domain. */
@@ -195,33 +195,40 @@ tor_malloc_zero_(size_t size DMALLOC_PARAMS)
  return result;
}

/* The square root of SIZE_MAX + 1. If a is less than this, and b is less
 * than this, then a*b is less than SIZE_MAX. (For example, if size_t is
 * 32 bits, then SIZE_MAX is 0xffffffff and this value is 0x10000. If a and
 * b are less than this, then their product is at most (65535*65535) ==
 * 0xfffe0001. */
#define SQRT_SIZE_MAX_P1 (((size_t)1) << (sizeof(size_t)*4))

/** Return non-zero if and only if the product of the arguments is exact. */
static INLINE int
size_mul_check(const size_t x, const size_t y)
{
  /* This first check is equivalent to
     (x < SQRT_SIZE_MAX_P1 && y < SQRT_SIZE_MAX_P1)

     Rationale: if either one of x or y is >= SQRT_SIZE_MAX_P1, then it
     will have some bit set in its most significant half.
   */
  return ((x|y) < SQRT_SIZE_MAX_P1 ||
          y == 0 ||
          x <= SIZE_MAX / y);
}

/** Allocate a chunk of <b>nmemb</b>*<b>size</b> bytes of memory, fill
 * the memory with zero bytes, and return a pointer to the result.
 * Log and terminate the process on error. (Same as
 * calloc(<b>nmemb</b>,<b>size</b>), but never returns NULL.)
 *
 * XXXX This implementation probably asserts in cases where it could
 * work, because it only tries dividing SIZE_MAX by size (according to
 * the calloc(3) man page, the size of an element of the nmemb-element
 * array to be allocated), not by nmemb (which could in theory be
 * smaller than size). Don't do that then.
 * The second argument (<b>size</b>) should preferably be non-zero
 * and a compile-time constant.
 */
void *
tor_calloc_(size_t nmemb, size_t size DMALLOC_PARAMS)
{
  /* You may ask yourself, "wouldn't it be smart to use calloc instead of
   * malloc+memset? Perhaps libc's calloc knows some nifty optimization trick
   * we don't!" Indeed it does, but its optimizations are only a big win when
   * we're allocating something very big (it knows if it just got the memory
   * from the OS in a pre-zeroed state). We don't want to use tor_malloc_zero
   * for big stuff, so we don't bother with calloc. */
  void *result;
  size_t max_nmemb = (size == 0) ? SIZE_MAX : SIZE_MAX/size;

  tor_assert(nmemb < max_nmemb);

  result = tor_malloc_zero_((nmemb * size) DMALLOC_FN_ARGS);
  return result;
  tor_assert(size_mul_check(nmemb, size));
  return tor_malloc_zero_((nmemb * size) DMALLOC_FN_ARGS);
}

/** Change the size of the memory block pointed to by <b>ptr</b> to <b>size</b>
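A quick numeric check of the overflow test above, using the 32-bit size_t case from the comment: SQRT_SIZE_MAX_P1 is then 1 << 16 = 0x10000, and any two factors below it multiply to at most 0xffff * 0xffff = 0xfffe0001, which still fits; larger operands fall through to the exact SIZE_MAX / y division test. A small standalone demonstration over uint32_t (the helper below mirrors size_mul_check for illustration only):

    #include <stdint.h>
    #include <stdio.h>

    #define SQRT_U32_MAX_P1 ((uint32_t)1 << 16)   /* 0x10000 */

    static int u32_mul_check(uint32_t x, uint32_t y)
    {
      return ((x | y) < SQRT_U32_MAX_P1 ||   /* both operands small: cannot overflow */
              y == 0 ||
              x <= UINT32_MAX / y);          /* otherwise, exact division check */
    }

    int main(void)
    {
      printf("%d\n", u32_mul_check(0xffff, 0xffff));    /* 1: product 0xfffe0001 fits */
      printf("%d\n", u32_mul_check(0x10000, 0x10000));  /* 0: product is 2^32 */
      printf("%d\n", u32_mul_check(0x20000, 0x7fff));   /* 1: passes the division test */
      return 0;
    }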
@@ -264,7 +271,7 @@ tor_reallocarray_(void *ptr, size_t sz1, size_t sz2 DMALLOC_PARAMS)
{
  /* XXXX we can make this return 0, but we would need to check all the
   * reallocarray users. */
  tor_assert(sz2 == 0 || sz1 < SIZE_T_CEILING / sz2);
  tor_assert(size_mul_check(sz1, sz2));

  return tor_realloc(ptr, (sz1 * sz2) DMALLOC_FN_ARGS);
}
@@ -957,6 +964,68 @@ string_is_key_value(int severity, const char *string)
  return 1;
}

/** Return true if <b>string</b> represents a valid IPv4 adddress in
 * 'a.b.c.d' form.
 */
int
string_is_valid_ipv4_address(const char *string)
{
  struct in_addr addr;

  return (tor_inet_pton(AF_INET,string,&addr) == 1);
}

/** Return true if <b>string</b> represents a valid IPv6 address in
 * a form that inet_pton() can parse.
 */
int
string_is_valid_ipv6_address(const char *string)
{
  struct in6_addr addr;

  return (tor_inet_pton(AF_INET6,string,&addr) == 1);
}

/** Return true iff <b>string</b> matches a pattern of DNS names
 * that we allow Tor clients to connect to.
 */
int
string_is_valid_hostname(const char *string)
{
  int result = 1;
  smartlist_t *components;

  components = smartlist_new();

  smartlist_split_string(components,string,".",0,0);

  SMARTLIST_FOREACH_BEGIN(components, char *, c) {
    if (c[0] == '-') {
      result = 0;
      break;
    }

    do {
      if ((*c >= 'a' && *c <= 'z') ||
          (*c >= 'A' && *c <= 'Z') ||
          (*c >= '0' && *c <= '9') ||
          (*c == '-'))
        c++;
      else
        result = 0;
    } while (result && *c);

  } SMARTLIST_FOREACH_END(c);

  SMARTLIST_FOREACH_BEGIN(components, char *, c) {
    tor_free(c);
  } SMARTLIST_FOREACH_END(c);

  smartlist_free(components);

  return result;
}

/** Return true iff the DIGEST256_LEN bytes in digest are all zero. */
int
tor_digest256_is_zero(const char *digest)
@@ -1942,8 +2011,12 @@ file_status(const char *fname)
 * <b>check</b>&CPD_CHECK, and we think we can create it, return 0. Else
 * return -1. If CPD_GROUP_OK is set, then it's okay if the directory
 * is group-readable, but in all cases we create the directory mode 0700.
 * If CPD_CHECK_MODE_ONLY is set, then we don't alter the directory permissions
 * if they are too permissive: we just return -1.
 * If CPD_GROUP_READ is set, existing directory behaves as CPD_GROUP_OK and
 * if the directory is created it will use mode 0750 with group read
 * permission. Group read privileges also assume execute permission
 * as norm for directories. If CPD_CHECK_MODE_ONLY is set, then we don't
 * alter the directory permissions if they are too permissive:
 * we just return -1.
 * When effective_user is not NULL, check permissions against the given user
 * and its primary group.
 */

@@ -1955,7 +2028,7 @@ check_private_dir(const char *dirname, cpd_check_t check,
  struct stat st;
  char *f;
#ifndef _WIN32
  int mask;
  unsigned unwanted_bits = 0;
  const struct passwd *pw = NULL;
  uid_t running_uid;
  gid_t running_gid;

@@ -1980,7 +2053,11 @@ check_private_dir(const char *dirname, cpd_check_t check,
#if defined (_WIN32)
    r = mkdir(dirname);
#else
    r = mkdir(dirname, 0700);
    if (check & CPD_GROUP_READ) {
      r = mkdir(dirname, 0750);
    } else {
      r = mkdir(dirname, 0700);
    }
#endif
    if (r) {
      log_warn(LD_FS, "Error creating directory %s: %s", dirname,

@@ -2033,7 +2110,8 @@ check_private_dir(const char *dirname, cpd_check_t check,
      tor_free(process_ownername);
      return -1;
    }
    if ((check & CPD_GROUP_OK) && st.st_gid != running_gid) {
    if ( (check & (CPD_GROUP_OK|CPD_GROUP_READ))
         && (st.st_gid != running_gid) ) {
      struct group *gr;
      char *process_groupname = NULL;
      gr = getgrgid(running_gid);

@@ -2048,12 +2126,12 @@ check_private_dir(const char *dirname, cpd_check_t check,
      tor_free(process_groupname);
      return -1;
    }
    if (check & CPD_GROUP_OK) {
      mask = 0027;
    if (check & (CPD_GROUP_OK|CPD_GROUP_READ)) {
      unwanted_bits = 0027;
    } else {
      mask = 0077;
      unwanted_bits = 0077;
    }
    if (st.st_mode & mask) {
    if ((st.st_mode & unwanted_bits) != 0) {
      unsigned new_mode;
      if (check & CPD_CHECK_MODE_ONLY) {
        log_warn(LD_FS, "Permissions on directory %s are too permissive.",

@@ -2063,10 +2141,13 @@ check_private_dir(const char *dirname, cpd_check_t check,
      log_warn(LD_FS, "Fixing permissions on directory %s", dirname);
      new_mode = st.st_mode;
      new_mode |= 0700; /* Owner should have rwx */
      new_mode &= ~mask; /* Clear the other bits that we didn't want set...*/
      if (check & CPD_GROUP_READ) {
        new_mode |= 0050; /* Group should have rx */
      }
      new_mode &= ~unwanted_bits; /* Clear the bits that we didn't want set...*/
      if (chmod(dirname, new_mode)) {
        log_warn(LD_FS, "Could not chmod directory %s: %s", dirname,
                 strerror(errno));
                 strerror(errno));
        return -1;
      } else {
        return 0;
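The new CPD_GROUP_READ flag is what lets a caller (for example, the hidden-service code behind HiddenServiceDirGroupReadable) get a directory created with mode 0750 instead of 0700. A hedged sketch of how a caller might combine the flags, using the check_private_dir() signature and CPD_* constants shown in this diff; the wrapper function, its arguments, and the include are illustrative only:

    #include "util.h"

    /* Hypothetical caller: create (or verify) a service directory,
     * optionally allowing group read (and implied execute) access. */
    static int
    prepare_service_dir(const char *dirname, int group_readable,
                        const char *effective_user)
    {
      cpd_check_t check = CPD_CREATE;
      if (group_readable)
        check |= CPD_GROUP_READ;   /* new directory gets mode 0750 */

      /* Returns 0 on success, -1 if the directory can't be created or its
       * ownership/permissions are unacceptable. */
      return check_private_dir(dirname, check, effective_user);
    }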
@@ -3474,8 +3555,9 @@ format_win_cmdline_argument(const char *arg)
    smartlist_add(arg_chars, (void*)&backslash);

  /* Allocate space for argument, quotes (if needed), and terminator */
  formatted_arg = tor_calloc(sizeof(char),
      (smartlist_len(arg_chars) + (need_quotes ? 2 : 0) + 1));
  const size_t formatted_arg_len = smartlist_len(arg_chars) +
    (need_quotes ? 2 : 0) + 1;
  formatted_arg = tor_malloc_zero(formatted_arg_len);

  /* Add leading quote */
  i=0;

@@ -5113,7 +5195,7 @@ tor_check_port_forwarding(const char *filename,
     for each smartlist element (one for "-p" and one for the
     ports), and one for the final NULL. */
  args_n = 1 + 2*smartlist_len(ports_to_forward) + 1;
  argv = tor_calloc(sizeof(char *), args_n);
  argv = tor_calloc(args_n, sizeof(char *));

  argv[argv_index++] = filename;
  SMARTLIST_FOREACH_BEGIN(ports_to_forward, const char *, port) {
|
||||
const char *needle);
|
||||
int string_is_C_identifier(const char *string);
|
||||
int string_is_key_value(int severity, const char *string);
|
||||
int string_is_valid_hostname(const char *string);
|
||||
int string_is_valid_ipv4_address(const char *string);
|
||||
int string_is_valid_ipv6_address(const char *string);
|
||||
|
||||
int tor_mem_is_zero(const char *mem, size_t len);
|
||||
int tor_digest_is_zero(const char *digest);
|
||||
@ -344,9 +347,11 @@ typedef unsigned int cpd_check_t;
|
||||
#define CPD_CREATE 1
|
||||
#define CPD_CHECK 2
|
||||
#define CPD_GROUP_OK 4
|
||||
#define CPD_CHECK_MODE_ONLY 8
|
||||
#define CPD_GROUP_READ 8
|
||||
#define CPD_CHECK_MODE_ONLY 16
|
||||
int check_private_dir(const char *dirname, cpd_check_t check,
|
||||
const char *effective_user);
|
||||
|
||||
#define OPEN_FLAGS_REPLACE (O_WRONLY|O_CREAT|O_TRUNC)
|
||||
#define OPEN_FLAGS_APPEND (O_WRONLY|O_CREAT|O_APPEND)
|
||||
#define OPEN_FLAGS_DONT_REPLACE (O_CREAT|O_EXCL|O_APPEND|O_WRONLY)
|
||||
|
src/config/geoip | 20036 (diff suppressed because it is too large)
src/config/geoip6 | 5649 (diff suppressed because it is too large)
@@ -74,13 +74,13 @@ test_strcmp(void *data)
     values of the failing things.

     Fail unless strcmp("abc, "abc") == 0 */
  tt_int_op(strcmp("abc", "abc"), ==, 0);
  tt_int_op(strcmp("abc", "abc"), OP_EQ, 0);

  /* Fail unless strcmp("abc, "abcd") is less than 0 */
  tt_int_op(strcmp("abc", "abcd"), < , 0);
  tt_int_op(strcmp("abc", "abcd"), OP_LT, 0);

  /* Incidentally, there's a test_str_op that uses strcmp internally. */
  tt_str_op("abc", <, "abcd");
  tt_str_op("abc", OP_LT, "abcd");

  /* Every test-case function needs to finish with an "end:"

@@ -153,11 +153,11 @@ test_memcpy(void *ptr)
  /* Let's make sure that memcpy does what we'd like. */
  strcpy(db->buffer1, "String 0");
  memcpy(db->buffer2, db->buffer1, sizeof(db->buffer1));
  tt_str_op(db->buffer1, ==, db->buffer2);
  tt_str_op(db->buffer1, OP_EQ, db->buffer2);

  /* tt_mem_op() does a memcmp, as opposed to the strcmp in tt_str_op() */
  db->buffer2[100] = 3; /* Make the buffers unequal */
  tt_mem_op(db->buffer1, <, db->buffer2, sizeof(db->buffer1));
  tt_mem_op(db->buffer1, OP_LT, db->buffer2, sizeof(db->buffer1));

  /* Now we've allocated memory that's referenced by a local variable.
     The end block of the function will clean it up. */

@@ -165,7 +165,7 @@ test_memcpy(void *ptr)
  tt_assert(mem);

  /* Another rather trivial test. */
  tt_str_op(db->buffer1, !=, mem);
  tt_str_op(db->buffer1, OP_NE, mem);

  end:
  /* This time our end block has something to do. */

@@ -186,9 +186,9 @@ test_timeout(void *ptr)
#endif
  t2 = time(NULL);

  tt_int_op(t2-t1, >=, 4);
  tt_int_op(t2-t1, OP_GE, 4);

  tt_int_op(t2-t1, <=, 6);
  tt_int_op(t2-t1, OP_LE, 6);

  end:
  ;
|
||||
routerlist.obj \
|
||||
routerparse.obj \
|
||||
routerset.obj \
|
||||
scheduler.obj \
|
||||
statefile.obj \
|
||||
status.obj \
|
||||
transports.obj
|
||||
|
@ -562,8 +562,8 @@ buf_clear(buf_t *buf)
|
||||
}
|
||||
|
||||
/** Return the number of bytes stored in <b>buf</b> */
|
||||
size_t
|
||||
buf_datalen(const buf_t *buf)
|
||||
MOCK_IMPL(size_t,
|
||||
buf_datalen, (const buf_t *buf))
|
||||
{
|
||||
return buf->datalen;
|
||||
}
|
||||
@ -2054,8 +2054,20 @@ parse_socks(const char *data, size_t datalen, socks_request_t *req,
|
||||
req->address[len] = 0;
|
||||
req->port = ntohs(get_uint16(data+5+len));
|
||||
*drain_out = 5+len+2;
|
||||
if (!tor_strisprint(req->address) || strchr(req->address,'\"')) {
|
||||
|
||||
if (string_is_valid_ipv4_address(req->address) ||
|
||||
string_is_valid_ipv6_address(req->address)) {
|
||||
log_unsafe_socks_warning(5,req->address,req->port,safe_socks);
|
||||
|
||||
if (safe_socks) {
|
||||
socks_request_set_socks5_error(req, SOCKS5_NOT_ALLOWED);
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
if (!string_is_valid_hostname(req->address)) {
|
||||
socks_request_set_socks5_error(req, SOCKS5_GENERAL_ERROR);
|
||||
|
||||
log_warn(LD_PROTOCOL,
|
||||
"Your application (using socks5 to port %d) gave Tor "
|
||||
"a malformed hostname: %s. Rejecting the connection.",
|
||||
|
@ -24,7 +24,7 @@ void buf_shrink(buf_t *buf);
|
||||
size_t buf_shrink_freelists(int free_all);
|
||||
void buf_dump_freelist_sizes(int severity);
|
||||
|
||||
size_t buf_datalen(const buf_t *buf);
|
||||
MOCK_DECL(size_t, buf_datalen, (const buf_t *buf));
|
||||
size_t buf_allocation(const buf_t *buf);
|
||||
size_t buf_slack(const buf_t *buf);
|
||||
|
||||
|
src/or/channel.c | 477
@ -13,6 +13,9 @@
|
||||
|
||||
#define TOR_CHANNEL_INTERNAL_
|
||||
|
||||
/* This one's for stuff only channel.c and the test suite should see */
|
||||
#define CHANNEL_PRIVATE_
|
||||
|
||||
#include "or.h"
|
||||
#include "channel.h"
|
||||
#include "channeltls.h"
|
||||
@ -29,29 +32,7 @@
|
||||
#include "rephist.h"
|
||||
#include "router.h"
|
||||
#include "routerlist.h"
|
||||
|
||||
/* Cell queue structure */
|
||||
|
||||
typedef struct cell_queue_entry_s cell_queue_entry_t;
|
||||
struct cell_queue_entry_s {
|
||||
TOR_SIMPLEQ_ENTRY(cell_queue_entry_s) next;
|
||||
enum {
|
||||
CELL_QUEUE_FIXED,
|
||||
CELL_QUEUE_VAR,
|
||||
CELL_QUEUE_PACKED
|
||||
} type;
|
||||
union {
|
||||
struct {
|
||||
cell_t *cell;
|
||||
} fixed;
|
||||
struct {
|
||||
var_cell_t *var_cell;
|
||||
} var;
|
||||
struct {
|
||||
packed_cell_t *packed_cell;
|
||||
} packed;
|
||||
} u;
|
||||
};
|
||||
#include "scheduler.h"
|
||||
|
||||
/* Global lists of channels */
|
||||
|
||||
@ -76,6 +57,60 @@ static smartlist_t *finished_listeners = NULL;
|
||||
/* Counter for ID numbers */
|
||||
static uint64_t n_channels_allocated = 0;
|
||||
|
||||
/*
|
||||
* Channel global byte/cell counters, for statistics and for scheduler high
|
||||
* /low-water marks.
|
||||
*/
|
||||
|
||||
/*
|
||||
* Total number of cells ever given to any channel with the
|
||||
* channel_write_*_cell() functions.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_cells_queued = 0;
|
||||
|
||||
/*
|
||||
* Total number of cells ever passed to a channel lower layer with the
|
||||
* write_*_cell() methods.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_cells_passed_to_lower_layer = 0;
|
||||
|
||||
/*
|
||||
* Current number of cells in all channel queues; should be
|
||||
* n_channel_cells_queued - n_channel_cells_passed_to_lower_layer.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_cells_in_queues = 0;
|
||||
|
||||
/*
|
||||
* Total number of bytes for all cells ever queued to a channel and
|
||||
* counted in n_channel_cells_queued.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_bytes_queued = 0;
|
||||
|
||||
/*
|
||||
* Total number of bytes for all cells ever passed to a channel lower layer
|
||||
* and counted in n_channel_cells_passed_to_lower_layer.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_bytes_passed_to_lower_layer = 0;
|
||||
|
||||
/*
|
||||
* Current number of bytes in all channel queues; should be
|
||||
* n_channel_bytes_queued - n_channel_bytes_passed_to_lower_layer.
|
||||
*/
|
||||
|
||||
static uint64_t n_channel_bytes_in_queues = 0;
|
||||
|
||||
/*
|
||||
* Current total estimated queue size *including lower layer queues and
|
||||
* transmit overhead*
|
||||
*/
|
||||
|
||||
STATIC uint64_t estimated_total_queue_size = 0;
|
||||
|
||||
/* Digest->channel map
|
||||
*
|
||||
* Similar to the one used in connection_or.c, this maps from the identity
|
||||
@ -123,6 +158,8 @@ cell_queue_entry_new_var(var_cell_t *var_cell);
|
||||
static int is_destroy_cell(channel_t *chan,
|
||||
const cell_queue_entry_t *q, circid_t *circid_out);
|
||||
|
||||
static void channel_assert_counter_consistency(void);
|
||||
|
||||
/* Functions to maintain the digest map */
|
||||
static void channel_add_to_digest_map(channel_t *chan);
|
||||
static void channel_remove_from_digest_map(channel_t *chan);
|
||||
@ -140,6 +177,8 @@ channel_free_list(smartlist_t *channels, int mark_for_close);
|
||||
static void
|
||||
channel_listener_free_list(smartlist_t *channels, int mark_for_close);
|
||||
static void channel_listener_force_free(channel_listener_t *chan_l);
|
||||
static size_t channel_get_cell_queue_entry_size(channel_t *chan,
|
||||
cell_queue_entry_t *q);
|
||||
static void
|
||||
channel_write_cell_queue_entry(channel_t *chan, cell_queue_entry_t *q);
|
||||
|
||||
@ -746,6 +785,9 @@ channel_init(channel_t *chan)
|
||||
|
||||
/* It hasn't been open yet. */
|
||||
chan->has_been_open = 0;
|
||||
|
||||
/* Scheduler state is idle */
|
||||
chan->scheduler_state = SCHED_CHAN_IDLE;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -788,6 +830,9 @@ channel_free(channel_t *chan)
|
||||
"Freeing channel " U64_FORMAT " at %p",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
|
||||
/* Get this one out of the scheduler */
|
||||
scheduler_release_channel(chan);
|
||||
|
||||
/*
|
||||
* Get rid of cmux policy before we do anything, so cmux policies don't
|
||||
* see channels in weird half-freed states.
|
||||
@ -863,6 +908,9 @@ channel_force_free(channel_t *chan)
|
||||
"Force-freeing channel " U64_FORMAT " at %p",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
|
||||
/* Get this one out of the scheduler */
|
||||
scheduler_release_channel(chan);
|
||||
|
||||
/*
|
||||
* Get rid of cmux policy before we do anything, so cmux policies don't
|
||||
* see channels in weird half-freed states.
|
||||
@ -1665,6 +1713,36 @@ cell_queue_entry_new_var(var_cell_t *var_cell)
|
||||
return q;
|
||||
}
|
||||
|
||||
/**
|
||||
* Ask how big the cell contained in a cell_queue_entry_t is
|
||||
*/
|
||||
|
||||
static size_t
|
||||
channel_get_cell_queue_entry_size(channel_t *chan, cell_queue_entry_t *q)
|
||||
{
|
||||
size_t rv = 0;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(q);
|
||||
|
||||
switch (q->type) {
|
||||
case CELL_QUEUE_FIXED:
|
||||
rv = get_cell_network_size(chan->wide_circ_ids);
|
||||
break;
|
||||
case CELL_QUEUE_VAR:
|
||||
rv = get_var_cell_header_size(chan->wide_circ_ids) +
|
||||
(q->u.var.var_cell ? q->u.var.var_cell->payload_len : 0);
|
||||
break;
|
||||
case CELL_QUEUE_PACKED:
|
||||
rv = get_cell_network_size(chan->wide_circ_ids);
|
||||
break;
|
||||
default:
|
||||
tor_assert(1);
|
||||
}
|
||||
|
||||
return rv;
|
||||
}
|
||||
|
||||
/**
|
||||
* Write to a channel based on a cell_queue_entry_t
|
||||
*
|
||||
@ -1677,6 +1755,7 @@ channel_write_cell_queue_entry(channel_t *chan, cell_queue_entry_t *q)
|
||||
{
|
||||
int result = 0, sent = 0;
|
||||
cell_queue_entry_t *tmp = NULL;
|
||||
size_t cell_bytes;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(q);
|
||||
@ -1693,6 +1772,9 @@ channel_write_cell_queue_entry(channel_t *chan, cell_queue_entry_t *q)
|
||||
}
|
||||
}
|
||||
|
||||
/* For statistical purposes, figure out how big this cell is */
|
||||
cell_bytes = channel_get_cell_queue_entry_size(chan, q);
|
||||
|
||||
/* Can we send it right out? If so, try */
|
||||
if (TOR_SIMPLEQ_EMPTY(&chan->outgoing_queue) &&
|
||||
chan->state == CHANNEL_STATE_OPEN) {
|
||||
@ -1726,6 +1808,13 @@ channel_write_cell_queue_entry(channel_t *chan, cell_queue_entry_t *q)
|
||||
channel_timestamp_drained(chan);
|
||||
/* Update the counter */
|
||||
++(chan->n_cells_xmitted);
|
||||
chan->n_bytes_xmitted += cell_bytes;
|
||||
/* Update global counters */
|
||||
++n_channel_cells_queued;
|
||||
++n_channel_cells_passed_to_lower_layer;
|
||||
n_channel_bytes_queued += cell_bytes;
|
||||
n_channel_bytes_passed_to_lower_layer += cell_bytes;
|
||||
channel_assert_counter_consistency();
|
||||
}
|
||||
}
|
||||
|
||||
@ -1737,6 +1826,14 @@ channel_write_cell_queue_entry(channel_t *chan, cell_queue_entry_t *q)
|
||||
*/
|
||||
tmp = cell_queue_entry_dup(q);
|
||||
TOR_SIMPLEQ_INSERT_TAIL(&chan->outgoing_queue, tmp, next);
|
||||
/* Update global counters */
|
||||
++n_channel_cells_queued;
|
||||
++n_channel_cells_in_queues;
|
||||
n_channel_bytes_queued += cell_bytes;
|
||||
n_channel_bytes_in_queues += cell_bytes;
|
||||
channel_assert_counter_consistency();
|
||||
/* Update channel queue size */
|
||||
chan->bytes_in_queue += cell_bytes;
|
||||
/* Try to process the queue? */
|
||||
if (chan->state == CHANNEL_STATE_OPEN) channel_flush_cells(chan);
|
||||
}
|
||||
@ -1775,6 +1872,9 @@ channel_write_cell(channel_t *chan, cell_t *cell)
|
||||
q.type = CELL_QUEUE_FIXED;
|
||||
q.u.fixed.cell = cell;
|
||||
channel_write_cell_queue_entry(chan, &q);
|
||||
|
||||
/* Update the queue size estimate */
|
||||
channel_update_xmit_queue_size(chan);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -1810,6 +1910,9 @@ channel_write_packed_cell(channel_t *chan, packed_cell_t *packed_cell)
|
||||
q.type = CELL_QUEUE_PACKED;
|
||||
q.u.packed.packed_cell = packed_cell;
|
||||
channel_write_cell_queue_entry(chan, &q);
|
||||
|
||||
/* Update the queue size estimate */
|
||||
channel_update_xmit_queue_size(chan);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -1846,6 +1949,9 @@ channel_write_var_cell(channel_t *chan, var_cell_t *var_cell)
|
||||
q.type = CELL_QUEUE_VAR;
|
||||
q.u.var.var_cell = var_cell;
|
||||
channel_write_cell_queue_entry(chan, &q);
|
||||
|
||||
/* Update the queue size estimate */
|
||||
channel_update_xmit_queue_size(chan);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -1941,6 +2047,41 @@ channel_change_state(channel_t *chan, channel_state_t to_state)
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* If we're going to a closed/closing state, we don't need scheduling any
|
||||
* more; in CHANNEL_STATE_MAINT we can't accept writes.
|
||||
*/
|
||||
if (to_state == CHANNEL_STATE_CLOSING ||
|
||||
to_state == CHANNEL_STATE_CLOSED ||
|
||||
to_state == CHANNEL_STATE_ERROR) {
|
||||
scheduler_release_channel(chan);
|
||||
} else if (to_state == CHANNEL_STATE_MAINT) {
|
||||
scheduler_channel_doesnt_want_writes(chan);
|
||||
}
|
||||
|
||||
/*
|
||||
* If we're closing, this channel no longer counts toward the global
|
||||
* estimated queue size; if we're open, it now does.
|
||||
*/
|
||||
if ((to_state == CHANNEL_STATE_CLOSING ||
|
||||
to_state == CHANNEL_STATE_CLOSED ||
|
||||
to_state == CHANNEL_STATE_ERROR) &&
|
||||
(from_state == CHANNEL_STATE_OPEN ||
|
||||
from_state == CHANNEL_STATE_MAINT)) {
|
||||
estimated_total_queue_size -= chan->bytes_in_queue;
|
||||
}
|
||||
|
||||
/*
|
||||
* If we're opening, this channel now does count toward the global
|
||||
* estimated queue size.
|
||||
*/
|
||||
if ((to_state == CHANNEL_STATE_OPEN ||
|
||||
to_state == CHANNEL_STATE_MAINT) &&
|
||||
!(from_state == CHANNEL_STATE_OPEN ||
|
||||
from_state == CHANNEL_STATE_MAINT)) {
|
||||
estimated_total_queue_size += chan->bytes_in_queue;
|
||||
}
|
||||
|
||||
/* Tell circuits if we opened and stuff */
|
||||
if (to_state == CHANNEL_STATE_OPEN) {
|
||||
channel_do_open_actions(chan);
|
||||
@ -2056,12 +2197,13 @@ channel_listener_change_state(channel_listener_t *chan_l,
|
||||
|
||||
#define MAX_CELLS_TO_GET_FROM_CIRCUITS_FOR_UNLIMITED 256
|
||||
|
||||
ssize_t
|
||||
channel_flush_some_cells(channel_t *chan, ssize_t num_cells)
|
||||
MOCK_IMPL(ssize_t,
|
||||
channel_flush_some_cells, (channel_t *chan, ssize_t num_cells))
|
||||
{
|
||||
unsigned int unlimited = 0;
|
||||
ssize_t flushed = 0;
|
||||
int num_cells_from_circs, clamped_num_cells;
|
||||
int q_len_before, q_len_after;
|
||||
|
||||
tor_assert(chan);
|
||||
|
||||
@ -2087,14 +2229,45 @@ channel_flush_some_cells(channel_t *chan, ssize_t num_cells)
|
||||
clamped_num_cells = (int)(num_cells - flushed);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Keep track of the change in queue size; we have to count cells
|
||||
* channel_flush_from_first_active_circuit() writes out directly,
|
||||
* but not double-count ones we might get later in
|
||||
* channel_flush_some_cells_from_outgoing_queue()
|
||||
*/
|
||||
q_len_before = chan_cell_queue_len(&(chan->outgoing_queue));
|
||||
|
||||
/* Try to get more cells from any active circuits */
|
||||
num_cells_from_circs = channel_flush_from_first_active_circuit(
|
||||
chan, clamped_num_cells);
|
||||
|
||||
/* If it claims we got some, process the queue again */
|
||||
q_len_after = chan_cell_queue_len(&(chan->outgoing_queue));
|
||||
|
||||
/*
|
||||
* If it claims we got some, adjust the flushed counter and consider
|
||||
* processing the queue again
|
||||
*/
|
||||
if (num_cells_from_circs > 0) {
|
||||
flushed += channel_flush_some_cells_from_outgoing_queue(chan,
|
||||
(unlimited ? -1 : num_cells - flushed));
|
||||
/*
|
||||
* Adjust flushed by the number of cells counted in
|
||||
* num_cells_from_circs that didn't go to the cell queue.
|
||||
*/
|
||||
|
||||
if (q_len_after > q_len_before) {
|
||||
num_cells_from_circs -= (q_len_after - q_len_before);
|
||||
if (num_cells_from_circs < 0) num_cells_from_circs = 0;
|
||||
}
|
||||
|
||||
flushed += num_cells_from_circs;
|
||||
|
||||
/* Now process the queue if necessary */
|
||||
|
||||
if ((q_len_after > q_len_before) &&
|
||||
(unlimited || (flushed < num_cells))) {
|
||||
flushed += channel_flush_some_cells_from_outgoing_queue(chan,
|
||||
(unlimited ? -1 : num_cells - flushed));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -2117,6 +2290,8 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
unsigned int unlimited = 0;
|
||||
ssize_t flushed = 0;
|
||||
cell_queue_entry_t *q = NULL;
|
||||
size_t cell_size;
|
||||
int free_q = 0, handed_off = 0;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(chan->write_cell);
|
||||
@ -2130,8 +2305,12 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
if (chan->state == CHANNEL_STATE_OPEN) {
|
||||
while ((unlimited || num_cells > flushed) &&
|
||||
NULL != (q = TOR_SIMPLEQ_FIRST(&chan->outgoing_queue))) {
|
||||
free_q = 0;
|
||||
handed_off = 0;
|
||||
|
||||
if (1) {
|
||||
/* Figure out how big it is for statistical purposes */
|
||||
cell_size = channel_get_cell_queue_entry_size(chan, q);
|
||||
/*
|
||||
* Okay, we have a good queue entry, try to give it to the lower
|
||||
* layer.
|
||||
@ -2144,8 +2323,9 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
++flushed;
|
||||
channel_timestamp_xmit(chan);
|
||||
++(chan->n_cells_xmitted);
|
||||
cell_queue_entry_free(q, 1);
|
||||
q = NULL;
|
||||
chan->n_bytes_xmitted += cell_size;
|
||||
free_q = 1;
|
||||
handed_off = 1;
|
||||
}
|
||||
/* Else couldn't write it; leave it on the queue */
|
||||
} else {
|
||||
@ -2156,8 +2336,8 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
"(global ID " U64_FORMAT ").",
|
||||
chan, U64_PRINTF_ARG(chan->global_identifier));
|
||||
/* Throw it away */
|
||||
cell_queue_entry_free(q, 0);
|
||||
q = NULL;
|
||||
free_q = 1;
|
||||
handed_off = 0;
|
||||
}
|
||||
break;
|
||||
case CELL_QUEUE_PACKED:
|
||||
@ -2167,8 +2347,9 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
++flushed;
|
||||
channel_timestamp_xmit(chan);
|
||||
++(chan->n_cells_xmitted);
|
||||
cell_queue_entry_free(q, 1);
|
||||
q = NULL;
|
||||
chan->n_bytes_xmitted += cell_size;
|
||||
free_q = 1;
|
||||
handed_off = 1;
|
||||
}
|
||||
/* Else couldn't write it; leave it on the queue */
|
||||
} else {
|
||||
@ -2179,8 +2360,8 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
"(global ID " U64_FORMAT ").",
|
||||
chan, U64_PRINTF_ARG(chan->global_identifier));
|
||||
/* Throw it away */
|
||||
cell_queue_entry_free(q, 0);
|
||||
q = NULL;
|
||||
free_q = 1;
|
||||
handed_off = 0;
|
||||
}
|
||||
break;
|
||||
case CELL_QUEUE_VAR:
|
||||
@ -2190,8 +2371,9 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
++flushed;
|
||||
channel_timestamp_xmit(chan);
|
||||
++(chan->n_cells_xmitted);
|
||||
cell_queue_entry_free(q, 1);
|
||||
q = NULL;
|
||||
chan->n_bytes_xmitted += cell_size;
|
||||
free_q = 1;
|
||||
handed_off = 1;
|
||||
}
|
||||
/* Else couldn't write it; leave it on the queue */
|
||||
} else {
|
||||
@ -2202,8 +2384,8 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
"(global ID " U64_FORMAT ").",
|
||||
chan, U64_PRINTF_ARG(chan->global_identifier));
|
||||
/* Throw it away */
|
||||
cell_queue_entry_free(q, 0);
|
||||
q = NULL;
|
||||
free_q = 1;
|
||||
handed_off = 0;
|
||||
}
|
||||
break;
|
||||
default:
|
||||
@ -2213,12 +2395,32 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
"(global ID " U64_FORMAT "; ignoring it."
|
||||
" Someone should fix this.",
|
||||
q->type, chan, U64_PRINTF_ARG(chan->global_identifier));
|
||||
cell_queue_entry_free(q, 0);
|
||||
q = NULL;
|
||||
free_q = 1;
|
||||
handed_off = 0;
|
||||
}
|
||||
|
||||
      /* if q got NULLed out, we used it and should remove the queue entry */
      if (!q) TOR_SIMPLEQ_REMOVE_HEAD(&chan->outgoing_queue, next);
      /*
       * if free_q is set, we used it and should remove the queue entry;
       * we have to do the free down here so TOR_SIMPLEQ_REMOVE_HEAD isn't
       * accessing freed memory
       */
      if (free_q) {
        TOR_SIMPLEQ_REMOVE_HEAD(&chan->outgoing_queue, next);
        /*
         * ...and we handed a cell off to the lower layer, so we should
         * update the counters.
         */
        ++n_channel_cells_passed_to_lower_layer;
        --n_channel_cells_in_queues;
        n_channel_bytes_passed_to_lower_layer += cell_size;
        n_channel_bytes_in_queues -= cell_size;
        channel_assert_counter_consistency();
        /* Update the channel's queue size too */
        chan->bytes_in_queue -= cell_size;
        /* Finally, free q */
        cell_queue_entry_free(q, handed_off);
        q = NULL;
      }
      /* No cell removed from list, so we can't go on any further */
      else break;
    }
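The comments above describe an unlink-then-free ordering for the cell queue: the entry must leave the list before it is freed so the queue macros never touch freed memory. A minimal standalone sketch of that pattern, using an ordinary singly linked list rather than Tor's TOR_SIMPLEQ macros (all names here are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct entry { struct entry *next; int payload; };

static struct entry *head = NULL;

static void pop_and_free(void)
{
  struct entry *e = head;
  if (!e) return;
  head = e->next;   /* unlink first, so the list never points at freed memory */
  free(e);          /* ...then free the entry */
}

int main(void)
{
  for (int i = 0; i < 3; i++) {
    struct entry *e = malloc(sizeof(*e));
    if (!e) return 1;
    e->payload = i;
    e->next = head;
    head = e;
  }
  while (head) pop_and_free();
  printf("queue drained\n");
  return 0;
}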
@ -2230,6 +2432,9 @@ channel_flush_some_cells_from_outgoing_queue(channel_t *chan,
|
||||
channel_timestamp_drained(chan);
|
||||
}
|
||||
|
||||
/* Update the estimate queue size */
|
||||
channel_update_xmit_queue_size(chan);
|
||||
|
||||
return flushed;
|
||||
}
|
||||
|
||||
@ -2541,8 +2746,9 @@ channel_queue_cell(channel_t *chan, cell_t *cell)
|
||||
/* Timestamp for receiving */
|
||||
channel_timestamp_recv(chan);
|
||||
|
||||
/* Update the counter */
|
||||
/* Update the counters */
|
||||
++(chan->n_cells_recved);
|
||||
chan->n_bytes_recved += get_cell_network_size(chan->wide_circ_ids);
|
||||
|
||||
/* If we don't need to queue we can just call cell_handler */
|
||||
if (!need_to_queue) {
|
||||
@ -2596,6 +2802,8 @@ channel_queue_var_cell(channel_t *chan, var_cell_t *var_cell)
|
||||
|
||||
/* Update the counter */
|
||||
++(chan->n_cells_recved);
|
||||
chan->n_bytes_recved += get_var_cell_header_size(chan->wide_circ_ids) +
|
||||
var_cell->payload_len;
|
||||
|
||||
/* If we don't need to queue we can just call cell_handler */
|
||||
if (!need_to_queue) {
|
||||
@ -2645,6 +2853,19 @@ packed_cell_is_destroy(channel_t *chan,
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
 * Assert that the global channel stats counters are internally consistent
 */

static void
channel_assert_counter_consistency(void)
{
  tor_assert(n_channel_cells_queued ==
    (n_channel_cells_in_queues + n_channel_cells_passed_to_lower_layer));
  tor_assert(n_channel_bytes_queued ==
    (n_channel_bytes_in_queues + n_channel_bytes_passed_to_lower_layer));
}
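For readers following the new bookkeeping, here is a minimal standalone model of the invariant this assertion enforces: every queued cell is counted once, and flushing only moves it from "in queues" to "passed to the lower layer". The counter and function names below are illustrative, not Tor's:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t cells_queued = 0;        /* incremented when a cell is queued */
static uint64_t cells_in_queues = 0;     /* currently sitting in channel queues */
static uint64_t cells_passed_down = 0;   /* handed to the lower layer */

static void model_queue_cell(void) { ++cells_queued; ++cells_in_queues; }
static void model_flush_cell(void) { --cells_in_queues; ++cells_passed_down; }

static void model_assert_consistency(void)
{
  /* Same shape as channel_assert_counter_consistency() */
  assert(cells_queued == cells_in_queues + cells_passed_down);
}

int main(void)
{
  model_queue_cell();
  model_queue_cell();
  model_flush_cell();
  model_assert_consistency();
  printf("queued=%llu in_queues=%llu passed=%llu\n",
         (unsigned long long)cells_queued,
         (unsigned long long)cells_in_queues,
         (unsigned long long)cells_passed_down);
  return 0;
}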
|
||||
/** DOCDOC */
|
||||
static int
|
||||
is_destroy_cell(channel_t *chan,
|
||||
@ -2726,6 +2947,19 @@ void
|
||||
channel_dumpstats(int severity)
|
||||
{
|
||||
if (all_channels && smartlist_len(all_channels) > 0) {
|
||||
tor_log(severity, LD_GENERAL,
|
||||
"Channels have queued " U64_FORMAT " bytes in " U64_FORMAT " cells, "
|
||||
"and handed " U64_FORMAT " bytes in " U64_FORMAT " cells to the lower"
|
||||
" layer.",
|
||||
U64_PRINTF_ARG(n_channel_bytes_queued),
|
||||
U64_PRINTF_ARG(n_channel_cells_queued),
|
||||
U64_PRINTF_ARG(n_channel_bytes_passed_to_lower_layer),
|
||||
U64_PRINTF_ARG(n_channel_cells_passed_to_lower_layer));
|
||||
tor_log(severity, LD_GENERAL,
|
||||
"There are currently " U64_FORMAT " bytes in " U64_FORMAT " cells "
|
||||
"in channel queues.",
|
||||
U64_PRINTF_ARG(n_channel_bytes_in_queues),
|
||||
U64_PRINTF_ARG(n_channel_cells_in_queues));
|
||||
tor_log(severity, LD_GENERAL,
|
||||
"Dumping statistics about %d channels:",
|
||||
smartlist_len(all_channels));
|
||||
@ -3200,7 +3434,7 @@ channel_listener_describe_transport(channel_listener_t *chan_l)
|
||||
/**
|
||||
* Return the number of entries in <b>queue</b>
|
||||
*/
|
||||
static int
|
||||
STATIC int
|
||||
chan_cell_queue_len(const chan_cell_queue_t *queue)
|
||||
{
|
||||
int r = 0;
|
||||
@ -3216,8 +3450,8 @@ chan_cell_queue_len(const chan_cell_queue_t *queue)
|
||||
* Dump statistics for one channel to the log
|
||||
*/
|
||||
|
||||
void
|
||||
channel_dump_statistics(channel_t *chan, int severity)
|
||||
MOCK_IMPL(void,
|
||||
channel_dump_statistics, (channel_t *chan, int severity))
|
||||
{
|
||||
double avg, interval, age;
|
||||
time_t now = time(NULL);
|
||||
@ -3369,12 +3603,22 @@ channel_dump_statistics(channel_t *chan, int severity)
|
||||
/* Describe counters and rates */
|
||||
tor_log(severity, LD_GENERAL,
|
||||
" * Channel " U64_FORMAT " has received "
|
||||
U64_FORMAT " cells and transmitted " U64_FORMAT,
|
||||
U64_FORMAT " bytes in " U64_FORMAT " cells and transmitted "
|
||||
U64_FORMAT " bytes in " U64_FORMAT " cells",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
U64_PRINTF_ARG(chan->n_bytes_recved),
|
||||
U64_PRINTF_ARG(chan->n_cells_recved),
|
||||
U64_PRINTF_ARG(chan->n_bytes_xmitted),
|
||||
U64_PRINTF_ARG(chan->n_cells_xmitted));
|
||||
if (now > chan->timestamp_created &&
|
||||
chan->timestamp_created > 0) {
|
||||
if (chan->n_bytes_recved > 0) {
|
||||
avg = (double)(chan->n_bytes_recved) / age;
|
||||
tor_log(severity, LD_GENERAL,
|
||||
" * Channel " U64_FORMAT " has averaged %f "
|
||||
"bytes received per second",
|
||||
U64_PRINTF_ARG(chan->global_identifier), avg);
|
||||
}
|
||||
if (chan->n_cells_recved > 0) {
|
||||
avg = (double)(chan->n_cells_recved) / age;
|
||||
if (avg >= 1.0) {
|
||||
@ -3390,6 +3634,13 @@ channel_dump_statistics(channel_t *chan, int severity)
|
||||
U64_PRINTF_ARG(chan->global_identifier), interval);
|
||||
}
|
||||
}
|
||||
if (chan->n_bytes_xmitted > 0) {
|
||||
avg = (double)(chan->n_bytes_xmitted) / age;
|
||||
tor_log(severity, LD_GENERAL,
|
||||
" * Channel " U64_FORMAT " has averaged %f "
|
||||
"bytes transmitted per second",
|
||||
U64_PRINTF_ARG(chan->global_identifier), avg);
|
||||
}
|
||||
if (chan->n_cells_xmitted > 0) {
|
||||
avg = (double)(chan->n_cells_xmitted) / age;
|
||||
if (avg >= 1.0) {
|
||||
@ -3807,6 +4058,50 @@ channel_mark_outgoing(channel_t *chan)
|
||||
chan->is_incoming = 0;
|
||||
}
|
||||
|
||||
/************************
 * Flow control queries *
 ***********************/

/*
 * Get the latest estimate for the total queue size of all open channels
 */

uint64_t
channel_get_global_queue_estimate(void)
{
  return estimated_total_queue_size;
}

/*
 * Estimate the number of writeable cells
 *
 * Ask the lower layer for an estimate of how many cells it can accept, and
 * then subtract the length of our outgoing_queue, if any, to produce an
 * estimate of the number of cells this channel can accept for writes.
 */

int
channel_num_cells_writeable(channel_t *chan)
{
  int result;

  tor_assert(chan);
  tor_assert(chan->num_cells_writeable);

  if (chan->state == CHANNEL_STATE_OPEN) {
    /* Query lower layer */
    result = chan->num_cells_writeable(chan);
    /* Subtract cell queue length, if any */
    result -= chan_cell_queue_len(&chan->outgoing_queue);
    if (result < 0) result = 0;
  } else {
    /* No cells are writeable in any other state */
    result = 0;
  }

  return result;
}
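A small standalone sketch of the estimate computed above: the lower layer's stated capacity minus whatever is already queued at the channel layer, clamped at zero. The numbers are made up for illustration:

#include <stdio.h>

static int estimate_writeable(int lower_layer_estimate, int queued_cells)
{
  int result = lower_layer_estimate - queued_cells;
  if (result < 0) result = 0;   /* never report a negative capacity */
  return result;
}

int main(void)
{
  /* e.g. the lower layer says it can take 32 cells, 5 already queued above it */
  printf("%d\n", estimate_writeable(32, 5));   /* 27 */
  printf("%d\n", estimate_writeable(4, 9));    /* 0, clamped */
  return 0;
}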
|
||||
/*********************
|
||||
* Timestamp updates *
|
||||
********************/
|
||||
@ -4209,3 +4504,87 @@ channel_set_circid_type(channel_t *chan,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Update the estimated number of bytes queued to transmit for this channel,
|
||||
* and notify the scheduler. The estimate includes both the channel queue and
|
||||
* the queue size reported by the lower layer, and an overhead estimate
|
||||
* optionally provided by the lower layer.
|
||||
*/
|
||||
|
||||
void
|
||||
channel_update_xmit_queue_size(channel_t *chan)
|
||||
{
|
||||
uint64_t queued, adj;
|
||||
double overhead;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(chan->num_bytes_queued);
|
||||
|
||||
/*
|
||||
* First, get the number of bytes we have queued without factoring in
|
||||
* lower-layer overhead.
|
||||
*/
|
||||
queued = chan->num_bytes_queued(chan) + chan->bytes_in_queue;
|
||||
/* Next, adjust by the overhead factor, if any is available */
|
||||
if (chan->get_overhead_estimate) {
|
||||
overhead = chan->get_overhead_estimate(chan);
|
||||
if (overhead >= 1.0f) {
|
||||
queued *= overhead;
|
||||
} else {
|
||||
/* Ignore silly overhead factors */
|
||||
log_notice(LD_CHANNEL, "Ignoring silly overhead factor %f", overhead);
|
||||
}
|
||||
}
|
||||
|
||||
/* Now, compare to the previous estimate */
|
||||
if (queued > chan->bytes_queued_for_xmit) {
|
||||
adj = queued - chan->bytes_queued_for_xmit;
|
||||
log_debug(LD_CHANNEL,
|
||||
"Increasing queue size for channel " U64_FORMAT " by " U64_FORMAT
|
||||
" from " U64_FORMAT " to " U64_FORMAT,
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
U64_PRINTF_ARG(adj),
|
||||
U64_PRINTF_ARG(chan->bytes_queued_for_xmit),
|
||||
U64_PRINTF_ARG(queued));
|
||||
/* Update the channel's estimate */
|
||||
chan->bytes_queued_for_xmit = queued;
|
||||
|
||||
/* Update the global queue size estimate if appropriate */
|
||||
if (chan->state == CHANNEL_STATE_OPEN ||
|
||||
chan->state == CHANNEL_STATE_MAINT) {
|
||||
estimated_total_queue_size += adj;
|
||||
log_debug(LD_CHANNEL,
|
||||
"Increasing global queue size by " U64_FORMAT " for channel "
|
||||
U64_FORMAT ", new size is " U64_FORMAT,
|
||||
U64_PRINTF_ARG(adj), U64_PRINTF_ARG(chan->global_identifier),
|
||||
U64_PRINTF_ARG(estimated_total_queue_size));
|
||||
/* Tell the scheduler we're increasing the queue size */
|
||||
scheduler_adjust_queue_size(chan, 1, adj);
|
||||
}
|
||||
} else if (queued < chan->bytes_queued_for_xmit) {
|
||||
adj = chan->bytes_queued_for_xmit - queued;
|
||||
log_debug(LD_CHANNEL,
|
||||
"Decreasing queue size for channel " U64_FORMAT " by " U64_FORMAT
|
||||
" from " U64_FORMAT " to " U64_FORMAT,
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
U64_PRINTF_ARG(adj),
|
||||
U64_PRINTF_ARG(chan->bytes_queued_for_xmit),
|
||||
U64_PRINTF_ARG(queued));
|
||||
/* Update the channel's estimate */
|
||||
chan->bytes_queued_for_xmit = queued;
|
||||
|
||||
/* Update the global queue size estimate if appropriate */
|
||||
if (chan->state == CHANNEL_STATE_OPEN ||
|
||||
chan->state == CHANNEL_STATE_MAINT) {
|
||||
estimated_total_queue_size -= adj;
|
||||
log_debug(LD_CHANNEL,
|
||||
"Decreasing global queue size by " U64_FORMAT " for channel "
|
||||
U64_FORMAT ", new size is " U64_FORMAT,
|
||||
U64_PRINTF_ARG(adj), U64_PRINTF_ARG(chan->global_identifier),
|
||||
U64_PRINTF_ARG(estimated_total_queue_size));
|
||||
/* Tell the scheduler we're decreasing the queue size */
|
||||
scheduler_adjust_queue_size(chan, -1, adj);
|
||||
}
|
||||
}
|
||||
}
|
||||
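A standalone sketch of the arithmetic in channel_update_xmit_queue_size(): lower-layer bytes plus the channel's own queue, scaled by the overhead factor when a sensible one is reported, then compared against the previous estimate to decide how the global total moves. All values below are invented for illustration:

#include <stdint.h>
#include <stdio.h>

static uint64_t estimate_xmit_queue(uint64_t lower_layer_bytes,
                                    uint64_t channel_queue_bytes,
                                    double overhead)
{
  uint64_t queued = lower_layer_bytes + channel_queue_bytes;
  if (overhead >= 1.0)                   /* ignore silly overhead factors */
    queued = (uint64_t)(queued * overhead);
  return queued;
}

int main(void)
{
  /* e.g. 10 kB buffered in the transport, 2 kB of queued cells, and an
   * observed overhead ratio of 1.05 */
  uint64_t prev = 11000;
  uint64_t now = estimate_xmit_queue(10000, 2000, 1.05);
  printf("estimate: %llu bytes\n", (unsigned long long)now);
  if (now > prev)
    printf("global total grows by %llu\n", (unsigned long long)(now - prev));
  else
    printf("global total shrinks by %llu\n", (unsigned long long)(prev - now));
  return 0;
}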
|
||||
|
@ -57,6 +57,32 @@ struct channel_s {
|
||||
CHANNEL_CLOSE_FOR_ERROR
|
||||
} reason_for_closing;
|
||||
|
||||
  /** State variable for use by the scheduler */
  enum {
    /*
     * The channel is not open, or it has a full output buffer but no queued
     * cells.
     */
    SCHED_CHAN_IDLE = 0,
    /*
     * The channel has space on its output buffer to write, but no queued
     * cells.
     */
    SCHED_CHAN_WAITING_FOR_CELLS,
    /*
     * The scheduler has queued cells but no output buffer space to write.
     */
    SCHED_CHAN_WAITING_TO_WRITE,
    /*
     * The scheduler has both queued cells and output buffer space, and is
     * eligible for the scheduler loop.
     */
    SCHED_CHAN_PENDING
  } scheduler_state;
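A standalone sketch of how the four states above encode the two underlying conditions, queued cells and output buffer space. The classify helper is illustrative only and is not part of the scheduler:

#include <stdio.h>

enum sched_state {
  SCHED_CHAN_IDLE = 0,
  SCHED_CHAN_WAITING_FOR_CELLS,
  SCHED_CHAN_WAITING_TO_WRITE,
  SCHED_CHAN_PENDING
};

static enum sched_state classify(int has_queued_cells, int has_output_space)
{
  if (has_queued_cells && has_output_space) return SCHED_CHAN_PENDING;
  if (has_queued_cells)                     return SCHED_CHAN_WAITING_TO_WRITE;
  if (has_output_space)                     return SCHED_CHAN_WAITING_FOR_CELLS;
  return SCHED_CHAN_IDLE;
}

int main(void)
{
  /* prints 0 1 2 3: idle, waiting for cells, waiting to write, pending */
  printf("%d %d %d %d\n",
         classify(0, 0), classify(0, 1), classify(1, 0), classify(1, 1));
  return 0;
}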
|
||||
/** Heap index for use by the scheduler */
|
||||
int sched_heap_idx;
|
||||
|
||||
/** Timestamps for both cell channels and listeners */
|
||||
time_t timestamp_created; /* Channel created */
|
||||
time_t timestamp_active; /* Any activity */
|
||||
@ -79,6 +105,11 @@ struct channel_s {
|
||||
/* Methods implemented by the lower layer */
|
||||
|
||||
/**
|
||||
* Ask the lower layer for an estimate of the average overhead for
|
||||
* transmissions on this channel.
|
||||
*/
|
||||
double (*get_overhead_estimate)(channel_t *);
|
||||
/*
|
||||
* Ask the underlying transport what the remote endpoint address is, in
|
||||
* a tor_addr_t. This is optional and subclasses may leave this NULL.
|
||||
* If they implement it, they should write the address out to the
|
||||
@ -110,7 +141,11 @@ struct channel_s {
|
||||
int (*matches_extend_info)(channel_t *, extend_info_t *);
|
||||
/** Check if this channel matches a target address when extending */
|
||||
int (*matches_target)(channel_t *, const tor_addr_t *);
|
||||
/** Write a cell to an open channel */
|
||||
/* Ask the lower layer how many bytes it has queued but not yet sent */
|
||||
size_t (*num_bytes_queued)(channel_t *);
|
||||
/* Ask the lower layer how many cells can be written */
|
||||
int (*num_cells_writeable)(channel_t *);
|
||||
/* Write a cell to an open channel */
|
||||
int (*write_cell)(channel_t *, cell_t *);
|
||||
/** Write a packed cell to an open channel */
|
||||
int (*write_packed_cell)(channel_t *, packed_cell_t *);
|
||||
@ -198,8 +233,16 @@ struct channel_s {
|
||||
uint64_t dirreq_id;
|
||||
|
||||
/** Channel counters for cell channels */
|
||||
uint64_t n_cells_recved;
|
||||
uint64_t n_cells_xmitted;
|
||||
uint64_t n_cells_recved, n_bytes_recved;
|
||||
uint64_t n_cells_xmitted, n_bytes_xmitted;
|
||||
|
||||
/** Our current contribution to the scheduler's total xmit queue */
|
||||
uint64_t bytes_queued_for_xmit;
|
||||
|
||||
/** Number of bytes in this channel's cell queue; does not include
|
||||
* lower-layer queueing.
|
||||
*/
|
||||
uint64_t bytes_in_queue;
|
||||
};
|
||||
|
||||
struct channel_listener_s {
|
||||
@ -311,6 +354,34 @@ void channel_set_cmux_policy_everywhere(circuitmux_policy_t *pol);
|
||||
|
||||
#ifdef TOR_CHANNEL_INTERNAL_
|
||||
|
||||
#ifdef CHANNEL_PRIVATE_
|
||||
/* Cell queue structure (here rather than channel.c for test suite use) */
|
||||
|
||||
typedef struct cell_queue_entry_s cell_queue_entry_t;
|
||||
struct cell_queue_entry_s {
|
||||
TOR_SIMPLEQ_ENTRY(cell_queue_entry_s) next;
|
||||
enum {
|
||||
CELL_QUEUE_FIXED,
|
||||
CELL_QUEUE_VAR,
|
||||
CELL_QUEUE_PACKED
|
||||
} type;
|
||||
union {
|
||||
struct {
|
||||
cell_t *cell;
|
||||
} fixed;
|
||||
struct {
|
||||
var_cell_t *var_cell;
|
||||
} var;
|
||||
struct {
|
||||
packed_cell_t *packed_cell;
|
||||
} packed;
|
||||
} u;
|
||||
};
|
||||
|
||||
/* Cell queue functions for benefit of test suite */
|
||||
STATIC int chan_cell_queue_len(const chan_cell_queue_t *queue);
|
||||
#endif
|
||||
|
||||
/* Channel operations for subclasses and internal use only */
|
||||
|
||||
/* Initialize a newly allocated channel - do this first in subclass
|
||||
@ -384,7 +455,8 @@ void channel_queue_var_cell(channel_t *chan, var_cell_t *var_cell);
|
||||
void channel_flush_cells(channel_t *chan);
|
||||
|
||||
/* Request from lower layer for more cells if available */
|
||||
ssize_t channel_flush_some_cells(channel_t *chan, ssize_t num_cells);
|
||||
MOCK_DECL(ssize_t, channel_flush_some_cells,
|
||||
(channel_t *chan, ssize_t num_cells));
|
||||
|
||||
/* Query if data available on this channel */
|
||||
int channel_more_to_flush(channel_t *chan);
|
||||
@ -435,7 +507,7 @@ channel_t * channel_next_with_digest(channel_t *chan);
|
||||
*/
|
||||
|
||||
const char * channel_describe_transport(channel_t *chan);
|
||||
void channel_dump_statistics(channel_t *chan, int severity);
|
||||
MOCK_DECL(void, channel_dump_statistics, (channel_t *chan, int severity));
|
||||
void channel_dump_transport_statistics(channel_t *chan, int severity);
|
||||
const char * channel_get_actual_remote_descr(channel_t *chan);
|
||||
const char * channel_get_actual_remote_address(channel_t *chan);
|
||||
@ -458,6 +530,7 @@ unsigned int channel_num_circuits(channel_t *chan);
|
||||
void channel_set_circid_type(channel_t *chan, crypto_pk_t *identity_rcvd,
|
||||
int consider_identity);
|
||||
void channel_timestamp_client(channel_t *chan);
|
||||
void channel_update_xmit_queue_size(channel_t *chan);
|
||||
|
||||
const char * channel_listener_describe_transport(channel_listener_t *chan_l);
|
||||
void channel_listener_dump_statistics(channel_listener_t *chan_l,
|
||||
@ -465,6 +538,10 @@ void channel_listener_dump_statistics(channel_listener_t *chan_l,
|
||||
void channel_listener_dump_transport_statistics(channel_listener_t *chan_l,
|
||||
int severity);
|
||||
|
||||
/* Flow control queries */
|
||||
uint64_t channel_get_global_queue_estimate(void);
|
||||
int channel_num_cells_writeable(channel_t *chan);
|
||||
|
||||
/* Timestamp queries */
|
||||
time_t channel_when_created(channel_t *chan);
|
||||
time_t channel_when_last_active(channel_t *chan);
|
||||
|
@ -25,6 +25,7 @@
|
||||
#include "relay.h"
|
||||
#include "router.h"
|
||||
#include "routerlist.h"
|
||||
#include "scheduler.h"
|
||||
|
||||
/** How many CELL_PADDING cells have we received, ever? */
|
||||
uint64_t stats_n_padding_cells_processed = 0;
|
||||
@ -54,6 +55,7 @@ static void channel_tls_common_init(channel_tls_t *tlschan);
|
||||
static void channel_tls_close_method(channel_t *chan);
|
||||
static const char * channel_tls_describe_transport_method(channel_t *chan);
|
||||
static void channel_tls_free_method(channel_t *chan);
|
||||
static double channel_tls_get_overhead_estimate_method(channel_t *chan);
|
||||
static int
|
||||
channel_tls_get_remote_addr_method(channel_t *chan, tor_addr_t *addr_out);
|
||||
static int
|
||||
@ -67,6 +69,8 @@ channel_tls_matches_extend_info_method(channel_t *chan,
|
||||
extend_info_t *extend_info);
|
||||
static int channel_tls_matches_target_method(channel_t *chan,
|
||||
const tor_addr_t *target);
|
||||
static int channel_tls_num_cells_writeable_method(channel_t *chan);
|
||||
static size_t channel_tls_num_bytes_queued_method(channel_t *chan);
|
||||
static int channel_tls_write_cell_method(channel_t *chan,
|
||||
cell_t *cell);
|
||||
static int channel_tls_write_packed_cell_method(channel_t *chan,
|
||||
@ -116,6 +120,7 @@ channel_tls_common_init(channel_tls_t *tlschan)
|
||||
chan->close = channel_tls_close_method;
|
||||
chan->describe_transport = channel_tls_describe_transport_method;
|
||||
chan->free = channel_tls_free_method;
|
||||
chan->get_overhead_estimate = channel_tls_get_overhead_estimate_method;
|
||||
chan->get_remote_addr = channel_tls_get_remote_addr_method;
|
||||
chan->get_remote_descr = channel_tls_get_remote_descr_method;
|
||||
chan->get_transport_name = channel_tls_get_transport_name_method;
|
||||
@ -123,6 +128,8 @@ channel_tls_common_init(channel_tls_t *tlschan)
|
||||
chan->is_canonical = channel_tls_is_canonical_method;
|
||||
chan->matches_extend_info = channel_tls_matches_extend_info_method;
|
||||
chan->matches_target = channel_tls_matches_target_method;
|
||||
chan->num_bytes_queued = channel_tls_num_bytes_queued_method;
|
||||
chan->num_cells_writeable = channel_tls_num_cells_writeable_method;
|
||||
chan->write_cell = channel_tls_write_cell_method;
|
||||
chan->write_packed_cell = channel_tls_write_packed_cell_method;
|
||||
chan->write_var_cell = channel_tls_write_var_cell_method;
|
||||
@ -434,6 +441,40 @@ channel_tls_free_method(channel_t *chan)
|
||||
}
|
||||
}
|
||||
|
||||
/**
 * Get an estimate of the average TLS overhead for the upper layer
 */

static double
channel_tls_get_overhead_estimate_method(channel_t *chan)
{
  double overhead = 1.0f;
  channel_tls_t *tlschan = BASE_CHAN_TO_TLS(chan);

  tor_assert(tlschan);
  tor_assert(tlschan->conn);

  /* Just return 1.0f if we don't have sensible data */
  if (tlschan->conn->bytes_xmitted > 0 &&
      tlschan->conn->bytes_xmitted_by_tls >=
        tlschan->conn->bytes_xmitted) {
    overhead = ((double)(tlschan->conn->bytes_xmitted_by_tls)) /
               ((double)(tlschan->conn->bytes_xmitted));

    /*
     * Never estimate more than 2.0; otherwise we get silly large estimates
     * at the very start of a new TLS connection.
     */
    if (overhead > 2.0f) overhead = 2.0f;
  }

  log_debug(LD_CHANNEL,
            "Estimated overhead ratio for TLS chan " U64_FORMAT " is %f",
            U64_PRINTF_ARG(chan->global_identifier), overhead);

  return overhead;
}
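A standalone sketch of the ratio computed above, with made-up byte counts, showing the clamp to 2.0 that guards against early, noisy estimates:

#include <stdio.h>

static double overhead_ratio(double bytes_by_tls, double bytes_requested)
{
  double overhead = 1.0;
  if (bytes_requested > 0 && bytes_by_tls >= bytes_requested)
    overhead = bytes_by_tls / bytes_requested;
  if (overhead > 2.0) overhead = 2.0;   /* clamp early, noisy estimates */
  return overhead;
}

int main(void)
{
  printf("%f\n", overhead_ratio(105000, 100000));  /* about 1.05 */
  printf("%f\n", overhead_ratio(900, 100));        /* clamped to 2.0 */
  return 0;
}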
|
||||
/**
|
||||
* Get the remote address of a channel_tls_t
|
||||
*
|
||||
@ -672,6 +713,53 @@ channel_tls_matches_target_method(channel_t *chan,
|
||||
return tor_addr_eq(&(tlschan->conn->real_addr), target);
|
||||
}
|
||||
|
||||
/**
 * Tell the upper layer how many bytes we have queued and not yet
 * sent.
 */

static size_t
channel_tls_num_bytes_queued_method(channel_t *chan)
{
  channel_tls_t *tlschan = BASE_CHAN_TO_TLS(chan);

  tor_assert(tlschan);
  tor_assert(tlschan->conn);

  return connection_get_outbuf_len(TO_CONN(tlschan->conn));
}

/**
 * Tell the upper layer how many cells we can accept to write
 *
 * This implements the num_cells_writeable method for channel_tls_t; it
 * returns an estimate of the number of cells we can accept with
 * channel_tls_write_*_cell().
 */

static int
channel_tls_num_cells_writeable_method(channel_t *chan)
{
  size_t outbuf_len;
  ssize_t n;
  channel_tls_t *tlschan = BASE_CHAN_TO_TLS(chan);
  size_t cell_network_size;

  tor_assert(tlschan);
  tor_assert(tlschan->conn);

  cell_network_size = get_cell_network_size(tlschan->conn->wide_circ_ids);
  outbuf_len = connection_get_outbuf_len(TO_CONN(tlschan->conn));
  /* Get the number of cells */
  n = CEIL_DIV(OR_CONN_HIGHWATER - outbuf_len, cell_network_size);
  if (n < 0) n = 0;
#if SIZEOF_SIZE_T > SIZEOF_INT
  if (n > INT_MAX) n = INT_MAX;
#endif

  return (int)n;
}
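A standalone sketch of the cell-count arithmetic above. The 32 kB high-water mark and the 514-byte cell size are assumptions for illustration only, not values taken from this diff:

#include <stdio.h>

#define MY_CEIL_DIV(a, b) (((a) + (b) - 1) / (b))

int main(void)
{
  long highwater = 32 * 1024;   /* assumed outbuf high-water mark */
  long outbuf_len = 10000;      /* bytes already buffered */
  long cell_size = 514;         /* assumed on-wire cell size */
  long n = MY_CEIL_DIV(highwater - outbuf_len, cell_size);
  if (n < 0) n = 0;             /* buffer already past the mark */
  printf("about %ld more cells fit before the high-water mark\n", n);
  return 0;
}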
|
||||
/**
|
||||
* Write a cell to a channel_tls_t
|
||||
*
|
||||
@ -867,6 +955,10 @@ channel_tls_handle_state_change_on_orconn(channel_tls_t *chan,
|
||||
* CHANNEL_STATE_MAINT on this.
|
||||
*/
|
||||
channel_change_state(base_chan, CHANNEL_STATE_OPEN);
|
||||
/* We might have just become writeable; check and tell the scheduler */
|
||||
if (connection_or_num_cells_writeable(conn) > 0) {
|
||||
scheduler_channel_wants_writes(base_chan);
|
||||
}
|
||||
} else {
|
||||
/*
|
||||
* Not open, so from CHANNEL_STATE_OPEN we go to CHANNEL_STATE_MAINT,
|
||||
@ -878,58 +970,6 @@ channel_tls_handle_state_change_on_orconn(channel_tls_t *chan,
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Flush cells from a channel_tls_t
|
||||
*
|
||||
* Try to flush up to about num_cells cells, and return how many we flushed.
|
||||
*/
|
||||
|
||||
ssize_t
|
||||
channel_tls_flush_some_cells(channel_tls_t *chan, ssize_t num_cells)
|
||||
{
|
||||
ssize_t flushed = 0;
|
||||
|
||||
tor_assert(chan);
|
||||
|
||||
if (flushed >= num_cells) goto done;
|
||||
|
||||
/*
|
||||
* If channel_tls_t ever buffers anything below the channel_t layer, flush
|
||||
* that first here.
|
||||
*/
|
||||
|
||||
flushed += channel_flush_some_cells(TLS_CHAN_TO_BASE(chan),
|
||||
num_cells - flushed);
|
||||
|
||||
/*
|
||||
* If channel_tls_t ever buffers anything below the channel_t layer, check
|
||||
* how much we actually got and push it on down here.
|
||||
*/
|
||||
|
||||
done:
|
||||
return flushed;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a channel_tls_t has anything to flush
|
||||
*
|
||||
* Return true if there is any more to flush on this channel (cells in queue
|
||||
* or active circuits).
|
||||
*/
|
||||
|
||||
int
|
||||
channel_tls_more_to_flush(channel_tls_t *chan)
|
||||
{
|
||||
tor_assert(chan);
|
||||
|
||||
/*
|
||||
* If channel_tls_t ever buffers anything below channel_t, the
|
||||
* check for that should go here first.
|
||||
*/
|
||||
|
||||
return channel_more_to_flush(TLS_CHAN_TO_BASE(chan));
|
||||
}
|
||||
|
||||
#ifdef KEEP_TIMING_STATS
|
||||
|
||||
/**
|
||||
|
@ -40,8 +40,6 @@ channel_t * channel_tls_to_base(channel_tls_t *tlschan);
|
||||
channel_tls_t * channel_tls_from_base(channel_t *chan);
|
||||
|
||||
/* Things for connection_or.c to call back into */
|
||||
ssize_t channel_tls_flush_some_cells(channel_tls_t *chan, ssize_t num_cells);
|
||||
int channel_tls_more_to_flush(channel_tls_t *chan);
|
||||
void channel_tls_handle_cell(cell_t *cell, or_connection_t *conn);
|
||||
void channel_tls_handle_state_change_on_orconn(channel_tls_t *chan,
|
||||
or_connection_t *conn,
|
||||
|
@ -14,6 +14,7 @@
|
||||
#include "or.h"
|
||||
#include "channel.h"
|
||||
#include "circpathbias.h"
|
||||
#define CIRCUITBUILD_PRIVATE
|
||||
#include "circuitbuild.h"
|
||||
#include "circuitlist.h"
|
||||
#include "circuitstats.h"
|
||||
@ -943,9 +944,9 @@ circuit_send_next_onion_skin(origin_circuit_t *circ)
|
||||
circuit_rep_hist_note_result(circ);
|
||||
circuit_has_opened(circ); /* do other actions as necessary */
|
||||
|
||||
if (!can_complete_circuit && !circ->build_state->onehop_tunnel) {
|
||||
if (!have_completed_a_circuit() && !circ->build_state->onehop_tunnel) {
|
||||
const or_options_t *options = get_options();
|
||||
can_complete_circuit=1;
|
||||
note_that_we_completed_a_circuit();
|
||||
/* FFFF Log a count of known routers here */
|
||||
log_notice(LD_GENERAL,
|
||||
"Tor has successfully opened a circuit. "
|
||||
@ -1033,7 +1034,8 @@ circuit_note_clock_jumped(int seconds_elapsed)
|
||||
seconds_elapsed >=0 ? "forward" : "backward");
|
||||
control_event_general_status(LOG_WARN, "CLOCK_JUMPED TIME=%d",
|
||||
seconds_elapsed);
|
||||
can_complete_circuit=0; /* so it'll log when it works again */
|
||||
/* so we log when it works again */
|
||||
note_that_we_maybe_cant_complete_circuits();
|
||||
control_event_client_status(severity, "CIRCUIT_NOT_ESTABLISHED REASON=%s",
|
||||
"CLOCK_JUMPED");
|
||||
circuit_mark_all_unused_circs();
|
||||
@ -1548,7 +1550,7 @@ choose_good_exit_server_general(int need_uptime, int need_capacity)
|
||||
* -1 means "Don't use this router at all."
|
||||
*/
|
||||
the_nodes = nodelist_get_list();
|
||||
n_supported = tor_calloc(sizeof(int), smartlist_len(the_nodes));
|
||||
n_supported = tor_calloc(smartlist_len(the_nodes), sizeof(int));
|
||||
SMARTLIST_FOREACH_BEGIN(the_nodes, const node_t *, node) {
|
||||
const int i = node_sl_idx;
|
||||
if (router_digest_is_me(node->identity)) {
|
||||
|
@ -302,8 +302,8 @@ channel_note_destroy_pending(channel_t *chan, circid_t id)
|
||||
|
||||
/** Called to indicate that a DESTROY is no longer pending on <b>chan</b> with
|
||||
* circuit ID <b>id</b> -- typically, because it has been sent. */
|
||||
void
|
||||
channel_note_destroy_not_pending(channel_t *chan, circid_t id)
|
||||
MOCK_IMPL(void, channel_note_destroy_not_pending,
|
||||
(channel_t *chan, circid_t id))
|
||||
{
|
||||
circuit_t *circ = circuit_get_by_circid_channel_even_if_marked(id,chan);
|
||||
if (circ) {
|
||||
@ -1719,30 +1719,36 @@ circuit_mark_for_close_, (circuit_t *circ, int reason, int line,
|
||||
tor_assert(circ->state == CIRCUIT_STATE_OPEN);
|
||||
tor_assert(ocirc->build_state->chosen_exit);
|
||||
tor_assert(ocirc->rend_data);
|
||||
/* treat this like getting a nack from it */
|
||||
log_info(LD_REND, "Failed intro circ %s to %s (awaiting ack). %s",
|
||||
safe_str_client(ocirc->rend_data->onion_address),
|
||||
safe_str_client(build_state_get_exit_nickname(ocirc->build_state)),
|
||||
timed_out ? "Recording timeout." : "Removing from descriptor.");
|
||||
rend_client_report_intro_point_failure(ocirc->build_state->chosen_exit,
|
||||
ocirc->rend_data,
|
||||
timed_out ?
|
||||
INTRO_POINT_FAILURE_TIMEOUT :
|
||||
INTRO_POINT_FAILURE_GENERIC);
|
||||
if (orig_reason != END_CIRC_REASON_IP_NOW_REDUNDANT) {
|
||||
/* treat this like getting a nack from it */
|
||||
log_info(LD_REND, "Failed intro circ %s to %s (awaiting ack). %s",
|
||||
safe_str_client(ocirc->rend_data->onion_address),
|
||||
safe_str_client(build_state_get_exit_nickname(ocirc->build_state)),
|
||||
timed_out ? "Recording timeout." : "Removing from descriptor.");
|
||||
rend_client_report_intro_point_failure(ocirc->build_state->chosen_exit,
|
||||
ocirc->rend_data,
|
||||
timed_out ?
|
||||
INTRO_POINT_FAILURE_TIMEOUT :
|
||||
INTRO_POINT_FAILURE_GENERIC);
|
||||
}
|
||||
} else if (circ->purpose == CIRCUIT_PURPOSE_C_INTRODUCING &&
|
||||
reason != END_CIRC_REASON_TIMEOUT) {
|
||||
origin_circuit_t *ocirc = TO_ORIGIN_CIRCUIT(circ);
|
||||
if (ocirc->build_state->chosen_exit && ocirc->rend_data) {
|
||||
log_info(LD_REND, "Failed intro circ %s to %s "
|
||||
"(building circuit to intro point). "
|
||||
"Marking intro point as possibly unreachable.",
|
||||
safe_str_client(ocirc->rend_data->onion_address),
|
||||
safe_str_client(build_state_get_exit_nickname(ocirc->build_state)));
|
||||
rend_client_report_intro_point_failure(ocirc->build_state->chosen_exit,
|
||||
ocirc->rend_data,
|
||||
INTRO_POINT_FAILURE_UNREACHABLE);
|
||||
if (orig_reason != END_CIRC_REASON_IP_NOW_REDUNDANT) {
|
||||
log_info(LD_REND, "Failed intro circ %s to %s "
|
||||
"(building circuit to intro point). "
|
||||
"Marking intro point as possibly unreachable.",
|
||||
safe_str_client(ocirc->rend_data->onion_address),
|
||||
safe_str_client(build_state_get_exit_nickname(
|
||||
ocirc->build_state)));
|
||||
rend_client_report_intro_point_failure(ocirc->build_state->chosen_exit,
|
||||
ocirc->rend_data,
|
||||
INTRO_POINT_FAILURE_UNREACHABLE);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if (circ->n_chan) {
|
||||
circuit_clear_cell_queue(circ, circ->n_chan);
|
||||
/* Only send destroy if the channel isn't closing anyway */
|
||||
|
@ -72,7 +72,8 @@ void circuit_free_all(void);
|
||||
void circuits_handle_oom(size_t current_allocation);
|
||||
|
||||
void channel_note_destroy_pending(channel_t *chan, circid_t id);
|
||||
void channel_note_destroy_not_pending(channel_t *chan, circid_t id);
|
||||
MOCK_DECL(void, channel_note_destroy_not_pending,
|
||||
(channel_t *chan, circid_t id));
|
||||
|
||||
#ifdef CIRCUITLIST_PRIVATE
|
||||
STATIC void circuit_free(circuit_t *circ);
|
||||
|
@ -621,8 +621,8 @@ circuitmux_clear_policy(circuitmux_t *cmux)
|
||||
* Return the policy currently installed on a circuitmux_t
|
||||
*/
|
||||
|
||||
const circuitmux_policy_t *
|
||||
circuitmux_get_policy(circuitmux_t *cmux)
|
||||
MOCK_IMPL(const circuitmux_policy_t *,
|
||||
circuitmux_get_policy, (circuitmux_t *cmux))
|
||||
{
|
||||
tor_assert(cmux);
|
||||
|
||||
@ -896,8 +896,8 @@ circuitmux_num_cells_for_circuit(circuitmux_t *cmux, circuit_t *circ)
|
||||
* Query total number of available cells on a circuitmux
|
||||
*/
|
||||
|
||||
unsigned int
|
||||
circuitmux_num_cells(circuitmux_t *cmux)
|
||||
MOCK_IMPL(unsigned int,
|
||||
circuitmux_num_cells, (circuitmux_t *cmux))
|
||||
{
|
||||
tor_assert(cmux);
|
||||
|
||||
@ -1951,3 +1951,51 @@ circuitmux_count_queued_destroy_cells(const channel_t *chan,
|
||||
return n_destroy_cells;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compare cmuxes to see which is more preferred; return < 0 if
|
||||
* cmux_1 has higher priority (i.e., cmux_1 < cmux_2 in the scheduler's
|
||||
* sort order), > 0 if cmux_2 has higher priority, or 0 if they are
|
||||
* equally preferred.
|
||||
*
|
||||
* If the cmuxes have different cmux policies or the policy does not
|
||||
* support the cmp_cmux method, return 0.
|
||||
*/
|
||||
|
||||
MOCK_IMPL(int,
|
||||
circuitmux_compare_muxes, (circuitmux_t *cmux_1, circuitmux_t *cmux_2))
|
||||
{
|
||||
const circuitmux_policy_t *policy;
|
||||
|
||||
tor_assert(cmux_1);
|
||||
tor_assert(cmux_2);
|
||||
|
||||
if (cmux_1 == cmux_2) {
|
||||
/* Equivalent because they're the same cmux */
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (cmux_1->policy && cmux_2->policy) {
|
||||
if (cmux_1->policy == cmux_2->policy) {
|
||||
policy = cmux_1->policy;
|
||||
|
||||
if (policy->cmp_cmux) {
|
||||
/* Okay, we can compare! */
|
||||
return policy->cmp_cmux(cmux_1, cmux_1->policy_data,
|
||||
cmux_2, cmux_2->policy_data);
|
||||
} else {
|
||||
/*
|
||||
* Equivalent because the policy doesn't know how to compare between
|
||||
* muxes.
|
||||
*/
|
||||
return 0;
|
||||
}
|
||||
} else {
|
||||
/* Equivalent because they have different policies */
|
||||
return 0;
|
||||
}
|
||||
} else {
|
||||
/* Equivalent because one or both are missing a policy */
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -57,6 +57,9 @@ struct circuitmux_policy_s {
|
||||
/* Choose a circuit */
|
||||
circuit_t * (*pick_active_circuit)(circuitmux_t *cmux,
|
||||
circuitmux_policy_data_t *pol_data);
|
||||
/* Optional: channel comparator for use by the scheduler */
|
||||
int (*cmp_cmux)(circuitmux_t *cmux_1, circuitmux_policy_data_t *pol_data_1,
|
||||
circuitmux_t *cmux_2, circuitmux_policy_data_t *pol_data_2);
|
||||
};
|
||||
|
||||
/*
|
||||
@ -105,7 +108,8 @@ void circuitmux_free(circuitmux_t *cmux);
|
||||
|
||||
/* Policy control */
|
||||
void circuitmux_clear_policy(circuitmux_t *cmux);
|
||||
const circuitmux_policy_t * circuitmux_get_policy(circuitmux_t *cmux);
|
||||
MOCK_DECL(const circuitmux_policy_t *,
|
||||
circuitmux_get_policy, (circuitmux_t *cmux));
|
||||
void circuitmux_set_policy(circuitmux_t *cmux,
|
||||
const circuitmux_policy_t *pol);
|
||||
|
||||
@ -117,7 +121,7 @@ int circuitmux_is_circuit_attached(circuitmux_t *cmux, circuit_t *circ);
|
||||
int circuitmux_is_circuit_active(circuitmux_t *cmux, circuit_t *circ);
|
||||
unsigned int circuitmux_num_cells_for_circuit(circuitmux_t *cmux,
|
||||
circuit_t *circ);
|
||||
unsigned int circuitmux_num_cells(circuitmux_t *cmux);
|
||||
MOCK_DECL(unsigned int, circuitmux_num_cells, (circuitmux_t *cmux));
|
||||
unsigned int circuitmux_num_circuits(circuitmux_t *cmux);
|
||||
unsigned int circuitmux_num_active_circuits(circuitmux_t *cmux);
|
||||
|
||||
@ -148,5 +152,9 @@ void circuitmux_append_destroy_cell(channel_t *chan,
|
||||
void circuitmux_mark_destroyed_circids_usable(circuitmux_t *cmux,
|
||||
channel_t *chan);
|
||||
|
||||
/* Optional interchannel comparisons for scheduling */
|
||||
MOCK_DECL(int, circuitmux_compare_muxes,
|
||||
(circuitmux_t *cmux_1, circuitmux_t *cmux_2));
|
||||
|
||||
#endif /* TOR_CIRCUITMUX_H */
|
||||
|
||||
|
@ -187,6 +187,9 @@ ewma_notify_xmit_cells(circuitmux_t *cmux,
|
||||
static circuit_t *
|
||||
ewma_pick_active_circuit(circuitmux_t *cmux,
|
||||
circuitmux_policy_data_t *pol_data);
|
||||
static int
|
||||
ewma_cmp_cmux(circuitmux_t *cmux_1, circuitmux_policy_data_t *pol_data_1,
|
||||
circuitmux_t *cmux_2, circuitmux_policy_data_t *pol_data_2);
|
||||
|
||||
/*** EWMA global variables ***/
|
||||
|
||||
@ -209,7 +212,8 @@ circuitmux_policy_t ewma_policy = {
|
||||
/*.notify_circ_inactive =*/ ewma_notify_circ_inactive,
|
||||
/*.notify_set_n_cells =*/ NULL, /* EWMA doesn't need this */
|
||||
/*.notify_xmit_cells =*/ ewma_notify_xmit_cells,
|
||||
/*.pick_active_circuit =*/ ewma_pick_active_circuit
|
||||
/*.pick_active_circuit =*/ ewma_pick_active_circuit,
|
||||
/*.cmp_cmux =*/ ewma_cmp_cmux
|
||||
};
|
||||
|
||||
/*** EWMA method implementations using the below EWMA helper functions ***/
|
||||
@ -453,6 +457,58 @@ ewma_pick_active_circuit(circuitmux_t *cmux,
|
||||
return circ;
|
||||
}
|
||||
|
||||
/**
|
||||
* Compare two EWMA cmuxes, and return -1, 0 or 1 to indicate which should
|
||||
* be more preferred - see circuitmux_compare_muxes() of circuitmux.c.
|
||||
*/
|
||||
|
||||
static int
|
||||
ewma_cmp_cmux(circuitmux_t *cmux_1, circuitmux_policy_data_t *pol_data_1,
|
||||
circuitmux_t *cmux_2, circuitmux_policy_data_t *pol_data_2)
|
||||
{
|
||||
ewma_policy_data_t *p1 = NULL, *p2 = NULL;
|
||||
cell_ewma_t *ce1 = NULL, *ce2 = NULL;
|
||||
|
||||
tor_assert(cmux_1);
|
||||
tor_assert(pol_data_1);
|
||||
tor_assert(cmux_2);
|
||||
tor_assert(pol_data_2);
|
||||
|
||||
p1 = TO_EWMA_POL_DATA(pol_data_1);
|
||||
p2 = TO_EWMA_POL_DATA(pol_data_2);
|
||||
|
||||
if (p1 != p2) {
|
||||
/* Get the head cell_ewma_t from each queue */
|
||||
if (smartlist_len(p1->active_circuit_pqueue) > 0) {
|
||||
ce1 = smartlist_get(p1->active_circuit_pqueue, 0);
|
||||
}
|
||||
|
||||
if (smartlist_len(p2->active_circuit_pqueue) > 0) {
|
||||
ce2 = smartlist_get(p2->active_circuit_pqueue, 0);
|
||||
}
|
||||
|
||||
/* Got both of them? */
|
||||
if (ce1 != NULL && ce2 != NULL) {
|
||||
/* Pick whichever one has the better best circuit */
|
||||
return compare_cell_ewma_counts(ce1, ce2);
|
||||
} else {
|
||||
if (ce1 != NULL ) {
|
||||
/* We only have a circuit on cmux_1, so prefer it */
|
||||
return -1;
|
||||
} else if (ce2 != NULL) {
|
||||
/* We only have a circuit on cmux_2, so prefer it */
|
||||
return 1;
|
||||
} else {
|
||||
/* No circuits at all; no preference */
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
/* We got identical params */
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
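A standalone sketch of the comparison rule this function implements: prefer the mux whose best head-of-queue circuit has the smaller EWMA cell count, and prefer any mux with an active circuit over one with none. The toy types below are not Tor's:

#include <stdio.h>

typedef struct { double best_ewma; int has_active; } toy_mux_t;

static int toy_cmp_mux(const toy_mux_t *a, const toy_mux_t *b)
{
  if (a->has_active && b->has_active) {
    if (a->best_ewma < b->best_ewma) return -1;
    if (a->best_ewma > b->best_ewma) return 1;
    return 0;
  }
  if (a->has_active) return -1;   /* only a has traffic to schedule */
  if (b->has_active) return 1;    /* only b has traffic to schedule */
  return 0;                       /* neither has anything; no preference */
}

int main(void)
{
  toy_mux_t quiet = { 2.5, 1 }, busy = { 80.0, 1 }, empty = { 0.0, 0 };
  printf("%d %d %d\n",
         toy_cmp_mux(&quiet, &busy),   /* -1: quieter channel preferred */
         toy_cmp_mux(&empty, &busy),   /*  1: the busy one wins over empty */
         toy_cmp_mux(&empty, &empty)); /*  0: no preference */
  return 0;
}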
|
||||
/** Helper for sorting cell_ewma_t values in their priority queue. */
|
||||
static int
|
||||
compare_cell_ewma_counts(const void *p1, const void *p2)
|
||||
|
@ -404,7 +404,7 @@ circuit_build_times_new_consensus_params(circuit_build_times_t *cbt,
|
||||
* distress anyway, so memory correctness here is paramount over
|
||||
* doing acrobatics to preserve the array.
|
||||
*/
|
||||
recent_circs = tor_calloc(sizeof(int8_t), num);
|
||||
recent_circs = tor_calloc(num, sizeof(int8_t));
|
||||
if (cbt->liveness.timeouts_after_firsthop &&
|
||||
cbt->liveness.num_recent_circs > 0) {
|
||||
memcpy(recent_circs, cbt->liveness.timeouts_after_firsthop,
|
||||
@ -508,7 +508,7 @@ circuit_build_times_init(circuit_build_times_t *cbt)
|
||||
cbt->liveness.num_recent_circs =
|
||||
circuit_build_times_recent_circuit_count(NULL);
|
||||
cbt->liveness.timeouts_after_firsthop =
|
||||
tor_calloc(sizeof(int8_t), cbt->liveness.num_recent_circs);
|
||||
tor_calloc(cbt->liveness.num_recent_circs, sizeof(int8_t));
|
||||
} else {
|
||||
cbt->liveness.num_recent_circs = 0;
|
||||
cbt->liveness.timeouts_after_firsthop = NULL;
|
||||
@ -873,7 +873,7 @@ circuit_build_times_parse_state(circuit_build_times_t *cbt,
|
||||
}
|
||||
|
||||
/* build_time_t 0 means uninitialized */
|
||||
loaded_times = tor_calloc(sizeof(build_time_t), state->TotalBuildTimes);
|
||||
loaded_times = tor_calloc(state->TotalBuildTimes, sizeof(build_time_t));
|
||||
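The swapped arguments above, and in the similar hunks nearby, put the tor_calloc() calls into the conventional calloc(nmemb, size) order: element count first, element size second. A standalone illustration with plain calloc():

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
  size_t num = 8;
  int *counts = calloc(num, sizeof(int));   /* count first, element size second */
  if (!counts) return 1;
  printf("allocated %zu zeroed ints, counts[0] = %d\n", num, counts[0]);
  free(counts);
  return 0;
}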
|
||||
for (line = state->BuildtimeHistogram; line; line = line->next) {
|
||||
smartlist_t *args = smartlist_new();
|
||||
|
@ -200,7 +200,7 @@ circuit_is_better(const origin_circuit_t *oa, const origin_circuit_t *ob,
|
||||
return 1;
|
||||
} else {
|
||||
if (a->timestamp_dirty ||
|
||||
timercmp(&a->timestamp_began, &b->timestamp_began, >))
|
||||
timercmp(&a->timestamp_began, &b->timestamp_began, OP_GT))
|
||||
return 1;
|
||||
if (ob->build_state->is_internal)
|
||||
/* XXX023 what the heck is this internal thing doing here. I
|
||||
@ -514,7 +514,7 @@ circuit_expire_building(void)
|
||||
if (TO_ORIGIN_CIRCUIT(victim)->hs_circ_has_timed_out)
|
||||
cutoff = hs_extremely_old_cutoff;
|
||||
|
||||
if (timercmp(&victim->timestamp_began, &cutoff, >))
|
||||
if (timercmp(&victim->timestamp_began, &cutoff, OP_GT))
|
||||
continue; /* it's still young, leave it alone */
|
||||
|
||||
/* We need to double-check the opened state here because
|
||||
@ -524,7 +524,7 @@ circuit_expire_building(void)
|
||||
* aren't either. */
|
||||
if (!any_opened_circs && victim->state != CIRCUIT_STATE_OPEN) {
|
||||
/* It's still young enough that we wouldn't close it, right? */
|
||||
if (timercmp(&victim->timestamp_began, &close_cutoff, >)) {
|
||||
if (timercmp(&victim->timestamp_began, &close_cutoff, OP_GT)) {
|
||||
if (!TO_ORIGIN_CIRCUIT(victim)->relaxed_timeout) {
|
||||
int first_hop_succeeded = TO_ORIGIN_CIRCUIT(victim)->cpath->state
|
||||
== CPATH_STATE_OPEN;
|
||||
@ -672,7 +672,7 @@ circuit_expire_building(void)
|
||||
* it off at, we probably had a suspend event along this codepath,
|
||||
* and we should discard the value.
|
||||
*/
|
||||
if (timercmp(&victim->timestamp_began, &extremely_old_cutoff, <)) {
|
||||
if (timercmp(&victim->timestamp_began, &extremely_old_cutoff, OP_LT)) {
|
||||
log_notice(LD_CIRC,
|
||||
"Extremely large value for circuit build timeout: %lds. "
|
||||
"Assuming clock jump. Purpose %d (%s)",
|
||||
@ -1255,7 +1255,7 @@ circuit_expire_old_circuits_clientside(void)
|
||||
if (circ->purpose != CIRCUIT_PURPOSE_PATH_BIAS_TESTING)
|
||||
circuit_mark_for_close(circ, END_CIRC_REASON_FINISHED);
|
||||
} else if (!circ->timestamp_dirty && circ->state == CIRCUIT_STATE_OPEN) {
|
||||
if (timercmp(&circ->timestamp_began, &cutoff, <)) {
|
||||
if (timercmp(&circ->timestamp_began, &cutoff, OP_LT)) {
|
||||
if (circ->purpose == CIRCUIT_PURPOSE_C_GENERAL ||
|
||||
circ->purpose == CIRCUIT_PURPOSE_C_MEASURE_TIMEOUT ||
|
||||
circ->purpose == CIRCUIT_PURPOSE_S_ESTABLISH_INTRO ||
|
||||
@ -2324,7 +2324,7 @@ connection_ap_handshake_attach_circuit(entry_connection_t *conn)
|
||||
tor_assert(rendcirc);
|
||||
/* one is already established, attach */
|
||||
log_info(LD_REND,
|
||||
"rend joined circ %d already here. attaching. "
|
||||
"rend joined circ %u already here. attaching. "
|
||||
"(stream %d sec old)",
|
||||
(unsigned)rendcirc->base_.n_circ_id, conn_age);
|
||||
/* Mark rendezvous circuits as 'newly dirty' every time you use
|
||||
|
348
src/or/config.c
348
src/or/config.c
@ -43,6 +43,7 @@
|
||||
#include "util.h"
|
||||
#include "routerlist.h"
|
||||
#include "routerset.h"
|
||||
#include "scheduler.h"
|
||||
#include "statefile.h"
|
||||
#include "transports.h"
|
||||
#include "ext_orport.h"
|
||||
@ -262,6 +263,7 @@ static config_var_t option_vars_[] = {
|
||||
V(HashedControlPassword, LINELIST, NULL),
|
||||
V(HidServDirectoryV2, BOOL, "1"),
|
||||
VAR("HiddenServiceDir", LINELIST_S, RendConfigLines, NULL),
|
||||
VAR("HiddenServiceDirGroupReadable", LINELIST_S, RendConfigLines, NULL),
|
||||
VAR("HiddenServiceOptions",LINELIST_V, RendConfigLines, NULL),
|
||||
VAR("HiddenServicePort", LINELIST_S, RendConfigLines, NULL),
|
||||
VAR("HiddenServiceVersion",LINELIST_S, RendConfigLines, NULL),
|
||||
@ -367,6 +369,9 @@ static config_var_t option_vars_[] = {
|
||||
V(ServerDNSSearchDomains, BOOL, "0"),
|
||||
V(ServerDNSTestAddresses, CSV,
|
||||
"www.google.com,www.mit.edu,www.yahoo.com,www.slashdot.org"),
|
||||
V(SchedulerLowWaterMark__, MEMUNIT, "100 MB"),
|
||||
V(SchedulerHighWaterMark__, MEMUNIT, "101 MB"),
|
||||
V(SchedulerMaxFlushCells__, UINT, "1000"),
|
||||
V(ShutdownWaitLength, INTERVAL, "30 seconds"),
|
||||
V(SocksListenAddress, LINELIST, NULL),
|
||||
V(SocksPolicy, LINELIST, NULL),
|
||||
@ -376,7 +381,7 @@ static config_var_t option_vars_[] = {
|
||||
OBSOLETE("StrictEntryNodes"),
|
||||
OBSOLETE("StrictExitNodes"),
|
||||
V(StrictNodes, BOOL, "0"),
|
||||
V(Support022HiddenServices, AUTOBOOL, "auto"),
|
||||
OBSOLETE("Support022HiddenServices"),
|
||||
V(TestSocks, BOOL, "0"),
|
||||
V(TokenBucketRefillInterval, MSEC_INTERVAL, "100 msec"),
|
||||
V(Tor2webMode, BOOL, "0"),
|
||||
@ -510,12 +515,6 @@ static int options_transition_affects_workers(
|
||||
static int options_transition_affects_descriptor(
|
||||
const or_options_t *old_options, const or_options_t *new_options);
|
||||
static int check_nickname_list(char **lst, const char *name, char **msg);
|
||||
|
||||
static int parse_client_transport_line(const or_options_t *options,
|
||||
const char *line, int validate_only);
|
||||
|
||||
static int parse_server_transport_line(const or_options_t *options,
|
||||
const char *line, int validate_only);
|
||||
static char *get_bindaddr_from_transport_listen_line(const char *line,
|
||||
const char *transport);
|
||||
static int parse_dir_authority_line(const char *line,
|
||||
@ -829,22 +828,22 @@ add_default_trusted_dir_authorities(dirinfo_type_t type)
|
||||
"moria1 orport=9101 "
|
||||
"v3ident=D586D18309DED4CD6D57C18FDB97EFA96D330566 "
|
||||
"128.31.0.39:9131 9695 DFC3 5FFE B861 329B 9F1A B04C 4639 7020 CE31",
|
||||
"tor26 orport=443 v3ident=14C131DFC5C6F93646BE72FA1401C02A8DF2E8B4 "
|
||||
"tor26 orport=443 "
|
||||
"v3ident=14C131DFC5C6F93646BE72FA1401C02A8DF2E8B4 "
|
||||
"86.59.21.38:80 847B 1F85 0344 D787 6491 A548 92F9 0493 4E4E B85D",
|
||||
"dizum orport=443 v3ident=E8A9C45EDE6D711294FADF8E7951F4DE6CA56B58 "
|
||||
"dizum orport=443 "
|
||||
"v3ident=E8A9C45EDE6D711294FADF8E7951F4DE6CA56B58 "
|
||||
"194.109.206.212:80 7EA6 EAD6 FD83 083C 538F 4403 8BBF A077 587D D755",
|
||||
"Tonga orport=443 bridge 82.94.251.203:80 "
|
||||
"4A0C CD2D DC79 9508 3D73 F5D6 6710 0C8A 5831 F16D",
|
||||
"turtles orport=9090 "
|
||||
"v3ident=27B6B5996C426270A5C95488AA5BCEB6BCC86956 "
|
||||
"76.73.17.194:9030 F397 038A DC51 3361 35E7 B80B D99C A384 4360 292B",
|
||||
"Tonga orport=443 bridge "
|
||||
"82.94.251.203:80 4A0C CD2D DC79 9508 3D73 F5D6 6710 0C8A 5831 F16D",
|
||||
"gabelmoo orport=443 "
|
||||
"v3ident=ED03BB616EB2F60BEC80151114BB25CEF515B226 "
|
||||
"131.188.40.189:80 F204 4413 DAC2 E02E 3D6B CF47 35A1 9BCA 1DE9 7281",
|
||||
"dannenberg orport=443 "
|
||||
"v3ident=585769C78764D58426B8B52B6651A5A71137189A "
|
||||
"193.23.244.244:80 7BE6 83E6 5D48 1413 21C5 ED92 F075 C553 64AC 7123",
|
||||
"urras orport=80 v3ident=80550987E1D626E3EBA5E5E75A458DE0626D088C "
|
||||
"urras orport=80 "
|
||||
"v3ident=80550987E1D626E3EBA5E5E75A458DE0626D088C "
|
||||
"208.83.223.34:443 0AD3 FA88 4D18 F89E EA2D 89C0 1937 9E0E 7FD9 4417",
|
||||
"maatuska orport=80 "
|
||||
"v3ident=49015F787433103580E3B66A1707A00E60F2D15B "
|
||||
@ -852,6 +851,9 @@ add_default_trusted_dir_authorities(dirinfo_type_t type)
|
||||
"Faravahar orport=443 "
|
||||
"v3ident=EFCBE720AB3A82B99F9E953CD5BF50F7EEFC7B97 "
|
||||
"154.35.32.5:80 CF6D 0AAF B385 BE71 B8E1 11FC 5CFF 4B47 9237 33BC",
|
||||
"longclaw orport=443 "
|
||||
"v3ident=23D15D965BC35114467363C165C4F724B64B4F66 "
|
||||
"199.254.238.52:80 74A9 1064 6BCE EFBC D2E8 74FC 1DC9 9743 0F96 8145",
|
||||
NULL
|
||||
};
|
||||
for (i=0; authorities[i]; i++) {
|
||||
@ -1047,6 +1049,14 @@ options_act_reversible(const or_options_t *old_options, char **msg)
|
||||
if (running_tor && !libevent_initialized) {
|
||||
init_libevent(options);
|
||||
libevent_initialized = 1;
|
||||
|
||||
/*
|
||||
* Initialize the scheduler - this has to come after
|
||||
* options_init_from_torrc() sets up libevent - why yes, that seems
|
||||
* completely sensible to hide the libevent setup in the option parsing
|
||||
* code! It also needs to happen before init_keys(), so it needs to
|
||||
* happen here too. How yucky. */
|
||||
scheduler_init();
|
||||
}
|
||||
|
||||
/* Adjust the port configuration so we can launch listeners. */
|
||||
@ -1076,6 +1086,8 @@ options_act_reversible(const or_options_t *old_options, char **msg)
|
||||
"non-control network connections. Shutting down all existing "
|
||||
"connections.");
|
||||
connection_mark_all_noncontrol_connections();
|
||||
/* We can't complete circuits until the network is re-enabled. */
|
||||
note_that_we_maybe_cant_complete_circuits();
|
||||
}
|
||||
}
|
||||
|
||||
@ -1413,7 +1425,7 @@ options_act(const or_options_t *old_options)
|
||||
if (!options->DisableNetwork) {
|
||||
if (options->ClientTransportPlugin) {
|
||||
for (cl = options->ClientTransportPlugin; cl; cl = cl->next) {
|
||||
if (parse_client_transport_line(options, cl->value, 0)<0) {
|
||||
if (parse_transport_line(options, cl->value, 0, 0) < 0) {
|
||||
log_warn(LD_BUG,
|
||||
"Previously validated ClientTransportPlugin line "
|
||||
"could not be added!");
|
||||
@ -1424,7 +1436,7 @@ options_act(const or_options_t *old_options)
|
||||
|
||||
if (options->ServerTransportPlugin && server_mode(options)) {
|
||||
for (cl = options->ServerTransportPlugin; cl; cl = cl->next) {
|
||||
if (parse_server_transport_line(options, cl->value, 0)<0) {
|
||||
if (parse_transport_line(options, cl->value, 0, 1) < 0) {
|
||||
log_warn(LD_BUG,
|
||||
"Previously validated ServerTransportPlugin line "
|
||||
"could not be added!");
|
||||
@ -1526,6 +1538,12 @@ options_act(const or_options_t *old_options)
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* Set up scheduler thresholds */
|
||||
scheduler_set_watermarks((uint32_t)options->SchedulerLowWaterMark__,
|
||||
(uint32_t)options->SchedulerHighWaterMark__,
|
||||
(options->SchedulerMaxFlushCells__ > 0) ?
|
||||
options->SchedulerMaxFlushCells__ : 1000);
|
||||
|
||||
/* Set up accounting */
|
||||
if (accounting_parse_options(options, 0)<0) {
|
||||
log_warn(LD_CONFIG,"Error in accounting options");
|
||||
@ -1673,7 +1691,7 @@ options_act(const or_options_t *old_options)
|
||||
|
||||
if (server_mode(options) && !server_mode(old_options)) {
|
||||
ip_address_changed(0);
|
||||
if (can_complete_circuit || !any_predicted_circuits(time(NULL)))
|
||||
if (have_completed_a_circuit() || !any_predicted_circuits(time(NULL)))
|
||||
inform_testing_reachability();
|
||||
}
|
||||
cpuworkers_rotate();
|
||||
@ -2269,8 +2287,8 @@ resolve_my_address(int warn_severity, const or_options_t *options,
|
||||
/** Return true iff <b>addr</b> is judged to be on the same network as us, or
|
||||
* on a private network.
|
||||
*/
|
||||
int
|
||||
is_local_addr(const tor_addr_t *addr)
|
||||
MOCK_IMPL(int,
|
||||
is_local_addr, (const tor_addr_t *addr))
|
||||
{
|
||||
if (tor_addr_is_internal(addr, 0))
|
||||
return 1;
|
||||
@ -2623,6 +2641,17 @@ options_validate(or_options_t *old_options, or_options_t *options,
|
||||
routerset_union(options->ExcludeExitNodesUnion_,options->ExcludeNodes);
|
||||
}
|
||||
|
||||
if (options->SchedulerLowWaterMark__ == 0 ||
|
||||
options->SchedulerLowWaterMark__ > UINT32_MAX) {
|
||||
log_warn(LD_GENERAL, "Bad SchedulerLowWaterMark__ option");
|
||||
return -1;
|
||||
} else if (options->SchedulerHighWaterMark__ <=
|
||||
options->SchedulerLowWaterMark__ ||
|
||||
options->SchedulerHighWaterMark__ > UINT32_MAX) {
|
||||
log_warn(LD_GENERAL, "Bad SchedulerHighWaterMark option");
|
||||
return -1;
|
||||
}
|
||||
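A standalone sketch of the sanity check above: the low-water mark must be nonzero, the high-water mark strictly larger, and both must fit in 32 bits. The sample values loosely mirror the 100 MB / 101 MB defaults added in this diff:

#include <stdint.h>
#include <stdio.h>

static int watermarks_ok(uint64_t low, uint64_t high)
{
  if (low == 0 || low > UINT32_MAX) return 0;
  if (high <= low || high > UINT32_MAX) return 0;
  return 1;
}

int main(void)
{
  uint64_t mb = 1024 * 1024;   /* treating "MB" as mebibytes for illustration */
  printf("%d\n", watermarks_ok(100 * mb, 101 * mb));  /* 1: valid */
  printf("%d\n", watermarks_ok(0, 101 * mb));         /* 0: low mark unset */
  return 0;
}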
|
||||
if (options->NodeFamilies) {
|
||||
options->NodeFamilySets = smartlist_new();
|
||||
for (cl = options->NodeFamilies; cl; cl = cl->next) {
|
||||
@ -3299,12 +3328,12 @@ options_validate(or_options_t *old_options, or_options_t *options,
|
||||
}
|
||||
|
||||
for (cl = options->ClientTransportPlugin; cl; cl = cl->next) {
|
||||
if (parse_client_transport_line(options, cl->value, 1)<0)
|
||||
if (parse_transport_line(options, cl->value, 1, 0) < 0)
|
||||
REJECT("Invalid client transport line. See logs for details.");
|
||||
}
|
||||
|
||||
for (cl = options->ServerTransportPlugin; cl; cl = cl->next) {
|
||||
if (parse_server_transport_line(options, cl->value, 1)<0)
|
||||
if (parse_transport_line(options, cl->value, 1, 1) < 0)
|
||||
REJECT("Invalid server transport line. See logs for details.");
|
||||
}
|
||||
|
||||
@ -4768,46 +4797,52 @@ parse_bridge_line(const char *line)
|
||||
return bridge_line;
|
||||
}
|
||||
|
||||
/** Read the contents of a ClientTransportPlugin line from
|
||||
* <b>line</b>. Return 0 if the line is well-formed, and -1 if it
|
||||
* isn't.
|
||||
/** Read the contents of a ClientTransportPlugin or ServerTransportPlugin
|
||||
* line from <b>line</b>, depending on the value of <b>server</b>. Return 0
|
||||
* if the line is well-formed, and -1 if it isn't.
|
||||
*
|
||||
* If <b>validate_only</b> is 0, the line is well-formed, and the
|
||||
* transport is needed by some bridge:
|
||||
* If <b>validate_only</b> is 0, the line is well-formed, and the transport is
|
||||
* needed by some bridge:
|
||||
* - If it's an external proxy line, add the transport described in the line to
|
||||
* our internal transport list.
|
||||
* - If it's a managed proxy line, launch the managed proxy. */
|
||||
static int
|
||||
parse_client_transport_line(const or_options_t *options,
|
||||
const char *line, int validate_only)
|
||||
* - If it's a managed proxy line, launch the managed proxy.
|
||||
*/
|
||||
|
||||
STATIC int
|
||||
parse_transport_line(const or_options_t *options,
|
||||
const char *line, int validate_only,
|
||||
int server)
|
||||
{
|
||||
|
||||
smartlist_t *items = NULL;
|
||||
int r;
|
||||
char *field2=NULL;
|
||||
|
||||
const char *transports=NULL;
|
||||
smartlist_t *transport_list=NULL;
|
||||
char *addrport=NULL;
|
||||
const char *transports = NULL;
|
||||
smartlist_t *transport_list = NULL;
|
||||
char *type = NULL;
|
||||
char *addrport = NULL;
|
||||
tor_addr_t addr;
|
||||
uint16_t port = 0;
|
||||
int socks_ver=PROXY_NONE;
|
||||
int socks_ver = PROXY_NONE;
|
||||
|
||||
/* managed proxy options */
|
||||
int is_managed=0;
|
||||
char **proxy_argv=NULL;
|
||||
char **tmp=NULL;
|
||||
int is_managed = 0;
|
||||
char **proxy_argv = NULL;
|
||||
char **tmp = NULL;
|
||||
int proxy_argc, i;
|
||||
int is_useless_proxy=1;
|
||||
int is_useless_proxy = 1;
|
||||
|
||||
int line_length;
|
||||
|
||||
/* Split the line into space-separated tokens */
|
||||
items = smartlist_new();
|
||||
smartlist_split_string(items, line, NULL,
|
||||
SPLIT_SKIP_SPACE|SPLIT_IGNORE_BLANK, -1);
|
||||
line_length = smartlist_len(items);
|
||||
|
||||
line_length = smartlist_len(items);
|
||||
if (line_length < 3) {
|
||||
log_warn(LD_CONFIG, "Too few arguments on ClientTransportPlugin line.");
|
||||
log_warn(LD_CONFIG,
|
||||
"Too few arguments on %sTransportPlugin line.",
|
||||
server ? "Server" : "Client");
|
||||
goto err;
|
||||
}
|
||||
|
||||
@ -4831,71 +4866,97 @@ parse_client_transport_line(const or_options_t *options,
|
||||
is_useless_proxy = 0;
|
||||
} SMARTLIST_FOREACH_END(transport_name);
|
||||
|
||||
/* field2 is either a SOCKS version or "exec" */
|
||||
field2 = smartlist_get(items, 1);
|
||||
|
||||
if (!strcmp(field2,"socks4")) {
|
||||
type = smartlist_get(items, 1);
|
||||
if (!strcmp(type, "exec")) {
|
||||
is_managed = 1;
|
||||
} else if (server && !strcmp(type, "proxy")) {
|
||||
/* 'proxy' syntax only with ServerTransportPlugin */
|
||||
is_managed = 0;
|
||||
} else if (!server && !strcmp(type, "socks4")) {
|
||||
/* 'socks4' syntax only with ClientTransportPlugin */
|
||||
is_managed = 0;
|
||||
socks_ver = PROXY_SOCKS4;
|
||||
} else if (!strcmp(field2,"socks5")) {
|
||||
} else if (!server && !strcmp(type, "socks5")) {
|
||||
/* 'socks5' syntax only with ClientTransportPlugin */
|
||||
is_managed = 0;
|
||||
socks_ver = PROXY_SOCKS5;
|
||||
} else if (!strcmp(field2,"exec")) {
|
||||
is_managed=1;
|
||||
} else {
|
||||
log_warn(LD_CONFIG, "Strange ClientTransportPlugin field '%s'.",
|
||||
field2);
|
||||
log_warn(LD_CONFIG,
|
||||
"Strange %sTransportPlugin type '%s'",
|
||||
server ? "Server" : "Client", type);
|
||||
goto err;
|
||||
}
|
||||
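A standalone sketch of the second-field dispatch above, showing which keywords the merged parser accepts on client versus server lines. The helper is illustrative, not Tor code:

#include <stdio.h>
#include <string.h>

/* Returns 1 if <type> is valid for a client (server == 0) or server
 * (server == 1) transport line, mirroring the checks in the diff. */
static int transport_type_ok(const char *type, int server)
{
  if (!strcmp(type, "exec")) return 1;                 /* managed, both */
  if (server && !strcmp(type, "proxy")) return 1;      /* server lines only */
  if (!server && (!strcmp(type, "socks4") ||
                  !strcmp(type, "socks5"))) return 1;  /* client lines only */
  return 0;
}

int main(void)
{
  printf("%d %d %d\n",
         transport_type_ok("socks5", 0),   /* 1: ok on a client line */
         transport_type_ok("socks5", 1),   /* 0: rejected on a server line */
         transport_type_ok("proxy", 1));   /* 1: ok on a server line */
  return 0;
}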
|
||||
if (is_managed && options->Sandbox) {
|
||||
log_warn(LD_CONFIG, "Managed proxies are not compatible with Sandbox mode."
|
||||
"(ClientTransportPlugin line was %s)", escaped(line));
|
||||
log_warn(LD_CONFIG,
|
||||
"Managed proxies are not compatible with Sandbox mode."
|
||||
"(%sTransportPlugin line was %s)",
|
||||
server ? "Server" : "Client", escaped(line));
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (is_managed) { /* managed */
|
||||
if (!validate_only && is_useless_proxy) {
|
||||
log_info(LD_GENERAL, "Pluggable transport proxy (%s) does not provide "
|
||||
"any needed transports and will not be launched.", line);
|
||||
if (is_managed) {
|
||||
/* managed */
|
||||
|
||||
if (!server && !validate_only && is_useless_proxy) {
|
||||
log_info(LD_GENERAL,
|
||||
"Pluggable transport proxy (%s) does not provide "
|
||||
"any needed transports and will not be launched.",
|
||||
line);
|
||||
}
|
||||
|
||||
/* If we are not just validating, use the rest of the line as the
argv of the proxy to be launched. Also, make sure that we are
only launching proxies that contribute useful transports. */
if (!validate_only && !is_useless_proxy) {
proxy_argc = line_length-2;
/*
* If we are not just validating, use the rest of the line as the
* argv of the proxy to be launched. Also, make sure that we are
* only launching proxies that contribute useful transports.
*/

if (!validate_only && (server || !is_useless_proxy)) {
proxy_argc = line_length - 2;
tor_assert(proxy_argc > 0);
proxy_argv = tor_calloc(sizeof(char *), (proxy_argc + 1));
proxy_argv = tor_calloc((proxy_argc + 1), sizeof(char *));
tmp = proxy_argv;
for (i=0;i<proxy_argc;i++) { /* store arguments */

for (i = 0; i < proxy_argc; i++) {
/* store arguments */
*tmp++ = smartlist_get(items, 2);
smartlist_del_keeporder(items, 2);
}
*tmp = NULL; /*terminated with NULL, just like execve() likes it*/
*tmp = NULL; /* terminated with NULL, just like execve() likes it */

/* kickstart the thing */
pt_kickstart_client_proxy(transport_list, proxy_argv);
if (server) {
pt_kickstart_server_proxy(transport_list, proxy_argv);
} else {
pt_kickstart_client_proxy(transport_list, proxy_argv);
}
}
} else { /* external */
} else {
/* external */

/* ClientTransportPlugins connecting through a proxy is managed only. */
if (options->Socks4Proxy || options->Socks5Proxy || options->HTTPSProxy) {
if (!server && (options->Socks4Proxy || options->Socks5Proxy ||
options->HTTPSProxy)) {
log_warn(LD_CONFIG, "You have configured an external proxy with another "
"proxy type. (Socks4Proxy|Socks5Proxy|HTTPSProxy)");
goto err;
}

if (smartlist_len(transport_list) != 1) {
log_warn(LD_CONFIG, "You can't have an external proxy with "
"more than one transports.");
log_warn(LD_CONFIG,
"You can't have an external proxy with more than "
"one transport.");
goto err;
}

addrport = smartlist_get(items, 2);

if (tor_addr_port_lookup(addrport, &addr, &port)<0) {
log_warn(LD_CONFIG, "Error parsing transport "
"address '%s'", addrport);
if (tor_addr_port_lookup(addrport, &addr, &port) < 0) {
log_warn(LD_CONFIG,
"Error parsing transport address '%s'", addrport);
goto err;
}

if (!port) {
log_warn(LD_CONFIG,
"Transport address '%s' has no port.", addrport);
@@ -4903,11 +4964,15 @@ parse_client_transport_line(const or_options_t *options,
}

if (!validate_only) {
transport_add_from_config(&addr, port, smartlist_get(transport_list, 0),
socks_ver);

log_info(LD_DIR, "Transport '%s' found at %s",
log_info(LD_DIR, "%s '%s' at %s.",
server ? "Server transport" : "Transport",
transports, fmt_addrport(&addr, port));

if (!server) {
transport_add_from_config(&addr, port,
smartlist_get(transport_list, 0),
socks_ver);
}
}
}
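After this merge the ClientTransportPlugin and ServerTransportPlugin lines are handled by one parser, with the server flag selecting which type keywords are legal. For illustration only (the transport name, address, and binary path below are invented, not taken from this diff), the accepted forms look like:

  ClientTransportPlugin mytransport socks5 127.0.0.1:47351
  ClientTransportPlugin mytransport exec /usr/local/bin/mytransport-client
  ServerTransportPlugin mytransport exec /usr/local/bin/mytransport-server

A "socks4"/"socks5" type is accepted only on ClientTransportPlugin lines, and "proxy" only on ServerTransportPlugin lines, matching the checks on the server flag in the hunk above.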
@ -5079,133 +5144,6 @@ get_options_for_server_transport(const char *transport)
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/** Read the contents of a ServerTransportPlugin line from
|
||||
* <b>line</b>. Return 0 if the line is well-formed, and -1 if it
|
||||
* isn't.
|
||||
* If <b>validate_only</b> is 0, the line is well-formed, and it's a
|
||||
* managed proxy line, launch the managed proxy. */
|
||||
static int
|
||||
parse_server_transport_line(const or_options_t *options,
|
||||
const char *line, int validate_only)
|
||||
{
|
||||
smartlist_t *items = NULL;
|
||||
int r;
|
||||
const char *transports=NULL;
|
||||
smartlist_t *transport_list=NULL;
|
||||
char *type=NULL;
|
||||
char *addrport=NULL;
|
||||
tor_addr_t addr;
|
||||
uint16_t port = 0;
|
||||
|
||||
/* managed proxy options */
|
||||
int is_managed=0;
|
||||
char **proxy_argv=NULL;
|
||||
char **tmp=NULL;
|
||||
int proxy_argc,i;
|
||||
|
||||
int line_length;
|
||||
|
||||
items = smartlist_new();
|
||||
smartlist_split_string(items, line, NULL,
|
||||
SPLIT_SKIP_SPACE|SPLIT_IGNORE_BLANK, -1);
|
||||
|
||||
line_length = smartlist_len(items);
|
||||
if (line_length < 3) {
|
||||
log_warn(LD_CONFIG, "Too few arguments on ServerTransportPlugin line.");
|
||||
goto err;
|
||||
}
|
||||
|
||||
/* Get the first line element, split it to commas into
|
||||
transport_list (in case it's multiple transports) and validate
|
||||
the transport names. */
|
||||
transports = smartlist_get(items, 0);
|
||||
transport_list = smartlist_new();
|
||||
smartlist_split_string(transport_list, transports, ",",
|
||||
SPLIT_SKIP_SPACE|SPLIT_IGNORE_BLANK, 0);
|
||||
SMARTLIST_FOREACH_BEGIN(transport_list, const char *, transport_name) {
|
||||
if (!string_is_C_identifier(transport_name)) {
|
||||
log_warn(LD_CONFIG, "Transport name is not a C identifier (%s).",
|
||||
transport_name);
|
||||
goto err;
|
||||
}
|
||||
} SMARTLIST_FOREACH_END(transport_name);
|
||||
|
||||
type = smartlist_get(items, 1);
|
||||
|
||||
if (!strcmp(type, "exec")) {
|
||||
is_managed=1;
|
||||
} else if (!strcmp(type, "proxy")) {
|
||||
is_managed=0;
|
||||
} else {
|
||||
log_warn(LD_CONFIG, "Strange ServerTransportPlugin type '%s'", type);
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (is_managed && options->Sandbox) {
|
||||
log_warn(LD_CONFIG, "Managed proxies are not compatible with Sandbox mode."
|
||||
"(ServerTransportPlugin line was %s)", escaped(line));
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (is_managed) { /* managed */
|
||||
if (!validate_only) {
|
||||
proxy_argc = line_length-2;
|
||||
tor_assert(proxy_argc > 0);
|
||||
proxy_argv = tor_calloc(sizeof(char *), (proxy_argc + 1));
|
||||
tmp = proxy_argv;
|
||||
|
||||
for (i=0;i<proxy_argc;i++) { /* store arguments */
|
||||
*tmp++ = smartlist_get(items, 2);
|
||||
smartlist_del_keeporder(items, 2);
|
||||
}
|
||||
*tmp = NULL; /*terminated with NULL, just like execve() likes it*/
|
||||
|
||||
/* kickstart the thing */
|
||||
pt_kickstart_server_proxy(transport_list, proxy_argv);
|
||||
}
|
||||
} else { /* external */
|
||||
if (smartlist_len(transport_list) != 1) {
|
||||
log_warn(LD_CONFIG, "You can't have an external proxy with "
|
||||
"more than one transports.");
|
||||
goto err;
|
||||
}
|
||||
|
||||
addrport = smartlist_get(items, 2);
|
||||
|
||||
if (tor_addr_port_lookup(addrport, &addr, &port)<0) {
|
||||
log_warn(LD_CONFIG, "Error parsing transport "
|
||||
"address '%s'", addrport);
|
||||
goto err;
|
||||
}
|
||||
if (!port) {
|
||||
log_warn(LD_CONFIG,
|
||||
"Transport address '%s' has no port.", addrport);
|
||||
goto err;
|
||||
}
|
||||
|
||||
if (!validate_only) {
|
||||
log_info(LD_DIR, "Server transport '%s' at %s.",
|
||||
transports, fmt_addrport(&addr, port));
|
||||
}
|
||||
}
|
||||
|
||||
r = 0;
|
||||
goto done;
|
||||
|
||||
err:
r = -1;

done:
SMARTLIST_FOREACH(items, char*, s, tor_free(s));
smartlist_free(items);
if (transport_list) {
SMARTLIST_FOREACH(transport_list, char*, s, tor_free(s));
smartlist_free(transport_list);
}

return r;
}

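The managed-proxy branches above (both the merged parser and the ServerTransportPlugin parser it replaces) copy the remaining line elements into a NULL-terminated argv before handing it to pt_kickstart_client_proxy() or pt_kickstart_server_proxy(). A minimal standalone sketch of the same convention (the helper name and the use of strdup are illustrative, not Tor code):

#include <stdlib.h>
#include <string.h>

/* Build a NULL-terminated argument vector, the form execve() and the
 * pluggable-transport launcher expect.  Returns NULL on allocation failure. */
static char **
build_argv(const char *const *args, int argc)
{
  char **argv = calloc(argc + 1, sizeof(char *)); /* +1 for the NULL terminator */
  int i;
  if (!argv)
    return NULL;
  for (i = 0; i < argc; i++)
    argv[i] = strdup(args[i]);
  argv[argc] = NULL;
  return argv;
}

Note that the count goes first and the element size second, the same (nmemb, size) order the tor_calloc calls in this merge are being switched to.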
/** Read the contents of a DirAuthority line from <b>line</b>. If
|
||||
* <b>validate_only</b> is 0, and the line is well-formed, and it
|
||||
* shares any bits with <b>required_type</b> or <b>required_type</b>
|
||||
|
@@ -33,7 +33,7 @@ void reset_last_resolved_addr(void);
int resolve_my_address(int warn_severity, const or_options_t *options,
uint32_t *addr_out,
const char **method_out, char **hostname_out);
int is_local_addr(const tor_addr_t *addr);
MOCK_DECL(int, is_local_addr, (const tor_addr_t *addr));
void options_init(or_options_t *options);

#define OPTIONS_DUMP_MINIMAL 1
@@ -141,6 +141,9 @@ STATIC int options_validate(or_options_t *old_options,
or_options_t *options,
or_options_t *default_options,
int from_setconf, char **msg);
STATIC int parse_transport_line(const or_options_t *options,
const char *line, int validate_only,
int server);
#endif

#endif
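Several declarations in this merge (is_local_addr, connection_or_connect, transport_is_needed, channel_flush_from_first_active_circuit) are converted to Tor's MOCK_DECL/MOCK_IMPL macros so the unit tests can substitute them at runtime. A rough sketch of how such a mock is typically used in a test (the replacement function and the test body are invented for this sketch; only is_local_addr itself comes from the diff):

/* Illustrative only: assumes the usual MOCK()/UNMOCK() test helpers. */
static int
mock_is_local_addr(const tor_addr_t *addr)
{
  (void)addr;
  return 1;          /* pretend every address is local */
}

static void
test_something(void *arg)
{
  (void)arg;
  MOCK(is_local_addr, mock_is_local_addr);
  /* ... exercise code that calls is_local_addr() ... */
  UNMOCK(is_local_addr);
}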
@ -3839,6 +3839,8 @@ connection_handle_write_impl(connection_t *conn, int force)
|
||||
tor_tls_get_n_raw_bytes(or_conn->tls, &n_read, &n_written);
|
||||
log_debug(LD_GENERAL, "After TLS write of %d: %ld read, %ld written",
|
||||
result, (long)n_read, (long)n_written);
|
||||
or_conn->bytes_xmitted += result;
|
||||
or_conn->bytes_xmitted_by_tls += n_written;
|
||||
/* So we notice bytes were written even on error */
|
||||
/* XXXX024 This cast is safe since we can never write INT_MAX bytes in a
|
||||
* single set of TLS operations. But it looks kinda ugly. If we refactor
|
||||
|
@@ -744,8 +744,17 @@ connection_ap_fail_onehop(const char *failed_digest,
/* we don't know the digest; have to compare addr:port */
tor_addr_t addr;
if (!build_state || !build_state->chosen_exit ||
!entry_conn->socks_request || !entry_conn->socks_request->address)
!entry_conn->socks_request) {
/* clang thinks that an array midway through a structure
* will never have a NULL address, under either:
* -Wpointer-bool-conversion if using !, or
* -Wtautological-pointer-compare if using == or !=
* It's probably right (unless pointers overflow and wrap),
* so we just skip this check
|| !entry_conn->socks_request->address
*/
continue;
}
if (tor_addr_parse(&addr, entry_conn->socks_request->address)<0 ||
!tor_addr_eq(&build_state->chosen_exit->addr, &addr) ||
build_state->chosen_exit->port != entry_conn->socks_request->port)
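The dropped check compares the address of an array member against NULL; since a char array embedded in a struct can never evaluate to a null pointer, recent clang versions flag the test as tautological, which is the warning this hunk works around. A minimal reproduction (the struct and field names are invented for the sketch):

/* Recent clang warns on both checks below without any extra flags;
 * the expressions are always false because "address" is an array member. */
struct request {
  int port;
  char address[64];   /* array member: its address can never be NULL */
};

int
request_is_incomplete(const struct request *req)
{
  if (req->address == 0)   /* -Wtautological-pointer-compare */
    return 1;
  if (!req->address)       /* -Wpointer-bool-conversion */
    return 1;
  return 0;
}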
@ -2461,7 +2470,7 @@ connection_exit_begin_conn(cell_t *cell, circuit_t *circ)
|
||||
|
||||
relay_header_unpack(&rh, cell->payload);
|
||||
if (rh.length > RELAY_PAYLOAD_SIZE)
|
||||
return -1;
|
||||
return -END_CIRC_REASON_TORPROTOCOL;
|
||||
|
||||
/* Note: we have to use relay_send_command_from_edge here, not
|
||||
* connection_edge_end or connection_edge_send_command, since those require
|
||||
@ -2479,7 +2488,7 @@ connection_exit_begin_conn(cell_t *cell, circuit_t *circ)
|
||||
|
||||
r = begin_cell_parse(cell, &bcell, &end_reason);
|
||||
if (r < -1) {
|
||||
return -1;
|
||||
return -END_CIRC_REASON_TORPROTOCOL;
|
||||
} else if (r == -1) {
|
||||
tor_free(bcell.address);
|
||||
relay_send_end_cell_from_edge(rh.stream_id, circ, end_reason, NULL);
|
||||
|
@ -38,6 +38,8 @@
|
||||
#include "router.h"
|
||||
#include "routerlist.h"
|
||||
#include "ext_orport.h"
|
||||
#include "scheduler.h"
|
||||
|
||||
#ifdef USE_BUFFEREVENTS
|
||||
#include <event2/bufferevent_ssl.h>
|
||||
#endif
|
||||
@ -574,48 +576,51 @@ connection_or_process_inbuf(or_connection_t *conn)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/** When adding cells to an OR connection's outbuf, keep adding until the
|
||||
* outbuf is at least this long, or we run out of cells. */
|
||||
#define OR_CONN_HIGHWATER (32*1024)
|
||||
|
||||
/** Add cells to an OR connection's outbuf whenever the outbuf's data length
|
||||
* drops below this size. */
|
||||
#define OR_CONN_LOWWATER (16*1024)
|
||||
|
||||
/** Called whenever we have flushed some data on an or_conn: add more data
|
||||
* from active circuits. */
|
||||
int
|
||||
connection_or_flushed_some(or_connection_t *conn)
|
||||
{
|
||||
size_t datalen, temp;
|
||||
ssize_t n, flushed;
|
||||
size_t cell_network_size = get_cell_network_size(conn->wide_circ_ids);
|
||||
size_t datalen;
|
||||
|
||||
/* The channel will want to update its estimated queue size */
|
||||
channel_update_xmit_queue_size(TLS_CHAN_TO_BASE(conn->chan));
|
||||
|
||||
/* If we're under the low water mark, add cells until we're just over the
|
||||
* high water mark. */
|
||||
datalen = connection_get_outbuf_len(TO_CONN(conn));
|
||||
if (datalen < OR_CONN_LOWWATER) {
|
||||
while ((conn->chan) && channel_tls_more_to_flush(conn->chan)) {
|
||||
/* Compute how many more cells we want at most */
|
||||
n = CEIL_DIV(OR_CONN_HIGHWATER - datalen, cell_network_size);
|
||||
/* Bail out if we don't want any more */
|
||||
if (n <= 0) break;
|
||||
/* We're still here; try to flush some more cells */
|
||||
flushed = channel_tls_flush_some_cells(conn->chan, n);
|
||||
/* Bail out if it says it didn't flush anything */
|
||||
if (flushed <= 0) break;
|
||||
/* How much in the outbuf now? */
|
||||
temp = connection_get_outbuf_len(TO_CONN(conn));
|
||||
/* Bail out if we didn't actually increase the outbuf size */
|
||||
if (temp <= datalen) break;
|
||||
/* Update datalen for the next iteration */
|
||||
datalen = temp;
|
||||
}
|
||||
/* Let the scheduler know */
|
||||
scheduler_channel_wants_writes(TLS_CHAN_TO_BASE(conn->chan));
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/** This is for channeltls.c to ask how many cells we could accept if
* they were available. */
ssize_t
connection_or_num_cells_writeable(or_connection_t *conn)
{
size_t datalen, cell_network_size;
ssize_t n = 0;

tor_assert(conn);

/*
* If we're under the high water mark, we're potentially
* writeable; note this is different from the calculation above
* used to trigger when to start writing after we've stopped.
*/
datalen = connection_get_outbuf_len(TO_CONN(conn));
if (datalen < OR_CONN_HIGHWATER) {
cell_network_size = get_cell_network_size(conn->wide_circ_ids);
n = CEIL_DIV(OR_CONN_HIGHWATER - datalen, cell_network_size);
}

return n;
}

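Both of the functions above turn "free space below the high-water mark" into a whole number of cells by rounding up. A small worked sketch of that arithmetic (the helper name and the concrete cell size are illustrative, not taken from this diff):

#define CEIL_DIV(a, b) (((a) + (b) - 1) / (b))

static long
cells_we_can_queue(size_t highwater, size_t datalen, size_t cell_network_size)
{
  if (datalen >= highwater)
    return 0;
  return (long)CEIL_DIV(highwater - datalen, cell_network_size);
}

/* Example: cells_we_can_queue(32*1024, 20*1024, 514) == 24, i.e. with 20 KiB
 * already buffered against a 32 KiB high-water mark and an assumed 514-byte
 * cell, up to 24 more cells may be queued. */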
/** Connection <b>conn</b> has finished writing and has no bytes left on
|
||||
* its outbuf.
|
||||
*
|
||||
@ -1169,10 +1174,10 @@ connection_or_notify_error(or_connection_t *conn,
|
||||
*
|
||||
* Return the launched conn, or NULL if it failed.
|
||||
*/
|
||||
or_connection_t *
|
||||
connection_or_connect(const tor_addr_t *_addr, uint16_t port,
|
||||
const char *id_digest,
|
||||
channel_tls_t *chan)
|
||||
|
||||
MOCK_IMPL(or_connection_t *,
|
||||
connection_or_connect, (const tor_addr_t *_addr, uint16_t port,
|
||||
const char *id_digest, channel_tls_t *chan))
|
||||
{
|
||||
or_connection_t *conn;
|
||||
const or_options_t *options = get_options();
|
||||
|
@ -24,6 +24,7 @@ void connection_or_set_bad_connections(const char *digest, int force);
|
||||
void connection_or_block_renegotiation(or_connection_t *conn);
|
||||
int connection_or_reached_eof(or_connection_t *conn);
|
||||
int connection_or_process_inbuf(or_connection_t *conn);
|
||||
ssize_t connection_or_num_cells_writeable(or_connection_t *conn);
|
||||
int connection_or_flushed_some(or_connection_t *conn);
|
||||
int connection_or_finished_flushing(or_connection_t *conn);
|
||||
int connection_or_finished_connecting(or_connection_t *conn);
|
||||
@ -36,9 +37,10 @@ void connection_or_connect_failed(or_connection_t *conn,
|
||||
int reason, const char *msg);
|
||||
void connection_or_notify_error(or_connection_t *conn,
|
||||
int reason, const char *msg);
|
||||
or_connection_t *connection_or_connect(const tor_addr_t *addr, uint16_t port,
|
||||
const char *id_digest,
|
||||
channel_tls_t *chan);
|
||||
MOCK_DECL(or_connection_t *,
|
||||
connection_or_connect,
|
||||
(const tor_addr_t *addr, uint16_t port,
|
||||
const char *id_digest, channel_tls_t *chan));
|
||||
|
||||
void connection_or_close_normally(or_connection_t *orconn, int flush);
|
||||
void connection_or_close_for_error(or_connection_t *orconn, int flush);
|
||||
|
@@ -1263,6 +1263,7 @@ static const struct signal_t signal_table[] = {
{ SIGTERM, "INT" },
{ SIGNEWNYM, "NEWNYM" },
{ SIGCLEARDNSCACHE, "CLEARDNSCACHE"},
{ SIGHEARTBEAT, "HEARTBEAT"},
{ 0, NULL },
};

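With "HEARTBEAT" added to the table above, a controller can request an immediate heartbeat log entry through the existing SIGNAL command. An illustrative control-port exchange (assuming an authenticated control connection; the reply shown is the generic success reply):

  SIGNAL HEARTBEAT
  250 OK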
@ -2015,7 +2016,7 @@ getinfo_helper_events(control_connection_t *control_conn,
|
||||
/* Note that status/ is not a catch-all for events; there's only supposed
|
||||
* to be a status GETINFO if there's a corresponding STATUS event. */
|
||||
if (!strcmp(question, "status/circuit-established")) {
|
||||
*answer = tor_strdup(can_complete_circuit ? "1" : "0");
|
||||
*answer = tor_strdup(have_completed_a_circuit() ? "1" : "0");
|
||||
} else if (!strcmp(question, "status/enough-dir-info")) {
|
||||
*answer = tor_strdup(router_have_minimum_dir_info() ? "1" : "0");
|
||||
} else if (!strcmp(question, "status/good-server-descriptor") ||
|
||||
@ -4454,6 +4455,9 @@ control_event_signal(uintptr_t signal)
|
||||
case SIGCLEARDNSCACHE:
|
||||
signal_string = "CLEARDNSCACHE";
|
||||
break;
|
||||
case SIGHEARTBEAT:
|
||||
signal_string = "HEARTBEAT";
|
||||
break;
|
||||
default:
|
||||
log_warn(LD_BUG, "Unrecognized signal %lu in control_event_signal",
|
||||
(unsigned long)signal);
|
||||
@ -5096,20 +5100,30 @@ control_event_hs_descriptor_requested(const rend_data_t *rend_query,
|
||||
void
|
||||
control_event_hs_descriptor_receive_end(const char *action,
|
||||
const rend_data_t *rend_query,
|
||||
const char *id_digest)
|
||||
const char *id_digest,
|
||||
const char *reason)
|
||||
{
|
||||
char *reason_field = NULL;
|
||||
|
||||
if (!action || !rend_query || !id_digest) {
|
||||
log_warn(LD_BUG, "Called with action==%p, rend_query==%p, "
|
||||
"id_digest==%p", action, rend_query, id_digest);
|
||||
return;
|
||||
}
|
||||
|
||||
if (reason) {
|
||||
tor_asprintf(&reason_field, " REASON=%s", reason);
|
||||
}
|
||||
|
||||
send_control_event(EVENT_HS_DESC, ALL_FORMATS,
|
||||
"650 HS_DESC %s %s %s %s\r\n",
|
||||
"650 HS_DESC %s %s %s %s%s\r\n",
|
||||
action,
|
||||
rend_query->onion_address,
|
||||
rend_auth_type_to_string(rend_query->auth_type),
|
||||
node_describe_longname_by_id(id_digest));
|
||||
node_describe_longname_by_id(id_digest),
|
||||
reason_field ? reason_field : "");
|
||||
|
||||
tor_free(reason_field);
|
||||
}
|
||||
|
||||
/** send HS_DESC RECEIVED event
|
||||
@ -5125,23 +5139,27 @@ control_event_hs_descriptor_received(const rend_data_t *rend_query,
|
||||
rend_query, id_digest);
|
||||
return;
|
||||
}
|
||||
control_event_hs_descriptor_receive_end("RECEIVED", rend_query, id_digest);
|
||||
control_event_hs_descriptor_receive_end("RECEIVED", rend_query,
|
||||
id_digest, NULL);
|
||||
}
|
||||
|
||||
/** send HS_DESC FAILED event
*
* called when request for hidden service descriptor returned failure.
/** Send HS_DESC event to inform controller that query <b>rend_query</b>
* failed to retrieve hidden service descriptor identified by
* <b>id_digest</b>. If <b>reason</b> is not NULL, add it to REASON=
* field.
*/
void
control_event_hs_descriptor_failed(const rend_data_t *rend_query,
const char *id_digest)
const char *id_digest,
const char *reason)
{
if (!rend_query || !id_digest) {
log_warn(LD_BUG, "Called with rend_query==%p, id_digest==%p",
rend_query, id_digest);
return;
}
control_event_hs_descriptor_receive_end("FAILED", rend_query, id_digest);
control_event_hs_descriptor_receive_end("FAILED", rend_query,
id_digest, reason);
}

/** Free any leftover allocated memory of the control.c subsystem. */
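With the reason argument threaded through, a failed descriptor fetch now emits an HS_DESC event carrying a REASON= field. An illustrative event line (the onion address, auth-type string, and hidden-service-directory longname are placeholders; the REASON values introduced in this merge are BAD_DESC, NOT_FOUND, QUERY_REJECTED, and UNEXPECTED):

  650 HS_DESC FAILED exampleonionaddr NO_AUTH $<hsdir fingerprint>~<nickname> REASON=NOT_FOUND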
@ -108,11 +108,13 @@ void control_event_hs_descriptor_requested(const rend_data_t *rend_query,
|
||||
const char *hs_dir);
|
||||
void control_event_hs_descriptor_receive_end(const char *action,
|
||||
const rend_data_t *rend_query,
|
||||
const char *hs_dir);
|
||||
const char *hs_dir,
|
||||
const char *reason);
|
||||
void control_event_hs_descriptor_received(const rend_data_t *rend_query,
|
||||
const char *hs_dir);
|
||||
void control_event_hs_descriptor_failed(const rend_data_t *rend_query,
|
||||
const char *hs_dir);
|
||||
const char *hs_dir,
|
||||
const char *reason);
|
||||
|
||||
void control_free_all(void);
|
||||
|
||||
|
@ -510,7 +510,7 @@ spawn_cpuworker(void)
|
||||
connection_t *conn;
|
||||
int err;
|
||||
|
||||
fdarray = tor_calloc(sizeof(tor_socket_t), 2);
|
||||
fdarray = tor_calloc(2, sizeof(tor_socket_t));
|
||||
if ((err = tor_socketpair(AF_UNIX, SOCK_STREAM, 0, fdarray)) < 0) {
|
||||
log_warn(LD_NET, "Couldn't construct socketpair for cpuworker: %s",
|
||||
tor_socket_strerror(-err));
|
||||
|
@ -2073,23 +2073,25 @@ connection_dir_client_reached_eof(dir_connection_t *conn)
|
||||
}
|
||||
|
||||
if (conn->base_.purpose == DIR_PURPOSE_FETCH_RENDDESC_V2) {
|
||||
#define SEND_HS_DESC_FAILED_EVENT() ( \
|
||||
#define SEND_HS_DESC_FAILED_EVENT(reason) ( \
|
||||
control_event_hs_descriptor_failed(conn->rend_data, \
|
||||
conn->identity_digest) )
|
||||
conn->identity_digest, \
|
||||
reason) )
|
||||
tor_assert(conn->rend_data);
|
||||
log_info(LD_REND,"Received rendezvous descriptor (size %d, status %d "
|
||||
"(%s))",
|
||||
(int)body_len, status_code, escaped(reason));
|
||||
switch (status_code) {
|
||||
case 200:
|
||||
switch (rend_cache_store_v2_desc_as_client(body, conn->rend_data)) {
|
||||
switch (rend_cache_store_v2_desc_as_client(body,
|
||||
conn->requested_resource, conn->rend_data)) {
|
||||
case RCS_BADDESC:
|
||||
case RCS_NOTDIR: /* Impossible */
|
||||
log_warn(LD_REND,"Fetching v2 rendezvous descriptor failed. "
|
||||
"Retrying at another directory.");
|
||||
/* We'll retry when connection_about_to_close_connection()
|
||||
* cleans this dir conn up. */
|
||||
SEND_HS_DESC_FAILED_EVENT();
|
||||
SEND_HS_DESC_FAILED_EVENT("BAD_DESC");
|
||||
break;
|
||||
case RCS_OKAY:
|
||||
default:
|
||||
@ -2108,14 +2110,14 @@ connection_dir_client_reached_eof(dir_connection_t *conn)
|
||||
* connection_about_to_close_connection() cleans this conn up. */
|
||||
log_info(LD_REND,"Fetching v2 rendezvous descriptor failed: "
|
||||
"Retrying at another directory.");
|
||||
SEND_HS_DESC_FAILED_EVENT();
|
||||
SEND_HS_DESC_FAILED_EVENT("NOT_FOUND");
|
||||
break;
|
||||
case 400:
|
||||
log_warn(LD_REND, "Fetching v2 rendezvous descriptor failed: "
|
||||
"http status 400 (%s). Dirserver didn't like our "
|
||||
"v2 rendezvous query? Retrying at another directory.",
|
||||
escaped(reason));
|
||||
SEND_HS_DESC_FAILED_EVENT();
|
||||
SEND_HS_DESC_FAILED_EVENT("QUERY_REJECTED");
|
||||
break;
|
||||
default:
|
||||
log_warn(LD_REND, "Fetching v2 rendezvous descriptor failed: "
|
||||
@ -2124,7 +2126,7 @@ connection_dir_client_reached_eof(dir_connection_t *conn)
|
||||
"Retrying at another directory.",
|
||||
status_code, escaped(reason), conn->base_.address,
|
||||
conn->base_.port);
|
||||
SEND_HS_DESC_FAILED_EVENT();
|
||||
SEND_HS_DESC_FAILED_EVENT("UNEXPECTED");
|
||||
break;
|
||||
}
|
||||
}
|
||||
@ -2208,8 +2210,10 @@ connection_dir_process_inbuf(dir_connection_t *conn)
|
||||
}
|
||||
|
||||
if (connection_get_inbuf_len(TO_CONN(conn)) > MAX_DIRECTORY_OBJECT_SIZE) {
|
||||
log_warn(LD_HTTP, "Too much data received from directory connection: "
|
||||
"denial of service attempt, or you need to upgrade?");
|
||||
log_warn(LD_HTTP,
|
||||
"Too much data received from directory connection (%s): "
|
||||
"denial of service attempt, or you need to upgrade?",
|
||||
conn->base_.address);
|
||||
connection_mark_for_close(TO_CONN(conn));
|
||||
return -1;
|
||||
}
|
||||
|
@ -512,7 +512,7 @@ dirserv_add_multiple_descriptors(const char *desc, uint8_t purpose,
|
||||
if (!n_parsed) {
|
||||
*msg = "No descriptors found in your POST.";
|
||||
if (WRA_WAS_ADDED(r))
|
||||
r = ROUTER_WAS_NOT_NEW;
|
||||
r = ROUTER_IS_ALREADY_KNOWN;
|
||||
} else {
|
||||
*msg = "(no message)";
|
||||
}
|
||||
@ -574,7 +574,7 @@ dirserv_add_descriptor(routerinfo_t *ri, const char **msg, const char *source)
|
||||
ri->cache_info.signed_descriptor_body,
|
||||
ri->cache_info.signed_descriptor_len, *msg);
|
||||
routerinfo_free(ri);
|
||||
return ROUTER_WAS_NOT_NEW;
|
||||
return ROUTER_IS_ALREADY_KNOWN;
|
||||
}
|
||||
|
||||
/* Make a copy of desc, since router_add_to_routerlist might free
|
||||
@ -646,7 +646,7 @@ dirserv_add_extrainfo(extrainfo_t *ei, const char **msg)
|
||||
|
||||
if ((r = routerinfo_incompatible_with_extrainfo(ri, ei, NULL, msg))) {
|
||||
extrainfo_free(ei);
|
||||
return r < 0 ? ROUTER_WAS_NOT_NEW : ROUTER_BAD_EI;
|
||||
return r < 0 ? ROUTER_IS_ALREADY_KNOWN : ROUTER_BAD_EI;
|
||||
}
|
||||
router_add_extrainfo_to_routerlist(ei, msg, 0, 0);
|
||||
return ROUTER_ADDED_SUCCESSFULLY;
|
||||
@ -1369,18 +1369,18 @@ dirserv_compute_performance_thresholds(routerlist_t *rl,
|
||||
* sort them and use that to compute thresholds. */
|
||||
n_active = n_active_nonexit = 0;
|
||||
/* Uptime for every active router. */
|
||||
uptimes = tor_calloc(sizeof(uint32_t), smartlist_len(rl->routers));
|
||||
uptimes = tor_calloc(smartlist_len(rl->routers), sizeof(uint32_t));
|
||||
/* Bandwidth for every active router. */
|
||||
bandwidths_kb = tor_calloc(sizeof(uint32_t), smartlist_len(rl->routers));
|
||||
bandwidths_kb = tor_calloc(smartlist_len(rl->routers), sizeof(uint32_t));
|
||||
/* Bandwidth for every active non-exit router. */
|
||||
bandwidths_excluding_exits_kb =
|
||||
tor_calloc(sizeof(uint32_t), smartlist_len(rl->routers));
|
||||
tor_calloc(smartlist_len(rl->routers), sizeof(uint32_t));
|
||||
/* Weighted mean time between failure for each active router. */
|
||||
mtbfs = tor_calloc(sizeof(double), smartlist_len(rl->routers));
|
||||
mtbfs = tor_calloc(smartlist_len(rl->routers), sizeof(double));
|
||||
/* Time-known for each active router. */
|
||||
tks = tor_calloc(sizeof(long), smartlist_len(rl->routers));
|
||||
tks = tor_calloc(smartlist_len(rl->routers), sizeof(long));
|
||||
/* Weighted fractional uptime for each active router. */
|
||||
wfus = tor_calloc(sizeof(double), smartlist_len(rl->routers));
|
||||
wfus = tor_calloc(smartlist_len(rl->routers), sizeof(double));
|
||||
|
||||
nodelist_assert_ok();
|
||||
|
||||
|
@ -611,7 +611,7 @@ dirvote_compute_params(smartlist_t *votes, int method, int total_authorities)
|
||||
between INT32_MIN and INT32_MAX inclusive. This should be guaranteed by
|
||||
the parsing code. */
|
||||
|
||||
vals = tor_calloc(sizeof(int), n_votes);
|
||||
vals = tor_calloc(n_votes, sizeof(int));
|
||||
|
||||
SMARTLIST_FOREACH_BEGIN(votes, networkstatus_t *, v) {
|
||||
if (!v->net_params)
|
||||
@ -1258,10 +1258,10 @@ networkstatus_compute_consensus(smartlist_t *votes,
|
||||
smartlist_t *chosen_flags = smartlist_new();
|
||||
smartlist_t *versions = smartlist_new();
|
||||
smartlist_t *exitsummaries = smartlist_new();
|
||||
uint32_t *bandwidths_kb = tor_calloc(sizeof(uint32_t),
|
||||
smartlist_len(votes));
|
||||
uint32_t *measured_bws_kb = tor_calloc(sizeof(uint32_t),
|
||||
smartlist_len(votes));
|
||||
uint32_t *bandwidths_kb = tor_calloc(smartlist_len(votes),
|
||||
sizeof(uint32_t));
|
||||
uint32_t *measured_bws_kb = tor_calloc(smartlist_len(votes),
|
||||
sizeof(uint32_t));
|
||||
int num_bandwidths;
|
||||
int num_mbws;
|
||||
|
||||
@ -1281,13 +1281,13 @@ networkstatus_compute_consensus(smartlist_t *votes,
|
||||
memset(conflict, 0, sizeof(conflict));
|
||||
memset(unknown, 0xff, sizeof(conflict));
|
||||
|
||||
index = tor_calloc(sizeof(int), smartlist_len(votes));
|
||||
size = tor_calloc(sizeof(int), smartlist_len(votes));
|
||||
n_voter_flags = tor_calloc(sizeof(int), smartlist_len(votes));
|
||||
n_flag_voters = tor_calloc(sizeof(int), smartlist_len(flags));
|
||||
flag_map = tor_calloc(sizeof(int *), smartlist_len(votes));
|
||||
named_flag = tor_calloc(sizeof(int), smartlist_len(votes));
|
||||
unnamed_flag = tor_calloc(sizeof(int), smartlist_len(votes));
|
||||
index = tor_calloc(smartlist_len(votes), sizeof(int));
|
||||
size = tor_calloc(smartlist_len(votes), sizeof(int));
|
||||
n_voter_flags = tor_calloc(smartlist_len(votes), sizeof(int));
|
||||
n_flag_voters = tor_calloc(smartlist_len(flags), sizeof(int));
|
||||
flag_map = tor_calloc(smartlist_len(votes), sizeof(int *));
|
||||
named_flag = tor_calloc(smartlist_len(votes), sizeof(int));
|
||||
unnamed_flag = tor_calloc(smartlist_len(votes), sizeof(int));
|
||||
for (i = 0; i < smartlist_len(votes); ++i)
|
||||
unnamed_flag[i] = named_flag[i] = -1;
|
||||
|
||||
@ -1298,8 +1298,8 @@ networkstatus_compute_consensus(smartlist_t *votes,
|
||||
* that they're actually set before doing U64_LITERAL(1) << index with
|
||||
* them.*/
|
||||
SMARTLIST_FOREACH_BEGIN(votes, networkstatus_t *, v) {
|
||||
flag_map[v_sl_idx] = tor_calloc(sizeof(int),
|
||||
smartlist_len(v->known_flags));
|
||||
flag_map[v_sl_idx] = tor_calloc(smartlist_len(v->known_flags),
|
||||
sizeof(int));
|
||||
if (smartlist_len(v->known_flags) > MAX_KNOWN_FLAGS_IN_VOTE) {
|
||||
log_warn(LD_BUG, "Somehow, a vote has %d entries in known_flags",
|
||||
smartlist_len(v->known_flags));
|
||||
@@ -1379,7 +1379,7 @@ networkstatus_compute_consensus(smartlist_t *votes,
);

/* Now go through all the votes */
flag_counts = tor_calloc(sizeof(int), smartlist_len(flags));
flag_counts = tor_calloc(smartlist_len(flags), sizeof(int));
while (1) {
vote_routerstatus_t *rs;
routerstatus_t rs_out;
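Many hunks in this merge swap tor_calloc's arguments into the conventional calloc order: element count first, element size second. The behavior is unchanged, but the (nmemb, size) order matches calloc(3) and keeps the count in the slot an overflow-checking wrapper expects. A hedged sketch of such a wrapper (not Tor's actual implementation, which lives in its util code and may differ):

#include <stdlib.h>
#include <stdint.h>

/* Sketch of a calloc-style wrapper with an explicit overflow check. */
static void *
xcalloc(size_t nmemb, size_t size)
{
  if (size && nmemb > SIZE_MAX / size)
    abort();               /* nmemb * size would overflow */
  return calloc(nmemb, size);
}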
@ -1919,8 +1919,8 @@ bridge_resolve_conflicts(const tor_addr_t *addr, uint16_t port,
|
||||
|
||||
/** Return True if we have a bridge that uses a transport with name
|
||||
* <b>transport_name</b>. */
|
||||
int
|
||||
transport_is_needed(const char *transport_name)
|
||||
MOCK_IMPL(int,
|
||||
transport_is_needed, (const char *transport_name))
|
||||
{
|
||||
if (!bridge_list)
|
||||
return 0;
|
||||
|
@ -154,7 +154,7 @@ struct transport_t;
|
||||
int get_transport_by_bridge_addrport(const tor_addr_t *addr, uint16_t port,
|
||||
const struct transport_t **transport);
|
||||
|
||||
int transport_is_needed(const char *transport_name);
|
||||
MOCK_DECL(int, transport_is_needed, (const char *transport_name));
|
||||
int validate_pluggable_transports_config(void);
|
||||
|
||||
double pathbias_get_close_success_count(entry_guard_t *guard);
|
||||
|
@ -963,7 +963,7 @@ geoip_get_dirreq_history(dirreq_type_t type)
|
||||
/* We may have rounded 'completed' up. Here we want to use the
|
||||
* real value. */
|
||||
complete = smartlist_len(dirreq_completed);
|
||||
dltimes = tor_calloc(sizeof(uint32_t), complete);
|
||||
dltimes = tor_calloc(complete, sizeof(uint32_t));
|
||||
SMARTLIST_FOREACH_BEGIN(dirreq_completed, dirreq_map_entry_t *, ent) {
|
||||
uint32_t bytes_per_second;
|
||||
uint32_t time_diff = (uint32_t) tv_mdiff(&ent->request_time,
|
||||
@ -1033,7 +1033,7 @@ geoip_get_client_history(geoip_client_action_t action,
|
||||
if (!geoip_is_loaded(AF_INET) && !geoip_is_loaded(AF_INET6))
|
||||
return -1;
|
||||
|
||||
counts = tor_calloc(sizeof(unsigned), n_countries);
|
||||
counts = tor_calloc(n_countries, sizeof(unsigned));
|
||||
HT_FOREACH(ent, clientmap, &client_history) {
|
||||
int country;
|
||||
if ((*ent)->action != (int)action)
|
||||
|
@ -74,6 +74,7 @@ LIBTOR_A_SOURCES = \
|
||||
src/or/routerlist.c \
|
||||
src/or/routerparse.c \
|
||||
src/or/routerset.c \
|
||||
src/or/scheduler.c \
|
||||
src/or/statefile.c \
|
||||
src/or/status.c \
|
||||
src/or/onion_ntor.c \
|
||||
@ -179,6 +180,7 @@ ORHEADERS = \
|
||||
src/or/routerlist.h \
|
||||
src/or/routerset.h \
|
||||
src/or/routerparse.h \
|
||||
src/or/scheduler.h \
|
||||
src/or/statefile.h \
|
||||
src/or/status.h
|
||||
|
||||
|
@ -53,6 +53,7 @@
|
||||
#include "router.h"
|
||||
#include "routerlist.h"
|
||||
#include "routerparse.h"
|
||||
#include "scheduler.h"
|
||||
#include "statefile.h"
|
||||
#include "status.h"
|
||||
#include "util_process.h"
|
||||
@ -150,7 +151,7 @@ static int called_loop_once = 0;
|
||||
* any longer (a big time jump happened, when we notice our directory is
|
||||
* heinously out-of-date, etc.
|
||||
*/
|
||||
int can_complete_circuit=0;
|
||||
static int can_complete_circuits = 0;
|
||||
|
||||
/** How often do we check for router descriptors that we should download
|
||||
* when we have too little directory info? */
|
||||
@ -171,11 +172,11 @@ int quiet_level = 0;
|
||||
/********* END VARIABLES ************/
|
||||
|
||||
/****************************************************************************
|
||||
*
|
||||
* This section contains accessors and other methods on the connection_array
|
||||
* variables (which are global within this file and unavailable outside it).
|
||||
*
|
||||
****************************************************************************/
|
||||
*
|
||||
* This section contains accessors and other methods on the connection_array
|
||||
* variables (which are global within this file and unavailable outside it).
|
||||
*
|
||||
****************************************************************************/
|
||||
|
||||
#if 0 && defined(USE_BUFFEREVENTS)
|
||||
static void
|
||||
@ -223,6 +224,31 @@ set_buffer_lengths_to_zero(tor_socket_t s)
|
||||
}
|
||||
#endif
|
||||
|
||||
/** Return 1 if we have successfully built a circuit, and nothing has changed
|
||||
* to make us think that maybe we can't.
|
||||
*/
|
||||
int
|
||||
have_completed_a_circuit(void)
|
||||
{
|
||||
return can_complete_circuits;
|
||||
}
|
||||
|
||||
/** Note that we have successfully built a circuit, so that reachability
|
||||
* testing and introduction points and so on may be attempted. */
|
||||
void
|
||||
note_that_we_completed_a_circuit(void)
|
||||
{
|
||||
can_complete_circuits = 1;
|
||||
}
|
||||
|
||||
/** Note that something has happened (like a clock jump, or DisableNetwork) to
|
||||
* make us think that maybe we can't complete circuits. */
|
||||
void
|
||||
note_that_we_maybe_cant_complete_circuits(void)
|
||||
{
|
||||
can_complete_circuits = 0;
|
||||
}
|
||||
|
||||
/** Add <b>conn</b> to the array of connections that we can poll on. The
|
||||
* connection's socket must be set; the connection starts out
|
||||
* non-reading and non-writing.
|
||||
@ -999,7 +1025,7 @@ directory_info_has_arrived(time_t now, int from_cache)
|
||||
}
|
||||
|
||||
if (server_mode(options) && !net_is_disabled() && !from_cache &&
|
||||
(can_complete_circuit || !any_predicted_circuits(now)))
|
||||
(have_completed_a_circuit() || !any_predicted_circuits(now)))
|
||||
consider_testing_reachability(1, 1);
|
||||
}
|
||||
|
||||
@ -1436,7 +1462,7 @@ run_scheduled_events(time_t now)
|
||||
/* also, check religiously for reachability, if it's within the first
|
||||
* 20 minutes of our uptime. */
|
||||
if (is_server &&
|
||||
(can_complete_circuit || !any_predicted_circuits(now)) &&
|
||||
(have_completed_a_circuit() || !any_predicted_circuits(now)) &&
|
||||
!we_are_hibernating()) {
|
||||
if (stats_n_seconds_working < TIMEOUT_UNTIL_UNREACHABILITY_COMPLAINT) {
|
||||
consider_testing_reachability(1, dirport_reachability_count==0);
|
||||
@ -1549,7 +1575,7 @@ run_scheduled_events(time_t now)
|
||||
circuit_close_all_marked();
|
||||
|
||||
/* 7. And upload service descriptors if necessary. */
|
||||
if (can_complete_circuit && !net_is_disabled()) {
|
||||
if (have_completed_a_circuit() && !net_is_disabled()) {
|
||||
rend_consider_services_upload(now);
|
||||
rend_consider_descriptor_republication();
|
||||
}
|
||||
@ -1680,7 +1706,7 @@ second_elapsed_callback(periodic_timer_t *timer, void *arg)
|
||||
if (server_mode(options) &&
|
||||
!net_is_disabled() &&
|
||||
seconds_elapsed > 0 &&
|
||||
can_complete_circuit &&
|
||||
have_completed_a_circuit() &&
|
||||
stats_n_seconds_working / TIMEOUT_UNTIL_UNREACHABILITY_COMPLAINT !=
|
||||
(stats_n_seconds_working+seconds_elapsed) /
|
||||
TIMEOUT_UNTIL_UNREACHABILITY_COMPLAINT) {
|
||||
@ -2137,6 +2163,10 @@ process_signal(uintptr_t sig)
|
||||
addressmap_clear_transient();
|
||||
control_event_signal(sig);
|
||||
break;
|
||||
case SIGHEARTBEAT:
|
||||
log_heartbeat(time(NULL));
|
||||
control_event_signal(sig);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
@ -2553,6 +2583,7 @@ tor_free_all(int postfork)
|
||||
channel_tls_free_all();
|
||||
channel_free_all();
|
||||
connection_free_all();
|
||||
scheduler_free_all();
|
||||
buf_shrink_freelists(1);
|
||||
memarea_clear_freelist();
|
||||
nodelist_free_all();
|
||||
|
@ -12,7 +12,9 @@
|
||||
#ifndef TOR_MAIN_H
|
||||
#define TOR_MAIN_H
|
||||
|
||||
extern int can_complete_circuit;
|
||||
int have_completed_a_circuit(void);
|
||||
void note_that_we_completed_a_circuit(void);
|
||||
void note_that_we_maybe_cant_complete_circuits(void);
|
||||
|
||||
int connection_add_impl(connection_t *conn, int is_connecting);
|
||||
#define connection_add(conn) connection_add_impl((conn), 0)
|
||||
|
@ -163,19 +163,18 @@ microdescs_add_to_cache(microdesc_cache_t *cache,
|
||||
md->last_listed = listed_at);
|
||||
}
|
||||
if (requested_digests256) {
|
||||
digestmap_t *requested; /* XXXX actually we should just use a
|
||||
digest256map */
|
||||
requested = digestmap_new();
|
||||
digest256map_t *requested;
|
||||
requested = digest256map_new();
|
||||
/* Set requested[d] to DIGEST_REQUESTED for every md we requested. */
|
||||
SMARTLIST_FOREACH(requested_digests256, const char *, cp,
|
||||
digestmap_set(requested, cp, DIGEST_REQUESTED));
|
||||
SMARTLIST_FOREACH(requested_digests256, const uint8_t *, cp,
|
||||
digest256map_set(requested, cp, DIGEST_REQUESTED));
|
||||
/* Set requested[d] to DIGEST_INVALID for every md we requested which we
|
||||
* will never be able to parse. Remove the ones we didn't request from
|
||||
* invalid_digests.
|
||||
*/
|
||||
SMARTLIST_FOREACH_BEGIN(invalid_digests, char *, cp) {
|
||||
if (digestmap_get(requested, cp)) {
|
||||
digestmap_set(requested, cp, DIGEST_INVALID);
|
||||
SMARTLIST_FOREACH_BEGIN(invalid_digests, uint8_t *, cp) {
|
||||
if (digest256map_get(requested, cp)) {
|
||||
digest256map_set(requested, cp, DIGEST_INVALID);
|
||||
} else {
|
||||
tor_free(cp);
|
||||
SMARTLIST_DEL_CURRENT(invalid_digests, cp);
|
||||
@ -185,8 +184,9 @@ microdescs_add_to_cache(microdesc_cache_t *cache,
|
||||
* ones we never requested from the 'descriptors' smartlist.
|
||||
*/
|
||||
SMARTLIST_FOREACH_BEGIN(descriptors, microdesc_t *, md) {
|
||||
if (digestmap_get(requested, md->digest)) {
|
||||
digestmap_set(requested, md->digest, DIGEST_RECEIVED);
|
||||
if (digest256map_get(requested, (const uint8_t*)md->digest)) {
|
||||
digest256map_set(requested, (const uint8_t*)md->digest,
|
||||
DIGEST_RECEIVED);
|
||||
} else {
|
||||
log_fn(LOG_PROTOCOL_WARN, LD_DIR, "Received non-requested microdesc");
|
||||
microdesc_free(md);
|
||||
@ -195,14 +195,14 @@ microdescs_add_to_cache(microdesc_cache_t *cache,
|
||||
} SMARTLIST_FOREACH_END(md);
|
||||
/* Remove the ones we got or the invalid ones from requested_digests256.
|
||||
*/
|
||||
SMARTLIST_FOREACH_BEGIN(requested_digests256, char *, cp) {
|
||||
void *status = digestmap_get(requested, cp);
|
||||
SMARTLIST_FOREACH_BEGIN(requested_digests256, uint8_t *, cp) {
|
||||
void *status = digest256map_get(requested, cp);
|
||||
if (status == DIGEST_RECEIVED || status == DIGEST_INVALID) {
|
||||
tor_free(cp);
|
||||
SMARTLIST_DEL_CURRENT(requested_digests256, cp);
|
||||
}
|
||||
} SMARTLIST_FOREACH_END(cp);
|
||||
digestmap_free(requested, NULL);
|
||||
digest256map_free(requested, NULL);
|
||||
}
|
||||
|
||||
/* For every requested microdescriptor that was unparseable, mark it
|
||||
@ -794,7 +794,7 @@ microdesc_average_size(microdesc_cache_t *cache)
|
||||
* smartlist. Omit all microdescriptors whose digest appear in <b>skip</b>. */
|
||||
smartlist_t *
|
||||
microdesc_list_missing_digest256(networkstatus_t *ns, microdesc_cache_t *cache,
|
||||
int downloadable_only, digestmap_t *skip)
|
||||
int downloadable_only, digest256map_t *skip)
|
||||
{
|
||||
smartlist_t *result = smartlist_new();
|
||||
time_t now = time(NULL);
|
||||
@ -806,7 +806,7 @@ microdesc_list_missing_digest256(networkstatus_t *ns, microdesc_cache_t *cache,
|
||||
!download_status_is_ready(&rs->dl_status, now,
|
||||
get_options()->TestingMicrodescMaxDownloadTries))
|
||||
continue;
|
||||
if (skip && digestmap_get(skip, rs->descriptor_digest))
|
||||
if (skip && digest256map_get(skip, (const uint8_t*)rs->descriptor_digest))
|
||||
continue;
|
||||
if (tor_mem_is_zero(rs->descriptor_digest, DIGEST256_LEN))
|
||||
continue;
|
||||
@@ -831,7 +831,7 @@ update_microdesc_downloads(time_t now)
const or_options_t *options = get_options();
networkstatus_t *consensus;
smartlist_t *missing;
digestmap_t *pending;
digest256map_t *pending;

if (should_delay_dir_fetches(options, NULL))
return;
@@ -845,14 +845,14 @@ update_microdesc_downloads(time_t now)
if (!we_fetch_microdescriptors(options))
return;

pending = digestmap_new();
pending = digest256map_new();
list_pending_microdesc_downloads(pending);

missing = microdesc_list_missing_digest256(consensus,
get_microdesc_cache(),
1,
pending);
digestmap_free(pending, NULL);
digest256map_free(pending, NULL);

launch_descriptor_downloads(DIR_PURPOSE_FETCH_MICRODESC,
missing, NULL, now);

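The bug13399 changes switch the pending/requested bookkeeping from digestmap_t (keyed on 160-bit digests) to digest256map_t, keyed on the full 256-bit SHA-256 microdescriptor digest. A minimal usage sketch of the 256-bit map calls as they appear in this diff (the sentinel value and helper function are invented for illustration):

/* Sketch against the digest256map API used above. */
#define PENDING ((void*)1)   /* illustrative sentinel value */

static void
track_requested_md(const uint8_t md_digest256[DIGEST256_LEN])
{
  digest256map_t *requested = digest256map_new();
  digest256map_set(requested, md_digest256, PENDING);
  if (digest256map_get(requested, md_digest256) == PENDING) {
    /* The key is the full 32-byte SHA-256 digest, so two microdescriptors
     * that agree on their first 160 bits can no longer collide. */
  }
  digest256map_free(requested, NULL);   /* NULL: values are not freed */
}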
@ -37,7 +37,7 @@ size_t microdesc_average_size(microdesc_cache_t *cache);
|
||||
smartlist_t *microdesc_list_missing_digest256(networkstatus_t *ns,
|
||||
microdesc_cache_t *cache,
|
||||
int downloadable_only,
|
||||
digestmap_t *skip);
|
||||
digest256map_t *skip);
|
||||
|
||||
void microdesc_free_(microdesc_t *md, const char *fname, int line);
|
||||
#define microdesc_free(md) \
|
||||
|
@ -1123,7 +1123,7 @@ networkstatus_copy_old_consensus_info(networkstatus_t *new_c,
|
||||
rs_new->last_dir_503_at = rs_old->last_dir_503_at;
|
||||
|
||||
if (tor_memeq(rs_old->descriptor_digest, rs_new->descriptor_digest,
|
||||
DIGEST_LEN)) { /* XXXX Change this to digest256_len */
|
||||
DIGEST256_LEN)) {
|
||||
/* And the same descriptor too! */
|
||||
memcpy(&rs_new->dl_status, &rs_old->dl_status,sizeof(download_status_t));
|
||||
}
|
||||
|
@ -1562,7 +1562,7 @@ update_router_have_minimum_dir_info(void)
|
||||
* is back up and usable, and b) disable some activities that Tor
|
||||
* should only do while circuits are working, like reachability tests
|
||||
* and fetching bridge descriptors only over circuits. */
|
||||
can_complete_circuit = 0;
|
||||
note_that_we_maybe_cant_complete_circuits();
|
||||
|
||||
control_event_client_status(LOG_NOTICE, "NOT_ENOUGH_DIR_INFO");
|
||||
}
|
||||
|
57 src/or/or.h
@ -119,6 +119,7 @@
|
||||
* conflict with system-defined signals. */
|
||||
#define SIGNEWNYM 129
|
||||
#define SIGCLEARDNSCACHE 130
|
||||
#define SIGHEARTBEAT 131
|
||||
|
||||
#if (SIZEOF_CELL_T != 0)
|
||||
/* On Irix, stdlib.h defines a cell_t type, so we need to make sure
|
||||
@ -676,6 +677,10 @@ typedef enum {
|
||||
|
||||
/* Negative reasons are internal: we never send them in a DESTROY or TRUNCATE
|
||||
* call; they only go to the controller for tracking */
|
||||
|
||||
/* Closing introduction point that were opened in parallel. */
|
||||
#define END_CIRC_REASON_IP_NOW_REDUNDANT -4
|
||||
|
||||
/** Our post-timeout circuit time measurement period expired.
|
||||
* We must give up now */
|
||||
#define END_CIRC_REASON_MEASUREMENT_EXPIRED -3
|
||||
@ -1426,6 +1431,18 @@ typedef struct or_handshake_state_t {
|
||||
|
||||
/** Length of Extended ORPort connection identifier. */
|
||||
#define EXT_OR_CONN_ID_LEN DIGEST_LEN /* 20 */
|
||||
/*
|
||||
* OR_CONN_HIGHWATER and OR_CONN_LOWWATER moved from connection_or.c so
|
||||
* channeltls.c can see them too.
|
||||
*/
|
||||
|
||||
/** When adding cells to an OR connection's outbuf, keep adding until the
|
||||
* outbuf is at least this long, or we run out of cells. */
|
||||
#define OR_CONN_HIGHWATER (32*1024)
|
||||
|
||||
/** Add cells to an OR connection's outbuf whenever the outbuf's data length
|
||||
* drops below this size. */
|
||||
#define OR_CONN_LOWWATER (16*1024)
|
||||
|
||||
/** Subtype of connection_t for an "OR connection" -- that is, one that speaks
|
||||
* cells over TLS. */
|
||||
@ -1517,6 +1534,12 @@ typedef struct or_connection_t {
|
||||
/** Last emptied write token bucket in msec since midnight; only used if
|
||||
* TB_EMPTY events are enabled. */
|
||||
uint32_t write_emptied_time;
|
||||
|
||||
/*
|
||||
* Count the number of bytes flushed out on this orconn, and the number of
|
||||
* bytes TLS actually sent - used for overhead estimation for scheduling.
|
||||
*/
|
||||
uint64_t bytes_xmitted, bytes_xmitted_by_tls;
|
||||
} or_connection_t;
|
||||
|
||||
/** Subtype of connection_t for an "edge connection" -- that is, an entry (ap)
|
||||
@ -4225,8 +4248,18 @@ typedef struct {
|
||||
/** How long (seconds) do we keep a guard before picking a new one? */
|
||||
int GuardLifetime;
|
||||
|
||||
/** Should we send the timestamps that pre-023 hidden services want? */
|
||||
int Support022HiddenServices;
|
||||
/** Low-water mark for global scheduler - start sending when estimated
|
||||
* queued size falls below this threshold.
|
||||
*/
|
||||
uint64_t SchedulerLowWaterMark__;
|
||||
/** High-water mark for global scheduler - stop sending when estimated
|
||||
* queued size exceeds this threshold.
|
||||
*/
|
||||
uint64_t SchedulerHighWaterMark__;
|
||||
/** Flush size for global scheduler - flush this many cells at a time
|
||||
* when sending.
|
||||
*/
|
||||
int SchedulerMaxFlushCells__;
|
||||
} or_options_t;
|
||||
|
||||
/** Persistent state for an onion router, as saved to disk. */
|
||||
@@ -4998,15 +5031,31 @@ typedef enum {

/** Return value for router_add_to_routerlist() and dirserv_add_descriptor() */
typedef enum was_router_added_t {
/* Router was added successfully. */
ROUTER_ADDED_SUCCESSFULLY = 1,
/* Router descriptor was added with warnings to submitter. */
ROUTER_ADDED_NOTIFY_GENERATOR = 0,
/* Extrainfo document was rejected because no corresponding router
* descriptor was found OR router descriptor was rejected because
* it was incompatible with its extrainfo document. */
ROUTER_BAD_EI = -1,
ROUTER_WAS_NOT_NEW = -2,
/* Router descriptor was rejected because it is already known. */
ROUTER_IS_ALREADY_KNOWN = -2,
/* General purpose router was rejected, because it was not listed
* in consensus. */
ROUTER_NOT_IN_CONSENSUS = -3,
/* Router was neither in directory consensus nor in any of
* networkstatus documents. Caching it to access later.
* (Applies to fetched descriptors only.) */
ROUTER_NOT_IN_CONSENSUS_OR_NETWORKSTATUS = -4,
/* Router was rejected by directory authority. */
ROUTER_AUTHDIR_REJECTS = -5,
/* Bridge descriptor was rejected because such bridge was not one
* of the bridges we have listed in our configuration. */
ROUTER_WAS_NOT_WANTED = -6,
ROUTER_WAS_TOO_OLD = -7,
/* Router descriptor was rejected because it was older than
* OLD_ROUTER_DESC_MAX_AGE. */
ROUTER_WAS_TOO_OLD = -7, /* note contrast with 'NOT_NEW' */
} was_router_added_t;

/********************************* routerparse.c ************************/

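The enum above is consumed through the WRA_* helper macros seen earlier in this diff (for example, dirserv.c tests WRA_WAS_ADDED(r)). A small hedged sketch of how a caller might branch on the result (the helper function and log messages are invented; WRA_WAS_ADDED, LD_DIR, and the enum values come from the diff):

static void
report_add_result(was_router_added_t r)
{
  if (WRA_WAS_ADDED(r)) {
    log_info(LD_DIR, "Descriptor accepted.");
  } else if (r == ROUTER_IS_ALREADY_KNOWN) {
    log_info(LD_DIR, "Duplicate descriptor; nothing to do.");
  } else if (r == ROUTER_WAS_TOO_OLD) {
    log_info(LD_DIR, "Descriptor older than OLD_ROUTER_DESC_MAX_AGE; dropped.");
  } else {
    log_info(LD_DIR, "Descriptor rejected (code %d).", (int)r);
  }
}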
@ -39,6 +39,7 @@
|
||||
#include "router.h"
|
||||
#include "routerlist.h"
|
||||
#include "routerparse.h"
|
||||
#include "scheduler.h"
|
||||
|
||||
static edge_connection_t *relay_lookup_conn(circuit_t *circ, cell_t *cell,
|
||||
cell_direction_t cell_direction,
|
||||
@ -2591,8 +2592,8 @@ packed_cell_get_circid(const packed_cell_t *cell, int wide_circ_ids)
|
||||
* queue of the first active circuit on <b>chan</b>, and write them to
|
||||
* <b>chan</b>->outbuf. Return the number of cells written. Advance
|
||||
* the active circuit pointer to the next active circuit in the ring. */
|
||||
int
|
||||
channel_flush_from_first_active_circuit(channel_t *chan, int max)
|
||||
MOCK_IMPL(int,
|
||||
channel_flush_from_first_active_circuit, (channel_t *chan, int max))
|
||||
{
|
||||
circuitmux_t *cmux = NULL;
|
||||
int n_flushed = 0;
|
||||
@ -2868,14 +2869,8 @@ append_cell_to_circuit_queue(circuit_t *circ, channel_t *chan,
|
||||
log_debug(LD_GENERAL, "Made a circuit active.");
|
||||
}
|
||||
|
||||
if (!channel_has_queued_writes(chan)) {
|
||||
/* There is no data at all waiting to be sent on the outbuf. Add a
|
||||
* cell, so that we can notice when it gets flushed, flushed_some can
|
||||
* get called, and we can start putting more data onto the buffer then.
|
||||
*/
|
||||
log_debug(LD_GENERAL, "Primed a buffer.");
|
||||
channel_flush_from_first_active_circuit(chan, 1);
|
||||
}
|
||||
/* New way: mark this as having waiting cells for the scheduler */
|
||||
scheduler_channel_has_waiting_cells(chan);
|
||||
}
|
||||
|
||||
/** Append an encoded value of <b>addr</b> to <b>payload_out</b>, which must
|
||||
|
@ -64,7 +64,8 @@ void append_cell_to_circuit_queue(circuit_t *circ, channel_t *chan,
|
||||
cell_t *cell, cell_direction_t direction,
|
||||
streamid_t fromstream);
|
||||
void channel_unlink_all_circuits(channel_t *chan, smartlist_t *detached_out);
|
||||
int channel_flush_from_first_active_circuit(channel_t *chan, int max);
|
||||
MOCK_DECL(int, channel_flush_from_first_active_circuit,
|
||||
(channel_t *chan, int max));
|
||||
void assert_circuit_mux_okay(channel_t *chan);
|
||||
void update_circuit_on_cmux_(circuit_t *circ, cell_direction_t direction,
|
||||
const char *file, int lineno);
|
||||
|
@ -130,16 +130,6 @@ rend_client_reextend_intro_circuit(origin_circuit_t *circ)
|
||||
return result;
|
||||
}
|
||||
|
||||
/** Return true iff we should send timestamps in our INTRODUCE1 cells */
|
||||
static int
|
||||
rend_client_should_send_timestamp(void)
|
||||
{
|
||||
if (get_options()->Support022HiddenServices >= 0)
|
||||
return get_options()->Support022HiddenServices;
|
||||
|
||||
return networkstatus_get_param(NULL, "Support022HiddenServices", 1, 0, 1);
|
||||
}
|
||||
|
||||
/** Called when we're trying to connect an ap conn; sends an INTRODUCE1 cell
|
||||
* down introcirc if possible.
|
||||
*/
|
||||
@ -251,14 +241,8 @@ rend_client_send_introduction(origin_circuit_t *introcirc,
|
||||
REND_DESC_COOKIE_LEN);
|
||||
v3_shift += 2+REND_DESC_COOKIE_LEN;
|
||||
}
|
||||
if (rend_client_should_send_timestamp()) {
|
||||
uint32_t now = (uint32_t)time(NULL);
|
||||
now += 300;
|
||||
now -= now % 600;
|
||||
set_uint32(tmp+v3_shift+1, htonl(now));
|
||||
} else {
|
||||
set_uint32(tmp+v3_shift+1, 0);
|
||||
}
|
||||
/* Once this held a timestamp. */
|
||||
set_uint32(tmp+v3_shift+1, 0);
|
||||
v3_shift += 4;
|
||||
} /* if version 2 only write version number */
|
||||
else if (entry->parsed->protocols & (1<<2)) {
|
||||
@ -370,8 +354,7 @@ rend_client_rendcirc_has_opened(origin_circuit_t *circ)
|
||||
}
|
||||
|
||||
/**
|
||||
* Called to close other intro circuits we launched in parallel
|
||||
* due to timeout.
|
||||
* Called to close other intro circuits we launched in parallel.
|
||||
*/
|
||||
static void
|
||||
rend_client_close_other_intros(const char *onion_address)
|
||||
@ -388,7 +371,7 @@ rend_client_close_other_intros(const char *onion_address)
|
||||
log_info(LD_REND|LD_CIRC, "Closing introduction circuit %d that we "
|
||||
"built in parallel (Purpose %d).", oc->global_identifier,
|
||||
c->purpose);
|
||||
circuit_mark_for_close(c, END_CIRC_REASON_TIMEOUT);
|
||||
circuit_mark_for_close(c, END_CIRC_REASON_IP_NOW_REDUNDANT);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -1034,10 +1034,14 @@ rend_cache_store_v2_desc_as_dir(const char *desc)
|
||||
* If the descriptor's service ID does not match
|
||||
* <b>rend_query</b>-\>onion_address, reject it.
|
||||
*
|
||||
* If the descriptor's descriptor ID doesn't match <b>desc_id_base32</b>,
|
||||
* reject it.
|
||||
*
|
||||
* Return an appropriate rend_cache_store_status_t.
|
||||
*/
|
||||
rend_cache_store_status_t
|
||||
rend_cache_store_v2_desc_as_client(const char *desc,
|
||||
const char *desc_id_base32,
|
||||
const rend_data_t *rend_query)
|
||||
{
|
||||
/*XXXX this seems to have a bit of duplicate code with
|
||||
@ -1064,10 +1068,19 @@ rend_cache_store_v2_desc_as_client(const char *desc,
|
||||
time_t now = time(NULL);
|
||||
char key[REND_SERVICE_ID_LEN_BASE32+2];
|
||||
char service_id[REND_SERVICE_ID_LEN_BASE32+1];
|
||||
char want_desc_id[DIGEST_LEN];
|
||||
rend_cache_entry_t *e;
|
||||
rend_cache_store_status_t retval = RCS_BADDESC;
|
||||
tor_assert(rend_cache);
|
||||
tor_assert(desc);
|
||||
tor_assert(desc_id_base32);
|
||||
memset(want_desc_id, 0, sizeof(want_desc_id));
|
||||
if (base32_decode(want_desc_id, sizeof(want_desc_id),
|
||||
desc_id_base32, strlen(desc_id_base32)) != 0) {
|
||||
log_warn(LD_BUG, "Couldn't decode base32 %s for descriptor id.",
|
||||
escaped_safe_str_client(desc_id_base32));
|
||||
goto err;
|
||||
}
|
||||
/* Parse the descriptor. */
|
||||
if (rend_parse_v2_service_descriptor(&parsed, desc_id, &intro_content,
|
||||
&intro_size, &encoded_size,
|
||||
@ -1086,6 +1099,12 @@ rend_cache_store_v2_desc_as_client(const char *desc,
|
||||
service_id, safe_str(rend_query->onion_address));
|
||||
goto err;
|
||||
}
|
||||
if (tor_memneq(desc_id, want_desc_id, DIGEST_LEN)) {
|
||||
log_warn(LD_REND, "Received service descriptor for %s with incorrect "
|
||||
"descriptor ID.", service_id);
|
||||
goto err;
|
||||
}
|
||||
|
||||
/* Decode/decrypt introduction points. */
|
||||
if (intro_content) {
|
||||
int n_intro_points;
|
||||
|
@ -49,6 +49,7 @@ typedef enum {
|
||||
|
||||
rend_cache_store_status_t rend_cache_store_v2_desc_as_dir(const char *desc);
|
||||
rend_cache_store_status_t rend_cache_store_v2_desc_as_client(const char *desc,
|
||||
const char *desc_id_base32,
|
||||
const rend_data_t *rend_query);
|
||||
|
||||
int rend_encode_v2_descriptors(smartlist_t *descs_out,
|
||||
|
@ -16,6 +16,7 @@
|
||||
#include "circuituse.h"
|
||||
#include "config.h"
|
||||
#include "directory.h"
|
||||
#include "main.h"
|
||||
#include "networkstatus.h"
|
||||
#include "nodelist.h"
|
||||
#include "rendclient.h"
|
||||
@ -95,6 +96,8 @@ typedef struct rend_service_port_config_t {
|
||||
typedef struct rend_service_t {
|
||||
/* Fields specified in config file */
|
||||
char *directory; /**< where in the filesystem it stores it */
|
||||
int dir_group_readable; /**< if 1, allow group read
|
||||
permissions on directory */
|
||||
smartlist_t *ports; /**< List of rend_service_port_config_t */
|
||||
rend_auth_type_t auth_type; /**< Client authorization type or 0 if no client
|
||||
* authorization is performed. */
|
||||
@ -359,6 +362,7 @@ rend_config_services(const or_options_t *options, int validate_only)
|
||||
rend_service_t *service = NULL;
|
||||
rend_service_port_config_t *portcfg;
|
||||
smartlist_t *old_service_list = NULL;
|
||||
int ok = 0;
|
||||
|
||||
if (!validate_only) {
|
||||
old_service_list = rend_service_list;
|
||||
@ -369,87 +373,101 @@ rend_config_services(const or_options_t *options, int validate_only)
|
||||
if (!strcasecmp(line->key, "HiddenServiceDir")) {
|
||||
if (service) { /* register the one we just finished parsing */
|
||||
if (validate_only)
|
||||
rend_service_free(service);
|
||||
else
|
||||
rend_add_service(service);
|
||||
}
|
||||
service = tor_malloc_zero(sizeof(rend_service_t));
|
||||
service->directory = tor_strdup(line->value);
|
||||
service->ports = smartlist_new();
|
||||
service->intro_period_started = time(NULL);
|
||||
service->n_intro_points_wanted = NUM_INTRO_POINTS_DEFAULT;
|
||||
continue;
|
||||
}
|
||||
if (!service) {
|
||||
log_warn(LD_CONFIG, "%s with no preceding HiddenServiceDir directive",
|
||||
line->key);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
if (!strcasecmp(line->key, "HiddenServicePort")) {
|
||||
portcfg = parse_port_config(line->value);
|
||||
if (!portcfg) {
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
smartlist_add(service->ports, portcfg);
|
||||
} else if (!strcasecmp(line->key, "HiddenServiceAuthorizeClient")) {
|
||||
/* Parse auth type and comma-separated list of client names and add a
|
||||
* rend_authorized_client_t for each client to the service's list
|
||||
* of authorized clients. */
|
||||
smartlist_t *type_names_split, *clients;
|
||||
const char *authname;
|
||||
int num_clients;
|
||||
if (service->auth_type != REND_NO_AUTH) {
|
||||
log_warn(LD_CONFIG, "Got multiple HiddenServiceAuthorizeClient "
|
||||
"lines for a single service.");
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
type_names_split = smartlist_new();
|
||||
smartlist_split_string(type_names_split, line->value, " ", 0, 2);
|
||||
if (smartlist_len(type_names_split) < 1) {
|
||||
log_warn(LD_BUG, "HiddenServiceAuthorizeClient has no value. This "
|
||||
"should have been prevented when parsing the "
|
||||
"configuration.");
|
||||
smartlist_free(type_names_split);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
authname = smartlist_get(type_names_split, 0);
|
||||
if (!strcasecmp(authname, "basic")) {
|
||||
service->auth_type = REND_BASIC_AUTH;
|
||||
} else if (!strcasecmp(authname, "stealth")) {
|
||||
service->auth_type = REND_STEALTH_AUTH;
|
||||
} else {
|
||||
log_warn(LD_CONFIG, "HiddenServiceAuthorizeClient contains "
|
||||
"unrecognized auth-type '%s'. Only 'basic' or 'stealth' "
|
||||
"are recognized.",
|
||||
(char *) smartlist_get(type_names_split, 0));
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
service->clients = smartlist_new();
|
||||
if (smartlist_len(type_names_split) < 2) {
|
||||
log_warn(LD_CONFIG, "HiddenServiceAuthorizeClient contains "
|
||||
"auth-type '%s', but no client names.",
|
||||
service->auth_type == REND_BASIC_AUTH ? "basic" : "stealth");
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
continue;
|
||||
}
|
||||
clients = smartlist_new();
|
||||
smartlist_split_string(clients, smartlist_get(type_names_split, 1),
|
||||
",", SPLIT_SKIP_SPACE, 0);
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
/* Remove duplicate client names. */
|
||||
num_clients = smartlist_len(clients);
|
||||
smartlist_sort_strings(clients);
|
||||
smartlist_uniq_strings(clients);
|
||||
if (smartlist_len(clients) < num_clients) {
|
||||
rend_service_free(service);
|
||||
else
|
||||
rend_add_service(service);
|
||||
}
|
||||
service = tor_malloc_zero(sizeof(rend_service_t));
|
||||
service->directory = tor_strdup(line->value);
|
||||
service->ports = smartlist_new();
|
||||
service->intro_period_started = time(NULL);
|
||||
service->n_intro_points_wanted = NUM_INTRO_POINTS_DEFAULT;
|
||||
continue;
|
||||
}
|
||||
if (!service) {
|
||||
log_warn(LD_CONFIG, "%s with no preceding HiddenServiceDir directive",
|
||||
line->key);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
if (!strcasecmp(line->key, "HiddenServicePort")) {
|
||||
portcfg = parse_port_config(line->value);
|
||||
if (!portcfg) {
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
smartlist_add(service->ports, portcfg);
|
||||
} else if (!strcasecmp(line->key,
|
||||
"HiddenServiceDirGroupReadable")) {
|
||||
service->dir_group_readable = (int)tor_parse_long(line->value,
|
||||
10, 0, 1, &ok, NULL);
|
||||
if (!ok) {
|
||||
log_warn(LD_CONFIG,
|
||||
"HiddenServiceDirGroupReadable should be 0 or 1, not %s",
|
||||
line->value);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
log_info(LD_CONFIG,
|
||||
"HiddenServiceDirGroupReadable=%d for %s",
|
||||
service->dir_group_readable, service->directory);
|
||||
} else if (!strcasecmp(line->key, "HiddenServiceAuthorizeClient")) {
|
||||
/* Parse auth type and comma-separated list of client names and add a
|
||||
* rend_authorized_client_t for each client to the service's list
|
||||
* of authorized clients. */
|
||||
smartlist_t *type_names_split, *clients;
|
||||
const char *authname;
|
||||
int num_clients;
|
||||
if (service->auth_type != REND_NO_AUTH) {
|
||||
log_warn(LD_CONFIG, "Got multiple HiddenServiceAuthorizeClient "
|
||||
"lines for a single service.");
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
type_names_split = smartlist_new();
|
||||
smartlist_split_string(type_names_split, line->value, " ", 0, 2);
|
||||
if (smartlist_len(type_names_split) < 1) {
|
||||
log_warn(LD_BUG, "HiddenServiceAuthorizeClient has no value. This "
|
||||
"should have been prevented when parsing the "
|
||||
"configuration.");
|
||||
smartlist_free(type_names_split);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
authname = smartlist_get(type_names_split, 0);
|
||||
if (!strcasecmp(authname, "basic")) {
|
||||
service->auth_type = REND_BASIC_AUTH;
|
||||
} else if (!strcasecmp(authname, "stealth")) {
|
||||
service->auth_type = REND_STEALTH_AUTH;
|
||||
} else {
|
||||
log_warn(LD_CONFIG, "HiddenServiceAuthorizeClient contains "
|
||||
"unrecognized auth-type '%s'. Only 'basic' or 'stealth' "
|
||||
"are recognized.",
|
||||
(char *) smartlist_get(type_names_split, 0));
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
rend_service_free(service);
|
||||
return -1;
|
||||
}
|
||||
service->clients = smartlist_new();
|
||||
if (smartlist_len(type_names_split) < 2) {
|
||||
log_warn(LD_CONFIG, "HiddenServiceAuthorizeClient contains "
|
||||
"auth-type '%s', but no client names.",
|
||||
service->auth_type == REND_BASIC_AUTH ? "basic" : "stealth");
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
continue;
|
||||
}
|
||||
clients = smartlist_new();
|
||||
smartlist_split_string(clients, smartlist_get(type_names_split, 1),
|
||||
",", SPLIT_SKIP_SPACE, 0);
|
||||
SMARTLIST_FOREACH(type_names_split, char *, cp, tor_free(cp));
|
||||
smartlist_free(type_names_split);
|
||||
/* Remove duplicate client names. */
|
||||
num_clients = smartlist_len(clients);
|
||||
smartlist_sort_strings(clients);
|
||||
smartlist_uniq_strings(clients);
|
||||
if (smartlist_len(clients) < num_clients) {
|
||||
log_info(LD_CONFIG, "HiddenServiceAuthorizeClient contains %d "
|
||||
"duplicate client name(s); removing.",
|
||||
num_clients - smartlist_len(clients));
|
||||
@ -513,10 +531,21 @@ rend_config_services(const or_options_t *options, int validate_only)
|
||||
}
|
||||
}
|
||||
if (service) {
|
||||
if (validate_only)
|
||||
cpd_check_t check_opts = CPD_CHECK_MODE_ONLY;
|
||||
if (service->dir_group_readable) {
|
||||
check_opts |= CPD_GROUP_READ;
|
||||
}
|
||||
|
||||
if (check_private_dir(service->directory, check_opts, options->User) < 0) {
|
||||
rend_service_free(service);
|
||||
else
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (validate_only) {
|
||||
rend_service_free(service);
|
||||
} else {
|
||||
rend_add_service(service);
|
||||
}
|
||||
}
|
||||
|
||||
/* If this is a reload and there were hidden services configured before,
|
||||
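A minimal sketch (editor's illustration, not part of the patch) of how the 0/1 check in the HiddenServiceDirGroupReadable branch above behaves: tor_parse_long() reports failure through its ok out-parameter whenever the string is non-numeric or falls outside [0, 1], and that is what triggers the config warning.

int ok = 0;
int val = (int) tor_parse_long("1", 10, 0, 1, &ok, NULL);  /* val == 1, ok == 1 */
(void) tor_parse_long("yes", 10, 0, 1, &ok, NULL);         /* ok == 0: rejected */
(void) val;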
@ -693,10 +722,23 @@ rend_service_load_keys(rend_service_t *s)
|
||||
{
|
||||
char fname[512];
|
||||
char buf[128];
|
||||
cpd_check_t check_opts = CPD_CREATE;
|
||||
|
||||
if (s->dir_group_readable) {
|
||||
check_opts |= CPD_GROUP_READ;
|
||||
}
|
||||
/* Check/create directory */
|
||||
if (check_private_dir(s->directory, CPD_CREATE, get_options()->User) < 0)
|
||||
if (check_private_dir(s->directory, check_opts, get_options()->User) < 0) {
|
||||
return -1;
|
||||
}
|
||||
#ifndef _WIN32
|
||||
if (s->dir_group_readable) {
|
||||
/* Only newly created dirs get the new opts, so also enforce group read here. */
|
||||
if (chmod(s->directory, 0750)) {
|
||||
log_warn(LD_FS,"Unable to make %s group-readable.", s->directory);
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
||||
/* Load key */
|
||||
if (strlcpy(fname,s->directory,sizeof(fname)) >= sizeof(fname) ||
|
||||
@ -706,7 +748,7 @@ rend_service_load_keys(rend_service_t *s)
|
||||
s->directory);
|
||||
return -1;
|
||||
}
|
||||
s->private_key = init_key_from_file(fname, 1, LOG_ERR);
|
||||
s->private_key = init_key_from_file(fname, 1, LOG_ERR, 0);
|
||||
if (!s->private_key)
|
||||
return -1;
|
||||
|
||||
@ -733,6 +775,15 @@ rend_service_load_keys(rend_service_t *s)
|
||||
memwipe(buf, 0, sizeof(buf));
|
||||
return -1;
|
||||
}
|
||||
#ifndef _WIN32
|
||||
if (s->dir_group_readable) {
|
||||
/* Also verify hostname file created with group read. */
|
||||
if (chmod(fname, 0640))
|
||||
log_warn(LD_FS,"Unable to make hidden hostname file %s group-readable.",
|
||||
fname);
|
||||
}
|
||||
#endif
|
||||
|
||||
memwipe(buf, 0, sizeof(buf));
|
||||
|
||||
/* If client authorization is configured, load or generate keys. */
|
||||
@ -3028,15 +3079,20 @@ rend_services_introduce(void)
|
||||
int intro_point_set_changed, prev_intro_nodes;
|
||||
unsigned int n_intro_points_unexpired;
|
||||
unsigned int n_intro_points_to_open;
|
||||
smartlist_t *intro_nodes;
|
||||
time_t now;
|
||||
const or_options_t *options = get_options();
|
||||
/* List of nodes we need to _exclude_ when choosing a new node to establish
|
||||
* an intro point to. */
|
||||
smartlist_t *exclude_nodes;
|
||||
|
||||
intro_nodes = smartlist_new();
|
||||
if (!have_completed_a_circuit())
|
||||
return;
|
||||
|
||||
exclude_nodes = smartlist_new();
|
||||
now = time(NULL);
|
||||
|
||||
for (i=0; i < smartlist_len(rend_service_list); ++i) {
|
||||
smartlist_clear(intro_nodes);
|
||||
smartlist_clear(exclude_nodes);
|
||||
service = smartlist_get(rend_service_list, i);
|
||||
|
||||
tor_assert(service);
|
||||
@ -3135,8 +3191,10 @@ rend_services_introduce(void)
|
||||
if (intro != NULL && intro->time_expiring == -1)
|
||||
++n_intro_points_unexpired;
|
||||
|
||||
/* Add the valid node to the exclusion list so we don't try to establish
|
||||
* an introduction point to it again. */
|
||||
if (node)
|
||||
smartlist_add(intro_nodes, (void*)node);
|
||||
smartlist_add(exclude_nodes, (void*)node);
|
||||
} SMARTLIST_FOREACH_END(intro);
|
||||
|
||||
if (!intro_point_set_changed &&
|
||||
@ -3172,7 +3230,7 @@ rend_services_introduce(void)
|
||||
router_crn_flags_t flags = CRN_NEED_UPTIME|CRN_NEED_DESC;
|
||||
if (get_options()->AllowInvalid_ & ALLOW_INVALID_INTRODUCTION)
|
||||
flags |= CRN_ALLOW_INVALID;
|
||||
node = router_choose_random_node(intro_nodes,
|
||||
node = router_choose_random_node(exclude_nodes,
|
||||
options->ExcludeNodes, flags);
|
||||
if (!node) {
|
||||
log_warn(LD_REND,
|
||||
@ -3183,7 +3241,9 @@ rend_services_introduce(void)
|
||||
break;
|
||||
}
|
||||
intro_point_set_changed = 1;
|
||||
smartlist_add(intro_nodes, (void*)node);
|
||||
/* Add the chosen node to the exclusion list so we don't pick
* it again in the next iteration. */
|
||||
smartlist_add(exclude_nodes, (void*)node);
|
||||
intro = tor_malloc_zero(sizeof(rend_intro_point_t));
|
||||
intro->extend_info = extend_info_from_node(node, 0);
|
||||
intro->intro_key = crypto_pk_new();
|
||||
@ -3212,7 +3272,7 @@ rend_services_introduce(void)
|
||||
}
|
||||
}
|
||||
}
|
||||
smartlist_free(intro_nodes);
|
||||
smartlist_free(exclude_nodes);
|
||||
}
|
||||
|
||||
/** Regenerate and upload rendezvous service descriptors for all
|
||||
|
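A short sketch of the pattern the rend_services_introduce() hunks above converge on: every node already acting as an introduction point for any service in this pass goes into exclude_nodes, and that list is what the picker receives, so a relay cannot be handed out twice. Names are the ones in the hunks; the fragment is illustrative only.

const node_t *pick = router_choose_random_node(exclude_nodes,
                                               options->ExcludeNodes, flags);
if (pick) {
  smartlist_add(exclude_nodes, (void *) pick);  /* never offer it again this pass */
  /* ... then build the rend_intro_point_t for it, as in the hunk above ... */
}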
@ -391,13 +391,15 @@ log_new_relay_greeting(void)
|
||||
already_logged = 1;
|
||||
}
|
||||
|
||||
/** Try to read an RSA key from <b>fname</b>. If <b>fname</b> doesn't exist,
|
||||
* or is empty, and <b>generate</b> is true, create a new RSA key and save it
|
||||
* in <b>fname</b>. Return the read/created key, or NULL on error. Log all
|
||||
* errors at level <b>severity</b>.
|
||||
/** Try to read an RSA key from <b>fname</b>. If <b>fname</b> doesn't exist
|
||||
* and <b>generate</b> is true, create a new RSA key and save it in
|
||||
* <b>fname</b>. Return the read/created key, or NULL on error. Log all
|
||||
* errors at level <b>severity</b>. If <b>log_greeting</b> is non-zero and a
|
||||
* new key was created, log_new_relay_greeting() is called.
|
||||
*/
|
||||
crypto_pk_t *
|
||||
init_key_from_file(const char *fname, int generate, int severity)
|
||||
init_key_from_file(const char *fname, int generate, int severity,
|
||||
int log_greeting)
|
||||
{
|
||||
crypto_pk_t *prkey = NULL;
|
||||
|
||||
@ -439,7 +441,9 @@ init_key_from_file(const char *fname, int generate, int severity)
|
||||
goto error;
|
||||
}
|
||||
log_info(LD_GENERAL, "Generated key seems valid");
|
||||
log_new_relay_greeting();
|
||||
if (log_greeting) {
|
||||
log_new_relay_greeting();
|
||||
}
|
||||
if (crypto_pk_write_private_key_to_filename(prkey, fname)) {
|
||||
tor_log(severity, LD_FS,
|
||||
"Couldn't write generated key to \"%s\".", fname);
|
||||
@ -554,7 +558,7 @@ load_authority_keyset(int legacy, crypto_pk_t **key_out,
|
||||
|
||||
fname = get_datadir_fname2("keys",
|
||||
legacy ? "legacy_signing_key" : "authority_signing_key");
|
||||
signing_key = init_key_from_file(fname, 0, LOG_INFO);
|
||||
signing_key = init_key_from_file(fname, 0, LOG_INFO, 0);
|
||||
if (!signing_key) {
|
||||
log_warn(LD_DIR, "No version 3 directory key found in %s", fname);
|
||||
goto done;
|
||||
@ -837,7 +841,7 @@ init_keys(void)
|
||||
/* 1b. Read identity key. Make it if none is found. */
|
||||
keydir = get_datadir_fname2("keys", "secret_id_key");
|
||||
log_info(LD_GENERAL,"Reading/making identity key \"%s\"...",keydir);
|
||||
prkey = init_key_from_file(keydir, 1, LOG_ERR);
|
||||
prkey = init_key_from_file(keydir, 1, LOG_ERR, 1);
|
||||
tor_free(keydir);
|
||||
if (!prkey) return -1;
|
||||
set_server_identity_key(prkey);
|
||||
@ -860,7 +864,7 @@ init_keys(void)
|
||||
/* 2. Read onion key. Make it if none is found. */
|
||||
keydir = get_datadir_fname2("keys", "secret_onion_key");
|
||||
log_info(LD_GENERAL,"Reading/making onion key \"%s\"...",keydir);
|
||||
prkey = init_key_from_file(keydir, 1, LOG_ERR);
|
||||
prkey = init_key_from_file(keydir, 1, LOG_ERR, 1);
|
||||
tor_free(keydir);
|
||||
if (!prkey) return -1;
|
||||
set_onion_key(prkey);
|
||||
@ -887,7 +891,7 @@ init_keys(void)
|
||||
if (!lastonionkey && file_status(keydir) == FN_FILE) {
|
||||
/* Load keys from non-empty files only.
|
||||
* Missing old keys won't be replaced with freshly generated keys. */
|
||||
prkey = init_key_from_file(keydir, 0, LOG_ERR);
|
||||
prkey = init_key_from_file(keydir, 0, LOG_ERR, 0);
|
||||
if (prkey)
|
||||
lastonionkey = prkey;
|
||||
}
|
||||
|
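The call sites above show the intent of the new fourth argument to init_key_from_file(): pass 1 only where generating the key means a new relay identity exists (identity and onion keys), 0 for everything else, such as hidden-service keys. A hedged sketch; the file-name variables are placeholders, not from the patch.

crypto_pk_t *id_key = init_key_from_file(id_fname, 1, LOG_ERR, 1);  /* may log the new-relay greeting */
crypto_pk_t *hs_key = init_key_from_file(hs_fname, 1, LOG_ERR, 0);  /* hidden-service key: never greets */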
@ -29,7 +29,7 @@ crypto_pk_t *get_my_v3_legacy_signing_key(void);
|
||||
void dup_onion_keys(crypto_pk_t **key, crypto_pk_t **last);
|
||||
void rotate_onion_key(void);
|
||||
crypto_pk_t *init_key_from_file(const char *fname, int generate,
|
||||
int severity);
|
||||
int severity, int log_greeting);
|
||||
void v3_authority_check_key_expiry(void);
|
||||
|
||||
di_digest256_map_t *construct_ntor_key_map(void);
|
||||
|
@ -79,6 +79,7 @@ static const char *signed_descriptor_get_body_impl(
|
||||
const signed_descriptor_t *desc,
|
||||
int with_annotations);
|
||||
static void list_pending_downloads(digestmap_t *result,
|
||||
digest256map_t *result256,
|
||||
int purpose, const char *prefix);
|
||||
static void list_pending_fpsk_downloads(fp_pair_map_t *result);
|
||||
static void launch_dummy_descriptor_download_as_needed(time_t now,
|
||||
@ -717,7 +718,8 @@ authority_certs_fetch_missing(networkstatus_t *status, time_t now)
|
||||
* First, we get the lists of already pending downloads so we don't
|
||||
* duplicate effort.
|
||||
*/
|
||||
list_pending_downloads(pending_id, DIR_PURPOSE_FETCH_CERTIFICATE, "fp/");
|
||||
list_pending_downloads(pending_id, NULL,
|
||||
DIR_PURPOSE_FETCH_CERTIFICATE, "fp/");
|
||||
list_pending_fpsk_downloads(pending_cert);
|
||||
|
||||
/*
|
||||
@ -1535,7 +1537,7 @@ dirserver_choose_by_weight(const smartlist_t *servers, double authority_weight)
|
||||
u64_dbl_t *weights;
|
||||
const dir_server_t *ds;
|
||||
|
||||
weights = tor_calloc(sizeof(u64_dbl_t), n);
|
||||
weights = tor_calloc(n, sizeof(u64_dbl_t));
|
||||
for (i = 0; i < n; ++i) {
|
||||
ds = smartlist_get(servers, i);
|
||||
weights[i].dbl = ds->weight;
|
||||
@ -2029,9 +2031,10 @@ compute_weighted_bandwidths(const smartlist_t *sl,
|
||||
if (Wg < 0 || Wm < 0 || We < 0 || Wd < 0 || Wgb < 0 || Wmb < 0 || Wdb < 0
|
||||
|| Web < 0) {
|
||||
log_debug(LD_CIRC,
|
||||
"Got negative bandwidth weights. Defaulting to old selection"
|
||||
"Got negative bandwidth weights. Defaulting to naive selection"
|
||||
" algorithm.");
|
||||
return -1; // Use old algorithm.
|
||||
Wg = Wm = We = Wd = weight_scale;
|
||||
Wgb = Wmb = Web = Wdb = weight_scale;
|
||||
}
|
||||
|
||||
Wg /= weight_scale;
|
||||
@ -2044,9 +2047,10 @@ compute_weighted_bandwidths(const smartlist_t *sl,
|
||||
Web /= weight_scale;
|
||||
Wdb /= weight_scale;
|
||||
|
||||
bandwidths = tor_calloc(sizeof(u64_dbl_t), smartlist_len(sl));
|
||||
bandwidths = tor_calloc(smartlist_len(sl), sizeof(u64_dbl_t));
|
||||
|
||||
// Cycle through smartlist and total the bandwidth.
|
||||
static int warned_missing_bw = 0;
|
||||
SMARTLIST_FOREACH_BEGIN(sl, const node_t *, node) {
|
||||
int is_exit = 0, is_guard = 0, is_dir = 0, this_bw = 0;
|
||||
double weight = 1;
|
||||
@ -2055,15 +2059,18 @@ compute_weighted_bandwidths(const smartlist_t *sl,
|
||||
is_dir = node_is_dir(node);
|
||||
if (node->rs) {
|
||||
if (!node->rs->has_bandwidth) {
|
||||
tor_free(bandwidths);
|
||||
/* This should never happen, unless all the authorites downgrade
|
||||
* to 0.2.0 or rogue routerstatuses get inserted into our consensus. */
|
||||
log_warn(LD_BUG,
|
||||
"Consensus is not listing bandwidths. Defaulting back to "
|
||||
"old router selection algorithm.");
|
||||
return -1;
|
||||
if (! warned_missing_bw) {
|
||||
log_warn(LD_BUG,
|
||||
"Consensus is missing some bandwidths. Using a naive "
|
||||
"router selection algorithm");
|
||||
warned_missing_bw = 1;
|
||||
}
|
||||
this_bw = 30000; /* Chosen arbitrarily */
|
||||
} else {
|
||||
this_bw = kb_to_bytes(node->rs->bandwidth_kb);
|
||||
}
|
||||
this_bw = kb_to_bytes(node->rs->bandwidth_kb);
|
||||
} else if (node->ri) {
|
||||
/* bridge or other descriptor not in our consensus */
|
||||
this_bw = bridge_get_advertised_bandwidth_bounded(node->ri);
|
||||
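A brief illustration of the two fallbacks added above, which replace returning -1 into the now-removed legacy selector (the constants are the ones in the hunk; the arithmetic note is the editor's):

/* Negative consensus weights: take reasonable defaults instead of bailing. */
Wg = Wm = We = Wd = weight_scale;      /* after Wg /= weight_scale, Wg == 1.0, etc. */
Wgb = Wmb = Web = Wdb = weight_scale;  /* so selection degrades to plain bandwidth proportionality */
/* Missing bandwidth on a routerstatus: warn once, then assume a stand-in value. */
this_bw = 30000;                       /* chosen arbitrarily, per the comment in the hunk */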
@ -2140,226 +2147,13 @@ frac_nodes_with_descriptors(const smartlist_t *sl,
|
||||
return present / total;
|
||||
}
|
||||
|
||||
/** Helper function:
|
||||
* choose a random node_t element of smartlist <b>sl</b>, weighted by
|
||||
* the advertised bandwidth of each element.
|
||||
*
|
||||
* If <b>rule</b>==WEIGHT_FOR_EXIT. we're picking an exit node: consider all
|
||||
* nodes' bandwidth equally regardless of their Exit status, since there may
|
||||
* be some in the list because they exit to obscure ports. If
|
||||
* <b>rule</b>==NO_WEIGHTING, we're picking a non-exit node: weight
|
||||
* exit-node's bandwidth less depending on the smallness of the fraction of
|
||||
* Exit-to-total bandwidth. If <b>rule</b>==WEIGHT_FOR_GUARD, we're picking a
|
||||
* guard node: consider all guard's bandwidth equally. Otherwise, weight
|
||||
* guards proportionally less.
|
||||
*/
|
||||
static const node_t *
|
||||
smartlist_choose_node_by_bandwidth(const smartlist_t *sl,
|
||||
bandwidth_weight_rule_t rule)
|
||||
{
|
||||
unsigned int i;
|
||||
u64_dbl_t *bandwidths;
|
||||
int is_exit;
|
||||
int is_guard;
|
||||
int is_fast;
|
||||
double total_nonexit_bw = 0, total_exit_bw = 0;
|
||||
double total_nonguard_bw = 0, total_guard_bw = 0;
|
||||
double exit_weight;
|
||||
double guard_weight;
|
||||
int n_unknown = 0;
|
||||
bitarray_t *fast_bits;
|
||||
bitarray_t *exit_bits;
|
||||
bitarray_t *guard_bits;
|
||||
|
||||
// This function does not support WEIGHT_FOR_DIR
|
||||
// or WEIGHT_FOR_MID
|
||||
if (rule == WEIGHT_FOR_DIR || rule == WEIGHT_FOR_MID) {
|
||||
rule = NO_WEIGHTING;
|
||||
}
|
||||
|
||||
/* Can't choose exit and guard at same time */
|
||||
tor_assert(rule == NO_WEIGHTING ||
|
||||
rule == WEIGHT_FOR_EXIT ||
|
||||
rule == WEIGHT_FOR_GUARD);
|
||||
|
||||
if (smartlist_len(sl) == 0) {
|
||||
log_info(LD_CIRC,
|
||||
"Empty routerlist passed in to old node selection for rule %s",
|
||||
bandwidth_weight_rule_to_string(rule));
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* First count the total bandwidth weight, and make a list
|
||||
* of each value. We use UINT64_MAX to indicate "unknown". */
|
||||
bandwidths = tor_calloc(sizeof(u64_dbl_t), smartlist_len(sl));
|
||||
fast_bits = bitarray_init_zero(smartlist_len(sl));
|
||||
exit_bits = bitarray_init_zero(smartlist_len(sl));
|
||||
guard_bits = bitarray_init_zero(smartlist_len(sl));
|
||||
|
||||
/* Iterate over all the routerinfo_t or routerstatus_t, and */
|
||||
SMARTLIST_FOREACH_BEGIN(sl, const node_t *, node) {
|
||||
/* first, learn what bandwidth we think i has */
|
||||
int is_known = 1;
|
||||
uint32_t this_bw = 0;
|
||||
i = node_sl_idx;
|
||||
|
||||
is_exit = node_is_good_exit(node);
|
||||
is_guard = node->is_possible_guard;
|
||||
if (node->rs) {
|
||||
if (node->rs->has_bandwidth) {
|
||||
this_bw = kb_to_bytes(node->rs->bandwidth_kb);
|
||||
} else { /* guess */
|
||||
is_known = 0;
|
||||
}
|
||||
} else if (node->ri) {
|
||||
/* Must be a bridge if we're willing to use it */
|
||||
this_bw = bridge_get_advertised_bandwidth_bounded(node->ri);
|
||||
}
|
||||
|
||||
if (is_exit)
|
||||
bitarray_set(exit_bits, i);
|
||||
if (is_guard)
|
||||
bitarray_set(guard_bits, i);
|
||||
if (node->is_fast)
|
||||
bitarray_set(fast_bits, i);
|
||||
|
||||
if (is_known) {
|
||||
bandwidths[i].dbl = this_bw;
|
||||
if (is_guard)
|
||||
total_guard_bw += this_bw;
|
||||
else
|
||||
total_nonguard_bw += this_bw;
|
||||
if (is_exit)
|
||||
total_exit_bw += this_bw;
|
||||
else
|
||||
total_nonexit_bw += this_bw;
|
||||
} else {
|
||||
++n_unknown;
|
||||
bandwidths[i].dbl = -1.0;
|
||||
}
|
||||
} SMARTLIST_FOREACH_END(node);
|
||||
|
||||
#define EPSILON .1
|
||||
|
||||
/* Now, fill in the unknown values. */
|
||||
if (n_unknown) {
|
||||
int32_t avg_fast, avg_slow;
|
||||
if (total_exit_bw+total_nonexit_bw < EPSILON) {
|
||||
/* if there's some bandwidth, there's at least one known router,
|
||||
* so no worries about div by 0 here */
|
||||
int n_known = smartlist_len(sl)-n_unknown;
|
||||
avg_fast = avg_slow = (int32_t)
|
||||
((total_exit_bw+total_nonexit_bw)/((uint64_t) n_known));
|
||||
} else {
|
||||
avg_fast = 40000;
|
||||
avg_slow = 20000;
|
||||
}
|
||||
for (i=0; i<(unsigned)smartlist_len(sl); ++i) {
|
||||
if (bandwidths[i].dbl >= 0.0)
|
||||
continue;
|
||||
is_fast = bitarray_is_set(fast_bits, i);
|
||||
is_exit = bitarray_is_set(exit_bits, i);
|
||||
is_guard = bitarray_is_set(guard_bits, i);
|
||||
bandwidths[i].dbl = is_fast ? avg_fast : avg_slow;
|
||||
if (is_exit)
|
||||
total_exit_bw += bandwidths[i].dbl;
|
||||
else
|
||||
total_nonexit_bw += bandwidths[i].dbl;
|
||||
if (is_guard)
|
||||
total_guard_bw += bandwidths[i].dbl;
|
||||
else
|
||||
total_nonguard_bw += bandwidths[i].dbl;
|
||||
}
|
||||
}
|
||||
|
||||
/* If there's no bandwidth at all, pick at random. */
|
||||
if (total_exit_bw+total_nonexit_bw < EPSILON) {
|
||||
tor_free(bandwidths);
|
||||
tor_free(fast_bits);
|
||||
tor_free(exit_bits);
|
||||
tor_free(guard_bits);
|
||||
return smartlist_choose(sl);
|
||||
}
|
||||
|
||||
/* Figure out how to weight exits and guards */
|
||||
{
|
||||
double all_bw = U64_TO_DBL(total_exit_bw+total_nonexit_bw);
|
||||
double exit_bw = U64_TO_DBL(total_exit_bw);
|
||||
double guard_bw = U64_TO_DBL(total_guard_bw);
|
||||
/*
|
||||
* For detailed derivation of this formula, see
|
||||
* http://archives.seul.org/or/dev/Jul-2007/msg00056.html
|
||||
*/
|
||||
if (rule == WEIGHT_FOR_EXIT || total_exit_bw<EPSILON)
|
||||
exit_weight = 1.0;
|
||||
else
|
||||
exit_weight = 1.0 - all_bw/(3.0*exit_bw);
|
||||
|
||||
if (rule == WEIGHT_FOR_GUARD || total_guard_bw<EPSILON)
|
||||
guard_weight = 1.0;
|
||||
else
|
||||
guard_weight = 1.0 - all_bw/(3.0*guard_bw);
|
||||
|
||||
if (exit_weight <= 0.0)
|
||||
exit_weight = 0.0;
|
||||
|
||||
if (guard_weight <= 0.0)
|
||||
guard_weight = 0.0;
|
||||
|
||||
for (i=0; i < (unsigned)smartlist_len(sl); i++) {
|
||||
tor_assert(bandwidths[i].dbl >= 0.0);
|
||||
|
||||
is_exit = bitarray_is_set(exit_bits, i);
|
||||
is_guard = bitarray_is_set(guard_bits, i);
|
||||
if (is_exit && is_guard)
|
||||
bandwidths[i].dbl *= exit_weight * guard_weight;
|
||||
else if (is_guard)
|
||||
bandwidths[i].dbl *= guard_weight;
|
||||
else if (is_exit)
|
||||
bandwidths[i].dbl *= exit_weight;
|
||||
}
|
||||
}
|
||||
|
||||
#if 0
|
||||
log_debug(LD_CIRC, "Total weighted bw = "U64_FORMAT
|
||||
", exit bw = "U64_FORMAT
|
||||
", nonexit bw = "U64_FORMAT", exit weight = %f "
|
||||
"(for exit == %d)"
|
||||
", guard bw = "U64_FORMAT
|
||||
", nonguard bw = "U64_FORMAT", guard weight = %f "
|
||||
"(for guard == %d)",
|
||||
U64_PRINTF_ARG(total_bw),
|
||||
U64_PRINTF_ARG(total_exit_bw), U64_PRINTF_ARG(total_nonexit_bw),
|
||||
exit_weight, (int)(rule == WEIGHT_FOR_EXIT),
|
||||
U64_PRINTF_ARG(total_guard_bw), U64_PRINTF_ARG(total_nonguard_bw),
|
||||
guard_weight, (int)(rule == WEIGHT_FOR_GUARD));
|
||||
#endif
|
||||
|
||||
scale_array_elements_to_u64(bandwidths, smartlist_len(sl), NULL);
|
||||
|
||||
{
|
||||
int idx = choose_array_element_by_weight(bandwidths,
|
||||
smartlist_len(sl));
|
||||
tor_free(bandwidths);
|
||||
tor_free(fast_bits);
|
||||
tor_free(exit_bits);
|
||||
tor_free(guard_bits);
|
||||
return idx < 0 ? NULL : smartlist_get(sl, idx);
|
||||
}
|
||||
}
|
||||
|
||||
/** Choose a random element of status list <b>sl</b>, weighted by
|
||||
* the advertised bandwidth of each node */
|
||||
const node_t *
|
||||
node_sl_choose_by_bandwidth(const smartlist_t *sl,
|
||||
bandwidth_weight_rule_t rule)
|
||||
{ /*XXXX MOVE */
|
||||
const node_t *ret;
|
||||
if ((ret = smartlist_choose_node_by_bandwidth_weights(sl, rule))) {
|
||||
return ret;
|
||||
} else {
|
||||
return smartlist_choose_node_by_bandwidth(sl, rule);
|
||||
}
|
||||
return smartlist_choose_node_by_bandwidth_weights(sl, rule);
|
||||
}
|
||||
|
||||
/** Return a random running node from the nodelist. Never
|
||||
@ -2944,6 +2738,7 @@ MOCK_IMPL(STATIC was_router_added_t,
|
||||
extrainfo_insert,(routerlist_t *rl, extrainfo_t *ei))
|
||||
{
|
||||
was_router_added_t r;
|
||||
const char *compatibility_error_msg;
|
||||
routerinfo_t *ri = rimap_get(rl->identity_map,
|
||||
ei->cache_info.identity_digest);
|
||||
signed_descriptor_t *sd =
|
||||
@ -2960,9 +2755,14 @@ extrainfo_insert,(routerlist_t *rl, extrainfo_t *ei))
|
||||
r = ROUTER_NOT_IN_CONSENSUS;
|
||||
goto done;
|
||||
}
|
||||
if (routerinfo_incompatible_with_extrainfo(ri, ei, sd, NULL)) {
|
||||
if (routerinfo_incompatible_with_extrainfo(ri, ei, sd,
|
||||
&compatibility_error_msg)) {
|
||||
r = (ri->cache_info.extrainfo_is_bogus) ?
|
||||
ROUTER_BAD_EI : ROUTER_NOT_IN_CONSENSUS;
|
||||
|
||||
log_warn(LD_DIR,"router info incompatible with extra info (reason: %s)",
|
||||
compatibility_error_msg);
|
||||
|
||||
goto done;
|
||||
}
|
||||
|
||||
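The extrainfo_insert() hunk above threads a human-readable reason out of the compatibility check; a minimal sketch of the new calling convention (the local variable name is illustrative):

const char *why = NULL;
if (routerinfo_incompatible_with_extrainfo(ri, ei, sd, &why)) {
  log_warn(LD_DIR, "router info incompatible with extra info (reason: %s)",
           why ? why : "unknown");
}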
@ -3375,7 +3175,7 @@ router_add_to_routerlist(routerinfo_t *router, const char **msg,
|
||||
router_describe(router));
|
||||
*msg = "Router descriptor was not new.";
|
||||
routerinfo_free(router);
|
||||
return ROUTER_WAS_NOT_NEW;
|
||||
return ROUTER_IS_ALREADY_KNOWN;
|
||||
}
|
||||
}
|
||||
|
||||
@ -3460,7 +3260,7 @@ router_add_to_routerlist(routerinfo_t *router, const char **msg,
|
||||
&routerlist->desc_store);
|
||||
routerlist_insert_old(routerlist, router);
|
||||
*msg = "Router descriptor was not new.";
|
||||
return ROUTER_WAS_NOT_NEW;
|
||||
return ROUTER_IS_ALREADY_KNOWN;
|
||||
} else {
|
||||
/* Same key, and either new, or listed in the consensus. */
|
||||
log_debug(LD_DIR, "Replacing entry for router %s",
|
||||
@ -3583,9 +3383,9 @@ routerlist_remove_old_cached_routers_with_id(time_t now,
|
||||
n_extra = n - mdpr;
|
||||
}
|
||||
|
||||
lifespans = tor_calloc(sizeof(struct duration_idx_t), n);
|
||||
rmv = tor_calloc(sizeof(uint8_t), n);
|
||||
must_keep = tor_calloc(sizeof(uint8_t), n);
|
||||
lifespans = tor_calloc(n, sizeof(struct duration_idx_t));
|
||||
rmv = tor_calloc(n, sizeof(uint8_t));
|
||||
must_keep = tor_calloc(n, sizeof(uint8_t));
|
||||
/* Set lifespans to contain the lifespan and index of each server. */
|
||||
/* Set rmv[i-lo]=1 if we're going to remove a server for being too old. */
|
||||
for (i = lo; i <= hi; ++i) {
|
||||
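The tor_calloc() swaps in this diff restore the calloc(3) argument order of (count, element_size); a one-line reminder of what the corrected form allocates:

uint8_t *rmv = tor_calloc(n, sizeof(uint8_t));  /* n elements of sizeof(uint8_t) bytes, zero-initialized */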
@ -4269,7 +4069,7 @@ clear_dir_servers(void)
|
||||
* corresponding elements of <b>result</b> to a nonzero value.
|
||||
*/
|
||||
static void
|
||||
list_pending_downloads(digestmap_t *result,
|
||||
list_pending_downloads(digestmap_t *result, digest256map_t *result256,
|
||||
int purpose, const char *prefix)
|
||||
{
|
||||
const size_t p_len = strlen(prefix);
|
||||
@ -4279,7 +4079,7 @@ list_pending_downloads(digestmap_t *result,
|
||||
if (purpose == DIR_PURPOSE_FETCH_MICRODESC)
|
||||
flags = DSR_DIGEST256|DSR_BASE64;
|
||||
|
||||
tor_assert(result);
|
||||
tor_assert(result || result256);
|
||||
|
||||
SMARTLIST_FOREACH_BEGIN(conns, connection_t *, conn) {
|
||||
if (conn->type == CONN_TYPE_DIR &&
|
||||
@ -4292,11 +4092,19 @@ list_pending_downloads(digestmap_t *result,
|
||||
}
|
||||
} SMARTLIST_FOREACH_END(conn);
|
||||
|
||||
SMARTLIST_FOREACH(tmp, char *, d,
|
||||
if (result) {
|
||||
SMARTLIST_FOREACH(tmp, char *, d,
|
||||
{
|
||||
digestmap_set(result, d, (void*)1);
|
||||
tor_free(d);
|
||||
});
|
||||
} else if (result256) {
|
||||
SMARTLIST_FOREACH(tmp, uint8_t *, d,
|
||||
{
|
||||
digest256map_set(result256, d, (void*)1);
|
||||
tor_free(d);
|
||||
});
|
||||
}
|
||||
smartlist_free(tmp);
|
||||
}
|
||||
|
||||
@ -4308,20 +4116,16 @@ list_pending_descriptor_downloads(digestmap_t *result, int extrainfo)
|
||||
{
|
||||
int purpose =
|
||||
extrainfo ? DIR_PURPOSE_FETCH_EXTRAINFO : DIR_PURPOSE_FETCH_SERVERDESC;
|
||||
list_pending_downloads(result, purpose, "d/");
|
||||
list_pending_downloads(result, NULL, purpose, "d/");
|
||||
}
|
||||
|
||||
/** For every microdescriptor we are currently downloading by descriptor
|
||||
* digest, set result[d] to (void*)1. (Note that microdescriptor digests
|
||||
* are 256-bit, and digestmap_t only holds 160-bit digests, so we're only
|
||||
* getting the first 20 bytes of each digest here.)
|
||||
*
|
||||
* XXXX Let there be a digestmap256_t, and use that instead.
|
||||
* digest, set result[d] to (void*)1.
|
||||
*/
|
||||
void
|
||||
list_pending_microdesc_downloads(digestmap_t *result)
|
||||
list_pending_microdesc_downloads(digest256map_t *result)
|
||||
{
|
||||
list_pending_downloads(result, DIR_PURPOSE_FETCH_MICRODESC, "d/");
|
||||
list_pending_downloads(NULL, result, DIR_PURPOSE_FETCH_MICRODESC, "d/");
|
||||
}
|
||||
|
||||
/** For every certificate we are currently downloading by (identity digest,
|
||||
|
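With the widened signature above, callers hand in the digest map that matches the purpose's digest size and NULL for the other; the two wrappers in the hunks reduce to calls of this shape:

list_pending_downloads(result, NULL, purpose, "d/");                        /* 160-bit: server descriptors, extrainfo */
list_pending_downloads(NULL, result256, DIR_PURPOSE_FETCH_MICRODESC, "d/"); /* 256-bit: microdescriptors */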
@ -118,7 +118,7 @@ WRA_WAS_ADDED(was_router_added_t s) {
|
||||
static INLINE int WRA_WAS_OUTDATED(was_router_added_t s)
|
||||
{
|
||||
return (s == ROUTER_WAS_TOO_OLD ||
|
||||
s == ROUTER_WAS_NOT_NEW ||
|
||||
s == ROUTER_IS_ALREADY_KNOWN ||
|
||||
s == ROUTER_NOT_IN_CONSENSUS ||
|
||||
s == ROUTER_NOT_IN_CONSENSUS_OR_NETWORKSTATUS);
|
||||
}
|
||||
@ -196,7 +196,7 @@ int hid_serv_get_responsible_directories(smartlist_t *responsible_dirs,
|
||||
int hid_serv_acting_as_directory(void);
|
||||
int hid_serv_responsible_for_desc_id(const char *id);
|
||||
|
||||
void list_pending_microdesc_downloads(digestmap_t *result);
|
||||
void list_pending_microdesc_downloads(digest256map_t *result);
|
||||
void launch_descriptor_downloads(int purpose,
|
||||
smartlist_t *downloadable,
|
||||
const routerstatus_t *source,
|
||||
|
709
src/or/scheduler.c
Normal file
709
src/or/scheduler.c
Normal file
@ -0,0 +1,709 @@
|
||||
/* * Copyright (c) 2013, The Tor Project, Inc. */
|
||||
/* See LICENSE for licensing information */
|
||||
|
||||
/**
|
||||
* \file scheduler.c
|
||||
* \brief Relay scheduling system
|
||||
**/
|
||||
|
||||
#include "or.h"
|
||||
|
||||
#define TOR_CHANNEL_INTERNAL_ /* For channel_flush_some_cells() */
|
||||
#include "channel.h"
|
||||
|
||||
#include "compat_libevent.h"
|
||||
#define SCHEDULER_PRIVATE_
|
||||
#include "scheduler.h"
|
||||
|
||||
#ifdef HAVE_EVENT2_EVENT_H
|
||||
#include <event2/event.h>
|
||||
#else
|
||||
#include <event.h>
|
||||
#endif
|
||||
|
||||
/*
|
||||
* Scheduler high/low watermarks
|
||||
*/
|
||||
|
||||
static uint32_t sched_q_low_water = 16384;
|
||||
static uint32_t sched_q_high_water = 32768;
|
||||
|
||||
/*
|
||||
* Maximum cells to flush in a single call to channel_flush_some_cells();
|
||||
* setting this low means more calls, but too high and we could overshoot
|
||||
* sched_q_high_water.
|
||||
*/
|
||||
|
||||
static uint32_t sched_max_flush_cells = 16;
|
||||
|
||||
/*
 * Write scheduling works by keeping track of which channels can
 * accept cells, and have cells to write. From the scheduler's perspective,
 * a channel can be in four possible states:
 *
 * 1.) Not open for writes, no cells to send
 *     - Not much to do here, and the channel will have scheduler_state ==
 *       SCHED_CHAN_IDLE
 *     - Transitions from:
 *       - Open for writes/has cells by simultaneously draining all circuit
 *         queues and filling the output buffer.
 *     - Transitions to:
 *       - Not open for writes/has cells by arrival of cells on an attached
 *         circuit (this would be driven from append_cell_to_circuit_queue())
 *       - Open for writes/no cells by a channel type specific path;
 *         driven from connection_or_flushed_some() for channel_tls_t.
 *
 * 2.) Open for writes, no cells to send
 *     - Not much here either; this will be the state an idle but open channel
 *       can be expected to settle in. It will have scheduler_state ==
 *       SCHED_CHAN_WAITING_FOR_CELLS
 *     - Transitions from:
 *       - Not open for writes/no cells by flushing some of the output
 *         buffer.
 *       - Open for writes/has cells by the scheduler moving cells from
 *         circuit queues to channel output queue, but not having enough
 *         to fill the output queue.
 *     - Transitions to:
 *       - Open for writes/has cells by arrival of new cells on an attached
 *         circuit, in append_cell_to_circuit_queue()
 *
 * 3.) Not open for writes, cells to send
 *     - This is the state of a busy circuit limited by output bandwidth;
 *       cells have piled up in the circuit queues waiting to be relayed.
 *       The channel will have scheduler_state == SCHED_CHAN_WAITING_TO_WRITE.
 *     - Transitions from:
 *       - Not open for writes/no cells by arrival of cells on an attached
 *         circuit
 *       - Open for writes/has cells by filling an output buffer without
 *         draining all cells from attached circuits
 *     - Transitions to:
 *       - Open for writes/has cells by draining some of the output buffer
 *         via the connection_or_flushed_some() path (for channel_tls_t).
 *
 * 4.) Open for writes, cells to send
 *     - This connection is ready to relay some cells and waiting for
 *       the scheduler to choose it. The channel will have scheduler_state ==
 *       SCHED_CHAN_PENDING.
 *     - Transitions from:
 *       - Not open for writes/has cells by the connection_or_flushed_some()
 *         path
 *       - Open for writes/no cells by the append_cell_to_circuit_queue()
 *         path
 *     - Transitions to:
 *       - Not open for writes/no cells by draining all circuit queues and
 *         simultaneously filling the output buffer.
 *       - Not open for writes/has cells by writing enough cells to fill the
 *         output buffer
 *       - Open for writes/no cells by draining all attached circuit queues
 *         without also filling the output buffer
 *
 * Other event-driven parts of the code move channels between these scheduling
 * states by calling scheduler functions; the scheduler only runs on open-for-
 * writes/has-cells channels and is the only path for those to transition to
 * other states. The scheduler_run() function gives us the opportunity to do
 * scheduling work, and is called from other scheduler functions whenever a
 * state transition occurs, and periodically from the main event loop.
 */
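/* Editor's sketch, not part of the new file: the entry points (all defined
 * further down and declared in scheduler.h) that other code calls to drive
 * the four states described above; chan stands for whatever channel just
 * changed. */
scheduler_channel_has_waiting_cells(chan);  /* cells arrived on an attached circuit */
scheduler_channel_wants_writes(chan);       /* the channel's outbuf drained some */
scheduler_run();                            /* service whatever is now pending */
scheduler_release_channel(chan);            /* the channel is being closed */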
|
||||
|
||||
/* Scheduler global data structures */
|
||||
|
||||
/*
|
||||
* We keep a list of channels that are pending - i.e, have cells to write
|
||||
* and can accept them to send. The enum scheduler_state in channel_t
|
||||
* is reserved for our use.
|
||||
*/
|
||||
|
||||
/* Pqueue of channels that can write and have cells (pending work) */
|
||||
STATIC smartlist_t *channels_pending = NULL;
|
||||
|
||||
/*
|
||||
* This event runs the scheduler from its callback, and is manually
|
||||
* activated whenever a channel enters open for writes/cells to send.
|
||||
*/
|
||||
|
||||
STATIC struct event *run_sched_ev = NULL;
|
||||
|
||||
/*
|
||||
* Queue heuristic; this is not the queue size, but an 'effective queuesize'
|
||||
* that ages out contributions from stalled channels.
|
||||
*/
|
||||
|
||||
STATIC uint64_t queue_heuristic = 0;
|
||||
|
||||
/*
|
||||
* Timestamp for last queue heuristic update
|
||||
*/
|
||||
|
||||
STATIC time_t queue_heuristic_timestamp = 0;
|
||||
|
||||
/* Scheduler static function declarations */
|
||||
|
||||
static void scheduler_evt_callback(evutil_socket_t fd,
|
||||
short events, void *arg);
|
||||
static int scheduler_more_work(void);
|
||||
static void scheduler_retrigger(void);
|
||||
#if 0
|
||||
static void scheduler_trigger(void);
|
||||
#endif
|
||||
|
||||
/* Scheduler function implementations */
|
||||
|
||||
/** Free everything and shut down the scheduling system */
|
||||
|
||||
void
|
||||
scheduler_free_all(void)
|
||||
{
|
||||
log_debug(LD_SCHED, "Shutting down scheduler");
|
||||
|
||||
if (run_sched_ev) {
|
||||
event_del(run_sched_ev);
|
||||
tor_event_free(run_sched_ev);
|
||||
run_sched_ev = NULL;
|
||||
}
|
||||
|
||||
if (channels_pending) {
|
||||
smartlist_free(channels_pending);
|
||||
channels_pending = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Comparison function to use when sorting pending channels
|
||||
*/
|
||||
|
||||
MOCK_IMPL(STATIC int,
|
||||
scheduler_compare_channels, (const void *c1_v, const void *c2_v))
|
||||
{
|
||||
channel_t *c1 = NULL, *c2 = NULL;
|
||||
/* These are a workaround for -Wbad-function-cast throwing a fit */
|
||||
const circuitmux_policy_t *p1, *p2;
|
||||
uintptr_t p1_i, p2_i;
|
||||
|
||||
tor_assert(c1_v);
|
||||
tor_assert(c2_v);
|
||||
|
||||
c1 = (channel_t *)(c1_v);
|
||||
c2 = (channel_t *)(c2_v);
|
||||
|
||||
tor_assert(c1);
|
||||
tor_assert(c2);
|
||||
|
||||
if (c1 != c2) {
|
||||
if (circuitmux_get_policy(c1->cmux) ==
|
||||
circuitmux_get_policy(c2->cmux)) {
|
||||
/* Same cmux policy, so use the mux comparison */
|
||||
return circuitmux_compare_muxes(c1->cmux, c2->cmux);
|
||||
} else {
|
||||
/*
|
||||
* Different policies; not important to get this edge case perfect
|
||||
* because the current code never actually gives different channels
|
||||
* different cmux policies anyway. Just use this arbitrary but
|
||||
* definite choice.
|
||||
*/
|
||||
p1 = circuitmux_get_policy(c1->cmux);
|
||||
p2 = circuitmux_get_policy(c2->cmux);
|
||||
p1_i = (uintptr_t)p1;
|
||||
p2_i = (uintptr_t)p2;
|
||||
|
||||
return (p1_i < p2_i) ? -1 : 1;
|
||||
}
|
||||
} else {
|
||||
/* c1 == c2, so always equal */
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* Scheduler event callback; this should get triggered once per event loop
|
||||
* if any scheduling work was created during the event loop.
|
||||
*/
|
||||
|
||||
static void
|
||||
scheduler_evt_callback(evutil_socket_t fd, short events, void *arg)
|
||||
{
|
||||
(void)fd;
|
||||
(void)events;
|
||||
(void)arg;
|
||||
log_debug(LD_SCHED, "Scheduler event callback called");
|
||||
|
||||
tor_assert(run_sched_ev);
|
||||
|
||||
/* Run the scheduler */
|
||||
scheduler_run();
|
||||
|
||||
/* Do we have more work to do? */
|
||||
if (scheduler_more_work()) scheduler_retrigger();
|
||||
}
|
||||
|
||||
/** Mark a channel as no longer ready to accept writes */
|
||||
|
||||
MOCK_IMPL(void,
|
||||
scheduler_channel_doesnt_want_writes,(channel_t *chan))
|
||||
{
|
||||
tor_assert(chan);
|
||||
|
||||
tor_assert(channels_pending);
|
||||
|
||||
/* If it's already in pending, we can put it in waiting_to_write */
|
||||
if (chan->scheduler_state == SCHED_CHAN_PENDING) {
|
||||
/*
|
||||
* It's in channels_pending, so it shouldn't be in any of
|
||||
* the other lists. It can't write any more, so it goes to
|
||||
* channels_waiting_to_write.
|
||||
*/
|
||||
smartlist_pqueue_remove(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_TO_WRITE;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p went from pending "
|
||||
"to waiting_to_write",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
} else {
|
||||
/*
|
||||
* It's not in pending, so it can't become waiting_to_write; it's
|
||||
* either not in any of the lists (nothing to do) or it's already in
|
||||
* waiting_for_cells (remove it, can't write any more).
|
||||
*/
|
||||
if (chan->scheduler_state == SCHED_CHAN_WAITING_FOR_CELLS) {
|
||||
chan->scheduler_state = SCHED_CHAN_IDLE;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p left waiting_for_cells",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/** Mark a channel as having waiting cells */
|
||||
|
||||
MOCK_IMPL(void,
|
||||
scheduler_channel_has_waiting_cells,(channel_t *chan))
|
||||
{
|
||||
int became_pending = 0;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(channels_pending);
|
||||
|
||||
/* First, check if this one is also writeable */
|
||||
if (chan->scheduler_state == SCHED_CHAN_WAITING_FOR_CELLS) {
|
||||
/*
|
||||
* It's in channels_waiting_for_cells, so it shouldn't be in any of
|
||||
* the other lists. It has waiting cells now, so it goes to
|
||||
* channels_pending.
|
||||
*/
|
||||
chan->scheduler_state = SCHED_CHAN_PENDING;
|
||||
smartlist_pqueue_add(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p went from waiting_for_cells "
|
||||
"to pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
became_pending = 1;
|
||||
} else {
|
||||
/*
|
||||
* It's not in waiting_for_cells, so it can't become pending; it's
|
||||
* either not in any of the lists (we add it to waiting_to_write)
|
||||
* or it's already in waiting_to_write or pending (we do nothing)
|
||||
*/
|
||||
if (!(chan->scheduler_state == SCHED_CHAN_WAITING_TO_WRITE ||
|
||||
chan->scheduler_state == SCHED_CHAN_PENDING)) {
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_TO_WRITE;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p entered waiting_to_write",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* If we made a channel pending, we potentially have scheduling work
|
||||
* to do.
|
||||
*/
|
||||
if (became_pending) scheduler_retrigger();
|
||||
}
|
||||
|
||||
/** Set up the scheduling system */
|
||||
|
||||
void
|
||||
scheduler_init(void)
|
||||
{
|
||||
log_debug(LD_SCHED, "Initting scheduler");
|
||||
|
||||
tor_assert(!run_sched_ev);
|
||||
run_sched_ev = tor_event_new(tor_libevent_get_base(), -1,
|
||||
0, scheduler_evt_callback, NULL);
|
||||
|
||||
channels_pending = smartlist_new();
|
||||
queue_heuristic = 0;
|
||||
queue_heuristic_timestamp = approx_time();
|
||||
}
|
||||
|
||||
/** Check if there's more scheduling work */
|
||||
|
||||
static int
|
||||
scheduler_more_work(void)
|
||||
{
|
||||
tor_assert(channels_pending);
|
||||
|
||||
return ((scheduler_get_queue_heuristic() < sched_q_low_water) &&
|
||||
((smartlist_len(channels_pending) > 0))) ? 1 : 0;
|
||||
}
|
||||
|
||||
/** Retrigger the scheduler in a way safe to use from the callback */
|
||||
|
||||
static void
|
||||
scheduler_retrigger(void)
|
||||
{
|
||||
tor_assert(run_sched_ev);
|
||||
event_active(run_sched_ev, EV_TIMEOUT, 1);
|
||||
}
|
||||
|
||||
/** Notify the scheduler of a channel being closed */
|
||||
|
||||
MOCK_IMPL(void,
|
||||
scheduler_release_channel,(channel_t *chan))
|
||||
{
|
||||
tor_assert(chan);
|
||||
tor_assert(channels_pending);
|
||||
|
||||
if (chan->scheduler_state == SCHED_CHAN_PENDING) {
|
||||
smartlist_pqueue_remove(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
}
|
||||
|
||||
chan->scheduler_state = SCHED_CHAN_IDLE;
|
||||
}
|
||||
|
||||
/** Run the scheduling algorithm if necessary */
|
||||
|
||||
MOCK_IMPL(void,
|
||||
scheduler_run, (void))
|
||||
{
|
||||
int n_cells, n_chans_before, n_chans_after;
|
||||
uint64_t q_len_before, q_heur_before, q_len_after, q_heur_after;
|
||||
ssize_t flushed, flushed_this_time;
|
||||
smartlist_t *to_readd = NULL;
|
||||
channel_t *chan = NULL;
|
||||
|
||||
log_debug(LD_SCHED, "We have a chance to run the scheduler");
|
||||
|
||||
if (scheduler_get_queue_heuristic() < sched_q_low_water) {
|
||||
n_chans_before = smartlist_len(channels_pending);
|
||||
q_len_before = channel_get_global_queue_estimate();
|
||||
q_heur_before = scheduler_get_queue_heuristic();
|
||||
|
||||
while (scheduler_get_queue_heuristic() <= sched_q_high_water &&
|
||||
smartlist_len(channels_pending) > 0) {
|
||||
/* Pop off a channel */
|
||||
chan = smartlist_pqueue_pop(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx));
|
||||
tor_assert(chan);
|
||||
|
||||
/* Figure out how many cells we can write */
|
||||
n_cells = channel_num_cells_writeable(chan);
|
||||
if (n_cells > 0) {
|
||||
log_debug(LD_SCHED,
|
||||
"Scheduler saw pending channel " U64_FORMAT " at %p with "
|
||||
"%d cells writeable",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan, n_cells);
|
||||
|
||||
flushed = 0;
|
||||
while (flushed < n_cells &&
|
||||
scheduler_get_queue_heuristic() <= sched_q_high_water) {
|
||||
flushed_this_time =
|
||||
channel_flush_some_cells(chan,
|
||||
MIN(sched_max_flush_cells,
|
||||
(size_t) n_cells - flushed));
|
||||
if (flushed_this_time <= 0) break;
|
||||
flushed += flushed_this_time;
|
||||
}
|
||||
|
||||
if (flushed < n_cells) {
|
||||
/* We ran out of cells to flush */
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_FOR_CELLS;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p "
|
||||
"entered waiting_for_cells from pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
} else {
|
||||
/* The channel may still have some cells */
|
||||
if (channel_more_to_flush(chan)) {
|
||||
/* The channel goes to either pending or waiting_to_write */
|
||||
if (channel_num_cells_writeable(chan) > 0) {
|
||||
/* Add it back to pending later */
|
||||
if (!to_readd) to_readd = smartlist_new();
|
||||
smartlist_add(to_readd, chan);
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p "
|
||||
"is still pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
} else {
|
||||
/* It's waiting to be able to write more */
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_TO_WRITE;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p "
|
||||
"entered waiting_to_write from pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
}
|
||||
} else {
|
||||
/* No cells left; it can go to idle or waiting_for_cells */
|
||||
if (channel_num_cells_writeable(chan) > 0) {
|
||||
/*
|
||||
* It can still accept writes, so it goes to
|
||||
* waiting_for_cells
|
||||
*/
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_FOR_CELLS;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p "
|
||||
"entered waiting_for_cells from pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
} else {
|
||||
/*
|
||||
* We exactly filled up the output queue with all available
|
||||
* cells; go to idle.
|
||||
*/
|
||||
chan->scheduler_state = SCHED_CHAN_IDLE;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p "
|
||||
"become idle from pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
log_debug(LD_SCHED,
|
||||
"Scheduler flushed %d cells onto pending channel "
|
||||
U64_FORMAT " at %p",
|
||||
(int)flushed, U64_PRINTF_ARG(chan->global_identifier),
|
||||
chan);
|
||||
} else {
|
||||
log_info(LD_SCHED,
|
||||
"Scheduler saw pending channel " U64_FORMAT " at %p with "
|
||||
"no cells writeable",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
/* Put it back to WAITING_TO_WRITE */
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_TO_WRITE;
|
||||
}
|
||||
}
|
||||
|
||||
/* Readd any channels we need to */
|
||||
if (to_readd) {
|
||||
SMARTLIST_FOREACH_BEGIN(to_readd, channel_t *, chan) {
|
||||
chan->scheduler_state = SCHED_CHAN_PENDING;
|
||||
smartlist_pqueue_add(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
} SMARTLIST_FOREACH_END(chan);
|
||||
smartlist_free(to_readd);
|
||||
}
|
||||
|
||||
n_chans_after = smartlist_len(channels_pending);
|
||||
q_len_after = channel_get_global_queue_estimate();
|
||||
q_heur_after = scheduler_get_queue_heuristic();
|
||||
log_debug(LD_SCHED,
|
||||
"Scheduler handled %d of %d pending channels, queue size from "
|
||||
U64_FORMAT " to " U64_FORMAT ", queue heuristic from "
|
||||
U64_FORMAT " to " U64_FORMAT,
|
||||
n_chans_before - n_chans_after, n_chans_before,
|
||||
U64_PRINTF_ARG(q_len_before), U64_PRINTF_ARG(q_len_after),
|
||||
U64_PRINTF_ARG(q_heur_before), U64_PRINTF_ARG(q_heur_after));
|
||||
}
|
||||
}
|
||||
|
||||
/** Trigger the scheduling event so we run the scheduler later */
|
||||
|
||||
#if 0
|
||||
static void
|
||||
scheduler_trigger(void)
|
||||
{
|
||||
log_debug(LD_SCHED, "Triggering scheduler event");
|
||||
|
||||
tor_assert(run_sched_ev);
|
||||
|
||||
event_add(run_sched_ev, EV_TIMEOUT, 1);
|
||||
}
|
||||
#endif
|
||||
|
||||
/** Mark a channel as ready to accept writes */
|
||||
|
||||
void
|
||||
scheduler_channel_wants_writes(channel_t *chan)
|
||||
{
|
||||
int became_pending = 0;
|
||||
|
||||
tor_assert(chan);
|
||||
tor_assert(channels_pending);
|
||||
|
||||
/* If it's already in waiting_to_write, we can put it in pending */
|
||||
if (chan->scheduler_state == SCHED_CHAN_WAITING_TO_WRITE) {
|
||||
/*
|
||||
* It can write now, so it goes to channels_pending.
|
||||
*/
|
||||
smartlist_pqueue_add(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
chan->scheduler_state = SCHED_CHAN_PENDING;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p went from waiting_to_write "
|
||||
"to pending",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
became_pending = 1;
|
||||
} else {
|
||||
/*
|
||||
* It's not in SCHED_CHAN_WAITING_TO_WRITE, so it can't become pending;
|
||||
* it's either idle and goes to WAITING_FOR_CELLS, or it's a no-op.
|
||||
*/
|
||||
if (!(chan->scheduler_state == SCHED_CHAN_WAITING_FOR_CELLS ||
|
||||
chan->scheduler_state == SCHED_CHAN_PENDING)) {
|
||||
chan->scheduler_state = SCHED_CHAN_WAITING_FOR_CELLS;
|
||||
log_debug(LD_SCHED,
|
||||
"Channel " U64_FORMAT " at %p entered waiting_for_cells",
|
||||
U64_PRINTF_ARG(chan->global_identifier), chan);
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* If we made a channel pending, we potentially have scheduling work
|
||||
* to do.
|
||||
*/
|
||||
if (became_pending) scheduler_retrigger();
|
||||
}
|
||||
|
||||
/**
|
||||
* Notify the scheduler that a channel's position in the pqueue may have
|
||||
* changed
|
||||
*/
|
||||
|
||||
void
|
||||
scheduler_touch_channel(channel_t *chan)
|
||||
{
|
||||
tor_assert(chan);
|
||||
|
||||
if (chan->scheduler_state == SCHED_CHAN_PENDING) {
|
||||
/* Remove and re-add it */
|
||||
smartlist_pqueue_remove(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
smartlist_pqueue_add(channels_pending,
|
||||
scheduler_compare_channels,
|
||||
STRUCT_OFFSET(channel_t, sched_heap_idx),
|
||||
chan);
|
||||
}
|
||||
/* else no-op, since it isn't in the queue */
|
||||
}
|
||||
|
||||
/**
|
||||
* Notify the scheduler of a queue size adjustment, to recalculate the
|
||||
* queue heuristic.
|
||||
*/
|
||||
|
||||
void
|
||||
scheduler_adjust_queue_size(channel_t *chan, char dir, uint64_t adj)
|
||||
{
|
||||
time_t now = approx_time();
|
||||
|
||||
log_debug(LD_SCHED,
|
||||
"Queue size adjustment by %s" U64_FORMAT " for channel "
|
||||
U64_FORMAT,
|
||||
(dir >= 0) ? "+" : "-",
|
||||
U64_PRINTF_ARG(adj),
|
||||
U64_PRINTF_ARG(chan->global_identifier));
|
||||
|
||||
/* Get the queue heuristic up to date */
|
||||
scheduler_update_queue_heuristic(now);
|
||||
|
||||
/* Adjust as appropriate */
|
||||
if (dir >= 0) {
|
||||
/* Increasing it */
|
||||
queue_heuristic += adj;
|
||||
} else {
|
||||
/* Decreasing it */
|
||||
if (queue_heuristic > adj) queue_heuristic -= adj;
|
||||
else queue_heuristic = 0;
|
||||
}
|
||||
|
||||
log_debug(LD_SCHED,
|
||||
"Queue heuristic is now " U64_FORMAT,
|
||||
U64_PRINTF_ARG(queue_heuristic));
|
||||
}
|
||||
|
||||
/**
|
||||
* Query the current value of the queue heuristic
|
||||
*/
|
||||
|
||||
STATIC uint64_t
|
||||
scheduler_get_queue_heuristic(void)
|
||||
{
|
||||
time_t now = approx_time();
|
||||
|
||||
scheduler_update_queue_heuristic(now);
|
||||
|
||||
return queue_heuristic;
|
||||
}
|
||||
|
||||
/**
|
||||
* Adjust the queue heuristic value to the present time
|
||||
*/
|
||||
|
||||
STATIC void
|
||||
scheduler_update_queue_heuristic(time_t now)
|
||||
{
|
||||
time_t diff;
|
||||
|
||||
if (queue_heuristic_timestamp == 0) {
|
||||
/*
|
||||
* Nothing we can sensibly do; must not have been initted properly.
|
||||
* Oh well.
|
||||
*/
|
||||
queue_heuristic_timestamp = now;
|
||||
} else if (queue_heuristic_timestamp < now) {
|
||||
diff = now - queue_heuristic_timestamp;
|
||||
/*
|
||||
* This is a simple exponential age-out; the other proposed alternative
|
||||
* was a linear age-out using the bandwidth history in rephist.c; I'm
|
||||
* going with this out of concern that if an adversary can jam the
|
||||
* scheduler long enough, it would cause the bandwidth to drop to
|
||||
* zero and render the aging mechanism ineffective thereafter.
|
||||
*/
|
||||
if (0 <= diff && diff < 64) queue_heuristic >>= diff;
|
||||
else queue_heuristic = 0;
|
||||
|
||||
queue_heuristic_timestamp = now;
|
||||
|
||||
log_debug(LD_SCHED,
|
||||
"Queue heuristic is now " U64_FORMAT,
|
||||
U64_PRINTF_ARG(queue_heuristic));
|
||||
}
|
||||
/* else no update needed, or time went backward */
|
||||
}
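/* Editor's worked example of the age-out above, not part of the file: a
 * queue_heuristic of 32768 left alone for 3 seconds decays to
 * 32768 >> 3 == 4096; once the gap reaches 64 seconds it is clamped to 0. */
uint64_t h = 32768;
time_t idle = 3;
h = (0 <= idle && idle < 64) ? (h >> idle) : 0;  /* h == 4096 */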
|
||||
|
||||
/**
|
||||
* Set scheduler watermarks and flush size
|
||||
*/
|
||||
|
||||
void
|
||||
scheduler_set_watermarks(uint32_t lo, uint32_t hi, uint32_t max_flush)
|
||||
{
|
||||
/* Sanity assertions - caller should ensure these are true */
|
||||
tor_assert(lo > 0);
|
||||
tor_assert(hi > lo);
|
||||
tor_assert(max_flush > 0);
|
||||
|
||||
sched_q_low_water = lo;
|
||||
sched_q_high_water = hi;
|
||||
sched_max_flush_cells = max_flush;
|
||||
}
|
||||
|
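A usage sketch for the setter that closes scheduler.c above; the arguments simply restate the static defaults declared near the top of the file, and the config hook implied here is hypothetical.

scheduler_set_watermarks(16384 /* lo */, 32768 /* hi */, 16 /* max_flush */);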
50
src/or/scheduler.h
Normal file
50
src/or/scheduler.h
Normal file
@ -0,0 +1,50 @@
|
||||
/* * Copyright (c) 2013, The Tor Project, Inc. */
|
||||
/* See LICENSE for licensing information */
|
||||
|
||||
/**
|
||||
* \file scheduler.h
|
||||
* \brief Header file for scheduler.c
|
||||
**/
|
||||
|
||||
#ifndef TOR_SCHEDULER_H
|
||||
#define TOR_SCHEDULER_H
|
||||
|
||||
#include "or.h"
|
||||
#include "channel.h"
|
||||
#include "testsupport.h"
|
||||
|
||||
/* Global-visibility scheduler functions */
|
||||
|
||||
/* Set up and shut down the scheduler from main.c */
|
||||
void scheduler_free_all(void);
|
||||
void scheduler_init(void);
|
||||
MOCK_DECL(void, scheduler_run, (void));
|
||||
|
||||
/* Mark channels as having cells or wanting/not wanting writes */
|
||||
MOCK_DECL(void,scheduler_channel_doesnt_want_writes,(channel_t *chan));
|
||||
MOCK_DECL(void,scheduler_channel_has_waiting_cells,(channel_t *chan));
|
||||
void scheduler_channel_wants_writes(channel_t *chan);
|
||||
|
||||
/* Notify the scheduler of a channel being closed */
|
||||
MOCK_DECL(void,scheduler_release_channel,(channel_t *chan));
|
||||
|
||||
/* Notify scheduler of queue size adjustments */
|
||||
void scheduler_adjust_queue_size(channel_t *chan, char dir, uint64_t adj);
|
||||
|
||||
/* Notify scheduler that a channel's queue position may have changed */
|
||||
void scheduler_touch_channel(channel_t *chan);
|
||||
|
||||
/* Adjust the watermarks from config file*/
|
||||
void scheduler_set_watermarks(uint32_t lo, uint32_t hi, uint32_t max_flush);
|
||||
|
||||
/* Things only scheduler.c and its test suite should see */
|
||||
|
||||
#ifdef SCHEDULER_PRIVATE_
|
||||
MOCK_DECL(STATIC int, scheduler_compare_channels,
|
||||
(const void *c1_v, const void *c2_v));
|
||||
STATIC uint64_t scheduler_get_queue_heuristic(void);
|
||||
STATIC void scheduler_update_queue_heuristic(time_t now);
|
||||
#endif
|
||||
|
||||
#endif /* !defined(TOR_SCHEDULER_H) */
|
||||
|
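The transports.c and transports.h hunks below convert two functions to MOCK_IMPL/MOCK_DECL so the test suite can substitute them at runtime; a hedged sketch using the MOCK()/UNMOCK() macros from testsupport.h, where pt_kickstart_proxy_mock is a hypothetical replacement:

MOCK(pt_kickstart_proxy, pt_kickstart_proxy_mock);
/* ... run code under test that would otherwise launch a real transport proxy ... */
UNMOCK(pt_kickstart_proxy);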
@ -326,9 +326,9 @@ transport_add(transport_t *t)
|
||||
/** Remember a new pluggable transport proxy at <b>addr</b>:<b>port</b>.
|
||||
* <b>name</b> is set to the name of the protocol this proxy uses.
|
||||
* <b>socks_ver</b> is set to the SOCKS version of the proxy. */
|
||||
int
|
||||
transport_add_from_config(const tor_addr_t *addr, uint16_t port,
|
||||
const char *name, int socks_ver)
|
||||
MOCK_IMPL(int,
|
||||
transport_add_from_config, (const tor_addr_t *addr, uint16_t port,
|
||||
const char *name, int socks_ver))
|
||||
{
|
||||
transport_t *t = transport_new(addr, port, name, socks_ver, NULL);
|
||||
|
||||
@ -1456,9 +1456,9 @@ managed_proxy_create(const smartlist_t *transport_list,
|
||||
* Requires that proxy_argv be a NULL-terminated array of command-line
|
||||
* elements, containing at least one element.
|
||||
**/
|
||||
void
|
||||
pt_kickstart_proxy(const smartlist_t *transport_list,
|
||||
char **proxy_argv, int is_server)
|
||||
MOCK_IMPL(void,
|
||||
pt_kickstart_proxy, (const smartlist_t *transport_list,
|
||||
char **proxy_argv, int is_server))
|
||||
{
|
||||
managed_proxy_t *mp=NULL;
|
||||
transport_t *old_transport = NULL;
|
||||
|
@ -32,14 +32,16 @@ typedef struct transport_t {
|
||||
|
||||
void mark_transport_list(void);
|
||||
void sweep_transport_list(void);
|
||||
int transport_add_from_config(const tor_addr_t *addr, uint16_t port,
|
||||
const char *name, int socks_ver);
|
||||
MOCK_DECL(int, transport_add_from_config,
|
||||
(const tor_addr_t *addr, uint16_t port,
|
||||
const char *name, int socks_ver));
|
||||
void transport_free(transport_t *transport);
|
||||
|
||||
transport_t *transport_get_by_name(const char *name);
|
||||
|
||||
void pt_kickstart_proxy(const smartlist_t *transport_list, char **proxy_argv,
|
||||
int is_server);
|
||||
MOCK_DECL(void, pt_kickstart_proxy,
|
||||
(const smartlist_t *transport_list, char **proxy_argv,
|
||||
int is_server));
|
||||
|
||||
#define pt_kickstart_client_proxy(tl, pa) \
|
||||
pt_kickstart_proxy(tl, pa, 0)
|
||||
|
@ -11,11 +11,12 @@ LIBS = ..\..\..\build-alpha\lib\libevent.lib \
|
||||
ws2_32.lib advapi32.lib shell32.lib \
|
||||
crypt32.lib gdi32.lib user32.lib
|
||||
|
||||
TEST_OBJECTS = test.obj test_addr.obj test_containers.obj \
|
||||
test_controller_events.ogj test_crypto.obj test_data.obj test_dir.obj \
|
||||
test_microdesc.obj test_pt.obj test_util.obj test_config.obj \
|
||||
test_cell_formats.obj test_replay.obj test_introduce.obj tinytest.obj \
|
||||
test_hs.obj
|
||||
TEST_OBJECTS = test.obj test_addr.obj test_channel.obj test_channeltls.obj \
|
||||
test_containers.obj \
|
||||
test_controller_events.obj test_crypto.obj test_data.obj test_dir.obj \
|
||||
test_checkdir.obj test_microdesc.obj test_pt.obj test_util.obj test_config.obj \
|
||||
test_cell_formats.obj test_relay.obj test_replay.obj \
|
||||
test_scheduler.obj test_introduce.obj test_hs.obj tinytest.obj
|
||||
|
||||
tinytest.obj: ..\ext\tinytest.c
|
||||
$(CC) $(CFLAGS) /D snprintf=_snprintf /c ..\ext\tinytest.c
|
||||
|
25
src/test/fakechans.h
Normal file
25
src/test/fakechans.h
Normal file
@ -0,0 +1,25 @@
|
||||
/* Copyright (c) 2014, The Tor Project, Inc. */
/* See LICENSE for licensing information */

#ifndef TOR_FAKECHANS_H
#define TOR_FAKECHANS_H

/**
 * \file fakechans.h
 * \brief Declarations for fake channels for test suite use
 */

void make_fake_cell(cell_t *c);
void make_fake_var_cell(var_cell_t *c);
channel_t * new_fake_channel(void);

/* Also exposes some mocks used by both test_channel.c and test_relay.c */
void scheduler_channel_has_waiting_cells_mock(channel_t *ch);
void scheduler_release_channel_mock(channel_t *ch);

/* Query some counters used by the exposed mocks */
int get_mock_scheduler_has_waiting_cells_count(void);
int get_mock_scheduler_release_channel_count(void);

#endif /* !defined(TOR_FAKECHANS_H) */
|
||||
|
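A hedged sketch of how a test might combine the declarations in fakechans.h above with testsupport.h's MOCK()/UNMOCK(); the tinytest assertion style and the idea that the exercised path releases the channel exactly once are assumptions, not taken from the patch.

MOCK(scheduler_release_channel, scheduler_release_channel_mock);
channel_t *ch = new_fake_channel();
/* ... exercise code expected to release ch exactly once ... */
tt_int_op(get_mock_scheduler_release_channel_count(), ==, 1);
UNMOCK(scheduler_release_channel);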
Some files were not shown because too many files have changed in this diff.