Otherwise, we could mangle the data that came over the net on Windows,
which would defeat the purpose of recording the unparseable material
verbatim.
Fixes bug 11342; bugfix on 0.2.2.1-alpha.
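
A minimal sketch of the idea, with a hypothetical function name (the
real fix is in Tor's descriptor-dumping code): open the dump file in
binary mode so that Windows's stdio doesn't rewrite line endings.

    #include <stdio.h>

    /* Hypothetical sketch: in text mode, Windows's stdio rewrites "\n"
     * as "\r\n" on write, silently altering the recorded bytes.  The
     * "b" in "wb" disables that translation, so the unparseable data
     * is stored exactly as it arrived over the network. */
    static int
    dump_bytes_verbatim(const char *fname, const char *buf, size_t len)
    {
      FILE *f = fopen(fname, "wb");
      if (!f)
        return -1;
      size_t n = fwrite(buf, 1, len, f);
      fclose(f);
      return n == len ? 0 : -1;
    }
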
This is a fix for bug 9963. I'd call it a feature, but if it counts as
a bugfix, it's a bugfix on 0.2.4.18-rc.
Old behavior:
Mar 27 11:02:19.000 [notice] Bootstrapped 50%: Loading relay descriptors.
Mar 27 11:02:20.000 [notice] Bootstrapped 51%: Loading relay descriptors.
Mar 27 11:02:20.000 [notice] Bootstrapped 52%: Loading relay descriptors.
... [Many lines omitted] ...
Mar 27 11:02:29.000 [notice] Bootstrapped 78%: Loading relay descriptors.
Mar 27 11:02:33.000 [notice] We now have enough directory information to build circuits.
New behavior:
Mar 27 11:16:17.000 [notice] Bootstrapped 50%: Loading relay descriptors
Mar 27 11:16:19.000 [notice] Bootstrapped 55%: Loading relay descriptors
Mar 27 11:16:21.000 [notice] Bootstrapped 60%: Loading relay descriptors
Mar 27 11:16:21.000 [notice] Bootstrapped 65%: Loading relay descriptors
Mar 27 11:16:21.000 [notice] Bootstrapped 70%: Loading relay descriptors
Mar 27 11:16:21.000 [notice] Bootstrapped 75%: Loading relay descriptors
Mar 27 11:16:21.000 [notice] We now have enough directory information to build circuits.
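
A minimal sketch of the rate-limiting idea (the names are illustrative
and printf stands in for Tor's logging; this is not the literal patch):
only print a new notice when the percentage enters a new 5% bucket.

    #include <stdio.h>

    static int last_logged_percent = -1;

    /* Hypothetical sketch: suppress "Bootstrapped N%" notices that fall
     * in the same 5% bucket as the last one printed; 100% always logs. */
    static void
    log_bootstrap_progress(int percent, const char *summary)
    {
      if (percent < 100 && last_logged_percent >= 0 &&
          percent / 5 == last_logged_percent / 5)
        return;  /* still inside the previous message's bucket */
      last_logged_percent = percent;
      printf("Bootstrapped %d%%: %s\n", percent, summary);
    }
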
Most of these changes are simple. The only nontrivial part is that our
pattern for using ENUM_BF was confusing doxygen by producing
declarations that didn't look like declarations.
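
For context, a sketch of the ENUM_BF pattern, with an assumed
feature-test macro name and illustrative types:

    /* On compilers where enum bitfields are signed, a 2-bit field
     * holding the value 2 reads back as -2, so fall back to "unsigned";
     * where they are safe, keep the enum type so the compiler can check
     * assignments.  Doxygen sees "ENUM_BF(circ_state_t) state : 2;"
     * before macro expansion, which doesn't parse as a declaration. */
    #ifdef ENUM_BITFIELDS_ARE_SIGNED   /* assumed feature-test macro */
    #define ENUM_BF(t) unsigned
    #else
    #define ENUM_BF(t) t
    #endif

    typedef enum { CIRC_STATE_BUILDING, CIRC_STATE_OPEN,
                   CIRC_STATE_CLOSED } circ_state_t;

    struct circ_flags_t {
      ENUM_BF(circ_state_t) state : 2;  /* packed into two bits */
      unsigned marked_for_close : 1;
    };
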
In circuitlist_free_all, we free all the circuits, removing them from
the map as we go, but we weren't actually freeing the placeholder
entries that we use to indicate pending DESTROY cells.
Fixes bug 11278; bugfix on the bug 7912 code that was merged in
0.2.5.1-alpha.
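
A sketch of the shape of the fix, assuming the chan_circid_map table
and the HT_* macros from Tor's ht.h (the merged patch may differ in
detail); the helper name is hypothetical:

    /* At this point every real circuit has been freed and unlinked, so
     * any entry left in the map is a placeholder for a pending DESTROY
     * cell, with its ->circuit set to NULL.  Unlink and free each one. */
    static void
    free_destroy_cell_placeholders(void)
    {
      chan_circid_circuit_map_t **elt, **next, *entry;
      for (elt = HT_START(chan_circid_map, &chan_circid_map);
           elt;
           elt = next) {
        entry = *elt;
        tor_assert(entry->circuit == NULL);
        next = HT_NEXT_RMV(chan_circid_map, &chan_circid_map, elt);
        tor_free(entry);
      }
    }
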
There are still quite a few 0.2.3.2x relays running for x<5, and while I
agree they should upgrade, I don't think cutting them out of the network
is a net win on either side.

ubsan doesn't like us to compute (1u<<32) when unsigned is only 32 bits
wide: shifting by the full width of the type is undefined behavior.
Fortunately, we already special-case addr_mask_get_bits(0), so we can
just change the loop bounds.
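
A sketch of the repaired function, consistent with the description
above though not necessarily byte-for-byte the committed code:

    #include <stdint.h>

    /* Return the number of leading one bits in a netmask, or -1 if the
     * mask isn't of the form 1...10...0.  Because mask==0 is handled up
     * front, the loop can start at i=1, so it never evaluates 1u<<32. */
    static int
    addr_mask_get_bits(uint32_t mask)
    {
      int i;
      if (mask == 0)
        return 0;
      if (mask == 0xFFFFFFFFu)
        return 32;
      for (i = 1; i <= 32; ++i) {
        if (mask == (uint32_t) ~((1u << (32 - i)) - 1))
          return i;
      }
      return -1;
    }
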
In digestmap_set/get benchmarks, doing unaligned access on x86
doesn't save more than a percent or so in the fast case. In the
slow case (where we cross a cache line), it could be pretty
expensive. It also makes ubsan unhappy.
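
The alignment-safe alternative assembles the word one byte at a time;
a sketch (the function name is illustrative, and where exactly this
lands in the digestmap hashing is an assumption):

    #include <stdint.h>

    /* Little-endian 64-bit load without casting to uint64_t* and
     * dereferencing: each access is a single byte, so it is valid at
     * any alignment, no individual load straddles a cache line, and
     * ubsan stays quiet. */
    static uint64_t
    load64_le(const uint8_t *p)
    {
      return  (uint64_t)p[0]
           | ((uint64_t)p[1] << 8)
           | ((uint64_t)p[2] << 16)
           | ((uint64_t)p[3] << 24)
           | ((uint64_t)p[4] << 32)
           | ((uint64_t)p[5] << 40)
           | ((uint64_t)p[6] << 48)
           | ((uint64_t)p[7] << 56);
    }
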
This makes clang's memory sanitizer happier that we aren't reading
off the end of a char[1]. We hadn't replaced the char[1] with a
char[FLEXIBLE_ARRAY_MEMBER] before because we were doing a union
trick to force alignment. Now we use __attribute__((aligned)) where
available, and we do the union trick elsewhere.
Most of this patch is just replacing accesses to (x)->u.mem with
(x)->U_MEM, where U_MEM is defined as "u.mem" or "mem" depending on
our implementation.
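
A sketch of the two chunk layouts and the U_MEM macro; the configure
check and field names here are assumptions, not the exact code:

    #include <stddef.h>

    /* Autoconf defines this to nothing (C99) or 1 (older compilers). */
    #ifndef FLEXIBLE_ARRAY_MEMBER
    #define FLEXIBLE_ARRAY_MEMBER
    #endif

    #ifdef HAVE_ALIGNED_ATTRIBUTE        /* assumed configure check */
    /* With the attribute, the chunk can end in a true flexible array
     * member, so the sanitizer knows its real extent. */
    typedef struct memarea_chunk_t {
      struct memarea_chunk_t *next_chunk;
      size_t mem_size;                   /* total space in mem */
      char *next_mem;                    /* next free position in mem */
      char mem[FLEXIBLE_ARRAY_MEMBER] __attribute__((aligned(16)));
    } memarea_chunk_t;
    #define U_MEM mem
    #else
    /* Without it, keep the union trick: the union forces alignment,
     * but its char[1] member is what made the sanitizer complain. */
    typedef struct memarea_chunk_t {
      struct memarea_chunk_t *next_chunk;
      size_t mem_size;
      char *next_mem;
      union {
        char mem[1];
        void *void_for_alignment_;       /* forces pointer alignment */
      } u;
    } memarea_chunk_t;
    #define U_MEM u.mem
    #endif

    /* Callers then write chunk->U_MEM rather than chunk->u.mem. */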