e3e0a2432c Add benchmark to write JSON into a string (Martin Ankerl)
Pull request description:
The benchmark `BlockToJsonVerbose` only tests generating (and destroying)
the JSON data structure, but serializing into a string is also a
performance-critical aspect of the RPC calls.
Extracts test setup into a `struct TestBlockAndIndex`, and uses it in
both `BlockToJsonVerbose` and `BlockToJsonVerboseWrite`.
Also, use `ankerl::nanobench::doNotOptimizeAway` to make sure the compiler
can't optimize away the results of the calls.
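For context, here is a minimal standalone sketch of that pattern. The workload (`BuildValue()`/`WriteValue()`) is a hypothetical stand-in for the real block/JSON fixture, and the real benchmarks' split between setup and timed work may differ:
```cpp
// Standalone sketch of the benchmarking pattern (hypothetical workload, not
// the actual bench code): build a value, serialize it, and pass the results
// through doNotOptimizeAway() so the compiler cannot elide the work.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <string>
#include <vector>

// Hypothetical stand-in for building the verbose JSON structure.
static std::vector<int> BuildValue()
{
    std::vector<int> v;
    for (int i = 0; i < 1000; ++i) v.push_back(i * i);
    return v;
}

// Hypothetical stand-in for serializing it into a string.
static std::string WriteValue(const std::vector<int>& v)
{
    std::string s;
    for (int x : v) s += std::to_string(x) + ',';
    return s;
}

int main()
{
    ankerl::nanobench::Bench bench;

    // Analogue of BlockToJsonVerbose: time building (and destroying) the value.
    bench.run("BuildOnly", [&] {
        auto v = BuildValue();
        ankerl::nanobench::doNotOptimizeAway(v);
    });

    // Analogue of BlockToJsonVerboseWrite: time only the serialization of a
    // value prepared once during setup.
    const auto prebuilt = BuildValue();
    bench.run("WriteOnly", [&] {
        auto s = WriteValue(prebuilt);
        ankerl::nanobench::doNotOptimizeAway(s);
    });
}
```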
Here are benchmark results on my Intel i7-8700:
| ns/op | op/s | err% | ins/op | cyc/op | IPC | bra/op | miss% | total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
| 71,807,017.00 | 13.93 | 0.4% | 555,782,961.00 | 220,788,645.00 | 2.517 | 102,279,341.00 | 0.4% | 0.80 | `BlockToJsonVerbose`
| 27,916,835.00 | 35.82 | 0.1% | 235,084,034.00 | 89,033,525.00 | 2.640 | 42,911,139.00 | 0.3% | 0.32 | `BlockToJsonVerboseWrite`
ACKs for top commit:
laanwj:
Code review ACK e3e0a2432c
Tree-SHA512: bc4d6d1588d47d4bd7af8e7908e44b8561bc391a2d73eccd7c0aa37fc40e8a9ce1fa1f3c29b416eef24a73c6bce3036839c0bbfe1b8dbd6d1bba3718b7ca5383
So that help texts include "i2p" in:
* `./bitcoind -help` (in `-onlynet` description)
* `getpeerinfo` RPC
* `getnetworkinfo` RPC
Co-authored-by: Jon Atack <jon@atack.com>
Introduce two new options to reach the I2P network:
* `-i2psam=<ip:port>` points to the I2P SAM proxy. If this is set then
the I2P network is considered reachable and we can make outgoing
connections to I2P peers via that proxy. We also listen for and accept
incoming connections from I2P peers, but only if the option below is set
in addition to `-i2psam=<ip:port>`.
* `-i2pacceptincoming`: if this is set together with `-i2psam=<ip:port>`,
then we accept incoming I2P connections via the I2P SAM proxy.
Implement the following commands from the I2P SAM protocol:
* HELLO: needed for all of the remaining ones
* DEST GENERATE: to generate our private key and destination
* NAMING LOOKUP: to convert .i2p addresses to destinations
* SESSION CREATE: needed for STREAM CONNECT and STREAM ACCEPT
* STREAM CONNECT: to make outgoing connections
* STREAM ACCEPT: to accept incoming connections
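For orientation, the SAM exchange is plain text lines over TCP. Below is a rough standalone sketch of the outgoing-connection flow; it illustrates the SAM 3.x protocol itself, not the code added in this PR. `SendLine()` is an ad-hoc helper for the sketch, and reply parsing and error handling are kept minimal:
```cpp
// Illustration of the SAM 3.x command flow for an outgoing connection; not
// the implementation in this PR. Every reply should really be checked for
// RESULT=OK before proceeding.
#include <string>
#include <sys/socket.h>

// Send one command line and read the single '\n'-terminated reply line.
static std::string SendLine(int fd, std::string line)
{
    line += '\n';
    send(fd, line.data(), line.size(), 0);
    std::string reply;
    char c;
    while (recv(fd, &c, 1, 0) == 1 && c != '\n') reply += c;
    return reply;
}

// control_fd and stream_fd are TCP sockets already connected to the SAM
// proxy configured via -i2psam (commonly 127.0.0.1:7656). my_priv_key is the
// private key produced earlier by DEST GENERATE (not shown here).
static void OutgoingI2PConnection(int control_fd, int stream_fd,
                                  const std::string& my_priv_key,
                                  const std::string& peer_addr_b32)
{
    // HELLO: version handshake, required on every connection to the proxy.
    SendLine(control_fd, "HELLO VERSION MIN=3.0 MAX=3.1");

    // SESSION CREATE: bind our destination (private key) to a session id.
    SendLine(control_fd,
             "SESSION CREATE STYLE=STREAM ID=sess DESTINATION=" + my_priv_key);

    // NAMING LOOKUP: resolve a .b32.i2p address; the reply carries VALUE=<dest>.
    const std::string reply =
        SendLine(control_fd, "NAMING LOOKUP NAME=" + peer_addr_b32);
    const size_t pos = reply.find("VALUE=");
    const std::string dest =
        pos == std::string::npos ? "" : reply.substr(pos + 6);

    // STREAM CONNECT: issued on a second connection (after its own HELLO); on
    // success that same TCP socket then carries the peer's data stream.
    SendLine(stream_fd, "HELLO VERSION MIN=3.0 MAX=3.1");
    SendLine(stream_fd, "STREAM CONNECT ID=sess DESTINATION=" + dest);
}
```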
Introduce two high level, convenience methods in the `Sock` class:
* `SendComplete()`: keeps trying to send the specified data until either
all of it has been sent successfully, a timeout expires, or the operation
is interrupted.
* `RecvUntilTerminator()`: reads until a terminator is encountered (never
past it), a timeout expires, or the operation is interrupted.
These will be convenient in the I2P SAM implementation.
`SendComplete()` can also be used in the SOCKS5 implementation instead
of calling `send()` directly.
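A rough usage sketch of the two helpers; the exact parameter lists in `util/sock.h` may differ from what is shown here:
```cpp
// Usage sketch only; the real signatures in util/sock.h may differ slightly.
#include <chrono>
#include <string>

#include <threadinterrupt.h>
#include <util/sock.h>

void SamHelloSketch(const Sock& sock, CThreadInterrupt& interrupt)
{
    using namespace std::chrono_literals;

    // Keep calling the underlying Send() until every byte has been written,
    // giving up on timeout or when `interrupt` is triggered.
    sock.SendComplete("HELLO VERSION MIN=3.0 MAX=3.1\n", 10s, interrupt);

    // Read from the socket until the '\n' terminator is seen, never past it,
    // so any bytes after the reply line stay queued for the next read.
    const std::string reply = sock.RecvUntilTerminator('\n', 10s, interrupt);
}
```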
Previously `Sock::Wait()` did not signal to the caller whether a timeout
or one of the requested events occurred, since none of its callers needed
that. Such functionality will be needed in the I2P implementation, so
extend the `Sock::Wait()` method.
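Roughly, the extension lets a caller distinguish "timed out" from "event occurred"; parameter names below are paraphrased rather than copied from the header:
```cpp
// Sketch of the extended Sock::Wait(); names are paraphrased, see
// util/sock.h for the actual interface.
#include <chrono>

#include <util/sock.h>

bool WaitForReadable(const Sock& sock)
{
    using namespace std::chrono_literals;

    Sock::Event occurred{0};
    if (!sock.Wait(5s, Sock::RECV, &occurred)) {
        return false; // the wait itself failed (e.g. poll/select error)
    }
    // If no requested event is reported, the timeout expired.
    return (occurred & Sock::RECV) != 0;
}
```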
Also recognize I2P addresses of the form `base32hashofpublickey.b32.i2p`
in `CNetAddr::SetSpecial()`.
This makes `Lookup()` support them, which in turn makes it possible to
manually connect to an I2P node by using
`-proxy=i2p_socks5_proxy:port -addnode=i2p_address.b32.i2p:port`
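In code terms, the effect can be sketched roughly as follows (illustrative only; the exact call sites in the codebase differ):
```cpp
// Illustrative sketch: CNetAddr::SetSpecial() now also accepts .b32.i2p
// names in addition to .onion ones.
#include <netaddress.h>

#include <string>

bool IsParsableI2P(const std::string& name)
{
    CNetAddr addr;
    // True for e.g. "<52 base32 chars>.b32.i2p"; previously SetSpecial()
    // only recognized Tor .onion addresses.
    return addr.SetSpecial(name) && addr.IsI2P();
}
```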
Co-authored-by: Lucas Ontivero <lucasontivero@gmail.com>
Our local (bind) address is already saved in `CNode::addrBind`, so there
is no need to retrieve it again with `GetBindAddress()`.
Also, for I2P connections `CNode::addrBind` would contain our I2P
address, but `GetBindAddress()` would return something like
`127.0.0.1:RANDOM_PORT`.
Isolate the second half of `CConnman::AcceptConnection()` into a new,
separate method, which can be reused if we accept incoming connections
by means other than `accept()` (the first half of
`CConnman::AcceptConnection()`).
Call `GetBindAddress()` earlier in `CConnman::AcceptConnection()`. That
call is specific to the TCP protocol, and moving it up makes the code
below it reusable for other protocols, provided the caller supplies an
`addr_bind` retrieved by other means.
This check is related to an `accept()` failure, so do it earlier, closer
to the `accept()` call. This will make it possible to isolate the
`accept()`-specific code at the beginning of `CConnman::AcceptConnection()`
and reuse the code that follows it.
`fclose()` flushes any buffered data to disk, so if it fails then the
data may not have been completely written. Thus, check whether `fclose()`
succeeds and only then report success from `WriteBinaryFile()`.
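The pattern, sketched standalone (the real `WriteBinaryFile()` may differ in details):
```cpp
// Standalone sketch of the fclose()-checking pattern described above.
#include <cstdio>
#include <string>

bool WriteBinaryFileSketch(const std::string& filename, const std::string& data)
{
    FILE* f = std::fopen(filename.c_str(), "wb");
    if (f == nullptr) return false;

    if (std::fwrite(data.data(), 1, data.size(), f) != data.size()) {
        std::fclose(f);
        return false;
    }
    // fclose() flushes buffered data; if it fails, the file may be incomplete,
    // so only report success when it returns 0.
    return std::fclose(f) == 0;
}
```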
If an error occurs and `fread()` returns `0` (nothing was read), then the
code before this patch would have returned "success" with the partially
read contents of the file.
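A standalone sketch of the corrected read loop (the real `ReadBinaryFile()` differs in details):
```cpp
// Distinguish end-of-file from a read error via ferror() instead of treating
// any 0-byte fread() as success.
#include <cstdio>
#include <string>
#include <utility>

std::pair<bool, std::string> ReadBinaryFileSketch(const std::string& filename)
{
    FILE* f = std::fopen(filename.c_str(), "rb");
    if (f == nullptr) return {false, ""};

    std::string contents;
    char buffer[128];
    size_t n;
    while ((n = std::fread(buffer, 1, sizeof(buffer), f)) > 0) {
        contents.append(buffer, n);
    }
    // A short or zero read is fine at end of file, but not if an error occurred.
    const bool ok = std::ferror(f) == 0;
    std::fclose(f);
    if (!ok) return {false, ""};
    return {true, std::move(contents)};
}
```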
Extract `ReadBinaryFile()` and `WriteBinaryFile()` from `torcontrol.cpp`
into their own `readwritefile.{h,cpp}` files, so that they can be reused
from other modules.
Currently, our cross-compile of libnatpmp for macOS doesn't work at all.
The wrong archiver is used, which produces an archive the linker doesn't like.
This becomes clear when configuring:
```bash
configure:25722: checking for initnatpmp in -lnatpmp
configure:25747: env -u C_INCLUDE_PATH -u CPLUS_INCLUDE_PATH -u OBJC_INCLUDE_PATH -u OBJCPLUS_INCLUDE_PATH -u CPATH -u LIBRARY_PATH /home/ubuntu/bitcoin/depends/x86_64-apple-darwin18/native/bin/clang++ --target=x86_64-apple-darwin18 <trim> -Wl,-headerpad_max_install_names -Wl,-dead_strip -Wl,-dead_strip_dylibs conftest.cpp -lnatpmp >&5
ld: archive has no table of contents for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
Fix this by using the right `ar` (we do the same for upnp).
While we're at it, fix the build so that it uses our CFLAGS/CPPFLAGS.
This means building with `-O2` rather than `-Os`.
Note that this fixes an issue that is also fixed by #21209.
However, given there are reservations about updating to use a newer libnatpmp source, we should just fix this for now.
It turned out that the "sync up" procedure of repeatedly generating a
block and waiting once (with a timeout) for a notification is too naive
in its current form, as the following scenario can happen:
- generate block A
- wait for a notification, a timeout happens -> repeat the procedure
- generate block B
- node publishes the notification for block A
- wait for a notification, we receive the one caused by block A
-> sync-up procedure is considered complete
- node publishes the notification for block B
- the actual test starts
- the first notification received is the one caused by block B rather
than the one actually caused by the test code, leading to failure
This change ensures that after each test block generation, we wait for
the notification that is actually caused by that block and ignore others
from possibly earlier blocks.
Co-authored-by: Jon Atack <jon@atack.com>
faa06ecc9c build: Bump minimum Qt version to 5.9.5 (Hennadii Stepanov)
Pull request description:
Closes #20104.
ACKs for top commit:
laanwj:
Code review ACK faa06ecc9c
jarolrod:
ACK faa06ecc9c
fanquake:
ACK faa06ecc9c - this should be ok to do now.
Tree-SHA512: 7295472b5fd37ffb30f044e88c39d375a5a5187d3f2d44d4e73d0eb0c7fd923cf9949c2ddab6cddd8c5da7e375fff38112b6ea9779da4fecce6f024d05ba9c08
88c4b9b761 test: remove unneeded node from feature_blockfilterindex_prune.py (Jon Atack)
ace3f4cbdf test: improve assertions in feature_blockfilterindex_prune.py (Jon Atack)
Pull request description:
- improves the assertions
- removes an unneeded node, reducing from two to one, and some unneeded `extra_arg` code
ACKs for top commit:
MarcoFalke:
ACK 88c4b9b761
brunoerg:
Tested ACK 88c4b9b761
Tree-SHA512: 295700da3a5f583ee02ae2d184db93cce0e13aba69115d5db07f83e96a66b4b850adaff2c3725b6585799565b9ee654b1fee8a6245eaba8c21e1cb5ce524eb2b
13a9fd11a5 guix: Passthrough SDK_PATH into container (Carl Dong)
Pull request description:
This is a usability improvement for Guix builders so that they don't have to extract the Xcode tarball into `depends/SDKs` every time.
Inspiration: https://github.com/bitcoin/bitcoin/pull/21089#issuecomment-778639698
ACKs for top commit:
laanwj:
Tested ACK 13a9fd11a5
Tree-SHA512: 63392d537e48a0da9f0ee04a929613b139bef1ac5643187871c9ea5376afd2a3d95df0f5e0950ae0eccd2813b166667be98401e5a248ae9c187fe4e84e54d427
Collects all the orphan-handling globals into a single member variable in
net_processing, and ensures access is encapsulated in the interface
functions. Also adds Doxygen comments for the methods.
All the interesting functionality of AddOrphanTx is already in other
functions, so call those functions directly in the one place where
AddOrphanTx was used.
Rather than checking net_processing's internal implementation of
AddOrphanTx, test txorphanage's exported AddTx interface. Note that
this means AddToCompactExtraTransactions is no longer tested here.
EraseOrphansFor was called both with and without g_cs_orphans held;
correct that so it is always called with the lock already held.
LimitOrphanTxSize was always called with g_cs_orphans held, so add
annotations and don't take the lock a second time.
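In other words, the locking contract becomes explicit in the interface; roughly (signatures paraphrased, see txorphanage.h for the real declarations):
```cpp
// Paraphrased declarations illustrating the locking contract.
#include <net.h>   // NodeId
#include <sync.h>  // RecursiveMutex, EXCLUSIVE_LOCKS_REQUIRED

extern RecursiveMutex g_cs_orphans;

// Callers must already hold g_cs_orphans; the function no longer takes the
// lock itself, so it cannot be locked twice by accident.
void EraseOrphansFor(NodeId peer) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);

// Always called with g_cs_orphans held, so the annotation documents that and
// the redundant second lock inside is dropped.
unsigned int LimitOrphanTxSize(unsigned int max_orphans)
    EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans);
```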