3d0a5c37e9 use 'byte'/'bytes' for bech32(m) validation error (Reese Russell)
Pull request description:
This PR fixes a grammatical inconsistency introduced in merged PR [27727](https://github.com/bitcoin/bitcoin/pull/27727): it always uses the singular 'byte' in its error messages, regardless of the reported program size, as demonstrated below:
Currently: ```Invalid Bech32 v0 address program size (16 byte), per BIP141```
This change selects 'byte' or 'bytes' depending on whether the program size is singular or plural, improving the accuracy of the error messages users are most likely to encounter.
With this PR:
**16-byte program size error**:
```
(
"BC1QR508D6QEJXTDG4Y5R3ZARVARYV98GJ9P",
"Invalid Bech32 v0 address program size (16 bytes), per BIP141",
[],
)
```
**1-byte program size error**:
```
(
"bc1pw5dgrnzv",
"Invalid Bech32 address program size (1 byte)",
[]
),
```
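The selection itself is simple; a minimal sketch of the singular/plural formatting described above (illustrative only, not the actual patch):
```cpp
#include <cstddef>
#include <string>

// Pick the unit word based on the program size, so "1 byte" vs "16 bytes".
std::string ProgramSizeError(size_t program_size)
{
    const std::string unit{program_size == 1 ? "byte" : "bytes"};
    return "Invalid Bech32 v0 address program size (" +
           std::to_string(program_size) + " " + unit + "), per BIP141";
}
```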
Thank you
ACKs for top commit:
MarcoFalke:
lgtm ACK 3d0a5c37e9
Tree-SHA512: 55069194a6a33a37559cf14b59b6ac05b1160f57f14d1415aef8e76c916c7c7f343310916ae85b3fa895937802449c1dddb2f653340d0f39203f06aee10f6fce
eeee55f928 rpc: Fix invalid bech32 handling (MarcoFalke)
Pull request description:
Currently the handling of invalid bech32(m) addresses over RPC has many issues:
* No error for invalid addresses is reported, leading to internal bugs via `CHECK_NONFATAL`, see https://github.com/bitcoin/bitcoin/issues/27723
* The error messages use "data size" (the meaning of which is unclear to the user, because the witness program data and bech32 section data are related but different) when they mean "program size" (the size rules are sketched below)
Fix all issues. Also, use the BIP 173 and BIP 350 test vectors.
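For context, a rough sketch of the program-size rule behind these messages (an assumption based on BIP141, not the actual RPC code): any witness program must be 2-40 bytes, and a version-0 program must be exactly 20 or 32 bytes.
```cpp
#include <cstddef>

// BIP141: witness programs are 2-40 bytes; version-0 programs (P2WPKH/P2WSH)
// must be exactly 20 or 32 bytes.
bool IsValidWitnessProgramSize(int witness_version, size_t program_size)
{
    if (program_size < 2 || program_size > 40) return false;
    if (witness_version == 0) return program_size == 20 || program_size == 32;
    return true;
}
```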
ACKs for top commit:
achow101:
ACK eeee55f928
brunoerg:
crACK eeee55f928
Tree-SHA512: c8639ee49e2a54b740b72d66bc4a40352dd553a6e3220dea9f94e48e33124f21f597a2817cb405d0a4c88d21df1013c0a4877a01370a2d326aa2cff1f9c381a8
d7f359b35e Add tests for parallel compact block downloads (Greg Sanders)
03423f8bd1 Support up to 3 parallel compact block txn fetchings (Greg Sanders)
13f9b20b4c Only request full blocks from the peer we thought had the block in-flight (Greg Sanders)
cce96182ba Convert mapBlocksInFlight to a multimap (Greg Sanders)
a90595478d Remove nBlocksInFlight (Greg Sanders)
86cff8bf18 alias BlockDownloadMap for mapBlocksInFlight (Greg Sanders)
Pull request description:
This is an attempt at mitigating https://github.com/bitcoin/bitcoin/issues/25258 , which is a revival of https://github.com/bitcoin/bitcoin/pull/10984, which is a revival of https://github.com/bitcoin/bitcoin/pull/9447.
This PR attempts to mitigate a single case, where high-bandwidth peers can bail us out when a flaky peer fails to complete a block for us. We allow up to 2 additional getblocktxn requests per unique block.
This gives an honest high-bandwidth peer the chance to hand us the transactions even if the first in-flight peer stalls out.
In contrast to previous efforts:
1) it will not help if subsequent peers announce the block via headers only, so only high-bandwidth compact block peers can contribute this time. See: https://github.com/bitcoin/bitcoin/pull/10984/files#diff-6875de769e90cec84d2e8a9c1b962cdbcda44d870d42e4215827e599e11e90e3R1411
2) `MAX_GETBLOCKTXN_TXN_AFTER_FIRST_IN_FLIGHT` is removed, in favor of aiding recovery during turbulent mempools
3) We require one of the 3 block-fetching slots to be an outbound peer. This can be the original offering peer, or a subsequent high-bandwidth peer offering a compact block.
ACKs for top commit:
sdaftuar:
ACK d7f359b35e
mzumsande:
Code Review ACK d7f359b35e
Tree-SHA512: 54980eac179e30f12a0bd49df147b2c3d63cd8f9401abb23c7baf02f76eeb59f2cfaaa155227990d0d39384de9fa38663f88774e891600a3837ae927f04f0db3
A single outbound slot is required, so if the first two slots are taken by inbound in-flight requests, the node will reject additional requests unless they come from an outbound peer.
This means that when a fast sybil peer is attempting to stall out a node, a single high-bandwidth outbound peer can mitigate the attack.
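A simplified sketch of that slot rule (hypothetical names and structure, not the actual net_processing code): up to three compact-block fetches per block, with one slot effectively reserved for an outbound peer.
```cpp
#include <cstddef>
#include <vector>

struct InFlightRequest { bool from_outbound; };  // stand-in for per-block bookkeeping

constexpr std::size_t MAX_PARALLEL_CMPCTBLOCK_FETCHES{3};  // assumption: mirrors the "up to 3" rule

bool CanAddCompactBlockFetch(const std::vector<InFlightRequest>& in_flight, bool new_peer_is_outbound)
{
    if (in_flight.size() >= MAX_PARALLEL_CMPCTBLOCK_FETCHES) return false;
    if (new_peer_is_outbound) return true;
    // Keep one of the three slots available for an outbound peer: only accept
    // another inbound fetch while fewer than two inbound fetches are in flight.
    std::size_t inbound_used{0};
    for (const auto& req : in_flight) {
        if (!req.from_outbound) ++inbound_used;
    }
    return inbound_used + 1 < MAX_PARALLEL_CMPCTBLOCK_FETCHES;
}
```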
1bce12acd3 test: add test for `descriptorprocesspsbt` RPC (ishaanam)
fb2a3a70e8 rpc: add descriptorprocesspsbt rpc (ishaanam)
Pull request description:
This PR implements an RPC called `descriptorprocesspsbt`. This RPC is based on `walletprocesspsbt`, but instead of interacting with the wallet to update, sign, and finalize a PSBT, it accepts an array of output descriptors and uses that information, along with information from the mempool, txindex, and the UTXO set, to do so. `utxoupdatepsbt` also updates a PSBT in this manner, but doesn't sign or finalize it. Because of this overlap, a helper function added in this PR is called by both `utxoupdatepsbt` and `descriptorprocesspsbt`. Whether the helper function signs a PSBT is determined by whether the HidingSigningProvider passed to it contains any private information. A test added in this PR for the new RPC covers p2wsh, p2wpkh, and legacy outputs.
Edit: see https://github.com/bitcoin/bitcoin/pull/25796#issuecomment-1228830963
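As a rough illustration of the shared-helper dispatch described above (stand-in types and names, not the actual code): both RPCs run the same update path, and signing only happens when the provider carries private data.
```cpp
// Stand-in types; the real code uses PartiallySignedTransaction and signing providers.
struct PsbtSketch {};
struct ProviderSketch { bool has_private_data{false}; };

void UpdateFromUtxoData(PsbtSketch&) {}                      // fill inputs from mempool/txindex/UTXO set
void SignAndFinalize(PsbtSketch&, const ProviderSketch&) {}  // sign inputs, then finalize

// Shared helper: utxoupdatepsbt passes a provider without private data, while
// descriptorprocesspsbt builds its provider from the supplied descriptors.
void ProcessPsbtSketch(PsbtSketch& psbt, const ProviderSketch& provider)
{
    UpdateFromUtxoData(psbt);
    if (provider.has_private_data) SignAndFinalize(psbt, provider);
}
```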
ACKs for top commit:
achow101:
re-ACK 1bce12acd3
instagibbs:
reACK 1bce12acd3
Tree-SHA512: e1d0334739943e71f2ee68b4db7637ebe725da62e7aa4be071f71c7196d2a5970a31ece96d91e372d34454cde8509e95ab0eebd2c8edb94f7d5a781a84f8fc5d
fa1b3abc83 ci: Log qa-assets repo last commit (MarcoFalke)
fa22966f33 fuzz: Print error message when FUZZ is missing (MarcoFalke)
Pull request description:
Some trivial UX improvements.
* Change the exit code for `PRINT_ALL_FUZZ_TARGETS_AND_ABORT` and `WRITE_ALL_FUZZ_TARGETS_AND_ABORT` to `EXIT_SUCCESS` instead of `Aborted (core dumped)`.
* Print a readable error message when `FUZZ` is missing instead of `Aborted (core dumped)` (see the sketch after this list).
* Clarify that a fuzz target needs to be compiled into the executable.
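A minimal sketch of the kind of check that produces a readable error for a missing `FUZZ` variable (illustrative only, not the actual harness code):
```cpp
#include <cstdlib>
#include <iostream>

int main()
{
    // Fail with a readable message instead of aborting when FUZZ is unset.
    const char* fuzz_target{std::getenv("FUZZ")};
    if (fuzz_target == nullptr) {
        std::cerr << "Must select a fuzz target with the FUZZ environment variable,\n"
                     "and the target must be compiled into this executable.\n";
        return EXIT_FAILURE;
    }
    // ... dispatch to the selected fuzz target ...
    return EXIT_SUCCESS;
}
```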
ACKs for top commit:
dergoegge:
ACK fa1b3abc83
Tree-SHA512: 065ef8920449c64b3516f89a61cb397b505eccf531318c4f3830895d5ff6cd7ae2525cb857320481e3d0ed0b2f8a522cd8f7835e69f021241b6ec297a6102fc8
This requires a Linux kernel of 3.17.0+, which seems entirely
reasonable. 3.17 went EOL in 2015, and the last supported 3.x kernel
(3.16) went EOL > 4 years ago, in 2020. For reference, the current
oldest maintained kernel is 4.14 (released 2017, EOL Jan 2024).
Support for `getrandom()` (and `getentropy()`) was added to
glibc 2.25, https://sourceware.org/legacy-ml/libc-alpha/2017-02/msg00079.html,
and we already require 2.27+.
All that being said, I don't think you would encounter a current-day system running with kernel headers older than 3.17 (released 2014) but also having a glibc of 2.27+ (released 2018).
Remove it. Making this change means that, in a future commit, we can combine #ifdefs and avoid duplicate <sys/random.h> includes once we switch to using getrandom directly.
Also remove the comment about macOS 10.12. We already require macOS >
10.15, so it is redundant.
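For reference, a minimal sketch of calling getrandom() directly via <sys/random.h> (error handling trimmed; not the project's actual RNG code):
```cpp
#include <sys/random.h>
#include <sys/types.h>

#include <array>

// Fill a buffer from the kernel RNG; getrandom() blocks until the RNG is initialized.
bool FillRandomBytes(std::array<unsigned char, 32>& buf)
{
    const ssize_t ret{getrandom(buf.data(), buf.size(), 0)};
    return ret == static_cast<ssize_t>(buf.size());
}
```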
6b605b91c1 [fuzz] Add MiniMiner target + diff fuzz against BlockAssembler (glozow)
3f3f2d59ea [unit test] GatherClusters and MiniMiner unit tests (glozow)
59afcc8354 Implement Mini version of BlockAssembler to calculate mining scores (glozow)
56484f0fdc [mempool] find connected mempool entries with GatherClusters(…) (glozow)
Pull request description:
Implement Mini version of BlockAssembler to calculate mining scores
Run the mining algorithm on a subset of the mempool, only disturbing the mempool to copy out fee information for relevant entries. Intended to be used by the wallet to calculate the amounts needed for fee-bumping unconfirmed transactions.
From comments of sipa and glozow below:
> > In what way does the code added here differ from the real block assembly code?
>
> * Only operates on the relevant transactions rather than full mempool
> * Has the ability to remove transactions that will be replaced so they don't impact their ancestors
> * Does not hold mempool lock outside of the constructor, makes copies of the entries it needs instead (though I'm not sure if this has an effect in practice)
> * Doesn't do the sanity checks like keeping weight within max block weight and `IsFinalTx()`
> * After the block template is built, additionally calculates fees to bump remaining ancestor packages to target feerate
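As a rough illustration of the last quoted item (a sketch under assumed units, not MiniMiner's actual interface), the bump fee for a remaining ancestor package is the shortfall between its current fees and the fees implied by the target feerate:
```cpp
#include <cstdint>

// Additional fee (sats) needed for a package of `package_vsize` vbytes currently
// paying `package_fees` sats to reach `target_feerate` expressed in sat/kvB.
int64_t BumpFeeNeeded(int64_t package_fees, int32_t package_vsize, int64_t target_feerate)
{
    const int64_t target_fees{(target_feerate * package_vsize + 999) / 1000};  // round up
    return target_fees > package_fees ? target_fees - package_fees : 0;
}
```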
ACKs for top commit:
achow101:
ACK 6b605b91c1
Xekyo:
> ACK [6b605b9](6b605b91c1) modulo `miniminer_overlap` test.
furszy:
ACK 6b605b91 modulo `miniminer_overlap` test.
theStack:
Code-review ACK 6b605b91c1
Tree-SHA512: f86a8b4ae0506858a7b15d90f417ebceea5038b395c05c825e3796123ad3b6cb8a98ebb948521316802a4c6d60ebd7041093356b1e2c2922a06b3b96b3b8acb6
fa953f15bf build: Bump minimum supported GCC to g++-9 (MarcoFalke)
fa69955e74 ci: Bump centos:stream8 to centos:stream9 (MarcoFalke)
fa6a755d9f ci: Document the false positive error for g++-9 (MarcoFalke)
Pull request description:
It is a bit frustrating to write valid C++ code only to realize that g++-8 fails to parse it later on.
The only non-EOL operating system still shipping with g++-8 is CentOS Stream 8. I think it is reasonable for users of affected Linux distributions to:
* Upgrade their operating system, or compiler to a supported version.
* Alternatively, stay with a previous release of Bitcoin Core as long as it is supported.
Fixes https://github.com/bitcoin/bitcoin/issues/27537
ACKs for top commit:
hebasto:
ACK fa953f15bf
fanquake:
ACK fa953f15bf
Tree-SHA512: b9cf7e763d3071e1e008c5010de19601d4773afe46d58cf869d3f59285c53240c739a1cd7235a5525ede1bbdf6b6cb6fb091c8fc314864a28d5b27a400bb7632
69d43905b7 test: add coverage for wallet read write db deadlock (furszy)
12daf6fcdc walletdb: scope bdb::EraseRecords under a single db txn (furszy)
043fcb0b05 wallet: bugfix, GetNewCursor() misses to provide batch ptr to BerkeleyCursor (furszy)
Pull request description:
Decoupled from #26644 so it can be closed in favor of #26715.
Basically, with bdb, we can't perform a write operation while we are traversing the db with the same db handle; the two operations run in different txn contexts and cause a deadlock.
Coverage is added by using `EraseRecords()`, which is the simplest function that exercises this process.
To replicate the issue, build with bdb support and drop the first commit. The test will then run forever.
ACKs for top commit:
achow101:
ACK 69d43905b7
hebasto:
re-ACK 69d43905b7
Tree-SHA512: b3773be78925f674e962f4a5c54b398a9d0cfe697148c01c3ec0d68281cc5c1444b38165960d219ef3cf1a57c8ce6427f44a876275958d49bbc0808486e19d7d
This is a change in behavior: if for some reason we have requested a block from a peer, we no longer allow an unsolicited CMPCT_BLOCK announcement for that same block to trigger a request for the full block from the announcing peer, as some type of request is already outstanding from the original peer.
7014e08015 doc: remove mention of glibc 2.10+ (fanquake)
Pull request description:
We already require glibc 2.27+, so mentioning a much older version here is redundant.
ACKs for top commit:
TheCharlatan:
ACK 7014e08015
Tree-SHA512: 883a566a80cabe34bfb5d902990f3eca08d0e11438e6c128d311e558f373ec232b0934deb85d12d796baacfeae590af8c73aa1b2faef07f27ffa9011270ffd96
This is achieved by letting the index sync thread wait until
reindex-chainstate is finished.
This also disables the pruning check when reindexing the chainstate (which is
incompatible with prune mode) because there would be no chain at this point
in init.
The index sync code has logic to walk back the chain to the forking point, while also updating index-specific state; this is necessary to prevent possible corruption of the coinstatsindex.
Also add a test for this (a reorg happens while the index is deactivated)
that would not pass before this change.
1b1ffbd014 Build: Log when test -f fails in Makefile (TheCharlatan)
541012e621 Build: Use AM_V_GEN in Makefiles where appropriate (TheCharlatan)
Pull request description:
This PR triages some behavior around Makefile recipe echoing suppression.
When generating new files as part of the Makefile, the recipe is sometimes suppressed with $(AM_V_GEN) and sometimes with `@`. We should prefer $(AM_V_GEN), since it still prints a short `GEN` line in silent mode. This is arguably more in keeping with the current recipe echoing style.
Before:
`Generated test/data/script_tests.json.h`
Now:
` GEN test/data/script_tests.json.h`
A side effect of this change is that the recipe for generating build.h is now echoed on each make run. Arguably this makes its generation more transparent.
Currently, the error emitted when `test -f` fails is thrown without any logging, which makes it a bit harder to debug. Instead, print a helpful log message to point the developer in the right direction.
Alternatively this could have been implemented by just removing the recipe echo suppression (@), but the subsequent make output became too noisy.
ACKs for top commit:
fanquake:
ACK 1b1ffbd014
Tree-SHA512: e31869fab25e72802b692ce6735f9561912caea903c1577101b64c9cb115c98de01a59300e8ffe7b05b998345c1b64a79226231d7d1453236ac338c62dc9fbb3
This way we either erase all the records atomically or abort the entire procedure, and, at the same time, we can share the same db txn context between the db cursor and the erase functionality.
An extra note from the Db.cursor doc: "If transaction protection is enabled, cursors must be opened and closed within the context of a transaction". This is why a `CloseCursor` call was added before calling `TxnAbort`/`TxnCommit`.
If the batch ptr is not passed, the cursor will not use the db's active txn context, which can lead to a deadlock if the code tries to modify the db while traversing it, e.g. in the `EraseRecords()` function.
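A rough, self-contained sketch of the ordering these commits describe (hypothetical names, not the real BerkeleyBatch API): one txn wraps both the cursor walk and the erases, and the cursor is closed before the txn is committed or aborted.
```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Cursor {
    // Stand-in cursor: iterates a snapshot of the record keys.
    std::vector<std::string> keys;
    std::size_t pos{0};
    bool Next(std::string& key)
    {
        if (pos >= keys.size()) return false;
        key = keys[pos++];
        return true;
    }
};

struct Batch {
    std::vector<std::string> records;
    bool TxnBegin() { return true; }   // start the single shared txn context
    bool TxnCommit() { return true; }
    Cursor OpenCursor() { return Cursor{records}; }  // cursor shares the active txn
    void Erase(const std::string& key) { std::erase(records, key); }  // write inside the same txn
};

bool EraseMatchingRecords(Batch& batch, const std::string& prefix)
{
    if (!batch.TxnBegin()) return false;
    {
        Cursor cursor{batch.OpenCursor()};
        std::string key;
        while (cursor.Next(key)) {
            if (key.rfind(prefix, 0) == 0) batch.Erase(key);
        }
    }  // cursor goes out of scope (closed) here, before the txn is ended
    return batch.TxnCommit();
}
```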
33e2b82a4f wallet, bench: Remove unused database options from WalletBenchLoading (Andrew Chow)
80ace042d8 tests: Modify records directly in wallet ckey loading test (Andrew Chow)
b3bb17d5d0 tests: Update DuplicateMockDatabase for MockableDatabase (Andrew Chow)
f0eecf5e40 scripted-diff: Replace CreateMockWalletDB with CreateMockableWalletDB (Andrew Chow)
075962bc25 wallet, tests: Include wallet/test/util.h (Andrew Chow)
14aa4cb1e4 wallet: Move DummyDatabase to salvage (Andrew Chow)
f67a385556 wallet, tests: Replace usage of dummy db with mockable db (Andrew Chow)
33c6245ac1 Introduce MockableDatabase for wallet unit tests (Andrew Chow)
Pull request description:
For the wallet's unit tests, we currently use either `DummyDatabase` or memory-only versions of either BDB or SQLite. The tests that use `DummyDatabase` just need a `WalletDatabase` so that the `CWallet` can be constructed, while the tests using the memory-only databases just need a backing data store. There is also a `FailDatabase` that is similar to `DummyDatabase` except that it fails by default or can have a configured return value. Having all of these different database types can make it difficult to write tests, particularly tests that work when either BDB or SQLite is disabled.
This PR unifies all of these different unit test database classes into a single `MockableDatabase`. Like `DummyDatabase`, most functions do nothing and just return true. Like `FailDatabase`, the return value of some functions can be configured on the fly to test various failure cases. Like the memory-only databases, records can actually be written to the `MockableDatabase` and be retrieved later, but all of this is still held in memory. Using `MockableDatabase` completely removes the need for having BDB or SQLite backed wallets in the unit tests for the tests that are not actually testing specific database behaviors.
Because `MockableDatabase`s can be created by each unit test, we can also control what records are stored in the database. Records can be added and removed externally from the typical database modification functions. This will give us greater ability to test failure conditions, particularly those involving corrupted records.
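A condensed sketch of that idea (illustrative only, not the actual MockableDatabase code): an in-memory record store whose operations can be flipped to fail on demand, and whose records can be edited directly by the test.
```cpp
#include <map>
#include <string>

class MockableDatabaseSketch
{
    std::map<std::string, std::string> m_records;  // records live only in memory
public:
    bool pass_writes{true};  // tests flip this to exercise failure paths

    bool Write(const std::string& key, const std::string& value)
    {
        if (!pass_writes) return false;
        m_records[key] = value;
        return true;
    }
    bool Read(const std::string& key, std::string& value) const
    {
        const auto it{m_records.find(key)};
        if (it == m_records.end()) return false;
        value = it->second;
        return true;
    }
    // Tests can also add or remove records directly, bypassing Write(),
    // e.g. to simulate corrupted entries.
    void InsertRecord(const std::string& key, const std::string& value) { m_records[key] = value; }
    void EraseRecord(const std::string& key) { m_records.erase(key); }
};
```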
Possible alternative to #26644
ACKs for top commit:
furszy:
ACK 33e2b82
TheCharlatan:
ACK 33e2b82a4f
Tree-SHA512: c2b09eff9728d063d2d4aea28a0f0e64e40b76483e75dc53f08667df23bd25834d52656cd4eafb02e552db0b9e619cfdb1b1c65b26b5436ee2c971d804768bcc
5b3406094f net_processing: Boost inv trickle rate (Anthony Towns)
228e9201ef txmempool: have CompareDepthAndScore sort missing txs first (Anthony Towns)
Pull request description:
Couple of performance improvements when draining the inventory-to-send queue:
* drop txs that have already been evicted from the mempool (or included in a block) immediately, rather than at the end of processing
* marginally increase outgoing trickle rate during spikes in tx volume
ACKs for top commit:
willcl-ark:
ACK 5b34060
instagibbs:
ACK 5b3406094f
darosior:
utACK 5b3406094f
glozow:
code review ACK 5b3406094f
dergoegge:
utACK 5b3406094f
Tree-SHA512: 155cd3b5d150ba3417c1cd126f2be734497742e85358a19c9d365f4f97c555ff9e846405bbeada13c3575b3713c3a7eb2f780879a828cbbf032ad9a6e5416b30
5ff63a09a9 refactor, blockstorage: Replace stopafterblockimport arg (TheCharlatan)
18e5ba7c80 refactor, blockstorage: Replace blocksdir arg (TheCharlatan)
02a0899527 refactor, BlockManager: Replace fastprune from arg with options (TheCharlatan)
a498d699e3 refactor/iwyu: Complete includes for blockmanager_args (TheCharlatan)
f0bb1021f0 refactor: Move functions to BlockManager methods (TheCharlatan)
cfbb212493 zmq: Pass lambda to zmq's ZMQPublishRawBlockNotifier (TheCharlatan)
8ed4ff8e05 refactor: Declare g_zmq_notification_interface as unique_ptr (TheCharlatan)
Pull request description:
The libbitcoin_kernel library should not rely on the `ArgsManager`, but rather use option structs that can be passed to the various classes it uses. This PR removes reliance on the `ArgsManager` from the `blockstorage.*` files. Like similar prior work, it uses the options struct in the `BlockManager` that can be populated with `ArgsManager` values.
Some related prior work: https://github.com/bitcoin/bitcoin/pull/26889, https://github.com/bitcoin/bitcoin/pull/25862, https://github.com/bitcoin/bitcoin/pull/25527, https://github.com/bitcoin/bitcoin/pull/25487
Related PR removing blockstorage globals: https://github.com/bitcoin/bitcoin/pull/25781
ACKs for top commit:
ryanofsky:
Code review ACK 5ff63a09a9. Since last ACK just added std::move and fixed commit title. Sorry for the noise!
mzumsande:
Code Review ACK 5ff63a09a9
Tree-SHA512: 4bde8fd140a40b97eca923e9016d85dcea6fad6fd199731f158376294af59c3e8b163a0725aa47b4be3519b61828044e0a042deea005e0c28de21d8b6c3e1ea7
72efc26439 util: improve streams.h:FindByte() performance (Larry Ruane)
604df63f6c [bench] add streams findbyte (gzhao408)
Pull request description:
This PR is strictly a performance improvement; there is no functional change. The `CBufferedFile::FindByte()` method searches for the next occurrence of the given byte in the file. Currently, this is done by explicitly inspecting each byte in turn. This PR takes advantage of `std::find()` to do the same more efficiently, improving its CPU runtime by a factor of about 25 in typical use.
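Illustrative only (hypothetical names, not the actual CBufferedFile code): the kind of std::find()-based scan the optimization relies on, replacing a byte-at-a-time loop over the buffered window.
```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Offset of the first occurrence of `needle` at or after `start`, or buf.size() if absent.
std::size_t FindByteOffset(const std::vector<std::byte>& buf, std::size_t start, std::byte needle)
{
    const auto it = std::find(buf.begin() + start, buf.end(), needle);
    return static_cast<std::size_t>(it - buf.begin());
}
```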
ACKs for top commit:
achow101:
re-ACK 72efc26439
stickies-v:
re-ACK 72efc26439
Tree-SHA512: ddf0bff335cc8aa34f911aa4e0558fa77ce35d963d602e4ab1c63090b4a386faf074548daf06ee829c7f2c760d06eed0125cf4c34e981c6129cea1804eb3b719
Add a stop_after_block_import field to the BlockManager options. Use
this field instead of the global gArgs.
This should allow users of the BlockManager to not rely on the global
Args.
Add a blocks_dir field to the BlockManager options. Move functions
relying on the global gArgs to get the blocks_dir into the BlockManager
class.
This should eventually allow users of the BlockManager to not rely on
the global Args and instead pass in their own options.
Remove access to the global gArgs for the fastprune argument and
replace it by adding a field to the existing BlockManager Options
struct.
When running `clang-tidy-diff` on this commit, there is a diagnostic
error: `unknown type name 'uint64_t' [clang-diagnostic-error] uint64_t
prune_target{0};`, which is fixed by including cstdint.
This should eventually allow users of the BlockManager to not rely on
the global gArgs and instead pass in their own options.
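A schematic sketch of the options-struct pattern these commits describe (field names taken from the commit messages above; the real struct differs):
```cpp
#include <cstdint>      // for uint64_t, per the clang-tidy note above
#include <filesystem>

struct BlockManagerOptionsSketch {
    std::filesystem::path blocks_dir;
    bool fastprune{false};
    bool stop_after_block_import{false};
    uint64_t prune_target{0};
};
// Init code that still knows about ArgsManager fills this struct once and hands it
// to the BlockManager, which then never touches the global gArgs itself.
```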
This is a commit in preparation for the next few commits. The functions
are moved to methods to avoid their re-declaration for the purpose of
passing in BlockManager options.
The functions that have now been moved into the BlockManager should no longer take the params as an argument, but should instead use the member variable.
In the moved ReadBlockFromDisk and UndoReadFromDisk, change
the function signature to accept a reference to a CBlockIndex instead of
a raw pointer. The pointer is expected to be non-null, so reflect that
in the type.
To allow for the move of functions to BlockManager methods, all call sites require an instantiated BlockManager, or a callback to one.
The lambda captures a reference to the chainman unique_ptr to retrieve
block data. An assert is added on the chainman to ensure that the lambda
is not used while the chainman is uninitialized.
This is done in preparation for the following commits where blockstorage
functions are made BlockManager methods.
fa266c4bbf Temporarily work around gcc-13 warning bug in interfaces_tests (MarcoFalke)
fa28850562 Fix clang-tidy performance-unnecessary-copy-initialization warnings (MarcoFalke)
faaa60a30e Remove unused find_value global function (MarcoFalke)
fa422aeec2 scripted-diff: Use UniValue::find_value method (MarcoFalke)
fa548ac872 Add UniValue::find_value method (MarcoFalke)
Pull request description:
The global function has issues:
* It causes gcc-13 warnings, see https://github.com/bitcoin/bitcoin/issues/26926
* There is no rationale for it being a global function, when it acts like a member function
* The `performance-unnecessary-copy-initialization` clang-tidy check isn't run on it
Fix all issues by making it a member function.
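A sketch of the call-site change the scripted-diff performs (simplified; assumes the project's univalue.h header):
```cpp
#include <univalue.h>

void Example(const UniValue& obj)
{
    // Before: free function taking the object as its first argument.
    // const UniValue& val{find_value(obj, "key")};
    // After: member function on the object being searched.
    const UniValue& val{obj.find_value("key")};
    (void)val;
}
```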
ACKs for top commit:
achow101:
ACK fa266c4bbf
hebasto:
re-ACK fa266c4bbf
Tree-SHA512: 6c4e25da3122cd3b91c376bef73ea94fb3beb7bf8ef5cb3853c5128d95bfbacbcbfb16cc843eb7b1a7ebd350c2b6311f8085eeacf9aeeab3366987037d209e44
Ensures better memory safety for this global. This came up during
discussion of the following commit, but is not strictly required for its
implementation.
If transactions are being added to the mempool at a rate faster than 7 tx/s (INVENTORY_BROADCAST_PER_SECOND), then peers' inventory_to_send queue can
become relatively large. If this happens, increase the number of txids
we include in an INV message (normally capped at 35) by 5 for each 1000
txids in the queue.
This will tend to clear a temporary excess out reasonably quickly; an
excess of 4000 invs to send will be cleared down to 1000 in about 30
minutes, while an excess of 20000 invs would be cleared down to 1000 in
about 60 minutes.
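A worked sketch of that scaling (constants as stated above; the actual code expresses this differently):
```cpp
#include <cstddef>

std::size_t MaxInvToSend(std::size_t queued_txids)
{
    const std::size_t base_cap{35};                   // normal per-INV-message cap
    return base_cap + 5 * (queued_txids / 1000);      // +5 for each 1000 queued txids
}
// e.g. a backlog of 4000 queued txids raises the cap to 55 entries per INV message.
```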
We use CompareDepthAndScore to choose the order of txs to inv. Rather than sorting txs that have been evicted from the mempool to the end of the list, sort them at the beginning so they are removed from the queue immediately.