more mining stats #471

Draft: wants to merge 55 commits into base: feature/coordinated-mining_v5
Changes shown below are from 1 of the 55 commits.

Commits:
f62b093
WIP coordinated mining
vird Jan 23, 2023
f774146
WIP DEBUG; Block accepted from CM exit node
vird Apr 19, 2023
78ab540
WIP; supplied checkpoints works
vird Apr 28, 2023
e7557ee
WIP; fix partition table for non-default size; more supplied checkpoi…
vird May 10, 2023
c488dd1
WIP CM fixes
vird May 15, 2023
17a660a
WIP 2-chunk block produced but not accepted by network
vird May 18, 2023
a282c15
remove debug code
vird Jun 7, 2023
f8592a1
fix ar_mining_server test
vird Jun 12, 2023
0dbd09f
Fix the mining_rate metric collection
Jun 20, 2023
8a80147
fixup two-chunk non-coordinated mining
Jun 20, 2023
b2a68f2
Force GC in ar_storage
Jun 20, 2023
7f1b6a0
fixup reduce task queue memory footprint
Jun 20, 2023
7062c1a
fixup! Force GC in ar_storage
Jun 21, 2023
69aa180
polishing misc TODO
vird Jun 21, 2023
898d288
fixup coordinated mining tests
Jun 29, 2023
2ed705b
fixup
Jun 30, 2023
411c7fc
fix CM when chunk2 present on same node
vird Jun 30, 2023
e4f0387
Ensure the cm_miner processes are killed when the bin/test run completes
JamesPiechota Jul 14, 2023
9e5158a
fix the CORE_TEST_MODS values (ar_coordinated_mining does not exist,
JamesPiechota Jul 14, 2023
b290704
fix single_node_coordinated_mining_test_
JamesPiechota Jul 17, 2023
a1219dd
WIP
JamesPiechota Jul 28, 2023
59e714b
Single Node One and Two- Chunk tests pass
JamesPiechota Jul 31, 2023
defad35
All ar_coordinated_mining_tests pass.
JamesPiechota Aug 1, 2023
850f25a
Add additional two chunk tests. All basic coordinated mining tests pass
JamesPiechota Aug 3, 2023
6798d49
Remove ar_test_fork.erl - something broke with the recent changes to …
JamesPiechota Aug 4, 2023
08d90ad
Fix a regression in ar_test_node(). I'd used slave_peer() instead of …
JamesPiechota Aug 4, 2023
fd84693
fixup! Fix a regression in ar_test_node(). I'd used slave_peer() inst…
JamesPiechota Aug 4, 2023
a03adf2
Add ar_serialize tests for Solutio, Candidate, H2 Inputs
JamesPiechota Aug 4, 2023
44016c2
Add ar_mining_io tests
JamesPiechota Aug 6, 2023
0918ba6
fixup! Add ar_mining_io tests Fix off-by-one error when selecting par…
JamesPiechota Aug 7, 2023
7dce0eb
When miner_server_chunk_cache_size_limit is not set in the config it
JamesPiechota Aug 7, 2023
28db7e0
Add tests for cache_size
JamesPiechota Aug 8, 2023
c9e415f
Fix bugs in the chunk_cache_size handling. Add tests for chunk_cache_…
JamesPiechota Aug 11, 2023
2d7c17a
Fix ar_test_node startup:
JamesPiechota Aug 11, 2023
47f1695
fixup! Fix ar_test_node startup: 1. copy genesis data with packing `a…
JamesPiechota Aug 11, 2023
923b3a6
fixup! Fix ar_test_node startup: 1. copy genesis data with packing `a…
JamesPiechota Aug 11, 2023
4f378f3
fixup! fixup! Fix ar_test_node startup: 1. copy genesis data with pac…
JamesPiechota Aug 11, 2023
c7a64ca
Revert some earlier changes. We *should* exclude the last partition
JamesPiechota Aug 12, 2023
4b3ccbc
don't validate SolutionHash (we didn't before)
JamesPiechota Aug 12, 2023
329d1ef
revert to a ?PARTITION_SIZE of 1800000 during tests.
JamesPiechota Aug 13, 2023
9d5fd50
disable ar_coordinated_mining_tests for now until I can figure
JamesPiechota Aug 14, 2023
ba70d93
Revert to a storage module size of 20MB during tests (as several tests
JamesPiechota Aug 14, 2023
a5be8a7
Extract mining performance report to its own gen server to remove
JamesPiechota Aug 15, 2023
8437dda
Jp/merge master to cm5 (#470)
JamesPiechota Sep 27, 2023
5d06b66
Fix up some merge conflicts
JamesPiechota Sep 27, 2023
63b7b3b
fixup! Fix up some merge conflicts
JamesPiechota Sep 27, 2023
5c8c11f
fixup! Fix up some merge conflicts
JamesPiechota Sep 27, 2023
9a077b2
fixup! Fix up some merge conflicts
JamesPiechota Sep 27, 2023
03dc21c
fixup! Fix up some merge conflicts
JamesPiechota Sep 28, 2023
bae3853
fixup! Fix up some merge conflicts
JamesPiechota Sep 28, 2023
7402d53
fixup! Fix up some merge conflicts
JamesPiechota Sep 28, 2023
b01e469
fixup! Fix up some merge conflicts
JamesPiechota Sep 28, 2023
f3cd620
fixup! Fix up some merge conflicts
JamesPiechota Sep 28, 2023
7d10818
Try reeanabling ar_coordinated_mining_tests
JamesPiechota Sep 28, 2023
f56d094
more mining stats
vird Sep 29, 2023
WIP; fix partition table for non-default size; more supplied checkpoints fixes
vird authored and JamesPiechota committed Sep 27, 2023
commit e7557ee0625681dec0d48e67961d2d6bf7c59b70
6 changes: 3 additions & 3 deletions apps/arweave/src/ar_coordination.erl
@@ -284,7 +284,7 @@ handle_cast({reset_mining_session, _MiningSession}, State) ->

handle_cast({compute_h2, Peer, H2Materials}, State) ->
ar_mining_server:remote_compute_h2(Peer, H2Materials),
-{_Diff, _Addr, _H0, _PartitionNumber, _PartitionUpperBound, ReqList} = H2Materials,
+{_Diff, _Addr, _H0, _PartitionNumber, _PartitionUpperBound, _Seed, _NextSeed, _StartIntervalNumber, _StepNumber, _NonceLimiterOutput, _SuppliedCheckpoints, ReqList} = H2Materials,
OldStat = maps:get(Peer, State#state.peer_io_stat, #peer_io_stat{}),
H1Count = length(ReqList),
NewStat = OldStat#peer_io_stat{
@@ -297,9 +297,9 @@ handle_cast({compute_h2, Peer, H2Materials}, State) ->
},
{noreply, NewState};

-handle_cast({computed_h2, {Diff, Addr, H0, H1, Nonce, PartitionNumber, PartitionUpperBound, PoA2, H2, Preimage, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, Peer}}, State) ->
+handle_cast({computed_h2, {Diff, Addr, H0, H1, Nonce, PartitionNumber, PartitionUpperBound, PoA2, H2, Preimage, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints, Peer}}, State) ->
io:format("DEBUG computed_h2~n"),
-ar_http_iface_client:cm_h2_send(Peer, {Diff, Addr, H0, H1, Nonce, PartitionNumber, PartitionUpperBound, PoA2, H2, Preimage, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput}),
+ar_http_iface_client:cm_h2_send(Peer, {Diff, Addr, H0, H1, Nonce, PartitionNumber, PartitionUpperBound, PoA2, H2, Preimage, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints}),
OldStat = maps:get(Peer, State#state.peer_io_stat, #peer_io_stat{}),
NewStat = OldStat#peer_io_stat{
h2_out_counter = OldStat#peer_io_stat.h2_out_counter + 1
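The two handlers in this file also maintain per-peer traffic counters in `#peer_io_stat`. A rough Python stand-in for that bookkeeping (the `h1_in_counter` field name is an assumption, since the diff truncates that record update; `h2_out_counter` appears verbatim):

```python
from dataclasses import dataclass

@dataclass
class PeerIoStat:
    h1_in_counter: int = 0   # H1 hashes received from this peer (assumed field name)
    h2_out_counter: int = 0  # H2 solutions sent back to this peer

def on_compute_h2(stats, peer, req_list):
    # compute_h2: count every H1 the peer shipped in this batch,
    # mirroring H1Count = length(ReqList) in the Erlang handler.
    stat = stats.setdefault(peer, PeerIoStat())
    stat.h1_in_counter += len(req_list)

def on_computed_h2(stats, peer):
    # computed_h2: one H2 solution forwarded via cm_h2_send.
    stat = stats.setdefault(peer, PeerIoStat())
    stat.h2_out_counter += 1
```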
42 changes: 32 additions & 10 deletions apps/arweave/src/ar_http_iface_middleware.erl
@@ -2988,16 +2988,38 @@ handle_get_vdf2(Req, Call) ->

handle_coordinated_mining_partition_table(Req) ->
{ok, Config} = application:get_env(arweave, config),
-Table = lists:map(
-fun ({BucketSize, Bucket, {spora_2_6, Addr}}) ->
-{[
-{bucket, Bucket},
-{bucketsize, BucketSize},
-{addr, ar_util:encode(Addr)}
-]}
-end,
-Config#config.storage_modules
-),
+Partition_dict = lists:foldl(fun(Module, Dict) ->
+{BucketSize, Bucket, {spora_2_6, Addr}} = Module,
+MinPartitionId = Bucket*BucketSize div ?PARTITION_SIZE,
+MaxPartitionId = (Bucket+1)*BucketSize div ?PARTITION_SIZE,
+PartitionList = lists:seq(MinPartitionId, MaxPartitionId),
+lists:foldl(fun(PartitionId, Dict2) ->
+AddrDict = maps:get(PartitionId, Dict2, #{}),
+AddrDict2 = maps:put(Addr, true, AddrDict),
+maps:put(PartitionId, AddrDict2, Dict2)
+end, Dict, PartitionList)
+end, #{}, Config#config.storage_modules),
+Right_bound = lists:max(lists:map(fun({BucketSize, Bucket, {spora_2_6, _Addr}}) ->
+(Bucket+1)*BucketSize
+end, Config#config.storage_modules)),
+Right_bound_partition_id = Right_bound div ?PARTITION_SIZE,
+CheckPartitionList = lists:seq(0, Right_bound_partition_id),
+Table = lists:foldr(fun(PartitionId, Acc) ->
+case maps:find(PartitionId, Partition_dict) of
+{ok, AddrDict} ->
+maps:fold(fun(Addr, _, Acc2) ->
+NewEntity = {[
+{bucket, PartitionId},
+{bucketsize, ?PARTITION_SIZE},
+{addr, ar_util:encode(Addr)}
+% TODO range start, end for less requests (better check before send)
+]},
+[NewEntity | Acc2]
+end, Acc, AddrDict);
+_ ->
+Acc
+end
+end, [], CheckPartitionList),
{200, #{}, ar_serialize:jsonify(Table), Req}.

read_complete_body(Req, Pid) ->
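The replacement logic above derives partition IDs from each configured storage module instead of echoing the modules verbatim, so partition tables stay correct for non-default module sizes. A minimal Python sketch of that bucketing (names and the default partition-size constant are assumptions for illustration, not the Arweave API):

```python
DEFAULT_PARTITION_SIZE = 3_600_000_000_000  # assumed mainnet ?PARTITION_SIZE in bytes

def partition_table(storage_modules, partition_size=DEFAULT_PARTITION_SIZE):
    """Map storage modules (bucket_size, bucket, addr) to the partitions they cover.

    Mirrors the folds in handle_coordinated_mining_partition_table/1: each module
    spans bytes [bucket*bucket_size, (bucket+1)*bucket_size], converted to an
    inclusive range of partition IDs by integer division.
    """
    partitions = {}  # partition_id -> set of mining addresses
    for bucket_size, bucket, addr in storage_modules:
        min_id = bucket * bucket_size // partition_size
        max_id = (bucket + 1) * bucket_size // partition_size
        for pid in range(min_id, max_id + 1):
            partitions.setdefault(pid, set()).add(addr)
    # Scan every partition up to the right-most configured byte, emitting one
    # table entry per (partition, address) pair, as the lists:foldr does.
    right_bound = max((b + 1) * s for s, b, _ in storage_modules)
    table = []
    for pid in range(right_bound // partition_size + 1):
        for addr in sorted(partitions.get(pid, ())):
            table.append({"bucket": pid, "bucketsize": partition_size, "addr": addr})
    return table
```

Note that a module whose byte range merely touches a partition boundary still claims that partition, because the max partition ID is computed inclusively.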
15 changes: 11 additions & 4 deletions apps/arweave/src/ar_mining_server.erl
@@ -285,7 +285,7 @@ handle_cast({pause_performance_reports, Time}, State) ->
pause_performance_reports_timeout = Timeout }};

handle_cast({remote_compute_h2, Peer, H2Materials}, State) ->
-{Diff, Addr, H0, PartitionNumber, PartitionUpperBound, NonceLimiterOutput,
+{Diff, Addr, H0, PartitionNumber, PartitionUpperBound, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints,
ReqList} = H2Materials,
{_RecallRange1Start, RecallRange2Start} = ar_block:get_recall_range(H0,
PartitionNumber, PartitionUpperBound),
@@ -300,7 +300,7 @@ handle_cast({remote_compute_h2, Peer, H2Materials}, State) ->
reserve_cache_space(),
CorrelationRef = {PartitionNumber2, PartitionUpperBound, make_ref()},
Session = {remote, Diff, Addr, H0, PartitionNumber, PartitionUpperBound,
-RecallRange2Start, NonceLimiterOutput, ReqList, Peer},
+RecallRange2Start, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints, ReqList, Peer},
Thread ! {remote_read_recall_range2, self(), Session, CorrelationRef}
end,
{noreply, State};
@@ -475,6 +475,7 @@ handle_info({remote_io_thread_recall_range2_chunk,
Session}}, State) ->
%% Prevent an accidental pattern match of _H0, _PartitionNumber.
{remote, _Diff, _Addr, _H0_, _PartitionNumber_, _PartitionUpperBound, _RecallByte2Start,
+_Seed, _NextSeed, _StartIntervalNumber, _StepNumber, _NonceLimiterOutput2, _SuppliedCheckpoints,
ReqList, _Peer } = Session,
#state{ hashing_threads = Threads } = State,
%% The accumulator is in fact the un-accumulator here.
@@ -616,6 +617,7 @@ io_thread(PartitionNumber, ReplicaID, StoreID, SessionRef) ->
io_thread(PartitionNumber, ReplicaID, StoreID, SessionRef);
{remote_read_recall_range2, From, Session, CorrelationRef} ->
{remote, _Diff, Addr, H0, PartitionNumber2, _PartitionUpperBound, RecallRangeStart,
+_Seed, _NextSeed, _StartIntervalNumber, _StepNumber, _NonceLimiterOutput, _SuppliedCheckpoints,
_ReqList, _Peer} = Session,
case ReplicaID of
Addr ->
@@ -854,7 +856,8 @@ hashing_thread(SessionRef) ->
%% Important: here we make http requests inside the hashing thread
%% to reduce the latency.
{remote, Diff, ReplicaID, H0, PartitionNumber, PartitionUpperBound,
-_RecallByte2Start, NonceLimiterOutput, _ReqList, Peer } = Session,
+_RecallByte2Start, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints,
+_ReqList, Peer } = Session,
{H2, Preimage2} = ar_block:compute_h2(H1, Chunk2, H0),
case binary:decode_unsigned(H2, big) > Diff of
true ->
@@ -885,9 +888,13 @@ hashing_thread(SessionRef) ->
{mining_address, ar_util:encode(ReplicaID)}]),
ok;
_ ->
+[{_, TipNonceLimiterInfo}] = ets:lookup(node_state, nonce_limiter_info),
+#nonce_limiter_info{ next_seed = PrevNextSeed,
+global_step_number = PrevStepNumber } = TipNonceLimiterInfo,
+SuppliedCheckpoints = ar_nonce_limiter:get_checkpoints(PrevStepNumber, StepNumber, PrevNextSeed),
ar_coordination:computed_h2({Diff, ReplicaID, H0, H1, Nonce,
PartitionNumber, PartitionUpperBound, PoA2, H2, Preimage2,
-NonceLimiterOutput, Peer})
+Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints, Peer})
end;
false ->
ok
7 changes: 5 additions & 2 deletions apps/arweave/src/ar_serialize.erl
@@ -1573,9 +1573,11 @@ json_map_to_remote_h2_materials(JSON) ->
Nonce = maps:get(<<"nonce">>, JsonElement),
{H1, Nonce}
end, maps:get(<<"req_list">>, JSON, [])),
-{Diff, Addr, H0, PartitionNumber, PartitionUpperBound, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, ReqList}.
+SuppliedCheckpointsEncoded = maps:get(<<"vdf_checkpoints">>, JSON),
+SuppliedCheckpoints = parse_checkpoints(ar_util:decode(SuppliedCheckpointsEncoded), 1),
+{Diff, Addr, H0, PartitionNumber, PartitionUpperBound, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints, ReqList}.

-remote_h2_materials_to_json_map({Diff, Addr, H0, PartitionNumber, PartitionUpperBound, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, ReqList}) ->
+remote_h2_materials_to_json_map({Diff, Addr, H0, PartitionNumber, PartitionUpperBound, Seed, NextSeed, StartIntervalNumber, StepNumber, NonceLimiterOutput, SuppliedCheckpoints, ReqList}) ->
ReqList2 = lists:map(fun ({H1, Nonce}) ->
{[
{h1, ar_util:encode(H1)},
@@ -1594,6 +1596,7 @@ remote_h2_materials_to_json_map({Diff, Addr, H0, PartitionUpper
{start_interval_number, integer_to_binary(StartIntervalNumber)},
{step_number, integer_to_binary(StepNumber)},
{nonce_limiter_output, ar_util:encode(NonceLimiterOutput)},
+{vdf_checkpoints, ar_util:encode(iolist_to_binary(SuppliedCheckpoints))},
{req_list, ReqList2}
].

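The serializer change ships the checkpoints by flattening them with `iolist_to_binary` and passing the result through `ar_util:encode` (padding-free base64url), while the receiving side splits the blob back apart. A hedged Python sketch of that round trip, assuming each VDF checkpoint is a fixed 32-byte hash:

```python
import base64

CHECKPOINT_SIZE = 32  # assumption: each VDF checkpoint is a 32-byte hash

def encode_checkpoints(checkpoints):
    """Concatenate checkpoints and base64url-encode without padding,
    like the iolist_to_binary + ar_util:encode pair for vdf_checkpoints."""
    blob = b"".join(checkpoints)
    return base64.urlsafe_b64encode(blob).rstrip(b"=").decode()

def parse_checkpoints(encoded):
    """Inverse operation: decode and split into fixed-size checkpoints
    (a stand-in for the parse_checkpoints/2 call on the receiving node)."""
    pad = "=" * (-len(encoded) % 4)  # restore padding stripped by the encoder
    blob = base64.urlsafe_b64decode(encoded + pad)
    if len(blob) % CHECKPOINT_SIZE:
        raise ValueError("corrupt checkpoint blob")
    return [blob[i:i + CHECKPOINT_SIZE]
            for i in range(0, len(blob), CHECKPOINT_SIZE)]
```

Packing the checkpoints into a single encoded field keeps the JSON payload compact versus a per-checkpoint array, at the cost of a fixed-width framing assumption on both ends.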