
Pickhardt payments simulation #11

Open
wants to merge 3 commits into master

Conversation


@renepickhardt (Owner) commented Apr 12, 2022

In this PR I introduce a notebook that showcases how one may do Pickhardt Payments, which the notebook defines as: using probabilistic payment delivery in a round based payment loop that updates our belief of the remote liquidity in the uncertainty network and generates reliable and cheap payment flows in every round by solving a piecewise linearized min integer cost flow problem with a separable cost function. The notebook contains an extensive glossary of terms, derives quite a bit of the theory around probabilistic payment delivery, min cost flow problems and piecewise linearized cost functions, and already addresses missing points like proper feature engineering or pruning (where it currently uses arbitrarily chosen numbers to showcase a point).
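To illustrate what "piecewise linearized min integer cost flow with a separable cost function" means in practice, here is a minimal sketch (not the notebook's code) of how the uniform-prior uncertainty cost `-log((c+1-a)/(c+1))` could be cut into a few arcs with integer unit costs; the number of pieces and the integer scaling factor are arbitrary assumptions chosen for illustration.

```python
from math import log

def uncertainty_cost(a: int, c: int) -> float:
    """Negative log of the uniform-prior success probability: -log((c+1-a)/(c+1))."""
    return -log((c + 1 - a) / (c + 1))

def piecewise_unit_costs(c: int, pieces: int = 5, scale: int = 1_000_000):
    """Approximate the convex uncertainty cost of a channel of capacity c with
    `pieces` arcs of equal capacity and integer unit costs (segment slopes)."""
    assert c >= pieces
    arc_cap = c // pieces
    arcs, prev = [], 0.0
    for k in range(1, pieces + 1):
        total = uncertainty_cost(k * arc_cap, c)   # cumulative cost at the end of segment k
        arcs.append((arc_cap, int(round((total - prev) / arc_cap * scale))))
        prev = total
    return arcs

print(piecewise_unit_costs(1_000_000))  # unit costs are non-decreasing per piece, reflecting convexity
```

Because the cost function is convex, the per-piece slopes never decrease, so a min cost flow solver naturally fills the cheap pieces of a channel before the expensive ones.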

It consists mainly of 7 classes and currently works best with the output of `lightning-cli listchannels` (a usage sketch follows the class list).

  • Channel
  • UncertaintyChannel
  • OracleChannel
  • ChannelGraph
  • UncertaintyNetwork
  • OracleLightningNetwork
  • PaymentSession
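Purely to illustrate the intended data flow (and to help answer the first review question), here is a hypothetical usage sketch. The class names come from the list above, but the constructor signatures, the file name and the `pickhardt_pay` argument order are my assumptions and may differ from the notebook.

```python
# Hypothetical glue code, assuming the notebook's classes are available in scope.
channel_graph = ChannelGraph("listchannels.json")        # gossip from `lightning-cli listchannels`

uncertainty_network = UncertaintyNetwork(channel_graph)  # our belief about remote liquidity
oracle_network = OracleLightningNetwork(channel_graph)   # simulated ground truth to send onions against

# The payment session needs the uncertainty network and an oracle to send onions against.
session = PaymentSession(oracle_network, uncertainty_network)

src = "02" + "11" * 32   # placeholder node ids
dst = "03" + "22" * 32
session.pickhardt_pay(src, dst, 100_000)  # amount in sats; runs the round based payment loop
```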

The runtime of the solver on the pruned problem is quite fast (consistently below 100ms), as demanded by some lightning developers. However, the code itself is quite slow, as it uses networkx as a third party library to store the ChannelGraph, the UncertaintyNetwork and the OracleLightningNetwork. The only other third party dependency is the google ortools library, which ships a highly efficient C++ based min cost flow solver for min integer cost flow problems via a cost scaling algorithm.
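For readers unfamiliar with ortools, below is a minimal sketch of the kind of solver call involved. Only the `SimpleMinCostFlow` API calls are real; the toy graph, amounts and unit costs are made up, and the import path shown is the one ortools used around the time of this PR (newer releases also expose the solver under `ortools.graph.python.min_cost_flow`).

```python
from ortools.graph import pywrapgraph

# Toy example: push 10 units from node 0 to node 3 over arcs given as
# (tail, head, capacity, integer unit cost), e.g. from a piecewise linearization.
arcs = [(0, 1, 8, 1), (0, 2, 6, 2), (1, 3, 8, 3), (2, 3, 6, 1), (1, 2, 4, 1)]

smcf = pywrapgraph.SimpleMinCostFlow()
for tail, head, cap, cost in arcs:
    smcf.AddArcWithCapacityAndUnitCost(tail, head, cap, cost)
smcf.SetNodeSupply(0, 10)   # source supplies the payment amount
smcf.SetNodeSupply(3, -10)  # destination absorbs it

if smcf.Solve() == smcf.OPTIMAL:
    for i in range(smcf.NumArcs()):
        if smcf.Flow(i) > 0:
            print(smcf.Tail(i), "->", smcf.Head(i),
                  "flow:", smcf.Flow(i), "cost:", smcf.Flow(i) * smcf.UnitCost(i))
```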

The main questions that I have for reviewers:

  • Does the data flow and class design make sense?
  • Is the naming reasonable?
  • Should anything be refactored?
  • What would be needed to provide these classes as a module?
  • Can one understand what is happening, or should this be communicated differently? What additional material would help you?
  • And of course, most importantly: semantic errors, in case I messed up the conditional probabilities while handling inflight HTLCs when maintaining the uncertainty network and making payments. (I just realized that one thing I ignore is updating the uncertainty of the backwards channels; technically, in follow-up rounds or with successive payments they should play an important role.) A small worked sketch of the update rules follows below.
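For concreteness, here is a small worked sketch of the three conditional-probability update rules described in the notebook's glossary, assuming a uniform prior. The function names and example numbers are mine; only the formulas are taken from the notebook.

```python
def p_success(a: int, c: int) -> float:
    """Uniform-prior probability that a channel of capacity c can forward a sats."""
    return (c + 1 - a) / (c + 1)

c, h = 1000, 400  # channel capacity and a previously attempted htlc amount

# 1. The htlc of h failed at this channel: liquidity is at most h-1,
#    so the channel behaves like one of capacity h-1: P(X>=a | X<h) = (h-a)/h.
p_after_local_fail = lambda a: p_success(a, h - 1)

# 2. The htlc of h failed downstream: liquidity is at least h,
#    so P(X>=a | X>=h) = 1 for a <= h and (c+1-a)/(c+1-h) otherwise.
p_after_downstream_fail = lambda a: 1.0 if a <= h else (c + 1 - a) / (c + 1 - h)

# 3. The htlc of h is still in flight (or settled): a future payment of a needs
#    P(X>=a+h | X>=h), i.e. the channel effectively shrank from c to c-h.
p_with_inflight = lambda a: p_success(a, c - h)

print(p_after_local_fail(200), p_after_downstream_fail(600), p_with_inflight(200))
```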

Thanks to anyone who helps out.

…le can review it. Please do not share this code yet or copy widely from it. Please give me feedback so that I can merge a refactored and cleaner version to the main branch
…, channel, uncertaintychannel, oraclechannel, ChannelGraph, UncertaintyNetwork, OracleLightningNetwork and PaymentSession with a clean separation of concerns. This is still WIP and needs review
@vv01f (Contributor) left a comment


some typos, as a diff

9c9
<     "This is an educational simulation to demonstrate how to implement the main concepts of [probabilistic payment delivery with the additional maintainance of the uncertainty network](https://arxiv.org/abs/2103.08576) and [optimal splitting of payments in MPP which is also known as optimally realiable payment flows](https://arxiv.org/abs/2107.05322)\n",
---
>     "This is an educational simulation to demonstrate how to implement the main concepts of [probabilistic payment delivery with the additional maintenance of the uncertainty network](https://arxiv.org/abs/2103.08576) and [optimal splitting of payments in MPP which is also known as optimally reliable payment flows](https://arxiv.org/abs/2107.05322)\n",
21,22c21,22
<     "* concurrent handeling of onions and maintanance of the uncertainty network\n",
<     "* proper handling of non zero base fee channels is needed and in particular handeling their cost properly at least as good as it is possible\n",
---
>     "* concurrent handling of onions and maintenance of the uncertainty network\n",
>     "* proper handling of non zero base fee channels is needed and in particular handling their cost properly at least as good as it is possible\n",
24,26c24,26
<     "* learning how to weight various features needs to be conducted or at least configureably be in place or...\n",
<     "* ...proper feature Engineering. Combining the linearized integer uncertainty cost and the routing cost just via the weight $\\mu$ seens a bit arbitrary. I think centering, scaling and especially for channels log transformation etc as [described in this article](https://www.kaggle.com/code/milankalkenings/comprehensive-tutorial-feature-engineering/notebook) might work better\n",
<     "* Mechanics of making learnt information persistant over some time\n",
---
>     "* learning how to weight various features needs to be conducted or at least configurable be in place or...\n",
>     "* ...proper feature Engineering. Combining the linearized integer uncertainty cost and the routing cost just via the weight $\\mu$ seems a bit arbitrary. I think centering, scaling and especially for channels log transformation etc as [described in this article](https://www.kaggle.com/code/milankalkenings/comprehensive-tutorial-feature-engineering/notebook) might work better\n",
>     "* Mechanics of making learnt information persistent over some time\n",
28c28
<     "* One should Prune the graph before invoking the solver (I tested but excluded a totally arbitrary pruning mechanism which produced a speedup of 10x for the solver making it run constantly in less than 100ms). I am very sure with centering, scaling and proper feature engineering we can get a much better pruner with errror guarantees and similar or even better speedup)\n",
---
>     "* One should Prune the graph before invoking the solver (I tested but excluded a totally arbitrary pruning mechanism which produced a speedup of 10x for the solver making it run constantly in less than 100ms). I am very sure with centering, scaling and proper feature engineering we can get a much better pruner with error guarantees and similar or even better speedup)\n",
38c38
<     "* Showcase how strategies similar to the BOTL 14 proposal may be used\n",
---
>     "* Showcase how strategies similar to the BOLT 14 proposal may be used\n",
41c41
<     "* Demonstrate that the min cost flow approach is feasbale. \n",
---
>     "* Demonstrate that the min cost flow approach is feasible. \n",
53c53
<     "Maybe it might make sense to make entropy hints not only as a foaf query but as an n-bit network wide gossip message. Unless we have some prior belief about the channel the 2 bit gossip intervalls are\n",
---
>     "Maybe it might make sense to make entropy hints not only as a foaf query but as an n-bit network wide gossip message. Unless we have some prior belief about the channel the 2 bit gossip intervals are\n",
65c65
<     "* make bolt14 experiemtns more realistic and depend on actual `send_onions` calls\n",
---
>     "* make bolt14 experiments more realistic and depend on actual `send_onions` calls\n",
67c67
<     "* fix some fixme's most notably the data model that produces poor probability computation in outputs as it ignors the prior knowlege which means all displayed probabilities are too low (though the flow is computed properly)\n",
---
>     "* fix some fixme's most notably the data model that produces poor probability computation in outputs as it ignores the prior knowledge which means all displayed probabilities are too low (though the flow is computed properly)\n",
70c70
<     "* provide a few more examples / simulations and in particular derrive some nice diagrams sumarizing some results\n",
---
>     "* provide a few more examples / simulations and in particular derive some nice diagrams summarizing some results\n",
76c76
<     "Special Thanks to Stefan Richter with whom we implemented the first mainnet tests on top of lnd which certainly helped me to clarify many details and get the maintanance of the uncertainty network straight. Thanks to Michael Ziegler and Carsten Otto for an initial code review that made me rewrite the entire code base from scratch."
---
>     "Special Thanks to Stefan Richter with whom we implemented the first mainnet tests on top of lnd which certainly helped me to clarify many details and get the maintenance of the uncertainty network straight. Thanks to Michael Ziegler and Carsten Otto for an initial code review that made me rewrite the entire code base from scratch."
85c85
<     "As many similar terms are being used for similar concepts and things and since I have been studying these topics for three years now I will follow the good practice of [BOLT0](https://github.com/lightning/bolts/blob/master/00-introduction.md) and put a glossary of used terms up here with the hope that I finally have some good, useful and precise defintions that will also bee picked up by others.\n",
---
>     "As many similar terms are being used for similar concepts and things and since I have been studying these topics for three years now I will follow the good practice of [BOLT0](https://github.com/lightning/bolts/blob/master/00-introduction.md) and put a glossary of used terms up here with the hope that I finally have some good, useful and precise definitions that will also bee picked up by others.\n",
89c89
<     "Payment `channel`s are either announced or unnannounced but known to the sender and recpient of a payment. They have meta data like the `capacity` or `routing fees` as well as some other config values which we carry around but ignore for simplicity here.\n",
---
>     "Payment `channel`s are either announced or unannounced but known to the sender and recipient of a payment. They have meta data like the `capacity` or `routing fees` as well as some other config values which we carry around but ignore for simplicity here.\n",
91c91
<     "The `Channel` is the base class in this code and identifies a channel by the trippled `(source_node_id, destionation_node_id, short_channel_id)` in this way we make the direction explicit and don't encode about lexicographical DER-encoding of `node_ids` and the direction field as done by the bolts.\n",
---
>     "The `Channel` is the base class in this code and identifies a channel by the triple `(source_node_id, destination_node_id, short_channel_id)` in this way we make the direction explicit and don't encode about lexicographical DER-encoding of `node_ids` and the direction field as done by the bolts.\n",
97c97
<     "eges are the things between nodes of a graph or network. In the code this is just the native name of the networkx library. Edges usually map to `Channels`, `UncertaintyChannels` or `OracleChannels` and usually have one of those attached to it in the `channel`-field.\n",
---
>     "edges are the things between nodes of a graph or network. In the code this is just the native name of the networkx library. Edges usually map to `Channels`, `UncertaintyChannels` or `OracleChannels` and usually have one of those attached to it in the `channel`-field.\n",
103c103
<     "**Important**: It seems to be an unavoidable collision that the term capacity is also used for `arcs` in min cost flow solvers. As I introduce the piece wise linearization the `cost function` the arcs in the min cost flow solve will usually have a lower `capacity` than the cannels they represent. I try to distinguish this by talking either about the `channel capacity` or the `arc capacity` if absolutely necessary.\n",
---
>     "**Important**: It seems to be an unavoidable collision that the term capacity is also used for `arcs` in min cost flow solvers. As I introduce the piece wise linearization the `cost function` the arcs in the min cost flow solve will usually have a lower `capacity` than the channels they represent. I try to distinguish this by talking either about the `channel capacity` or the `arc capacity` if absolutely necessary.\n",
111c111
<     "assuming a prior probability distribution we can quantify the `uncertainty` about the channels `liquidity` by measureing the `entropy` of the probability function. Notably if a channel as a `capacity` of `c` satoshis the entropy is `log(c+1)`\n",
---
>     "assuming a prior probability distribution we can quantify the `uncertainty` about the channels `liquidity` by measuring the `entropy` of the probability function. Notably if a channel as a `capacity` of `c` satoshis the entropy is `log(c+1)`\n",
113c113
<     "## Unertainty Network\n",
---
>     "## Uncertainty Network\n",
117c117
<     "1. Minimum `liquidity` which initialy for remote channel is `0`. \n",
---
>     "1. Minimum `liquidity` which initially for remote channel is `0`. \n",
122c122
<     "We use `inflight` to encode how many satoshis we have currently allocated or plan to allocate to a channel. Note that this is not the same as the number of inflight htlcs that the peer co-owning the channel will observe as some of the onions that we have outstanding may not have been delivered to the channel yet or may have failed but not returend back to us yet.\n",
---
>     "We use `inflight` to encode how many satoshis we have currently allocated or plan to allocate to a channel. Note that this is not the same as the number of inflight htlcs that the peer co-owning the channel will observe as some of the onions that we have outstanding may not have been delivered to the channel yet or may have failed but not returned back to us yet.\n",
136,137c136,137
<     "This is the `feature` introduced in the [`probablistic payment delivery` paper](https://arxiv.org/abs/2103.08576) it puts a cost on an amount `a` to be send over a (`path`) of `payment channels`.\n",
<     "It is comuted by taking the negative logarithm of the `success probability` if we have no prior belief of the channel the `uncertainty cost` will be computed as: `-log((c+1-a)/(c+1))`. Note that the uncertainty cost is a convex gowing function takin the value `0 = -log((c+1-0)/(c+1)) = -log(1)` if no sats are to be allocated on the channel. The maximum possible `uncertainty cost` occurs if `a=c` and takes the value `log(c+1)` which is exactly the `entropy` of the channel which can be seen by the following calculation:\n",
---
>     "This is the `feature` introduced in the [`probabilistic payment delivery` paper](https://arxiv.org/abs/2103.08576) it puts a cost on an amount `a` to be send over a (`path`) of `payment channels`.\n",
>     "It is computed by taking the negative logarithm of the `success probability` if we have no prior belief of the channel the `uncertainty cost` will be computed as: `-log((c+1-a)/(c+1))`. Note that the uncertainty cost is a convex growing function taking the value `0 = -log((c+1-0)/(c+1)) = -log(1)` if no sats are to be allocated on the channel. The maximum possible `uncertainty cost` occurs if `a=c` and takes the value `log(c+1)` which is exactly the `entropy` of the channel which can be seen by the following calculation:\n",
155c155
<     "As discussed in our research and on the mailinglist the fee function of the lightning network is not linear. In the mentioned discussion one can see that `fee(a+b) = fee(a)+fee(b) - base_fee_msat` which is linear if and only if `base_fee_msat = 0`. \n",
---
>     "As discussed in our research and on the mailing list the fee function of the lightning network is not linear. In the mentioned discussion one can see that `fee(a+b) = fee(a)+fee(b) - base_fee_msat` which is linear if and only if `base_fee_msat = 0`. \n",
166c166
<     "The `features` of our cost function encode our optimization goals. Typicalfeatures goals might be to have a cheap price (usually encoded by `fee`) or to have a high reliability (in this work best encoded by the `uncertainty cost`that comes from the estimated `success probability` that the channel has enough `liquidity`. Other features that seam reasonable are things related to `latency` of payments (e.g. [pyhsical geodistance or virtaul IP-distance as indicted in this comment](https://github.com/lightningdevkit/rust-lightning/issues/1170)). Implementations have also experimented with other features like CLTV delta or channel age. \n",
---
>     "The `features` of our cost function encode our optimization goals. Typical features / goals might be to have a cheap price (usually encoded by `fee`) or to have a high reliability (in this work best encoded by the `uncertainty cost`that comes from the estimated `success probability` that the channel has enough `liquidity`. Other features that seam reasonable are things related to `latency` of payments (e.g. [physical geodistance or virtual IP-distance as indicted in this comment](https://github.com/lightningdevkit/rust-lightning/issues/1170)). Implementations have also experimented with other features like CLTV delta or channel age. \n",
178,180c178,180
<     "1. Payment failed at the channel: This means the channel of capacity `c` has at most `h-1` satoshi in it. In Terms of Probability theory we can ask ourselves what is the success rate for a subsequent payment of size `a` given the event `X<h` (the htlc  `h` has failed) which can be expressed as `P(X>=a | X<h)`. Here we see because of the condition that for `a>=h` the proabbility has to be `0`. Computing the conditional probability for the remainder in the uniform case we can se `P(X>=a | X < h) = (h-a)/h`. Note that this is the same as the success probability for a channel of capacity `h-1`. \n",
<     "2. Payment failed at a downstream channel: This means that the channel of capcity `c` was able to route `h` satoshi and has at least a minimum `liquidity` of `h`. This means that `P(X>=a | X>=h) = 1`  for all amounts `a <=h` and in the unform case for all larger values of `a` the conditional probability materializes to: `P(X>=a)/P(X>=h) = ((c+1-a)/(c+1))/((c+1-h)/(c+1)) = (c+1-a)/(c+1-h)` Note that this fraction is always smaller than `1` as `a >=h`\n",
<     "3. The case that the payment was successfull or did not return an error: Following the same thoughts and the first bullet point at the end of section 3 in the second paper we know that for a future payment of size `a` we have to look at `P(X>=a + h | X >=h)` which in the uniform case materializes to `( (c-h) + 1 - a)/( (c - h) + 1)`. Note that this is the same as if the channel shrunk from `c` to `c-h` as we now know that the maximum `liquidity` is not `c` but rather `c-h`\n",
---
>     "1. Payment failed at the channel: This means the channel of capacity `c` has at most `h-1` satoshi in it. In Terms of Probability theory we can ask ourselves what is the success rate for a subsequent payment of size `a` given the event `X<h` (the htlc  `h` has failed) which can be expressed as `P(X>=a | X<h)`. Here we see because of the condition that for `a>=h` the probability has to be `0`. Computing the conditional probability for the remainder in the uniform case we can se `P(X>=a | X < h) = (h-a)/h`. Note that this is the same as the success probability for a channel of capacity `h-1`. \n",
>     "2. Payment failed at a downstream channel: This means that the channel of capacity `c` was able to route `h` satoshi and has at least a minimum `liquidity` of `h`. This means that `P(X>=a | X>=h) = 1`  for all amounts `a <=h` and in the uniform case for all larger values of `a` the conditional probability materializes to: `P(X>=a)/P(X>=h) = ((c+1-a)/(c+1))/((c+1-h)/(c+1)) = (c+1-a)/(c+1-h)` Note that this fraction is always smaller than `1` as `a >=h`\n",
>     "3. The case that the payment was successful or did not return an error: Following the same thoughts and the first bullet point at the end of section 3 in the second paper we know that for a future payment of size `a` we have to look at `P(X>=a + h | X >=h)` which in the uniform case materializes to `( (c-h) + 1 - a)/( (c - h) + 1)`. Note that this is the same as if the channel shrunk from `c` to `c-h` as we now know that the maximum `liquidity` is not `c` but rather `c-h`\n",
185c185
<     "We call a payment flow optimal if it minimizes the `cost function` that encodes our optimization goals. Note that the solution of the min cost flow problem also defines a split of the payment across various paths. While simple algorithms for discecting a flow into paths exist the actual disection is not uique and needs further research. Especially when we start taking channel configurations into account.\n",
---
>     "We call a payment flow optimal if it minimizes the `cost function` that encodes our optimization goals. Note that the solution of the min cost flow problem also defines a split of the payment across various paths. While simple algorithms for dissecting a flow into paths exist the actual dissection is not unique and needs further research. Especially when we start taking channel configurations into account.\n",
193c193
<     "The idea of `probabilistic payment delivery` is to send onions that maximize the success proabbility for all (or parts of them) to be successul. This takes the `uncertainty` about the remote `liquidity` into consideration and was first introduced in path finding (Note I did not use the term path findng nore did I put it in the glossary)\n",
---
>     "The idea of `probabilistic payment delivery` is to send onions that maximize the success probability for all (or parts of them) to be successful. This takes the `uncertainty` about the remote `liquidity` into consideration and was first introduced in path finding (Note I did not use the term path finding nor did I put it in the glossary)\n",
196c196
<     "As the term is more and more starting to flow around I will try to summarize best what is currently from a technial point of view meant by it: \n",
---
>     "As the term is more and more starting to flow around I will try to summarize best what is currently from a technical point of view meant by it: \n",
198c198
<     "`Using probabilistic payment delivery` in a round based `payment loop` that updeates our belief of the remote `liquidity` in the `uncertainty network` and generates reliable and cheap `payment flows` in every round by solving a `piece wise linearized min integer cost flow problem with a seperable cost function` (I start to see why a shorter term was needed).\n",
---
>     "`Using probabilistic payment delivery` in a round based `payment loop` that updates our belief of the remote `liquidity` in the `uncertainty network` and generates reliable and cheap `payment flows` in every round by solving a `piece wise linearized min integer cost flow problem with a separable cost function` (I start to see why a shorter term was needed).\n",
210c210
<     "* `random` is used to feed our oracle with a simulated ground truth. If you make actual payments the mainnet Lightnign Network will act as an oracle\n",
---
>     "* `random` is used to feed our oracle with a simulated ground truth. If you make actual payments the mainnet Lightning Network will act as an oracle\n",
212,213c212,213
<     "* While `log` is used to measure `entropy` and `uncertainty cost` the linerized problem gets rid of the `log` and the Entropy is also only measured for experimental data.\n",
<     "* the `typing` lib is just for type safty"
---
>     "* While `log` is used to measure `entropy` and `uncertainty cost` the linearized problem gets rid of the `log` and the Entropy is also only measured for experimental data.\n",
>     "* the `typing` lib is just for type safety"
279c279
<     "The `Channel` is used to store publicly available infromation about a channel. It currently follows mainly the format of `lightning-cli listchannels` output from `c-lightning`. The class is intended to only store the gossip information and not meant to modify the information. In a mainnet `pickhardt-pay` implementation one would obviously want a class that also processes incoming `update_channel` messages.\n",
---
>     "The `Channel` is used to store publicly available information about a channel. It currently follows mainly the format of `lightning-cli listchannels` output from `c-lightning`. The class is intended to only store the gossip information and not meant to modify the information. In a mainnet `pickhardt-pay` implementation one would obviously want a class that also processes incoming `update_channel` messages.\n",
281c281
<     "`Channels` are identified by the trippled `(source_node_id, destionation_node_id, short_channel_id)` in this way we make the direction explicit and don't encode about lexicographical DER-encoding of `node_ids` and the direction field as done by the bolts.\n"
---
>     "`Channels` are identified by the triple `(source_node_id, destination_node_id, short_channel_id)` in this way we make the direction explicit and don't encode about lexicographical DER-encoding of `node_ids` and the direction field as done by the bolts.\n"
316c316
<     "    The `Channel` Class is intended to be read only and internatlly stores\n",
---
>     "    The `Channel` Class is intended to be read only and internally stores\n",
389c389
<     "The `ChannelGraph` is the most basic data structure and extended to the `UncertaintyNetwork` and `OracleNetwork`. Note that we didn't use the Term `Network` or `ChannelNetwork` which from a software engineering perspecive would have been more resonable. There was [a poll deciding against](https://twitter.com/renepickhardt/status/1513095719862816769) this."
---
>     "The `ChannelGraph` is the most basic data structure and extended to the `UncertaintyNetwork` and `OracleNetwork`. Note that we didn't use the Term `Network` or `ChannelNetwork` which from a software engineering perspective would have been more reasonable. There was [a poll deciding against](https://twitter.com/renepickhardt/status/1513095719862816769) this."
403c403
<     "    The channels of the Channel Graph are directed and identiried uniquly by a triple consisting of\n",
---
>     "    The channels of the Channel Graph are directed and identified uniquely by a triple consisting of\n",
409c409
<     "        extracts the dictionary from the file that contains lightnig-cli listchannels json string\n",
---
>     "        extracts the dictionary from the file that contains lightning-cli listchannels json string\n",
478,479c478,479
<     "        This is usful for experiments but must of course not be used in routing and is also\n",
<     "        not a vailable if mainnet remote channels are being used.\n",
---
>     "        This is useful for experiments but must of course not be used in routing and is also\n",
>     "        not available if mainnet remote channels are being used.\n",
519c519
<     "            #If Channel in oposite direction already exists with liquidity information match the channel\n",
---
>     "            #If Channel in opposite direction already exists with liquidity information match the channel\n",
559c559
<     "            #liqudity = 0\n",
---
>     "            #liquidity = 0\n",
599c599
<     "    As we also optimize for fees and want to be able to compute the fees of a flow the classe\n",
---
>     "    As we also optimize for fees and want to be able to compute the fees of a flow the class\n",
609c609
<     "    pieceweise linearized cost for a channel rising from uncertainty as well as routing fees.\n",
---
>     "    piecewise linearized cost for a channel rising from uncertainty as well as routing fees.\n",
670c670
<     "        assign or remove ammount that is assigned to be `in_flight`.\n",
---
>     "        assign or remove amount that is assigned to be `in_flight`.\n",
752c752
<     "        #FIXME: interesting! Quantization does not change unit cost as it cancles itself\n",
---
>     "        #FIXME: interesting! Quantization does not change unit cost as it cancels itself\n",
850c850
<     "        This API works ony if we have an Oracle that allows to ask the actual liquidity of a channel\n",
---
>     "        This API works only if we have an Oracle that allows to ask the actual liquidity of a channel\n",
952c952
<     "        With the help of an `OracleLightningNetwork` probes all chennels `n` times to reduce uncertainty.\n",
---
>     "        With the help of an `OracleLightningNetwork` probes all channels `n` times to reduce uncertainty.\n",
1006c1006
<     "        print(\"channels with full knowlege: \", len(ego_netwok))\n",
---
>     "        print(\"channels with full knowledge: \", len(ego_netwok))\n",
1016c1016
<     "Payments are conducted within a payment session. The payment session needs to be given an instance from the` UncertaintyNetwork` and an `Oracle` against which it will `send_onions`. In this notebook the `Oracel` is just a simulated distribution of the Liquidity in the network. For experiments we could distribute the liquidity differently or we could use the mainnet network or a crawl as the oracle. In particular the `UncertaintyNetwork` may contain prior belief about the uncertainty of the liquidity in remote channels.\n",
---
>     "Payments are conducted within a payment session. The payment session needs to be given an instance from the` UncertaintyNetwork` and an `Oracle` against which it will `send_onions`. In this notebook the `Oracle` is just a simulated distribution of the Liquidity in the network. For experiments we could distribute the liquidity differently or we could use the mainnet network or a crawl as the oracle. In particular the `UncertaintyNetwork` may contain prior belief about the uncertainty of the liquidity in remote channels.\n",
1018,1019c1018,1019
<     "The main API call in a payment session is `pickhardt_pay` which is a `pay`-implementation of our method. Internally `pickhardt_pay` feeds the min cost solver with `pieceweise linearized integer unit costs` for all channels of the uncertainty network that are not to be pruned. The pruning is currently pretty arbitrary and exists to show that another order of magnitude in compuational time seems possible without loosing much optimality but shall be chosen better for future extensability. (I assume with proper feature engineering one may prune based on unit costs).\n",
<     "The min cost flow solver produces a flow that is disected into candidate path. instead of just invoking `send_onion` and makting payment attempts the `pickhardt_pay` loop here collects our belief about success probabilities and fees so that we can investigate the results of the simulation. These results are also depicted as part of the API call. "
---
>     "The main API call in a payment session is `pickhardt_pay` which is a `pay`-implementation of our method. Internally `pickhardt_pay` feeds the min cost solver with `piecewise linearized integer unit costs` for all channels of the uncertainty network that are not to be pruned. The pruning is currently pretty arbitrary and exists to show that another order of magnitude in computational time seems possible without loosing much optimality but shall be chosen better for future extensibility. (I assume with proper feature engineering one may prune based on unit costs).\n",
>     "The min cost flow solver produces a flow that is dissected into candidate path. instead of just invoking `send_onion` and making payment attempts the `pickhardt_pay` loop here collects our belief about success probabilities and fees so that we can investigate the results of the simulation. These results are also depicted as part of the API call. "
1030c1030
<     "    A PaymentSesssion is used to create the min cost flow problem from the UncertaintyNetwork\n",
---
>     "    A PaymentSession is used to create the min cost flow problem from the UncertaintyNetwork\n",
1035,1036c1035,1036
<     "    The main API call ist `pickhardt_pay` which invokes a sequential loop to conduct trial and error\n",
<     "    attmpts. The loop could easily send out all onions concurrently but this does not make sense \n",
---
>     "    The main API call is `pickhardt_pay` which invokes a sequential loop to conduct trial and error\n",
>     "    attempts. The loop could easily send out all onions concurrently but this does not make sense \n",
1081c1081
<     "            # Prune channels away thay have too low success probability! This is a huge runtime boost\n",
---
>     "            # Prune channels away that have too low success probability! This is a huge runtime boost\n",
1110c1110
<     "        generator to iterate through edges indext by node id of paths\n",
---
>     "        generator to iterate through edges indexed by node id of paths\n",
1147,1148c1147,1148
<     "        FIXME: Note that this disection while accurate is probably not optimal in practise. \n",
<     "        As noted in our Probabilistic payment delivery paper the payment process is a bernoulli trial \n",
---
>     "        FIXME: Note that this dissection while accurate is probably not optimal in practise. \n",
>     "        As noted in our Probabilistic payment delivery paper the payment process is a Bernoulli trial \n",
1200c1200
<     "        Retuns the residual amount of the `amt` that could ne be delivered and the paid fees\n",
---
>     "        Returns the residual amount of the `amt` that could not be delivered and the paid fees\n",
1317c1317
<     "        print(\"actually deliverd {:10} sats \\t({:4.2f}%)\".format(amt-residual_amt,fraction))\n",
---
>     "        print(\"actually delivered {:10} sats \\t({:4.2f}%)\".format(amt-residual_amt,fraction))\n",
1340c1340
<     "        I could not put it here as some experiments require sharing of liqudity information\n",
---
>     "        I could not put it here as some experiments require sharing of liquidity information\n",
1353c1353
<     "        #a better stop criteria would be if we compute infeasable flows or if the probabilities \n",
---
>     "        #a better stop criteria would be if we compute infeasible flows or if the probabilities \n",
1359c1359
<     "            #transfer to a min cost flow problem and rund the solver\n",
---
>     "            #transfer to a min cost flow problem and run the solver\n",
1365c1365
<     "            #matke attempts and update our information about the UncertaintyNetwork\n",
---
>     "            #make attempts and update our information about the UncertaintyNetwork\n",
1385c1385
<     "        print(\"total runtime (including inefficient memory managment): {:4.3f} sec\".format(end-start))\n",
---
>     "        print(\"total runtime (including inefficient memory management): {:4.3f} sec\".format(end-start))\n",
1387c1387
<     "        print(\"Fees for successfull delivery: {:8.3f} sat --> {} ppm\".format(total_fees,int(total_fees*1000*1000/full_amt)))\n",
---
>     "        print(\"Fees for successful delivery: {:8.3f} sat --> {} ppm\".format(total_fees,int(total_fees*1000*1000/full_amt)))\n",
1483c1483
<       "actually deliverd   18879741 sats \t(46.43%)\n",
---
>       "actually delivered   18879741 sats \t(46.43%)\n",
1515c1515
<       "actually deliverd   10439991 sats \t(47.93%)\n",
---
>       "actually delivered   10439991 sats \t(47.93%)\n",
1542c1542
<       "actually deliverd    1342177 sats \t(11.83%)\n",
---
>       "actually delivered    1342177 sats \t(11.83%)\n",
1567c1567
<       "actually deliverd          0 sats \t(0.00%)\n",
---
>       "actually delivered          0 sats \t(0.00%)\n",
1590c1590
<       "actually deliverd   10000000 sats \t(100.00%)\n",
---
>       "actually delivered   10000000 sats \t(100.00%)\n",
1604c1604
<       "total runtime (including inefficient memory managment): 7.449 sec\n",
---
>       "total runtime (including inefficient memory management): 7.449 sec\n",
1606c1606
<       "Fees for successfull delivery: 134027.620 sat --> 3284 ppm\n",
---
>       "Fees for successful delivery: 134027.620 sat --> 3284 ppm\n",
1691c1691
<       "actually deliverd    9216488 sats \t(22.59%)\n",
---
>       "actually delivered    9216488 sats \t(22.59%)\n",
1755c1755
<       "actually deliverd    7791946 sats \t(24.67%)\n",
---
>       "actually delivered    7791946 sats \t(24.67%)\n",
1818c1818
<       "actually deliverd    5291265 sats \t(22.24%)\n",
---
>       "actually delivered    5291265 sats \t(22.24%)\n",
1884c1884
<       "actually deliverd    4245690 sats \t(22.94%)\n",
---
>       "actually delivered    4245690 sats \t(22.94%)\n",
1957c1957
<       "actually deliverd    2618603 sats \t(18.79%)\n",
---
>       "actually delivered    2618603 sats \t(18.79%)\n",
2015c2015
<       "actually deliverd    1983837 sats \t(18.04%)\n",
---
>       "actually delivered    1983837 sats \t(18.04%)\n",
2060c2060
<       "actually deliverd    2265220 sats \t(26.48%)\n",
---
>       "actually delivered    2265220 sats \t(26.48%)\n",
2090c2090
<       "actually deliverd     416877 sats \t(6.98%)\n",
---
>       "actually delivered     416877 sats \t(6.98%)\n",
2125c2125
<       "actually deliverd     190561 sats \t(3.43%)\n",
---
>       "actually delivered     190561 sats \t(3.43%)\n",
2162c2162
<       "actually deliverd     493472 sats \t(9.20%)\n",
---
>       "actually delivered     493472 sats \t(9.20%)\n",
2176c2176
<       "total runtime (including inefficient memory managment): 16.823 sec\n",
---
>       "total runtime (including inefficient memory management): 16.823 sec\n",
2178c2178
<       "Fees for successfull delivery: 47160.204 sat --> 1155 ppm\n",
---
>       "Fees for successful delivery: 47160.204 sat --> 1155 ppm\n",
2242c2242
<       "actually deliverd   19353499 sats \t(47.43%)\n",
---
>       "actually delivered   19353499 sats \t(47.43%)\n",
2273c2273
<       "actually deliverd   13926487 sats \t(64.93%)\n",
---
>       "actually delivered   13926487 sats \t(64.93%)\n",
2299c2299
<       "actually deliverd    1342177 sats \t(17.84%)\n",
---
>       "actually delivered    1342177 sats \t(17.84%)\n",
2321c2321
<       "actually deliverd    6181316 sats \t(100.00%)\n",
---
>       "actually delivered    6181316 sats \t(100.00%)\n",
2335c2335
<       "total runtime (including inefficient memory managment): 2.382 sec\n",
---
>       "total runtime (including inefficient memory management): 2.382 sec\n",
2337c2337
<       "Fees for successfull delivery: 152334.929 sat --> 3733 ppm\n",
---
>       "Fees for successful delivery: 152334.929 sat --> 3733 ppm\n",
2357c2357
<     "note that pruning still removes unreliable channels thus this method already provides a tradeoff between routing fees and realiability "
---
>     "note that pruning still removes unreliable channels thus this method already provides a tradeoff between routing fees and reliability "
2418c2418
<       "actually deliverd   11253010 sats \t(27.58%)\n",
---
>       "actually delivered   11253010 sats \t(27.58%)\n",
2464c2464
<       "actually deliverd   13992675 sats \t(47.35%)\n",
---
>       "actually delivered   13992675 sats \t(47.35%)\n",
2506c2506
<       "actually deliverd    4304111 sats \t(27.67%)\n",
---
>       "actually delivered    4304111 sats \t(27.67%)\n",
2546c2546
<       "actually deliverd    3887982 sats \t(34.55%)\n",
---
>       "actually delivered    3887982 sats \t(34.55%)\n",
2585c2585
<       "actually deliverd    5474818 sats \t(74.33%)\n",
---
>       "actually delivered    5474818 sats \t(74.33%)\n",
2614c2614
<       "actually deliverd     478005 sats \t(25.28%)\n",
---
>       "actually delivered     478005 sats \t(25.28%)\n",
2650c2650
<       "actually deliverd     486349 sats \t(34.42%)\n",
---
>       "actually delivered     486349 sats \t(34.42%)\n",
2674c2674
<       "actually deliverd     926529 sats \t(100.00%)\n",
---
>       "actually delivered     926529 sats \t(100.00%)\n",
2688c2688
<       "total runtime (including inefficient memory managment): 4.760 sec\n",
---
>       "total runtime (including inefficient memory management): 4.760 sec\n",
2690c2690
<       "Fees for successfull delivery: 61145.521 sat --> 1498 ppm\n",
---
>       "Fees for successful delivery: 61145.521 sat --> 1498 ppm\n",
2798c2798
<       "actually deliverd   23658486 sats \t(57.98%)\n",
---
>       "actually delivered   23658486 sats \t(57.98%)\n",
2851c2851
<       "actually deliverd    9793229 sats \t(57.12%)\n",
---
>       "actually delivered    9793229 sats \t(57.12%)\n",
2890c2890
<       "actually deliverd    5474195 sats \t(74.46%)\n",
---
>       "actually delivered    5474195 sats \t(74.46%)\n",
2917c2917
<       "actually deliverd    1877569 sats \t(100.00%)\n",
---
>       "actually delivered    1877569 sats \t(100.00%)\n",
2931c2931
<       "total runtime (including inefficient memory managment): 3.206 sec\n",
---
>       "total runtime (including inefficient memory management): 3.206 sec\n",
2933c2933
<       "Fees for successfull delivery: 58721.653 sat --> 1439 ppm\n",
---
>       "Fees for successful delivery: 58721.653 sat --> 1439 ppm\n",
2998c2998
<       "actually deliverd   24722971 sats \t(60.59%)\n",
---
>       "actually delivered   24722971 sats \t(60.59%)\n",
3028c3028
<       "actually deliverd   10316509 sats \t(64.16%)\n",
---
>       "actually delivered   10316509 sats \t(64.16%)\n",
3051c3051
<       "actually deliverd    5763999 sats \t(100.00%)\n",
---
>       "actually delivered    5763999 sats \t(100.00%)\n",
3065c3065
<       "total runtime (including inefficient memory managment): 1.250 sec\n",
---
>       "total runtime (including inefficient memory management): 1.250 sec\n",
3067c3067
<       "Fees for successfull delivery: 89060.245 sat --> 2182 ppm\n",
---
>       "Fees for successful delivery: 89060.245 sat --> 2182 ppm\n",
3132c3132
<     "## Probably depricated functions and todos for refactoring from old code base"
---
>     "## Probably deprecated functions and todos for refactoring from old code base"
3157c3157
<     "        FIXME: depricate? It's\n",
---
>     "        FIXME: deprecate? It's\n",
