
HOPR is an open incentivized mixnet which enables privacy-preserving point-to-point data exchange. HOPR is similar to Tor but actually private, decentralized and economically sustainable.


HOPR

A project by the HOPR Association

HOPR is a privacy-preserving messaging protocol which enables the creation of a secure communication network via relay nodes powered by economic incentives using digital tokens.


Getting Started

A good place to start is the Getting Started guide on YouTube, which walks through the following instructions using Gitpod.

Install

The following instructions show how to install the latest community release. Adapt them if you want to use the latest development release or any other older release.

Install via NPM

Using the hoprd npm package:

mkdir MY_NEW_HOPR_TEST_FOLDER
cd MY_NEW_HOPR_TEST_FOLDER
npm install @hoprnet/hoprd

Install via Docker

All our docker images can be found in our Google Cloud Container Registry. Each image is prefixed with gcr.io/hoprassociation/$PROJECT:$RELEASE. The master-goerli tag represents the master branch, while the lisbon tag represents the most recent release/* branch.

You can pull the Docker image like so:

docker pull gcr.io/hoprassociation/hoprd:lisbon

For ease of use you can set up a shell alias to run the latest release as a docker container:

alias hoprd='docker run --pull always -ti -v ${HOPRD_DATA_DIR:-$HOME/.hoprd-db}:/app/db -p 9091:9091 -p 3000:3000 -p 3001:3001 gcr.io/hoprassociation/hoprd:lisbon'

IMPORTANT: The above command maps the database folder used by hoprd to a local folder called .hoprd-db in your home directory. You can customize the location of that folder by executing the following command:

HOPRD_DATA_DIR=${HOME}/.hoprd-better-db-folder eval hoprd

Also, all ports are mapped to your localhost, assuming you stick to the default port numbers.
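For reference, the alias above resolves its database folder using standard shell default expansion; a minimal sketch of how `${HOPRD_DATA_DIR:-$HOME/.hoprd-db}` behaves (paths are illustrative):

```shell
# Fallback behaviour of ${VAR:-default}, as used by the alias above.
unset HOPRD_DATA_DIR
echo "${HOPRD_DATA_DIR:-$HOME/.hoprd-db}"     # unset: falls back to the default folder

HOPRD_DATA_DIR="$HOME/.hoprd-better-db-folder"
echo "${HOPRD_DATA_DIR:-$HOME/.hoprd-db}"     # set: the override wins
```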

Install via Nix package manager

NOTE: This setup should only be used for development, or if you know what you are doing and don't need further support. Otherwise you should use the npm or docker setup.

You will need to clone the hoprnet repo first:

git clone https://github.com/hoprnet/hoprnet

If you have direnv set up properly, your nix-shell will be configured automatically upon entering the hoprnet directory once you have enabled it via direnv allow. Otherwise you must enter the nix-shell manually:

nix develop
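As a sketch, a minimal .envrc that lets direnv load the nix dev shell automatically; this assumes the nix-direnv extension is installed, which provides the use flake helper:

```shell
# .envrc (sketch) - requires direnv plus the nix-direnv extension
use flake
```

After creating the file, run direnv allow once in the repository root to activate it.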

Now you may follow the instructions in Develop.

Using

hoprd provides various command-line switches to configure its behaviour. For reference, they are documented here as well:

$ hoprd --help
Options:
  --help                         Show help  [boolean]
  --version                      Show version number  [boolean]
  --environment                  Environment id which the node shall run on  [string] [choices: "hardhat-localhost", "hardhat-localhost2", "master-goerli", "debug-goerli", "tuttlingen", "prague", "budapest", "athens", "lisbon"] [default: ""]
  --host                         The network host to run the HOPR node on.  [string] [default: "0.0.0.0:9091"]
  --announce                     Announce public IP to the network  [boolean] [default: false]
  --admin                        Run an admin interface on localhost:3000, requires --apiToken  [boolean] [default: false]
  --adminHost                    Host to listen to for admin console  [string] [default: "localhost"]
  --adminPort                    Port to listen to for admin console  [string] [default: 3000]
  --api                          Expose the Rest (V1, V2) and Websocket (V2) API on localhost:3001, requires --apiToken.  [boolean] [default: false]
  --apiHost                      Set host IP to which the Rest and Websocket API server will bind.  [string] [default: "localhost"]
  --apiPort                      Set host port to which the Rest and Websocket API server will bind.  [number] [default: 3001]
  --healthCheck                  Run a health check end point on localhost:8080  [boolean] [default: false]
  --healthCheckHost              Updates the host for the healthcheck server  [string] [default: "localhost"]
  --healthCheckPort              Updates the port for the healthcheck server  [number] [default: 8080]
  --forwardLogs                  Forwards all your node logs to a public available sink  [boolean] [default: false]
  --forwardLogsProvider          A provider url for the logging sink node to use  [string] [default: "https://ceramic-clay.3boxlabs.com"]
  --password                     A password to encrypt your keys  [string] [default: ""]
  --apiToken                     A REST API token and admin panel password for user authentication  [string]
  --privateKey                   A private key to be used for your HOPR node  [string]
  --identity                     The path to the identity file  [string] [default: "/home/tino/.hopr-identity"]
  --run                          Run a single hopr command, same syntax as in hopr-admin  [string] [default: ""]
  --dryRun                       List all the options used to run the HOPR node, but quit instead of starting  [boolean] [default: false]
  --data                         manually specify the database directory to use  [string] [default: ""]
  --init                         initialize a database if it doesn't already exist  [boolean] [default: false]
  --allowLocalNodeConnections    Allow connections to other nodes running on localhost.  [boolean] [default: false]
  --allowPrivateNodeConnections  Allow connections to other nodes running on private addresses.  [boolean] [default: false]
  --testAnnounceLocalAddresses   For testing local testnets. Announce local addresses.  [boolean] [default: false]
  --testPreferLocalAddresses     For testing local testnets. Prefer local peers to remote.  [boolean] [default: false]
  --testUseWeakCrypto            weaker crypto for faster node startup  [boolean] [default: false]
  --testNoAuthentication         no remote authentication for easier testing  [boolean] [default: false]

As you might have noticed, running the node without any command-line arguments might not work, depending on the installation method used. Here are examples of running a node with some safe configurations set.

Using NPM

The following command assumes you've set up a local installation as described in Install via NPM.

cd MY_NEW_HOPR_TEST_FOLDER
DEBUG=hopr* npx hoprd --admin --init --announce --identity .hopr-identity --password switzerland --forwardLogs --apiToken <MY_TOKEN>

Here is a short breakdown of each argument.

hoprd
  --admin                      # enable the node's admin UI, available at localhost:3000
  --init                       # initialize the database and identity if not present
  --announce                   # announce the node to other nodes in the network and act as relay if publicly reachable
  --identity .hopr-identity    # store your node identity information in your test folder
  --password switzerland       # set the encryption password for your identity
  --forwardLogs                # enable the node's log forwarding to the Ceramic network
  --apiToken <MY_TOKEN>        # specify the password for accessing the admin panel and REST API (REQUIRED)
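The apiToken can be any sufficiently random string. One way to generate one (openssl is an assumption here; any generator you trust works):

```shell
# Generate a 32-character hex token to pass as --apiToken
apiToken=$(openssl rand -hex 16)
echo "$apiToken"
```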

Using Docker

The following command assumes you've set up an alias as described in Install via Docker.

hoprd --identity /app/db/.hopr-identity --password switzerland --init --announce --host "0.0.0.0:9091" --admin --adminHost 0.0.0.0 --forwardLogs --apiToken <MY_TOKEN> --environment jungfrau

Here is a short breakdown of each argument.

hoprd
  --identity /app/db/.hopr-identity   # store your node identity information in the persisted database folder
  --password switzerland              # set the encryption password for your identity
  --init                              # initialize the database and identity if not present
  --announce                          # announce the node to other nodes in the network and act as relay if publicly reachable
  --host "0.0.0.0:9091"               # set IP and port of the P2P API to the container's external IP so it can be reached on your host
  --admin                             # enable the node's admin UI
  --adminHost 0.0.0.0                 # set IP of the Rest API to the container's external IP so it can be reached on your host
  --forwardLogs                       # enable the node's log forwarding to the Ceramic network
  --apiToken <MY_TOKEN>               # specify the password for accessing the admin panel and REST API (REQUIRED)
  --environment jungfrau              # an environment is defined as a chain plus a number of deployed smart contract addresses to use on that chain;
                                      # each release has a default environment id set, but the user can override this value;
                                      # nodes from different environments are **not able** to communicate

Migrating between releases

At the moment we DO NOT HAVE backward compatibility between releases. The steps below describe how to migrate your tokens between releases.

  1. Set your automatic channel strategy to MANUAL.
  2. Redeem all unredeemed tickets.
  3. Close all open payment channels.
  4. Once all payment channels have closed, withdraw your funds to an external wallet.
  5. Run info and take note of the network name.
  6. Once funds are confirmed to exist in a different wallet, backup .hopr-identity folder.
  7. Launch new HOPRd instance using latest release, observe the account address.
  8. Only transfer funds to the new HOPRd instance if it operates on the same network as the last release; you can compare the two networks using info.
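Step 6 above can be sketched as a simple folder copy; IDENTITY_DIR is an assumption here, so point it at wherever your node actually stores its identity:

```shell
# Sketch: back up the identity folder before launching the new release.
# IDENTITY_DIR is an assumed location - adjust it to your setup.
IDENTITY_DIR="${IDENTITY_DIR:-$HOME/.hopr-identity}"
BACKUP_DIR="${IDENTITY_DIR}.backup-$(date +%Y%m%d)"
cp -r "$IDENTITY_DIR" "$BACKUP_DIR"
```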

Develop

yarn          # Install dependencies and set up workspaces
yarn build    # Builds contracts, clients, etc

# starting network
HOPR_ENVIRONMENT_ID=hardhat-localhost yarn run:network

# workaround for a temp issue with local hardhat-network
cp -R packages/ethereum/deployments/hardhat-localhost/localhost/* packages/ethereum/deployments/hardhat-localhost/hardhat

# running normal node alice (separate terminal)
DEBUG="hopr*" yarn run:hoprd:alice --environment hardhat-localhost

# running normal node bob (separate terminal)
DEBUG="hopr*" yarn run:hoprd:bob --environment hardhat-localhost

# fund all your nodes to get started
HOPR_ENVIRONMENT_ID=hardhat-localhost yarn run:faucet:all

Test

Unit testing

We use mocha for our tests. You can run our test suite across all packages using the following command:

yarn test

To run tests of a single package (e.g. hoprd) execute:

yarn --cwd packages/hoprd test

To run a single test suite (e.g. Identity) within a package (e.g. hoprd) execute:

yarn --cwd packages/hoprd test --grep "Identity"

In a similar fashion, our contracts can be tested in isolation. For now, you need to pass the file to be tested, as hardhat does not support --grep:

yarn test:contracts test/HoprChannels.spec.ts

In case a package you need to test is not included in our package.json, please feel free to update it as needed.

Test-driven development

To make sure we add the least amount of untested code to our codebase, all code should, whenever possible, come accompanied by a test. To do so, locate the .spec (or equivalent) test file for your code. If it does not exist, create it alongside the code it will test.

Afterwards, write a failing test for your feature. For example, the following commit added a test for a then non-existent feature, and the immediately following commit provided the actual feature for that test. Repeat this process for all the code you add to our codebase.

(The code was pushed as an example; ideally, you only push code that has passing tests on your machine, so as to avoid overusing our CI pipeline with known broken tests.)

GitHub Actions CI

We run a fair amount of automation using GitHub Actions. To ease development of these workflows, one can use act to run them locally in a Docker environment.

E.g. running the build workflow:

act -j build

For more information please refer to act's documentation.

End-to-End Testing

Running Tests Locally

End-to-end testing is usually performed by the CI, but can also be performed locally by executing:

./scripts/run-integration-tests-source.sh

Read the full help information of the script in case of questions:

./scripts/run-integration-tests-source.sh --help

That command will spawn multiple hoprd nodes locally from the local source code and run the tests against this cluster of nodes. The tests can be found in the files test/*.sh. The script will clean up all nodes once completed, unless instructed otherwise.

An alternative to using the local source code is running the tests against an NPM package.

./scripts/run-integration-tests-npm.sh

If no parameter is given, the NPM package corresponding to the most recent Git tag will be used; otherwise the first parameter is used as the NPM package version to test.
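For example, to test a specific published package version (the version shown is illustrative, not a recommendation):

```shell
# Run the integration tests against a pinned NPM package version (example version)
./scripts/run-integration-tests-npm.sh 1.84.0
```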

Read the full help information of the script in case of questions:

./scripts/run-integration-tests-npm.sh --help

Running Tests on Google Cloud Platform

In some cases, bugs might not have been picked up by our end-to-end testing and instead only show up in production. To avoid discovering these only after a time-consuming build, a cluster of nodes can be deployed to Google Cloud Platform and used to run tests against.

A requirement for this setup is a working gcloud configuration locally. The easiest approach is to authenticate with gcloud auth login.

The cluster creation and tests can be run with:

FUNDING_PRIV_KEY=mysecretaccountprivkey \
  ./scripts/run-integration-tests-gcloud.sh

The given account private key is used to fund the test nodes so they can perform all operations throughout the tests. The account must therefore have sufficient funds available.

The tests instantiated by this script also include nodes behind NAT.

Read the full help information of the script in case of questions:

./scripts/run-integration-tests-gcloud.sh --help

Deploy

The deployment of nodes and networks is mostly orchestrated through the script files in scripts/, which are executed by the GitHub Actions CI workflows. Therefore, all common and minimal networks do not require manual steps to be deployed.

Using Google Cloud Platform

However, sometimes it is useful to deploy additional nodes or specific versions of hoprd. To accomplish that, it is possible to create a cluster on GCP using the following script:

./scripts/setup-gcloud-cluster.sh my-custom-cluster-without-name

Read the full help information of the script in case of questions:

./scripts/setup-gcloud-cluster.sh --help

The script requires a few environment variables to be set, but will inform the user if one is missing. It will create a cluster of 6 nodes. By default these nodes will use the latest Docker image of hoprd and run on the Goerli network. Different versions and different target networks can be configured through the parameters and environment variables.

To launch nodes using the xDai network one would execute (with the placeholders replaced accordingly):

HOPRD_API_TOKEN="<ADMIN_AUTH_HTTP_TOKEN>" \
HOPRD_PASSWORD="<IDENTITY_FILE_PASSWORD>" \
  ./scripts/setup-gcloud-cluster.sh environment "" my-custom-cluster-without-name

A previously started cluster can be destroyed, which includes all running nodes, by using the same script but setting the cleanup switch:

HOPRD_PERFORM_CLEANUP=true \
  ./scripts/setup-gcloud-cluster.sh environment "" my-custom-cluster-without-name

The default Docker image in scripts/setup-gcloud-cluster.sh deploys GCloud public nodes. If you wish to deploy GCloud nodes that are behind NAT, you need to specify a NAT-variant of the hoprd image (note the -nat suffix in the image name):

HOPRD_PERFORM_CLEANUP=true \
  ./scripts/setup-gcloud-cluster.sh environment "" my-nat-cluster gcr.io/hoprassociation/hoprd-nat

Note that if the Docker image version is not specified, the script will use the environment as version.
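To pin a specific image version instead, pass the fully-tagged image as the fourth parameter (the lisbon tag here is the release tag mentioned under Install via Docker; substitute whichever tag you need):

```shell
# Pin the hoprd image tag explicitly instead of relying on the environment default
./scripts/setup-gcloud-cluster.sh environment "" my-custom-cluster-without-name gcr.io/hoprassociation/hoprd:lisbon
```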

Using Google Cloud Platform and a Default Topology

The creation of a hoprd cluster on GCP can be enhanced by providing a topology script to the creation script:

./scripts/setup-gcloud-cluster.sh \
  my-custom-cluster-without-name \
  gcr.io/hoprassociation/hoprd:lisbon \
  `pwd`/scripts/topologies/full_interconnected_cluster.sh

After the normal cluster creation the topology script will then open channels between all nodes so they are fully interconnected. Custom topology scripts can be easily added and used in the same manner. Refer to the referenced scripts as a guideline on how to get started.

Tooling

As some tools are only partially supported, please tag the respective team member whenever you open an issue about a particular tool.

Maintainer          Technology
@jjperezaguinaga    Visual Code
@tolbrino           Nix

Contact

License

GPL v3 © HOPR Association
