From bc5695eecc837e25cb2db515b047b0dd58ff5d8b Mon Sep 17 00:00:00 2001
From: Henk van Cann
Date: Tue, 7 Jul 2020 13:43:43 +0200
Subject: [PATCH] Create README.md initial file

---
 README.md | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 README.md

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..8c6b5ed
--- /dev/null
+++ b/README.md
@@ -0,0 +1,35 @@
+# DID_matrix
+
+A personal ranking tool for DID methods [registered at W3C](https://w3c-ccg.github.io/did-method-registry/).
+
+## Philosophy
+
+Why are we shedding light on the quality and status of the various DID methods?
+* It is an essential topic for personal freedom.
+* Real developments are not transparent.
+* It is a complex and extensive field.
+
+How could the tool be used, and what are the expected results?
+* Filter and select DID methods for your organisational or personal use.
+* Choose based on objective criteria, scores and personal weights related to your use case.
+* Get a quick result that skips lengthy assessments.
+
+## How it works
+A list of objective criteria has been structured into a spreadsheet. The spreadsheet offers the ability to rank the projects (methods) in the DID registry, based on objective `scores` (comply or explain) and subjective `weights` (opinion).
+
+## Main categories
+A. GitHub / Bitbucket activity: code, people, issues, wiki, forks, branches (numbers and dates)
+B. (De)centralisation: any intermediation? (steering groups, ownership, investors, patents, licenses)
+C. Recent communication: number and date of interactions on social media about the method
+D. Timeline and roadmap: is any present? Has it been stuck to?
+E. Quality of the method description: completeness, interoperability, creativity, uniqueness
+F. Ease of use of the method: operational status, a web service for testing, a sandbox, examples, blockchains / DLTs available to create the DIDs
+
+## Default scores and timestamps
+We'll provide default scores for the registered methods against these criteria and use a spreadsheet to calculate the marks. We will add the timestamp of our assessment and explain each score. Any user can alter these scores and weights.
+
+## Help us improve the ranking tool DID_matrix
+Did we miss out on important criteria? Do you think we should skip one or more of the criteria we've listed?
+
+**Feel free to contribute!**
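The ranking described in the README (objective `scores` combined with subjective `weights`) is done in a spreadsheet; as a rough illustration outside the patch, the same calculation can be sketched in code. All names, criteria labels, score ranges, and example values below are hypothetical, not part of the actual tool.

```python
# Hypothetical sketch of the DID_matrix ranking idea: each method gets
# objective scores per criterion (here 0-5), and the user supplies
# subjective weights per criterion. Criteria loosely follow categories A-F.

CRITERIA = ["activity", "decentralisation", "communication",
            "roadmap", "description_quality", "ease_of_use"]

def rank(methods, weights):
    """Return (name, scores) pairs sorted by weighted total, highest first.

    methods -- dict: method name -> dict of criterion -> score (0-5)
    weights -- dict: criterion -> user's weight (missing criteria weigh 1)
    """
    def total(scores):
        return sum(scores[c] * weights.get(c, 1) for c in CRITERIA)
    return sorted(methods.items(), key=lambda kv: total(kv[1]), reverse=True)

# Made-up example scores for two registered methods:
example = {
    "did:btcr": {"activity": 3, "decentralisation": 5, "communication": 2,
                 "roadmap": 2, "description_quality": 4, "ease_of_use": 2},
    "did:web":  {"activity": 4, "decentralisation": 1, "communication": 3,
                 "roadmap": 3, "description_quality": 4, "ease_of_use": 5},
}
# A user who values decentralisation and ease of use most:
weights = {"decentralisation": 10, "ease_of_use": 5}
for name, _ in rank(example, weights):
    print(name)
```

With these weights the strongly decentralised method comes out on top; shifting the weights towards ease of use would reverse the order, which is exactly the "personal" part of the ranking.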