A personal ranking tool for DID methods registered at W3C
Why shed light on the quality and status of the various DID methods?
- it's an essential topic for personal freedom
- actual development activity is not transparent
- it's a complex and extensive field
How could the tool be used and what are the expected results?
- Filter and select DID methods for your organisational or personal use
- Choose based on objective criteria, scores and personal weights related to your use case
- A quick way to skip lengthy assessments
A list of objective criteria has been structured into a spreadsheet. The spreadsheet offers the ability to rank projects (methods) in the DID registry, based on objective scores (comply or explain) and subjective weights (opinion).
- GitHub / Bitbucket activity: code, people, issues, wiki, forks, branches (counts and dates; see the sketch after this list)
- (de)centralisation: any intermediation? (steering groups, ownership, investors, patents, licenses)
- Recent communication (number and date of interactions on social media about the method)
- Timeline and roadmap (is one present? has it been adhered to?)
- Quality of the method description: completeness, interoperability, creativity, uniqueness
- Ease of use of the method (operational status, web service for testing, sandbox, examples, blockchains / DLTs available to create the DIDs)
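As an illustration of how the GitHub activity criterion could be collected, here is a minimal sketch using the public GitHub REST API. The owner and repository names are hypothetical placeholders; substitute the repository listed in a method's registry entry. Which fields you map to a score, and how, is up to the assessor.

```python
# Sketch: pull basic GitHub activity numbers for a DID method's repository.
# Uses only the public repos endpoint of the GitHub REST API; no token is
# needed for low-volume, unauthenticated requests.
import requests

def github_activity(owner: str, repo: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}")
    resp.raise_for_status()
    data = resp.json()
    return {
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "stars": data["stargazers_count"],
        "has_wiki": data["has_wiki"],
        "last_push": data["pushed_at"],  # date of the most recent push
    }

# Hypothetical repository; replace with a real DID method implementation.
print(github_activity("example-org", "did-method-example"))
```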
We'll provide default scores for the registered methods against these criteria and use a spreadsheet to calculate marks. We will add the timestamp of our assessment and explain each score. Any user can alter these scores and weights.
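A minimal sketch of that mark calculation, assuming a 0-10 objective score per criterion and integer subjective weights; the method names, scores, and weights below are invented for illustration. In the spreadsheet this is essentially a SUMPRODUCT divided by the sum of the weights.

```python
# Sketch of the spreadsheet's mark calculation: each method has an objective
# score per criterion (comply or explain), the user sets a subjective weight
# per criterion, and the mark is the weighted average. All values are made up.
CRITERIA = ["activity", "decentralisation", "communication",
            "roadmap", "description", "ease_of_use"]

# Default objective scores per method (0 = worst, 10 = best).
scores = {
    "did:example-a": {"activity": 8, "decentralisation": 6, "communication": 7,
                      "roadmap": 5, "description": 9, "ease_of_use": 4},
    "did:example-b": {"activity": 3, "decentralisation": 9, "communication": 2,
                      "roadmap": 6, "description": 5, "ease_of_use": 7},
}

# Subjective weights: tune these to your own use case.
weights = {"activity": 2, "decentralisation": 3, "communication": 1,
           "roadmap": 1, "description": 2, "ease_of_use": 2}

def mark(method: str) -> float:
    """Weighted average of a method's scores across all criteria."""
    total = sum(scores[method][c] * weights[c] for c in CRITERIA)
    return total / sum(weights.values())

# Rank methods from best to worst mark.
for m in sorted(scores, key=mark, reverse=True):
    print(f"{m}: {mark(m):.2f}")
```

Raising the weight of, say, decentralisation immediately reorders the ranking, which is the point of separating objective scores from subjective weights.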
Did we miss any important criteria? Do you think we should skip one or more of the criteria we've listed?
Feel free to contribute.