Commit: Merge branch 'master' of https://github.com/kozodoi/Fairness
Showing 43 changed files with 815 additions and 1,038 deletions.
.gitignore
@@ -5,3 +5,5 @@ inst/doc
 .Rhistory
 .RData
 .Ruserdata
+.DS_Store
+cran-comments.md
DESCRIPTION
@@ -1,25 +1,16 @@
 Package: fairness
 Title: Algorithmic Fairness Metrics
-Version: 1.0.1
-Authors@R: c(person("Nikita", "Kozodoi", email = "[email protected]", role = c("aut", "cre")),
-    person("Tibor", "V. Varga", email = "[email protected]", role = c("aut"), comment = c(ORCID = "0000-0002-2383-699X")))
-Maintainer: Nikita Kozodoi <[email protected]>
-Description: Offers various metrics of algorithmic fairness. Fairness in machine learning is an emerging
-    topic with the overarching aim to critically assess algorithms (predictive and classification models) whether
-    their results reinforce existing social biases. While unfair algorithms can propagate such biases and offer
-    prediction or classification results with a disparate impact on various sensitive subgroups of populations (defined
-    by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities), fair algorithms possess
-    the underlying foundation that these groups should be treated similarly / should have similar outcomes. The fairness
-    R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population
-    subgroups. These methods are described by Calders and Verwer (2010) <doi:10.1007/s10618-010-0190-x>, Chouldechova
-    (2017) <doi:10.1089/big.2016.0047>, Feldman et al. (2015) <doi:10.1145/2783258.2783311>, Friedler et al.
-    (2018) <doi:10.1145/3287560.3287589> and Zafar et al. (2017) <doi:10.1145/3038912.3052660>. The package also
-    offers convenient visualizations to help understand fairness metrics.
+Version: 1.1.0
+Authors@R: c(person('Nikita', 'Kozodoi', email = '[email protected]', role = c('aut', 'cre')),
+    person('Tibor', 'V. Varga', email = '[email protected]', role = c('aut'), comment = c(ORCID = '0000-0002-2383-699X')))
+Maintainer: Nikita Kozodoi <[email protected]>
+Description: Offers various metrics of algorithmic fairness. Fairness in machine learning is an emerging topic with the overarching aim to critically assess algorithms (predictive and classification models) whether their results reinforce existing social biases. While unfair algorithms can propagate such biases and offer prediction or classification results with a disparate impact on various sensitive subgroups of populations (defined by sex, gender, ethnicity, religion, income, socioeconomic status, physical or mental disabilities), fair algorithms possess the underlying foundation that these groups should be treated similarly / should have similar outcomes. The fairness R package offers the calculation and comparisons of commonly and less commonly used fairness metrics in population subgroups. These methods are described by Calders and Verwer (2010) <doi:10.1007/s10618-010-0190-x>, Chouldechova (2017) <doi:10.1089/big.2016.0047>, Feldman et al. (2015) <doi:10.1145/2783258.2783311>, Friedler et al. (2018) <doi:10.1145/3287560.3287589> and Zafar et al. (2017) <doi:10.1145/3038912.3052660>. The package also offers convenient visualizations to help understand fairness metrics.
 License: MIT + file LICENSE
 Language: en-US
 Encoding: UTF-8
 LazyData: true
-RoxygenNote: 6.1.1
-BugReports: https://github.com/kozodoi/Fairness/issues
+RoxygenNote: 7.1.0
+BugReports: https://github.com/kozodoi/fairness/issues
 Depends: R (>= 3.5.0)
 Imports:
     caret,
LICENSE
@@ -1,2 +1,2 @@
-YEAR: 2019
-COPYRIGHT HOLDER: Nikita Kozodoi
+YEAR: 2020
+COPYRIGHT HOLDER: Nikita Kozodoi
NEWS.md
@@ -1,5 +1,16 @@
+# fairness 1.1.0
+- fixed `outcome_levels` issue when levels of provided predictions do not match outcome levels
+- renamed `outcome_levels` to `preds_levels` to improve clarity
+- added `outcome_base` argument to set base level for target variable used to compute fairness metrics
+- fixed `fnr_parity()` and `fpr_parity()` calculations for different outcome bases
+- updates in package documentation
+
+# fairness 1.0.2
+- small fixes in documentation
+
 # fairness 1.0.1
-CRAN resubmission of fairness. Fix of DESCRIPTION and LICENSE files.
+- CRAN resubmission of fairness
+- fix of `DESCRIPTION` and `LICENSE` files
 
 # fairness 1.0.0
-The first stable version of fairness.
+- the first stable version of fairness