Merge pull request jtleek#3 from lcolladotor/master
Fixed typos, added links
jtleek committed Nov 18, 2013
2 parents 041e2a4 + a3961cc commit 262a91d
This is a guide for anyone who needs to share data with a statistician. The target audiences I have in mind are:
* Junior statistics students whose job it is to collate/clean data sets

The goals of this guide are to provide some instruction on the best way to share data to avoid the most common pitfalls
and sources of delay in the transition from data collection to data analysis. The [Leek group](http://biostat.jhsph.edu/~jleek/) works with a large
number of collaborators and the number one source of variation in the speed to results is the status of the data
when they arrive at the Leek group. Based on my conversations with other statisticians, this is true nearly universally.

Expand Down Expand Up @@ -38,37 +38,37 @@ Let's look at each part of the data package you will transfer.
It is critical that you include the rawest form of the data that you have access to. Here are some examples of the
raw form of data:

* The strange [binary file](http://en.wikipedia.org/wiki/Binary_file) your measurement machine spits out
* The unformatted Excel file with 10 worksheets the company you contracted with sent you
* The complicated [JSON](http://en.wikipedia.org/wiki/JSON) data you got from scraping the [Twitter API](https://twitter.com/twitterapi)
* The hand-entered numbers you collected looking through a microscope

You know the raw data is in the right format if you:

1. Ran no software on the data
2. Did not manipulate any of the numbers in the data
3. Did not remove any data from the data set
4. Did not summarize the data in any way

If you did any manipulation of the data at all, it is not the raw form of the data. Reporting manipulated data
as raw data is a very common way to slow down the analysis process, since the analyst will often have to do a
forensic study of your data to figure out why the raw data looks weird.

### The tidy data set

The general principles of tidy data are laid out by [Hadley Wickham](http://had.co.nz/) in [this paper](http://vita.had.co.nz/papers/tidy-data.pdf)
and [this video](http://vimeo.com/33727555). The paper and the video are both focused on the [R](http://www.r-project.org/) package, which you
may or may not know how to use. Regardless, the four general principles you should pay attention to are:

1. Each variable you measure should be in one column
2. Each different observation of that variable should be in a different row
3. There should be one table for each "kind" of variable
4. If you have multiple tables, they should include a column in the table that allows them to be linked
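As a concrete sketch of these principles, here is a hypothetical pair of tidy tables in Python. The patient IDs, variable names, and values are all made up for illustration: one table per "kind" of variable, linked by a shared `PatientID` column.

```python
# Table 1: clinical/demographic variables -- one column per variable,
# one row per observation (patient).
clinical = [
    {"PatientID": 1, "AgeAtDiagnosis": 54, "Treatment": "drug_a"},
    {"PatientID": 2, "AgeAtDiagnosis": 61, "Treatment": "placebo"},
]

# Table 2: expression measurements -- a different "kind" of variable,
# so it lives in its own table, linked back via PatientID.
expression = [
    {"PatientID": 1, "Gene": "BRCA1", "Counts": 120},
    {"PatientID": 1, "Gene": "TP53", "Counts": 89},
    {"PatientID": 2, "Gene": "BRCA1", "Counts": 97},
]

# The shared key lets an analyst join the tables later.
by_id = {row["PatientID"]: row for row in clinical}
for measurement in expression:
    linked = {**measurement, **by_id[measurement["PatientID"]]}
    print(linked)
```

Because every measurement row carries the linking key, the join is unambiguous no matter which tool the analyst uses.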

While these are the hard and fast rules, there are a number of other things that will make your data set much easier
to handle. The first is to include a row at the top of each data table/spreadsheet that contains full column names.
So if you measured age at diagnosis for patients, you would head that column with the name `AgeAtDiagnosis` instead
of something like `ADx` or another abbreviation that may be hard for another person to understand.


Here is an example of how this would work from genomics. Suppose that for 20 people you have collected gene expression measurements with
is summarized at the level of the number of counts per exon. Suppose you have 100,000 exons; then you would have one
table/spreadsheet that has 21 rows (a row for gene names, and one row for each patient) and 100,001 columns (one column
for the patient ids and one column for each exon).

If you are sharing your data with the collaborator in Excel, the tidy data should be in one Excel file per table. They
should not have multiple worksheets, no macros should be applied to the data, and no columns/cells should be highlighted.
Alternatively share the data in a [CSV](http://en.wikipedia.org/wiki/Comma-separated_values) or [TAB-delimited](http://en.wikipedia.org/wiki/Tab-separated_values) text file.
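A minimal sketch of writing one such table with Python's standard `csv` module, which handles both the comma- and TAB-delimited variants; the file names and values here are made up:

```python
import csv

# Hypothetical tidy table: a header row of full column names,
# then one row per patient. No worksheets, macros, or highlighting.
rows = [
    ["PatientID", "AgeAtDiagnosis", "Treatment"],
    [1, 54, "drug_a"],
    [2, 61, "placebo"],
]

# One file per table, written as CSV...
with open("clinical.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# ...and the same table as TAB-delimited text.
with open("clinical.txt", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)
```

Either plain-text format can be opened by any analysis tool, which is exactly why they are safer than a formatted Excel workbook.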


### The code book
For almost any data set, the measurements you calculate will need to be described in more detail than you will sneak
into the spreadsheet. The code book contains this information. At minimum it should contain:

1. Information about the variables (including units!) in the data set not contained in the tidy data
2. Information about the summary choices you made
3. Information about the experimental study design you used

In our genomics example, the analyst would want to know what the unit of measurement for each
clinical/demographic variable is (age in years, treatment by name/dose, level of diagnosis and how heterogeneous). They
would also want to know any other information about how you did the data collection. For example,
are these the first 20 patients that walked into the clinic? Are they 20 highly selected patients by some characteristic
like age? Are they randomized to treatments?

A common format for this document is a Word file. There should be a section called "Study design" that has a thorough
description of how you collected the data. There is a section called "Code book" that describes each variable and its
units.

### How to code variables

When you put variables into a spreadsheet, there are several main categories you will run into, depending on their [data type](http://en.wikipedia.org/wiki/Statistical_data_type):

1. Continuous
2. Ordinal
3. Categorical
4. Missing
5. Censored

Continuous variables are anything measured on a quantitative scale that could be any fractional number. An example
would be something like weight measured in kg. [Ordinal data](http://en.wikipedia.org/wiki/Ordinal_data) are data that have a fixed, small (< 100) number of levels but are ordered.
This could be for example survey responses where the choices are: poor, fair, good. [Categorical data](http://en.wikipedia.org/wiki/Categorical_variable) are data where there
are multiple categories, but they aren't ordered. One example would be sex: male or female. [Missing data](http://en.wikipedia.org/wiki/Missing_data) are data
that are missing and you don't know the mechanism. You should code missing values as `NA`. [Censored data](http://en.wikipedia.org/wiki/Censoring_(statistics)) are data
where you know the missingness mechanism on some level. Common examples are a measurement being below a detection limit
or a patient being lost to follow-up. They should also be coded as `NA` when you don't have the data. But you should
also add a new column to your tidy data called "VariableNameCensored" which should have values of `TRUE` if censored
and `FALSE` if not. In the code book you should explain why those values are missing. It is absolutely critical to report
to the analyst if there is a reason you know about that some of the data are missing. You should also not [impute](http://en.wikipedia.org/wiki/Imputation_(statistics))/make up/
throw away missing observations.
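Here is a small hypothetical sketch of this coding scheme in Python, using a made-up lab measurement with a detection limit (the variable names, values, and limit are illustrative, not from the guide):

```python
# Values below the detection limit are censored: we know *why* they are
# missing, so they get NA (None here) plus a companion TRUE/FALSE column.
DETECTION_LIMIT = 0.05

raw_values = [0.21, 0.02, 0.47, 0.01]  # made-up lab measurements

tidy = []
for value in raw_values:
    censored = value < DETECTION_LIMIT
    tidy.append({
        # NA for anything below the detection limit -- never a sentinel like -999
        "LabValue": None if censored else value,
        # the flag tells the analyst the missingness mechanism
        "LabValueCensored": censored,
    })

for row in tidy:
    print(row)
```

The code book would then explain that `LabValueCensored = TRUE` means the measurement fell below the assay's detection limit.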

In general, try to avoid coding categorical or ordinal variables as numbers. When you enter the value for sex in the tidy
Expand All @@ -138,17 +139,17 @@ That means, when you submit your paper, the reviewers and the rest of the world
the analyses from raw data all the way to final results. If you are trying to be efficient, you will likely perform
some summarization/data analysis steps before the data can be considered tidy.

The ideal thing for you to do when performing summarization is to create a computer script (in `R`, `Python`, or something else)
that takes the raw data as input and produces the tidy data you are sharing as output. You can try running your script
a couple of times and see if the code produces the same output.
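A minimal sketch of such a script in Python, under assumed file names and an assumed two-column raw layout; hashing the tidy output makes the "run it twice, same output" check concrete:

```python
import csv
import hashlib

def make_tidy(raw_path, tidy_path):
    """Read the raw file, write the tidy table, return a hash of the output."""
    with open(raw_path) as raw, open(tidy_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["PatientID", "Weight"])
        for line in raw:
            patient_id, weight = line.split()  # assumed raw layout: "id weight"
            writer.writerow([patient_id, weight])
    with open(tidy_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Create a toy raw file, then run the script twice and compare hashes.
with open("raw.txt", "w") as f:
    f.write("1 70.2\n2 81.5\n")

first = make_tidy("raw.txt", "tidy.csv")
second = make_tidy("raw.txt", "tidy.csv")
assert first == second  # same input, same output
```

If the two hashes ever differ, the script has a non-deterministic step the analyst needs to know about.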

In many cases, the person who collected the data has an incentive to make it tidy for a statistician to speed the process
of collaboration. They may not know how to code in a scripting language. In that case, what you should provide the statistician
is something called [pseudocode](http://en.wikipedia.org/wiki/Pseudocode). It should look something like:

1. Step 1 - take the raw file, run version 3.1.2 of summarize software with parameters a=1, b=2, c=3
2. Step 2 - run the software separately for each sample
3. Step 3 - take column three of outputfile.out for each sample and that is the corresponding row in the output data set
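Step 3 above could be sketched in Python as follows; the sample names and the contents of the `.out` files are fabricated stand-ins for whatever the summarize software actually produces:

```python
# Fabricate one output file per sample, standing in for the real
# per-sample output of the summarize software.
samples = ["sampleA", "sampleB"]

for name in samples:
    with open(f"{name}.out", "w") as f:
        f.write("exon1 10 0.5\nexon2 12 0.7\n")  # column three is the value we want

# Step 3: column three (index 2) of every line in each sample's output
# file becomes that sample's row in the combined data set.
combined = []
for name in samples:
    with open(f"{name}.out") as f:
        row = [line.split()[2] for line in f]
        combined.append([name] + row)

print(combined)
# → [['sampleA', '0.5', '0.7'], ['sampleB', '0.5', '0.7']]
```

Writing pseudocode precisely enough that it translates to a short script like this is exactly what makes the summarization reproducible.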

You should also include information about which system (Mac/Windows/Linux) you used the software on and whether you
tried it more than once to confirm it gave the same results. Ideally, you will run this by a fellow student/labmate
checks.
You should then expect from the statistician:

1. An analysis script that performs each of the analyses (not just instructions)
2. The exact computer code they used to run the analysis
3. All output files/figures they generated.

This is the information you will use in the supplement to establish reproducibility and precision of your results. Each
of the steps in the analysis should be clearly explained and you should ask questions when you don't understand
to explain why the statistician performed each step to a labmate/your principal investigator.
Contributors
====================

* [Jeff Leek](http://biostat.jhsph.edu/~jleek/) - Wrote the initial version.
* [L. Collado-Torres](http://bit.ly/LColladoTorres) - Fixed typos, added links.

