RedPajama-Data: An Open Source Recipe to Reproduce the LLaMA Training Dataset

This repo contains a reproducible data recipe for the RedPajama data, with the following token counts:

Dataset          Token Count
Commoncrawl      878 Billion
C4               175 Billion
GitHub            59 Billion
Books             26 Billion
ArXiv             28 Billion
Wikipedia         24 Billion
StackExchange     20 Billion
Total            1.2 Trillion
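
As a quick arithmetic check, the per-dataset counts sum to the stated total. A minimal sketch (counts in billions of tokens, taken from the table above):

# Per-dataset token counts, in billions, from the table above.
counts = {
    "Commoncrawl": 878,
    "C4": 175,
    "GitHub": 59,
    "Books": 26,
    "ArXiv": 28,
    "Wikipedia": 24,
    "StackExchange": 20,
}
total = sum(counts.values())
print(f"{total} billion tokens = {total / 1000:.1f} trillion")  # 1210 billion = 1.2 trillion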

Data Preparation

In data_prep, we provide all pre-processing scripts and guidelines.
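
The processed data is stored as jsonl files. A minimal sketch of reading one shard, assuming the usual layout with "text" and "meta" fields per record; the file name here is hypothetical:

import json

# Iterate over one processed shard (hypothetical file name),
# one json record per line.
with open("wikipedia_en.jsonl") as f:
    for line in f:
        record = json.loads(line)
        text = record["text"]          # the raw document text
        meta = record.get("meta", {})  # per-source metadata, if present
        print(len(text), meta)
        break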

Tokenization

In tokenization, we provide an example of how to tokenize the dataset using the GPT-NeoX tokenizer.
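
For reference, a minimal sketch (not the repo's own script) that loads the GPT-NeoX tokenizer via Hugging Face transformers and tokenizes a sample string:

from transformers import AutoTokenizer

# Load the GPT-NeoX tokenizer from the Hugging Face hub.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

# Encode a sample string into token ids.
ids = tokenizer("RedPajama is a 1.2 trillion token dataset.")["input_ids"]
print(len(ids), ids[:8])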

Visualization

In viz, we provide a dashboard for exploring a subset of the data using Meerkat.
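
As a rough sketch of loading such a subset into Meerkat, assuming a local jsonl sample (the file name is hypothetical) and a Meerkat version that exposes DataFrame.from_pandas; the dashboard in viz drives the actual interactive UI on top of a DataFrame like this:

import pandas as pd
import meerkat as mk

# Read a jsonl sample into pandas, then wrap it in a Meerkat DataFrame
# (assumption: the installed Meerkat version provides from_pandas).
subset = pd.read_json("sample.jsonl", lines=True)  # hypothetical file name
df = mk.DataFrame.from_pandas(subset)
print(df.columns)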

License

The code in this repo is licensed under the Apache 2.0 license. Unless otherwise noted:

Copyright 2023 Together Computer, ETH Zürich, Stanford University

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

The file data_prep/book/dedup.py was co-developed with Ontocord.ai.

Copyright 2023 Ontocord.ai, Together Computer, ETH Zürich, Stanford University

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

For the dataset itself, please refer to the licenses of the data subsets you use.

For full terms, see the LICENSE file. If you have any questions, comments, or concerns about licensing, please contact us.

To cite RedPajama, please use:

@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = {April},
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}

Acknowledgement

We appreciate the work done by the growing open-source AI community that made this project possible.
