
Commit

merged upstream
Signed-off-by: root <root@zhadong-4mhn8-8819-worker-0.yanzhaodong.baaishare-sailing.svc.kubebrain.local>
root committed Oct 24, 2022
2 parents 0ec4280 + a9a4e91 commit 35b7abd
Showing 82 changed files with 120 additions and 100 deletions.
Empty file modified CHANGELOG.md (100644 → 100755)
Empty file modified CLA.md (100644 → 100755)
Empty file modified CODE_OF_CONDUCT.md (100644 → 100755)
Empty file modified COMMITTERS.csv (100644 → 100755)
Empty file modified CONTRIBUTING.md (100644 → 100755)
Empty file modified GOVERNANCE.md (100644 → 100755)
Empty file modified LICENSE (100644 → 100755)
Empty file modified README.md (100644 → 100755)
Empty file modified README_zh.md (100644 → 100755)
Empty file modified SUPPORT.md (100644 → 100755)
Empty file modified doc_zh/APPENDIX_GLM_IO.md (100644 → 100755)
Empty file modified doc_zh/APPENDIX_TASK.md (100644 → 100755)
Empty file modified doc_zh/Advanced_Usage.md (100644 → 100755)
Empty file modified doc_zh/DATASET_EXAMPLE.md (100644 → 100755)
Empty file modified doc_zh/GLM.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_10_METATRON.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_11_GLM_BLANK_FILLING_QA.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_12_GLM_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_13_GLM_EXAMPLE_PEOTRY_GENERATION.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_14_HUGGINGFACE_T5.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_15_BERT_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_16_BERT_EXAMPLE_SEMANTIC_MATCHING.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_17_BERT_EXAMPLE_NER.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_18_GPT2_WRITING.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_19_T5_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
3 changes: 2 additions & 1 deletion doc_zh/TUTORIAL_1_TOKENIZER.md
100644 → 100755
@@ -16,7 +16,8 @@ from flagai.data.tokenizer import Tokenizer
model_name = "GLM-large-ch"
tokenizer = Tokenizer.from_pretrained(model_name)
```
At this step, the vocab files in the model hub are automatically downloaded to the path specified by the `cache_dir` parameter, which defaults to `~/.cache/FlagAI/{model_name}`.

At this step, the vocab files in the model hub are automatically downloaded to the path specified by the `cache_dir` parameter. It defaults to the `./checkpoints/{model_name}` directory.


## Applying a tokenizer
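The hunk above changes the documented default vocab location from the user cache to `./checkpoints/{model_name}`. A minimal sketch of overriding that location explicitly, using only the `cache_dir` parameter the doc describes (the path itself is illustrative):

```python
from flagai.data.tokenizer import Tokenizer

# Illustrative override; any writable directory works the same way.
tokenizer = Tokenizer.from_pretrained("GLM-large-ch",
                                      cache_dir="./my_checkpoints/GLM-large-ch")
```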
Empty file modified doc_zh/TUTORIAL_20_GLM_TNEWS.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_20_SUPPORTED_TASKS_backup.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_2_DATASET.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_3_MODEL.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_4_TRAINER.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_5_INSTRUCTIONS_FOR_AutoLoader.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_6_INSTRUCTIONS_FOR_PREDICTOR.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_7_PROMPT_LEARNING.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_8_ENVIRONMENT_SETUP.md (100644 → 100755)
Empty file modified doc_zh/TUTORIAL_9_SEQ2SEQ_METHOD.md (100644 → 100755)
Empty file modified doc_zh/tokenization.md (100644 → 100755)
Empty file modified docs/APPENDIX_GLM_IO.md (100644 → 100755)
Empty file modified docs/Advanced_Usage.md (100644 → 100755)
Empty file modified docs/DATASET_EXAMPLE.md (100644 → 100755)
Empty file modified docs/GLM.md (100644 → 100755)
Empty file modified docs/QuickTour.md (100644 → 100755)
Empty file modified docs/TUTORIAL_10_MEGATRON.md (100644 → 100755)
Empty file modified docs/TUTORIAL_11_GLM_BLANK_FILLING_QA.md (100644 → 100755)
Empty file modified docs/TUTORIAL_12_GLM_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
Empty file modified docs/TUTORIAL_13_GLM_EXAMPLE_PEOTRY_GENERATION.md (100644 → 100755)
Empty file modified docs/TUTORIAL_14_HUGGINGFACE_T5.md (100644 → 100755)
Empty file modified docs/TUTORIAL_15_BERT_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
Empty file modified docs/TUTORIAL_16_BERT_EXAMPLE_SEMANTIC_MATCHING.md (100644 → 100755)
Empty file modified docs/TUTORIAL_17_BERT_EXAMPLE_NER.md (100644 → 100755)
Empty file modified docs/TUTORIAL_18_GPT2_WRITING.md (100644 → 100755)
Empty file modified docs/TUTORIAL_19_T5_EXAMPLE_TITLE_GENERATION.md (100644 → 100755)
2 changes: 1 addition & 1 deletion docs/TUTORIAL_1_TOKENIZER.md
100644 → 100755
@@ -26,7 +26,7 @@ from flagai.data.tokenizer import Tokenizer
model_name = "GLM-large-en"
tokenizer = Tokenizer.from_pretrained(model_name) # Load tokenizer
```
At this step, the vocab files from ModelHub will be automatically downloaded to the path specified in the `cache_dir` parameter. It defaults to the `~/.cache/FlagAI/{model_name}` directory.
At this step, the vocab files from ModelHub will be automatically downloaded to the path specified in the `cache_dir` parameter. It defaults to the `./checkpoints/{model_name}` directory.

## Applying a tokenizer
The tokenizer can be used to encode text to a list of token IDs, as well as decoding the token IDs to the original text.
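To complement the encode/decode sentence above, a hedged round-trip sketch; `encode_plus` and `decode` are the calls used elsewhere in this commit (see `flagai/model/predictor/gpt.py`), so treat this pairing as illustrative rather than canonical:

```python
from flagai.data.tokenizer import Tokenizer

tokenizer = Tokenizer.from_pretrained("GLM-large-en")

# Encode text to a list of token IDs, then decode the IDs back to text.
token_ids = tokenizer.encode_plus("hello world")["input_ids"]
print(token_ids)
print(tokenizer.decode(token_ids))
```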
Empty file modified docs/TUTORIAL_20_SUPPORTED_TASKS.md (100644 → 100755)
Empty file modified docs/TUTORIAL_2_DATASET.md (100644 → 100755)
Empty file modified docs/TUTORIAL_3_MODEL.md (100644 → 100755)
2 changes: 1 addition & 1 deletion docs/TUTORIAL_4_TRAINER.md
100644 → 100755
@@ -416,4 +416,4 @@ python -m torch.distributed.launch --nproc_per_node 2 --nnodes 1 --node_rank 0 -
### deepspeed
```commandline
python -m deepspeed.launcher.launch --master_addr=172.31.125.121 --master_port=17500 train.py --not_call_launch
```
```
Empty file modified docs/TUTORIAL_5_INSTRUCTIONS_FOR_AutoLoader.md (100644 → 100755)
Empty file modified docs/TUTORIAL_6_INSTRUCTIONS_FOR_PREDICTOR.md (100644 → 100755)
Empty file modified docs/TUTORIAL_7_PROMPT_LEARNING.md (100644 → 100755)
Empty file modified docs/TUTORIAL_8_ENVIRONMENT_SETUP.md (100644 → 100755)
Empty file modified docs/TUTORIAL_9_SEQ2SEQ_METHOD.md (100644 → 100755)
Empty file modified docs/img.png (100644 → 100755)
Empty file modified docs/tokenization.md (100644 → 100755)
1 change: 1 addition & 0 deletions examples/vit_cifar100/train_deepspeed.py
@@ -29,6 +29,7 @@
save_interval=1000,
num_checkpoints=1,
hostfile="./hostfile",
deepspeed_config='./deepspeed.json',
training_script="train_deepspeed.py"
)

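The newly added `deepspeed_config='./deepspeed.json'` argument points the trainer at a DeepSpeed configuration file. A minimal sketch of generating such a file; the values below are placeholder assumptions, not the settings shipped with this example:

```python
import json

# Placeholder DeepSpeed settings; tune batch size, precision, and ZeRO stage
# to your hardware. Only the file location matches the trainer argument above.
config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},
}

with open("./deepspeed.json", "w") as f:
    json.dump(config, f, indent=2)
```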
20 changes: 3 additions & 17 deletions flagai/auto_model/auto_loader.py
@@ -3,8 +3,6 @@
# Licensed under the Apache License, Version 2.0 (the "License")
import importlib
import os

from flagai.model.file_utils import _get_model_id, _get_vocab_path
import copy

class LazyImport(object):
@@ -162,28 +160,16 @@ def __init__(self,
)
return

model_id = _get_model_id(f"{raw_model_name}-{task_name}")
if model_id != 'null':
model_name_ = f"{raw_model_name}-{task_name}"
else:
model_name_ = raw_model_name
download_path = os.path.join(model_dir, model_name_)
os.makedirs(download_path, exist_ok=True)
self.model = getattr(LazyImport(self.model_name[0]),
self.model_name[1]).from_pretrain(
download_path=model_dir,
model_name=model_name_,
model_name=raw_model_name,
only_download_config=only_download_config,
device=device,
**kwargs)

try:
model_id = _get_model_id(model_name)
except:
print("Model hub is not reachable!")
model_id = -1

print("*"*20, task_name, model_id, model_name)
download_path = os.path.join(model_dir, raw_model_name)
print("*"*20, task_name, model_name)
if model_type == "mm" or model_type == "nlp":
tokenizer_class = getattr(LazyImport("flagai.data.tokenizer"),
"Tokenizer")
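After this change, `AutoLoader` hands the raw model name straight to `from_pretrain` instead of probing the model hub for a `{model}-{task}` variant. A hedged usage sketch; the task and model names are illustrative, and any pair FlagAI supports works the same way:

```python
from flagai.auto_model.auto_loader import AutoLoader

# Illustrative task/model pair and local checkpoint directory.
loader = AutoLoader(task_name="title-generation",
                    model_name="GLM-large-ch",
                    model_dir="./checkpoints")
model = loader.get_model()
tokenizer = loader.get_tokenizer()
```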
5 changes: 2 additions & 3 deletions flagai/data/tokenizer/uni_tokenizer/base_tokenizer.py
100644 → 100755
@@ -22,11 +22,10 @@ def from_pretrained(cls,
The directory that contains the vocab files, or will receive the downloaded vocab files
"""
if cache_dir is None:
cache_dir = os.path.join(os.path.dirname(__file__), 'vocabs', f"{tokenizer_model_name}")
# cache_dir = "/root/.cache/FlagAI/"+tokenizer_model_name
# cache_dir = os.path.join(os.path.dirname(__file__), 'vocabs', f"{tokenizer_model_name}")
cache_dir = './checkpoints/'+tokenizer_model_name
tokenizer_class = ""
# search the cache directory for certain files

if os.path.exists(cache_dir):
files = os.listdir(cache_dir)
if SP_MODEL_FILE in files:
28 changes: 17 additions & 11 deletions flagai/data/tokenizer/uni_tokenizer/tokenizer.py
100644 → 100755
@@ -51,7 +51,10 @@ def __init__(self,
super().__init__(**kwargs)

if self.tokenizer_class == "wp":
self.text_tokenizer = WordpieceTokenizer(self.vocab_file)
if self.tokenizer_model_name.lower().endswith("ch"):
self.text_tokenizer = WordpieceTokenizer(self.vocab_file, is_ch=True)
else:
self.text_tokenizer = WordpieceTokenizer(self.vocab_file)
elif self.tokenizer_class == "bpe":
if self.tokenizer_model_name.lower().startswith('clip'):
self.text_tokenizer = MMBPETokenizer(self.vocab_file, self.merges_file)
@@ -302,7 +305,7 @@ def get_command_id(self, name):
return self.command_name_map[name].Id

def rematch(self, text, tokens):
"""Give the mapping between the raw text and the tokens after tokenization
"""output the mapping relation between raw text and tokenized text
"""
text = text.lower()
normalized_text, char_mapping = '', []
@@ -325,28 +328,21 @@ def rematch(self, text, tokens):
end = start + len(token)
token_mapping.append(char_mapping[start:end])
offset = end

return token_mapping

@staticmethod
def _is_control(ch):
"""Check whether the character is a control character
"""
return unicodedata.category(ch) in ('Cc', 'Cf')

@staticmethod
def stem(token):
"""Get the "stem" of a token (if it starts with ##, the ## is stripped automatically)
"""
if token[:2] == '##':
return token[2:]
else:
return token

@staticmethod
def _is_special(ch):
"""Check whether the character is a symbol with special meaning
"""
return bool(ch) and (ch[0] == '[') and (ch[-1] == ']')

def _encode(self, text):
@@ -637,9 +633,19 @@ def tokenize_as_tensor(self, texts):
eot_token = self.get_command_id('eot')
return self.text_tokenizer.tokenize(texts, sot_token=sot_token, eot_token=eot_token)

def tokenize(self, text, maxlen=None, add_spatial_tokens=False):
tokens = self.text_tokenizer.tokenize(text)

if add_spatial_tokens:
tokens.insert(0, self.get_command_id('cls'))
tokens.append(self.get_command_id('sep'))

if maxlen is not None:
index = int(self.get_command_id('sep') is not None) + 1
self.truncate_sequence(maxlen, tokens, pop_index=-index)
return tokens


def tokenize(self, texts):
return self.text_tokenizer.tokenize(texts)
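The extended `tokenize(text, maxlen=..., add_spatial_tokens=...)` added in this hunk is exactly what the reworked `predict_ner` (later in this commit) relies on, typically paired with `rematch` to recover character spans. A hedged sketch of that pairing, assuming the extended signature is the one in effect; the model name is illustrative:

```python
from flagai.data.tokenizer import Tokenizer

tokenizer = Tokenizer.from_pretrained("GLM-large-en")

text = "FlagAI makes large models easy."
# add_spatial_tokens wraps the sequence with [CLS]/[SEP]; maxlen truncates.
tokens = tokenizer.tokenize(text, maxlen=32, add_spatial_tokens=True)

# rematch returns, per token, the character positions it covers in `text`
# (empty lists for special tokens), mirroring predict_ner's internals.
mapping = tokenizer.rematch(text, tokens)
print(list(zip(tokens, mapping)))
```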



9 changes: 6 additions & 3 deletions flagai/data/tokenizer/uni_tokenizer/wp_tokenizer.py
100644 → 100755
@@ -34,7 +34,7 @@ class WordpieceTokenizer(object):
def __init__(self, vocab_file=None, do_basic_tokenize=True,
do_lower_case=True, max_len=None,
never_split=("[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"),
unk_token="[UNK]", max_input_chars_per_word=100, *input, **kwargs):
unk_token="[UNK]", max_input_chars_per_word=100, is_ch=False, *input, **kwargs):
if not os.path.isfile(vocab_file):
raise ValueError(
"Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained "
@@ -51,6 +51,7 @@ def __init__(self, vocab_file=None, do_basic_tokenize=True,
self.max_len = max_len if max_len is not None else int(1e12)
self.unk_token = unk_token
self.max_input_chars_per_word = max_input_chars_per_word
self.is_ch = is_ch

@property
def vocab_size(self):
@@ -122,7 +123,6 @@ def tokenize(self, text, maxlen=None, add_spatial_tokens=False):
if maxlen is not None:
index = int(self._token_sep is not None) + 1
self.truncate_sequence(maxlen, split_tokens, pop_index=-index)
# print(f"split_tokens is {split_tokens}")
return split_tokens

def truncate_sequence(self,
@@ -168,7 +168,10 @@ def convert_ids_to_tokens(self, ids):

def convert_tokens_to_string(self, tokens, all_command_token={}):
"""Converts a sequence of tokens (string) in a single string."""
out_string = " ".join(tokens).replace(" ##", "").strip()
if self.is_ch:
out_string = "".join(tokens).replace(" ", "").strip()
else:
out_string = " ".join(tokens).replace(" ##", "").strip()
return out_string

def load_vocab(vocab_file):
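The new `is_ch` flag switches the detokenization join rule: Chinese vocabularies concatenate tokens and drop spaces, while the default path keeps the space-joined wordpiece convention. A self-contained sketch of the two rules as a standalone function (not the class method itself):

```python
def join_tokens(tokens, is_ch=False):
    # Mirrors WordpieceTokenizer.convert_tokens_to_string after this change.
    if is_ch:
        return "".join(tokens).replace(" ", "").strip()
    return " ".join(tokens).replace(" ##", "").strip()

print(join_tokens(["un", "##believ", "##able"]))         # unbelievable
print(join_tokens(["北", "京", "欢", "迎"], is_ch=True))  # 北京欢迎
```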
96 changes: 52 additions & 44 deletions flagai/model/base_model.py
100644 → 100755
@@ -47,18 +47,66 @@ def from_pretrain(cls,
device="cpu",
**kwargs):
model_id = None

# Try load model from local path
download_path = os.path.join(download_path, model_name)
config_path = os.path.join(download_path, "config.json")
checkpoint_path = os.path.join(download_path, "pytorch_model.bin")

def load_local(checkpoint_path):
model = cls.init_from_json(config_path, **kwargs)
model.to(device)
if os.getenv('ENV_TYPE') != 'deepspeed+mpu':
if os.path.exists(checkpoint_path):
model.load_weights(checkpoint_path)
elif os.getenv('ENV_TYPE') == 'deepspeed+mpu':
model_parallel_size = int(os.getenv("MODEL_PARALLEL_SIZE"))
if torch.distributed.is_initialized(
) and torch.distributed.get_rank() == 0:
# change the mp_size in rank 0
print(
"preparing the model weights for model parallel size = {:02d}"
.format(model_parallel_size))
from flagai.auto_model.auto_loader import MODEL_DICT
from flagai.mp_tools import change_pytorch_model_mp_from_1_to_n_new, check_pytorch_model_mp_size
if model_parallel_size > 1 and not check_pytorch_model_mp_size(
download_path, model_parallel_size):
brief_model_name = MODEL_DICT[model_name.lower()][2]
change_pytorch_model_mp_from_1_to_n_new(brief_model_name,
download_path, model_parallel_size)

from flagai import mpu
torch.distributed.barrier(group=mpu.get_model_parallel_group())

if model_parallel_size > 1:
from flagai.mpu import get_model_parallel_rank
model_parallel_rank = get_model_parallel_rank()
checkpoint_path = os.path.join(
download_path,
"pytorch_model_{:02d}.bin".format(model_parallel_rank))
if os.path.exists(checkpoint_path):
model.load_weights(checkpoint_path)
else:
model.load_weights(checkpoint_path)
return model

if os.path.exists(config_path):
"""
It is fine for checkpoint_path not to exist when only_download_config=True;
in that case the model weights are never loaded.
"""
return load_local(checkpoint_path)

try:
model_id = _get_model_id(model_name)
except:
print("Model hub is not reachable!")
# config_path = None
download_path = os.path.join(download_path, model_name)
checkpoint_path = os.path.join(download_path, "pytorch_model.bin")
# prepare the download path
# downloading the files
model: Union[Module, None]
if model_id and model_id != "null":
model_files = eval(_get_model_files(model_name))
print("model files:" + str(model_files))
for file_name in model_files:
if not file_name.endswith("bin"):
_get_vocab_path(download_path, file_name, model_id)
@@ -102,44 +150,4 @@ def from_pretrain(cls,
checkpoint_merge[k] = v
# save all parameters
torch.save(checkpoint_merge, os.path.join(download_path, "pytorch_model.bin"))

config_path = os.path.join(download_path, "config.json")

if os.path.exists(config_path):
model = cls.init_from_json(config_path, **kwargs)
model.to(device)
if os.getenv('ENV_TYPE') != 'deepspeed+mpu':
if os.path.exists(checkpoint_path):
model.load_weights(checkpoint_path)
elif os.getenv('ENV_TYPE') == 'deepspeed+mpu':
model_parallel_size = int(os.getenv("MODEL_PARALLEL_SIZE"))
if torch.distributed.is_initialized(
) and torch.distributed.get_rank() == 0:
# change the mp_size in rank 0
print(
"preparing the model weights for model parallel size = {:02d}"
.format(model_parallel_size))
from flagai.auto_model.auto_loader import MODEL_DICT
from flagai.mp_tools import change_pytorch_model_mp_from_1_to_n_new, check_pytorch_model_mp_size
if model_parallel_size > 1 and not check_pytorch_model_mp_size(
download_path, model_parallel_size):
brief_model_name = MODEL_DICT[model_name.lower()][2]
change_pytorch_model_mp_from_1_to_n_new(brief_model_name,
download_path, model_parallel_size)

from flagai import mpu
torch.distributed.barrier(group=mpu.get_model_parallel_group())

if model_parallel_size > 1:
from flagai.mpu import get_model_parallel_rank
model_parallel_rank = get_model_parallel_rank()
checkpoint_path = os.path.join(
download_path,
"pytorch_model_{:02d}.bin".format(model_parallel_rank))
if os.path.exists(checkpoint_path):
model.load_weights(checkpoint_path)
else:
model.load_weights(checkpoint_path)
else:
model = None
return model
return load_local(checkpoint_path)
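The refactor makes `from_pretrain` local-first: if `config.json` exists under `download_path/model_name`, `load_local` runs without touching the hub; otherwise the files are downloaded first. A hedged usage sketch; the concrete model class and paths are illustrative:

```python
from flagai.model.glm_model import GLMModel  # illustrative BaseModel subclass

# If ./checkpoints/GLM-large-ch/config.json already exists, this loads
# entirely from disk; only_download_config=True would skip loading weights.
model = GLMModel.from_pretrain(download_path="./checkpoints",
                               model_name="GLM-large-ch",
                               only_download_config=False,
                               device="cpu")
```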
20 changes: 18 additions & 2 deletions flagai/model/bert_model.py
@@ -153,9 +153,25 @@ def forward(self,
extended_attention_mask = extended_attention_mask.unsqueeze(
1).unsqueeze(2)
if attention_mask is not None:
input_attention_mask_dim = len(attention_mask.shape)
if input_attention_mask_dim == 4:
# seq2seq mask
extended_attention_mask = extended_attention_mask.unsqueeze(1).unsqueeze(2)
elif input_attention_mask_dim == 3:
extended_attention_mask = extended_attention_mask.unsqueeze(1)
elif input_attention_mask_dim == 2:
# no need to extend
pass
extended_attention_mask = extended_attention_mask * attention_mask
# extended_attention_mask = extended_attention_mask.unsqueeze(
# 1).unsqueeze(2)

# extended_attention_mask needs to be extended to 4 dimensions.
extended_attention_mask_dim = len(extended_attention_mask.shape)
if extended_attention_mask_dim == 2:
extended_attention_mask = extended_attention_mask.unsqueeze(1).unsqueeze(2)
elif extended_attention_mask_dim == 3:
extended_attention_mask = extended_attention_mask.unsqueeze(1)
elif extended_attention_mask_dim == 4:
pass
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
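The added branching normalizes caller-supplied masks to the 4-D `[batch, heads, query, key]` shape the attention scores expect. A small sketch of the same broadcasting rules in isolation:

```python
import torch

def extend_attention_mask(mask: torch.Tensor) -> torch.Tensor:
    # 2-D padding mask -> [B, 1, 1, L]
    # 3-D seq2seq mask -> [B, 1, L, L]
    # 4-D mask         -> unchanged
    if mask.dim() == 2:
        return mask[:, None, None, :]
    if mask.dim() == 3:
        return mask[:, None, :, :]
    return mask

print(extend_attention_mask(torch.ones(2, 8)).shape)     # [2, 1, 1, 8]
print(extend_attention_mask(torch.ones(2, 8, 8)).shape)  # [2, 1, 8, 8]
```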
1 change: 0 additions & 1 deletion flagai/model/predictor/gpt.py
100644 → 100755
@@ -7,7 +7,6 @@ def gpt_random_sample_use_cache(model, tokenizer, text, input_max_length, out_ma
top_k, top_p, repetition_penalty, temperature, device):
tokenizer_out = tokenizer.encode_plus(text, max_length=input_max_length)
token_ids = tokenizer_out["input_ids"]
# print(tokenizer.decode(token_ids))
token_end_id = tokenizer.get_command_id('sep')
if token_ids[-1] == token_end_id:
token_ids = token_ids[:-1]
7 changes: 4 additions & 3 deletions flagai/model/predictor/predictor.py
100644 → 100755
@@ -157,7 +157,8 @@ def predict_masklm(self, text: str, maxlen: int = 512) -> str:
def predict_ner(self,
text: str,
target: List[str],
maxlen: int = 256) -> List[Tuple[int, int, str]]:
maxlen: int = 256,
add_spatial_token=False) -> List[Tuple[int, int, str]]:
"""
Args:
text: The input text.
@@ -168,9 +169,9 @@ def predict_ner(self,
model.eval()
device = next(model.parameters()).device
tokenizer = self.tokenizer
tokens = tokenizer.text_tokenizer.tokenize(text,
tokens = tokenizer.tokenize(text,
maxlen=maxlen,
add_spatial_tokens=True)
add_spatial_tokens=add_spatial_token)

mapping = tokenizer.rematch(text, tokens)
token_ids = tokenizer.text_tokenizer.convert_tokens_to_ids(tokens)
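`predict_ner` now goes through the unified `tokenizer.tokenize` and lets callers opt in to spatial tokens. A hedged end-to-end sketch; the task name, model name, and label set are all illustrative, not values taken from this repository:

```python
from flagai.auto_model.auto_loader import AutoLoader
from flagai.model.predictor.predictor import Predictor

# Illustrative NER setup; substitute any NER-capable FlagAI model.
loader = AutoLoader(task_name="ner", model_name="BERT-base-en",
                    model_dir="./checkpoints")
predictor = Predictor(loader.get_model(), loader.get_tokenizer())

# Returns (start_char, end_char, label) triples per the signature above.
spans = predictor.predict_ner("FlagAI is developed in Beijing.",
                              target=["LOC", "ORG"],
                              add_spatial_token=True)
print(spans)
```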
Empty file added flagai/model/vision/__init__.py
Empty file modified flagai_wechat.png (100644 → 100755)
Binary file modified logo.png (100644 → 100755)
2 changes: 1 addition & 1 deletion prepare_test.sh
100644 → 100755
@@ -1,4 +1,4 @@
git clone https://github.com/marscrazy/checkpoints.git
git clone https://github.com/BAAI-OpenPlatform/checkpoints.git
cd checkpoints
unzip checkpoints.zip
mv checkpoints/* .
Empty file modified requirements.txt (100644 → 100755)
Empty file modified setup.cfg (100644 → 100755)
2 changes: 1 addition & 1 deletion setup.py
100644 → 100755
@@ -6,7 +6,7 @@
setup(
name="flagai",
version="v1.3.2",
description="FlagAI aims to help researchers and developers to freely train and test large-scale models for NLP tasks.",
description="FlagAI aims to help researchers and developers to freely train and test large-scale models for NLP/CV/VL tasks.",
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
author="FlagAI-Open",
Empty file modified test.py (100644 → 100755)