Commit 46d646c

Merge pull request FlagAI-Open#143 from marscrazy/master

Simplify the loading of AltDiffusion models

Authored by BAAI-OpenPlatform on Nov 27, 2022
2 parents db4c59b + 6356cca
Showing 187 changed files with 76 additions and 65 deletions.
Empty file modified CHANGELOG.md
100755 → 100644
Empty file.
Empty file modified CLA.md
100755 → 100644
Empty file.
Empty file modified CODE_OF_CONDUCT.md
100755 → 100644
Empty file.
Empty file modified COMMITTERS.csv
100755 → 100644
Empty file.
Empty file modified CONTRIBUTING.md
100755 → 100644
Empty file.
Empty file modified GOVERNANCE.md
100755 → 100644
Empty file.
Empty file modified LICENSE
100755 → 100644
Empty file.
Empty file modified SUPPORT.md
100755 → 100644
Empty file.
Empty file modified doc_zh/APPENDIX_GLM_IO.md
100755 → 100644
Empty file.
Empty file modified doc_zh/APPENDIX_TASK.md
100755 → 100644
Empty file.
Empty file modified doc_zh/Advanced_Usage.md
100755 → 100644
Empty file.
Empty file modified doc_zh/DATASET_EXAMPLE.md
100755 → 100644
Empty file.
Empty file modified doc_zh/GLM.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_10_METATRON.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_11_GLM_BLANK_FILLING_QA.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_12_GLM_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_13_GLM_EXAMPLE_PEOTRY_GENERATION.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_14_HUGGINGFACE_T5.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_15_BERT_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_16_BERT_EXAMPLE_SEMANTIC_MATCHING.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_17_BERT_EXAMPLE_NER.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_18_GPT2_WRITING.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_19_T5_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_1_TOKENIZER.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_20_GLM_TNEWS.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_20_SUPPORTED_TASKS_backup.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_2_DATASET.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_3_MODEL.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_4_TRAINER.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_5_INSTRUCTIONS_FOR_AutoLoader.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_6_INSTRUCTIONS_FOR_PREDICTOR.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_7_PROMPT_LEARNING.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_8_ENVIRONMENT_SETUP.md
100755 → 100644
Empty file.
Empty file modified doc_zh/TUTORIAL_9_SEQ2SEQ_METHOD.md
100755 → 100644
Empty file.
Empty file modified doc_zh/tokenization.md
100755 → 100644
Empty file.
Empty file modified docs/APPENDIX_GLM_IO.md
100755 → 100644
Empty file.
Empty file modified docs/Advanced_Usage.md
100755 → 100644
Empty file.
Empty file modified docs/DATASET_EXAMPLE.md
100755 → 100644
Empty file.
Empty file modified docs/GLM.md
100755 → 100644
Empty file.
Empty file modified docs/QuickTour.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_10_MEGATRON.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_11_GLM_BLANK_FILLING_QA.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_12_GLM_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_13_GLM_EXAMPLE_PEOTRY_GENERATION.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_14_HUGGINGFACE_T5.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_15_BERT_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_16_BERT_EXAMPLE_SEMANTIC_MATCHING.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_17_BERT_EXAMPLE_NER.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_18_GPT2_WRITING.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_19_T5_EXAMPLE_TITLE_GENERATION.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_1_TOKENIZER.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_20_SUPPORTED_TASKS.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_2_DATASET.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_3_MODEL.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_4_TRAINER.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_5_INSTRUCTIONS_FOR_AutoLoader.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_6_INSTRUCTIONS_FOR_PREDICTOR.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_7_PROMPT_LEARNING.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_8_ENVIRONMENT_SETUP.md
100755 → 100644
Empty file.
Empty file modified docs/TUTORIAL_9_SEQ2SEQ_METHOD.md
100755 → 100644
Empty file.
Empty file modified docs/img.png
100755 → 100644
Empty file modified docs/tokenization.md
100755 → 100644
Empty file.
Empty file modified examples/AltDiffusion/README.md
100755 → 100644
Empty file.
4 changes: 2 additions & 2 deletions examples/AltDiffusion/generate.py
100755 → 100644
@@ -9,7 +9,7 @@
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

loader = AutoLoader(task_name="text2img", #contrastive learning
-model_name="AltDiffusion",
+model_name="AltDiffusion-m9",
model_dir="./checkpoints")

model = loader.get_model()
@@ -18,4 +18,4 @@
predictor = Predictor(model)
predictor.predict_generate_images(
"Anime portrait of natalie portman as an anime girl by stanley artgerm lau, wlop, rossdraws, james jean, andrei riabovitchev, marc simonetti, and sakimichan, trending on artstation"
-)
+)
Empty file modified examples/bert_title_generation_english/generate.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_finetune/arguments.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_finetune/finetune_cpm3.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_finetune/tune_cpm3.sh
100755 → 100644
Empty file.
Empty file modified examples/cpm3_generation/arguments.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_generation/generation.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_generation/infer.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_generation/infer.sh
100755 → 100644
Empty file.
Empty file modified examples/cpm3_pretrain/arguments.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_pretrain/pretrain_cpm3.py
100755 → 100644
Empty file.
Empty file modified examples/cpm3_pretrain/run_cpm3.sh
100755 → 100644
Empty file.
Empty file modified examples/glm_blank_filling/glm_generate_samples.py
100755 → 100644
Empty file.
Empty file modified examples/glm_poetry_generation/generate.py
100755 → 100644
Empty file.
Empty file modified examples/glm_title_generation/generate.py
100755 → 100644
Empty file.
Empty file modified examples/gpt2_title_generation/generate.py
100755 → 100644
Empty file.
Empty file.
Empty file.
Empty file.
Empty file modified examples/roberta_ner/generate.py
100755 → 100644
Empty file.
Empty file modified examples/roberta_ner/generate_crf.py
100755 → 100644
Empty file.
Empty file modified examples/roberta_ner/generate_global_pointer.py
100755 → 100644
Empty file.
Empty file modified examples/roberta_semantic_matching/generate.py
100755 → 100644
Empty file.
Empty file modified examples/roberta_title_generation/generate.py
100755 → 100644
Empty file.
9 changes: 6 additions & 3 deletions flagai/auto_model/auto_loader.py
100755 → 100644
@@ -109,9 +109,9 @@ def __getattr__(self, name):
"clip-large-p14-336": ["flagai.model.mm.clip_model", "CLIP", "clip", "mm"],
"clip-large-p14-336": ["flagai.model.mm.clip_model", "CLIP", "clip", "mm"],
"altdiffusion":
-["flagai.model.mm.diffusion", "LatentDiffusion", "diffusion", "mm"],
+["flagai.model.mm.diffusion", "LatentDiffusion", "diffusion", "mm","flagai.model.mm.AltCLIP", "AltCLIPProcess"],
"altdiffusion-m9":
-["flagai.model.mm.diffusion", "LatentDiffusion", "diffusion", "mm"],
+["flagai.model.mm.diffusion", "LatentDiffusion", "diffusion", "mm","flagai.model.mm.AltCLIP", "AltCLIPProcess"],
"swinv1-base-patch4-window7-224":
["flagai.model.vision.swinv1", "SwinTransformer", "swinv1", "vision"],
"swinv2-base-patch4-window8-256":
@@ -212,7 +212,10 @@ def __init__(self,

elif model_type == "mm":
if model_name.startswith("altdiffusion"):
-self.tokenizer = None
+self.process = getattr(LazyImport(MODEL_DICT[model_name][4]),
+MODEL_DICT[model_name][5]).from_pretrained(os.path.join(model_dir, raw_model_name))
+self.tokenizer = self.process.tokenizer
+self.model.tokenizer = self.tokenizer
elif "altclip" not in model_name:
from flagai.data.tokenizer.clip.tokenizer import ClipTokenizer
self.tokenizer = ClipTokenizer(bpe_path=os.path.join(download_path, 'bpe_simple_vocab_16e6.txt.gz'))
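In the hunk above, each AltDiffusion entry in MODEL_DICT gains two extra elements: a processor module path (index 4) and a processor class name (index 5), which the loader resolves through LazyImport. A minimal sketch of that registry-plus-lazy-import dispatch pattern; the toy LazyImport and the stdlib stand-ins below are illustrative, not FlagAI's actual classes:

```python
import importlib

class LazyImport:
    """Defer importing a module until one of its attributes is first used."""
    def __init__(self, module_name):
        self.module_name = module_name
        self._module = None

    def __getattr__(self, name):
        # Only reached for attributes not found normally; the real import
        # happens here, on first use, not at registry-definition time.
        if self._module is None:
            self._module = importlib.import_module(self.module_name)
        return getattr(self._module, name)

# Toy registry in the same shape as the extended MODEL_DICT entries:
# [model module, model class, brief name, task type, processor module, processor class]
MODEL_DICT = {
    "altdiffusion": ["json", "JSONDecoder", "diffusion", "mm",
                     "collections", "OrderedDict"],  # stdlib stand-ins
}

def load_processor(model_name):
    entry = MODEL_DICT[model_name]
    # Same indexing the loader uses: entry[4] names the module, entry[5] the class.
    return getattr(LazyImport(entry[4]), entry[5])

print(load_processor("altdiffusion")().__class__.__name__)
```

Keeping the heavy imports behind a lazy proxy means registering a model costs nothing until that model is actually requested.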
Empty file modified flagai/data/dataset/cpm3_data/__init__.py
100755 → 100644
Empty file.
Empty file modified flagai/data/dataset/cpm3_data/dataset.py
100755 → 100644
Empty file.
Empty file modified flagai/data/dataset/cpm3_data/distributed_indexed.py
100755 → 100644
Empty file.
Empty file modified flagai/data/dataset/cpm3_data/indexed.py
100755 → 100644
Empty file.
Empty file modified flagai/data/tokenizer/cpm_1/cpm1_tokenizer.py
100755 → 100644
Empty file.
Empty file modified flagai/data/tokenizer/cpm_3/__init__.py
100755 → 100644
Empty file.
Empty file modified flagai/data/tokenizer/cpm_3/cpm3_tokenizer.py
100755 → 100644
Empty file.
Empty file modified flagai/data/tokenizer/uni_tokenizer/base_tokenizer.py
100755 → 100644
Empty file.
Empty file.
Empty file modified flagai/data/tokenizer/uni_tokenizer/tokenizer.py
100755 → 100644
Empty file.
Empty file modified flagai/data/tokenizer/uni_tokenizer/wp_tokenizer.py
100755 → 100644
Empty file.
26 changes: 14 additions & 12 deletions flagai/model/base_model.py
100755 → 100644
@@ -1,12 +1,11 @@
# Copyright © 2022 BAAI. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License")
-from sklearn.linear_model import HuberRegressor
from torch.nn import Module
import torch
import json
from typing import Union
-from flagai.model.file_utils import _get_model_id, _get_config_path, _get_checkpoint_path, _get_vocab_path, _get_model_files
+from flagai.model.file_utils import _get_model_id, _get_checkpoint_path, _get_vocab_path, _get_model_files
import os


@@ -46,10 +45,13 @@ def _load_state_dict_into_model(cls,
pretrained_model_name_or_path,
verbose=False):
pl_sd = torch.load(pretrained_model_name_or_path, map_location="cpu")
-sd = pl_sd["state_dict"]
+if "state_dict" in pl_sd:
+sd = pl_sd["state_dict"]
+else:
+sd = pl_sd
if "global_step" in pl_sd:
print(f"Global Step: {pl_sd['global_step']}")
-m, u = model.load_state_dict(sd, strict=False)
+m, u = model.load_state_dict(sd, strict=True)
if len(m) > 0 and verbose:
print("missing keys:")
print(m)
@@ -113,7 +115,7 @@ def load_local(checkpoint_path):
model.load_weights(checkpoint_path)
return model

-def load_diffusion_local(yaml_path):
+def load_diffusion_local(yaml_path, only_download_config=False):
"""
Now only diffusion models requires yaml
"""
@@ -126,19 +128,19 @@ def load_diffusion_local(yaml_path):
model_config.params.cond_stage_config.params.download_path = raw_download_path

model = cls(**model_config.get("params", dict()))

-model = cls._load_state_dict_into_model(
-model,
-checkpoint_path,
-)
+if not only_download_config:
+model = cls._load_state_dict_into_model(
+model,
+checkpoint_path,
+)
return model

yaml_path = os.path.join(download_path, "config.yaml")
if os.path.exists(yaml_path):
"""
Now only diffusion models requires yaml
"""
-return load_diffusion_local(yaml_path)
+return load_diffusion_local(yaml_path, only_download_config=only_download_config)
elif os.path.exists(config_path):
"""
It is fine when checkpoint_path does not exist, for the case that only_download_config=True
@@ -202,7 +204,7 @@ def load_diffusion_local(yaml_path):
checkpoint_merge,
os.path.join(download_path, "pytorch_model.bin"))
if os.path.exists(yaml_path):
-return load_diffusion_local(yaml_path)
+return load_diffusion_local(yaml_path, only_download_config=only_download_config)
return load_local(checkpoint_path)

@classmethod
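The reworked _load_state_dict_into_model above now accepts both Lightning-style checkpoints, which nest the weights under a "state_dict" key, and raw state dicts saved directly with torch.save. A self-contained sketch of that unwrapping logic, with plain dicts standing in for real torch checkpoints:

```python
def unwrap_checkpoint(pl_sd, verbose=False):
    """Return the weight dict whether or not it is nested under 'state_dict'."""
    # Lightning-style checkpoints nest the weights; raw torch.save dumps do not.
    sd = pl_sd["state_dict"] if "state_dict" in pl_sd else pl_sd
    if verbose and "global_step" in pl_sd:
        print(f"Global Step: {pl_sd['global_step']}")
    return sd

wrapped = {"state_dict": {"w": [1.0]}, "global_step": 1000}
raw = {"w": [1.0]}
print(unwrap_checkpoint(wrapped) == unwrap_checkpoint(raw))  # True
```

Either checkpoint format then feeds the same load_state_dict call, which is what lets this commit drop the assumption that every checkpoint was produced by the training framework.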
Empty file modified flagai/model/bert_model.py
100755 → 100644
Empty file.
Empty file modified flagai/model/blocks/cpm_block.py
100755 → 100644
Empty file.
Empty file modified flagai/model/cpm3_model.py
100755 → 100644
Empty file.
Empty file modified flagai/model/cpm3_train_model.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/attentions.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/embeddings.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/feedforward.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/feedforward_bmt.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/layer_norm.py
100755 → 100644
Empty file.
Empty file modified flagai/model/layers/linear.py
100755 → 100644
Empty file.
1 change: 0 additions & 1 deletion flagai/model/mm/AltCLIP.py
100755 → 100644
@@ -7,7 +7,6 @@
from flagai.model.base_model import BaseModel

from .modeling_berts import BertSeriesConfig, RobertaSeriesConfig, BertSeriesModelWithTransformation, RobertaSeriesModelWithTransformation
-from transformers.models.bert.tokenization_bert import BertTokenizer

STUDENT_CONFIG_DICT = {
'hfl/chinese-roberta-wwm-ext': BertSeriesConfig,
70 changes: 26 additions & 44 deletions flagai/model/mm/AltDiffusion.py
100755 → 100644
@@ -14,7 +14,6 @@
from flagai.model.mm.utils import make_beta_schedule, extract_into_tensor, noise_like
from flagai.model.mm.Sampler import DDIMSampler
from flagai.model.base_model import BaseModel
-from flagai.auto_model.auto_loader import AutoLoader

__conditioning_keys__ = {
'concat': 'c_concat',
@@ -562,6 +561,8 @@ def __init__(self,
self.instantiate_first_stage(first_stage_config)
self.instantiate_cond_stage(cond_stage_config)
self.cond_stage_forward = cond_stage_forward
+if self.cond_stage_forward is None:
+self.set_cond_stage_forward()
self.clip_denoised = False
self.bbox_tokenizer = None

@@ -619,47 +620,13 @@ def instantiate_first_stage(self, config):
self.first_stage_model.train = disabled_train
for param in self.first_stage_model.parameters():
param.requires_grad = False

def instantiate_cond_stage(self, config):
-dct = config.get("params", dict())
-model_dir = dct.get("model_dir", None)
-if not model_dir:
-model_dir = dct.get("download_path", None)
-if not self.cond_stage_trainable:
-if config == "__is_first_stage__":
-print("Using first stage also as cond stage.")
-self.cond_stage_model = self.first_stage_model
-elif config == "__is_unconditional__":
-print(
-f"Training {self.__class__.__name__} as an unconditional model."
-)
-self.cond_stage_model = None
-else:
-loader = AutoLoader(
-task_name="txt_img_matching", #contrastive learning
-model_name=dct["model_name"],
-model_dir=model_dir)
-model = loader.get_model()
-tokenizer = loader.get_tokenizer()
-self.tokenizer = tokenizer
-model.to(self.device)
-self.cond_stage_model = model.eval()
-self.cond_stage_model.train = disabled_train
-for param in self.cond_stage_model.parameters():
-param.requires_grad = False
-else:
-assert config != '__is_first_stage__'
-assert config != '__is_unconditional__'
-loader = AutoLoader(
-task_name="txt_img_matching", #contrastive learning
-model_name=dct["model_name"],
-model_dir=model_dir)
-tokenizer = loader.get_tokenizer()
-self.tokenizer = tokenizer
-model = loader.get_model()
-model.to(self.device)
-self.cond_stage_model = model

+model = instantiate_from_config(config)
+self.cond_stage_model = model.eval()
+self.cond_stage_model.train = disabled_train
+for param in self.cond_stage_model.parameters():
+param.requires_grad = False

def _get_denoise_row_from_list(self,
samples,
desc='',
@@ -687,7 +654,23 @@ def get_first_stage_encoding(self, encoder_posterior):

z = encoder_posterior.sample()
return self.scale_factor * z

+def set_cond_stage_forward(self):
+def func(c):
+device = next(self.cond_stage_model.parameters()).device
+text = self.tokenizer(c,
+truncation=True,
+max_length=77,
+return_length=False,
+return_overflowing_tokens=False,
+padding="max_length",
+return_tensors="pt")
+text["input_ids"] = torch.tensor(text["input_ids"]).to(device)
+text["attention_mask"] = torch.tensor(
+text['attention_mask']).to(device)
+# text = torch.tensor(text).to(device
+features = self.cond_stage_model(**text)
+return features['projection_state']
+self.cond_stage_forward = func
def get_learned_conditioning(self, c):
# C will be directly returned if it is None
if c is None:
@@ -701,8 +684,7 @@ def get_learned_conditioning(self, c):
else:
c = self.cond_stage_model(c)
else:
-assert hasattr(self.cond_stage_model, self.cond_stage_forward)
-c = getattr(self.cond_stage_model, self.cond_stage_forward)(c)
+c = self.cond_stage_forward(c)
return c

def meshgrid(self, h, w):
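The new set_cond_stage_forward above replaces string-based attribute dispatch (getattr on a stored name) with a closure that captures the tokenizer and cond-stage model at setup time. A toy illustration of the same closure pattern; ToyConditioner and the str.split/len stand-ins are hypothetical, not FlagAI code:

```python
class ToyConditioner:
    """Illustrative stand-in for the diffusion model's cond-stage wiring."""
    def __init__(self, tokenizer, cond_stage_model):
        self.tokenizer = tokenizer
        self.cond_stage_model = cond_stage_model
        self.cond_stage_forward = None
        # Mirrors the diff: install the closure only when none was supplied.
        if self.cond_stage_forward is None:
            self.set_cond_stage_forward()

    def set_cond_stage_forward(self):
        def func(c):
            # The closure captures self, so it sees the tokenizer and model
            # that were wired up at construction time.
            tokens = self.tokenizer(c)
            return self.cond_stage_model(tokens)
        self.cond_stage_forward = func

    def get_learned_conditioning(self, c):
        # Call the stored closure directly instead of
        # getattr(self.cond_stage_model, name)(c).
        return self.cond_stage_forward(c)

toy = ToyConditioner(tokenizer=str.split, cond_stage_model=len)
print(toy.get_learned_conditioning("a b c"))
```

The closure keeps tokenization and encoding together in one callable, so callers no longer need to know which method name the cond-stage model happens to expose.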
Empty file modified flagai/model/mm/Sampler.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/Unets/Unet.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/__init__.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/attentions/attention.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/autoencoders.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/clip_guohua/__init__.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/clip_guohua/bert_tokenizer.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/clip_guohua/model.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/clip_guohua/modeling_bert.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/clip_model.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/lm/clip_guohua.py
100755 → 100644
Empty file.
Empty file modified flagai/model/mm/model.py
100755 → 100644
Empty file.
27 changes: 26 additions & 1 deletion flagai/model/mm/modeling_berts.py
100755 → 100644
@@ -27,7 +27,32 @@ class BertSeriesModelWithTransformation(BertPreTrainedModel):
_keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
config_class = BertSeriesConfig

-def __init__(self, config):
+def __init__(self, config=None, **kargs):
+# modify initialization for autoloading
+if config is None:
+config = XLMRobertaConfig()
+config.attention_probs_dropout_prob= 0.1
+config.bos_token_id=0
+config.eos_token_id=2
+config.hidden_act='gelu'
+config.hidden_dropout_prob=0.1
+config.hidden_size=1024
+config.initializer_range=0.02
+config.intermediate_size=4096
+config.layer_norm_eps=1e-05
+config.max_position_embeddings=514
+
+config.num_attention_heads=16
+config.num_hidden_layers=24
+config.output_past=True
+config.pad_token_id=1
+config.position_embedding_type= "absolute"
+
+config.type_vocab_size= 1
+config.use_cache=True
+config.vocab_size= 250002
+config.project_dim = 768
+config.learn_encoder = False
super().__init__(config)
if config.model_type == 'bert':
self.bert = BertModel(config)
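The widened constructor above falls back to a hard-coded XLM-RoBERTa-large configuration when no config is passed, so the class can be autoloaded without an explicit config object. The same default-fallback pattern in miniature; ToyModel and make_default_config are illustrative, with SimpleNamespace replacing the real XLMRobertaConfig and a few values mirroring the diff:

```python
from types import SimpleNamespace

def make_default_config():
    # A few of the hard-coded XLM-RoBERTa-large defaults from the diff above.
    return SimpleNamespace(hidden_size=1024, num_hidden_layers=24,
                           num_attention_heads=16, vocab_size=250002,
                           project_dim=768, learn_encoder=False)

class ToyModel:
    def __init__(self, config=None, **kwargs):
        # Fall back to built-in defaults so autoloading needs no config argument.
        if config is None:
            config = make_default_config()
        self.config = config

model = ToyModel()  # no config supplied: the defaults kick in
print(model.config.hidden_size)
```

Accepting `config=None` plus `**kwargs` keeps the old call sites working while letting a generic loader construct the model with no arguments at all.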
Empty file modified flagai/model/mm/utils.py
100755 → 100644
Empty file.
Empty file modified flagai/model/predictor/gpt.py
100755 → 100644
Empty file.
Empty file modified flagai/model/predictor/predictor.py
100755 → 100644
Empty file.
Empty file modified flagai/model/predictor/utils.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/__init__.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/activations.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/activations_jit.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/activations_me.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/adaptive_avgmax_pool.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/attention_pool2d.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/blur_pool.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/bottleneck_attn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/cbam.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/classifier.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/cond_conv2d.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/config.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/conv2d_same.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/conv_bn_act.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/create_act.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/create_attn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/create_conv2d.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/create_norm_act.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/drop.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/eca.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/evo_norm.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/filter_response_norm.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/gather_excite.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/global_context.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/halo_attn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/helpers.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/inplace_abn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/lambda_layer.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/linear.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/median_pool.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/mixed_conv2d.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/ml_decoder.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/mlp.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/non_local_attn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/norm.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/norm_act.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/padding.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/patch_embed.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/pool2d_same.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/pos_embed.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/selective_kernel.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/separable_conv.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/space_to_depth.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/split_attn.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/split_batchnorm.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/squeeze_excite.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/std_conv.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/test_time_pool.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/trace_utils.py
100755 → 100644
Empty file.
Empty file modified flagai/model/vision/layers/weight_init.py
100755 → 100644
Empty file.
Empty file modified flagai_wechat.png
100755 → 100644
Empty file modified logo.png
100755 → 100644
Empty file modified prepare_test.sh
100755 → 100644
Empty file.
Empty file modified quickstart/glm_title_ch.py
100755 → 100644
Empty file.
Empty file modified requirements.txt
100755 → 100644
Empty file.
2 changes: 1 addition & 1 deletion setup.cfg
100755 → 100644
@@ -1,3 +1,3 @@
[easy_install]

-index_url = https://pypi.tuna.tsinghua.edu.cn/simple
+index_url = https://mirrors.aliyun.com/pypi/simple/
2 changes: 1 addition & 1 deletion setup.py
@@ -5,7 +5,7 @@

setup(
name="flagai",
-version="v1.4.4",
+version="v1.4.5",
description="FlagAI aims to help researchers and developers to freely train and test large-scale models for NLP/CV/VL tasks.",
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
Empty file modified test.py
100755 → 100644
Empty file.
