Adding new vocabulary tokens to a model and saving it for downstream models


Original title: Adding New Vocabulary Tokens to the Models and saving it for downstream model

Is the mean initialization of the new tokens correct? And how should I save the new tokenizer (after adding the new tokens to it) so it can be used in a downstream model?

I trained an MLM model after adding the new tokens and initializing their embeddings with the mean of their original sub-word embeddings. How should I use the fine-tuned MLM model for a new classification task?

import torch
import transformers as tr

# keep an unmodified copy of the tokenizer to look up each word's original sub-word pieces
tokenizer_org = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")
tokenizer = tr.BertTokenizer.from_pretrained("/home/pc/bert_base_multilingual_uncased")
tokenizer.add_tokens(joined_keywords)  # joined_keywords: the list of new vocabulary words
model = tr.BertForMaskedLM.from_pretrained("/home/pc/bert_base_multilingual_uncased", return_dict=True)

# prepare input
text = ["Replace me by any text you'd like"]
encoded_input = tokenizer(text, truncation=True, padding=True, max_length=512, return_tensors="pt")
print(encoded_input)


# add embedding rows for the new vocab words
model.resize_token_embeddings(len(tokenizer))
weights = model.bert.embeddings.word_embeddings.weight

# initialize each new embedding as the mean of the embeddings of the
# sub-word tokens the word is split into by the original tokenizer
with torch.no_grad():
    emb = []
    for word in joined_keywords:
        # drop the [CLS]/[SEP] special tokens added at the start and end
        tok_ids = tokenizer_org(word)["input_ids"][1:-1]
        tok_weights = weights[tok_ids]

        # average over the tokens in the original tokenization
        weight_mean = torch.mean(tok_weights, dim=0)
        emb.append(weight_mean)
    # the new tokens occupy the last rows of the resized embedding matrix
    weights[-len(joined_keywords):, :] = torch.vstack(emb)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# ... MLM training with a transformers.Trainer (elided) ...
trainer.save_model("/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1")

This saves the model, its config, and the training_args. How do I save the new tokenizer?
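
One way to do this (a minimal sketch, assuming the tokenizer variable and output directory from the code above; the classification reload at the end is only illustrative, and num_labels=2 is an assumed label count):

# save the tokenizer, with its added tokens, next to the model weights
tokenizer.save_pretrained("/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1")

# downstream: reload both and fine-tune a classification head on top of the MLM weights
tokenizer = tr.BertTokenizer.from_pretrained("/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1")
clf_model = tr.BertForSequenceClassification.from_pretrained(
    "/home/pc/Bert_multilingual_exp_TCM/model_mlm_exp1",
    num_labels=2,  # assumption: binary classification task
)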

Original link: https://stackoverflow.com/questions/71443134/adding-new-vocabulary-tokens-to-the-models-and-saving-it-for-downstream-model

Replies

  • meti replied:

    What you are about to do is a convenient way of adding new tokens and information to the original text. Hugging Face offers several ways to do this; I used what is, IMO, the simplest one.

    from transformers import AutoTokenizer

    BASE_MODEL = "distilbert-base-multilingual-cased"
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    print('Vocab size before manipulation: ', len(tokenizer))
    special_tokens_dict = {'additional_special_tokens': ['[C1]','[C2]','[C3]','[C4]']}
    num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
    print('Vocab size after manipulation: ', len(tokenizer))
    tokenizer.save_pretrained("./models/tokenizer/")
    tokenizer2 = AutoTokenizer.from_pretrained("./models/tokenizer/")
    print('Vocab size after saving and loading: ', len(tokenizer2))
    

    Output:

    Vocab size before manipulation:  119547
    Vocab size after manipulation:  119551
    Vocab size after saving and loading:  119551
    

    The biggest caveat: whenever you manipulate the tokenizer, you need to update the model's embedding layer accordingly, with something like model.resize_token_embeddings(len(tokenizer)).
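
    For illustration, a minimal sketch of that resize step under the same setup as above (AutoModelForSequenceClassification and num_labels=2 are just one possible downstream choice, not part of the original answer):

    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=2)
    # grow the embedding matrix so the four new special-token ids have rows
    model.resize_token_embeddings(len(tokenizer))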

    answered 2 years ago