Load the base model you want to finetune.

from transformers import AutoModelForSeq2SeqLM

model_name_or_path = "bigscience/mt0-large"
tokenizer_name_or_path = "bigscience/mt0-large"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)

Wrap your base model and peft_config with the get_peft_model function to create a PeftModel. To get a sense of the number of trainable parameters in your model, use the print_trainable_parameters method. In this case, you’re only training 0.19% of the model’s parameters! 🤏

from peft import get_peft_model

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
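For context, the peft_config used above is defined earlier in the quicktour and is not included in this excerpt. As a rough sketch (the exact hyperparameters are illustrative assumptions, not necessarily the ones from the guide), a LoRA configuration for this seq2seq model could look like:

from peft import LoraConfig, TaskType

# Illustrative LoRA config for a seq2seq model; r, lora_alpha, and lora_dropout are example values.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)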
That is it 🎉! Now you can train the model using the 🤗 Transformers Trainer, 🤗 Accelerate, or any custom PyTorch training loop.

Save and load a model

After your model is finished training, you can save your model to a directory using the save_pretrained function. You can also save your model to the Hub (make sure you log in to your Hugging Face account first) with the push_to_hub function.

model.save_pretrained("output_dir")

# if pushing to Hub
from huggingface_hub import notebook_login

notebook_login()
model.push_to_hub("my_awesome_peft_model")

This only saves the incremental 🤗 PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this bigscience/T0_3B model trained with LoRA on the twitter_complaints subset of the RAFT dataset only contains two files: adapter_config.json and adapter_model.bin. The latter file is just 19MB!

Easily load your model for inference using the from_pretrained function:
  from transformers import AutoModelForSeq2SeqLM
+ from peft import PeftModel, PeftConfig

+ peft_model_id = "smangrul/twitter_complaints_bigscience_T0_3B_LORA_SEQ_2_SEQ_LM"
+ config = PeftConfig.from_pretrained(peft_model_id)
  model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
+ model = PeftModel.from_pretrained(model, peft_model_id)
  tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

  model = model.to(device)
  model.eval()
  inputs = tokenizer("Tweet text : @HondaCustSvc Your customer service has been horrible during the recall process. I will never purchase a Honda again. Label :", return_tensors="pt")

  with torch.no_grad():
      outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=10)
      print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
  'complaint'
Easy loading with Auto classes

If you have saved your adapter locally or on the Hub, you can leverage the AutoPeftModelForxxx classes and load any PEFT model with a single line of code:

- from peft import PeftConfig, PeftModel
- from transformers import AutoModelForCausalLM
+ from peft import AutoPeftModelForCausalLM

- peft_config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
- base_model_path = peft_config.base_model_name_or_path
- transformers_model = AutoModelForCausalLM.from_pretrained(base_model_path)
- peft_model = PeftModel.from_pretrained(transformers_model, peft_config)
+ peft_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")

Currently, supported auto classes are: AutoPeftModelForCausalLM, AutoPeftModelForSequenceClassification, AutoPeftModelForSeq2SeqLM, AutoPeftModelForTokenClassification, AutoPeftModelForQuestionAnswering and AutoPeftModelForFeatureExtraction. For other tasks (e.g. Whisper, StableDiffusion), you can load the model with:
- from peft import PeftModel, PeftConfig, AutoPeftModel
+ from peft import AutoPeftModel
- from transformers import WhisperForConditionalGeneration

- model_id = "smangrul/openai-whisper-large-v2-LORA-colab"
  peft_model_id = "smangrul/openai-whisper-large-v2-LORA-colab"

- peft_config = PeftConfig.from_pretrained(peft_model_id)
- model = WhisperForConditionalGeneration.from_pretrained(
-     peft_config.base_model_name_or_path, load_in_8bit=True, device_map="auto"
- )
- model = PeftModel.from_pretrained(model, peft_model_id)
+ model = AutoPeftModel.from_pretrained(peft_model_id)

Next steps

Now that you’ve seen how to train a model with one of the 🤗 PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in this quickstart: prepare a PeftConfig for a 🤗 PEFT method, and use get_peft_model to create a PeftModel from the configuration and base model. Then you can train it however you like!
Feel free to also take a look at the task guides if you’re interested in training a model with a 🤗 PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, and token classification.
Prompting

Training large pretrained language models is very time-consuming and compute-intensive. As they continue to grow in size, there is increasing interest in more efficient training methods such as prompting. Prompting primes a frozen pretrained model for a specific downstream task by including a text prompt that describes the task or even demonstrates an example of the task. With prompting, you can avoid fully training a separate model for each downstream task, and use the same frozen pretrained model instead. This is a lot easier because you can use the same model for several different tasks, and it is significantly more efficient to train and store a smaller set of prompt parameters than to train all the model’s parameters.

There are two categories of prompting methods:

hard prompts are manually handcrafted text prompts with discrete input tokens; the downside is that it requires a lot of effort to create a good prompt
soft prompts are learnable tensors concatenated with the input embeddings that can be optimized to a dataset; the downside is that they aren’t human readable because you aren’t matching these “virtual tokens” to the embeddings of a real word

This conceptual guide provides a brief overview of the soft prompt methods included in 🤗 PEFT: prompt tuning, prefix tuning, and P-tuning.
Prompt tuning

Only train and store a significantly smaller set of task-specific prompt parameters (image source).

Prompt tuning was developed for text classification tasks on T5 models, and all downstream tasks are cast as a text generation task. For example, sequence classification usually assigns a single class label to a sequence of text. By casting it as a text generation task, the tokens that make up the class label are generated. Prompts are added to the input as a series of tokens. Typically, the model parameters are fixed, which means the prompt tokens are also fixed by the model parameters.

The key idea behind prompt tuning is that prompt tokens have their own parameters that are updated independently. This means you can keep the pretrained model’s parameters frozen, and only update the gradients of the prompt token embeddings. The results are comparable to the traditional method of training the entire model, and prompt tuning performance scales as model size increases.

Take a look at Prompt tuning for causal language modeling for a step-by-step guide on how to train a model with prompt tuning.
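To make this concrete, here is a minimal sketch of a prompt tuning setup with 🤗 PEFT (not taken from this guide; the base model and hyperparameters are illustrative assumptions):

from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# Placeholder base model; any causal LM works the same way.
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

# Learn 8 virtual tokens, initialized from a natural-language description of the task.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
    num_virtual_tokens=8,
    tokenizer_name_or_path="bigscience/bloomz-560m",
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual token embeddings are trainable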
Prefix tuning

Optimize the prefix parameters for each task (image source).

Prefix tuning was designed for natural language generation (NLG) tasks on GPT models. It is very similar to prompt tuning; prefix tuning also prepends a sequence of task-specific vectors to the input that can be trained and updated while keeping the rest of the pretrained model’s parameters frozen. The main difference is that the prefix parameters are inserted in all of the model layers, whereas prompt tuning only adds the prompt parameters to the model input embeddings. The prefix parameters are also optimized by a separate feed-forward network (FFN) instead of training directly on the soft prompts, because training them directly causes instability and hurts performance. The FFN is discarded after updating the soft prompts.

As a result, the authors found that prefix tuning demonstrates comparable performance to fully finetuning a model, despite having 1000x fewer parameters, and it performs even better in low-data settings.

Take a look at Prefix tuning for conditional generation for a step-by-step guide on how to train a model with prefix tuning.
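A comparable sketch for prefix tuning (again illustrative; the base model and number of virtual tokens are assumptions, not values from this guide):

from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

# Learn a prefix of 20 virtual tokens that is injected into every layer, not just the input embeddings.
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()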
P-tuning

Prompt tokens can be inserted anywhere in the input sequence, and they are optimized by a prompt encoder (image source).

P-tuning is designed for natural language understanding (NLU) tasks and all language models. It is another variation of a soft prompt method; P-tuning also adds a trainable embedding tensor that can be optimized to find better prompts, and it uses a prompt encoder (a bidirectional long short-term memory network, or LSTM) to optimize the prompt parameters. Unlike prefix tuning though:

the prompt tokens can be inserted anywhere in the input sequence, and they aren’t restricted to only the beginning
the prompt tokens are only added to the input instead of being added to every layer of the model
introducing anchor tokens can improve performance because they indicate characteristics of a component in the input sequence

The results suggest that P-tuning is more efficient than manually crafting prompts, and it enables GPT-like models to compete with BERT-like models on NLU tasks.

Take a look at P-tuning for sequence classification for a step-by-step guide on how to train a model with P-tuning.
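And a corresponding sketch for P-tuning (illustrative assumptions only; the base model, label count, and hyperparameters are placeholders):

from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

# 20 virtual tokens optimized through a small prompt encoder with a 128-dim hidden size.
peft_config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,
    encoder_hidden_size=128,
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()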
LoRA for token classification

Low-Rank Adaptation (LoRA) is a reparametrization method that aims to reduce the number of trainable parameters with low-rank representations. The weight matrix is broken down into low-rank matrices that are trained and updated. All the pretrained model parameters remain frozen. After training, the low-rank matrices are added back to the original weights. This makes it more efficient to store and train a LoRA model because there are significantly fewer parameters.

💡 Read LoRA: Low-Rank Adaptation of Large Language Models to learn more about LoRA.

This guide will show you how to train a roberta-large model with LoRA on the BioNLP2004 dataset for token classification.

Before you begin, make sure you have all the necessary libraries installed:

!pip install -q peft transformers datasets evaluate seqeval
Setup

Let’s start by importing all the necessary libraries you’ll need:

🤗 Transformers for loading the base roberta-large model and tokenizer, and handling the training loop
🤗 Datasets for loading and preparing the bionlp2004 dataset for training
🤗 Evaluate for evaluating the model’s performance
🤗 PEFT for setting up the LoRA configuration and creating the PEFT model

from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    TrainingArguments,
    Trainer,
)
from peft import get_peft_config, PeftModel, PeftConfig, get_peft_model, LoraConfig, TaskType
import evaluate
import torch
import numpy as np

model_checkpoint = "roberta-large"
lr = 1e-3
batch_size = 16
num_epochs = 10
Load dataset and metric

The BioNLP2004 dataset includes tokens and tags for biological structures like DNA, RNA, and proteins. Load the dataset:

bionlp = load_dataset("tner/bionlp2004")
bionlp["train"][0]
{
    "tokens": [
        "Since",
        "HUVECs",
        "released",
        "superoxide",
        "anions",
        "in",
        "response",
        "to",
        "TNF",
        ",",
        "and",
        "H2O2",
        "induces",
        "VCAM-1",
        ",",
        "PDTC",
        "may",
        "act",
        "as",
        "a",
        "radical",
        "scavenger",
        ".",
    ],
    "tags": [0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0],
}

The tag values are defined in the label ids dictionary. The letter that prefixes each label indicates the token position: B is for the first token of an entity, I is for a token inside the entity, and O is for a token that is not part of an entity.

{
    "O": 0,
    "B-DNA": 1,
    "I-DNA": 2,
    "B-protein": 3,
    "I-protein": 4,
    "B-cell_type": 5,
    "I-cell_type": 6,
    "B-cell_line": 7,
    "I-cell_line": 8,
    "B-RNA": 9,
    "I-RNA": 10,
}
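To make the example above easier to read, you can map the numeric tags back to their string labels (a quick illustrative check, not part of the original guide):

id2tag = {
    0: "O", 1: "B-DNA", 2: "I-DNA", 3: "B-protein", 4: "I-protein",
    5: "B-cell_type", 6: "I-cell_type", 7: "B-cell_line", 8: "I-cell_line",
    9: "B-RNA", 10: "I-RNA",
}

example = bionlp["train"][0]
# Pair each word with its decoded tag; "HUVECs" is tagged B-cell_line and "VCAM-1" is tagged B-protein.
print([(token, id2tag[tag]) for token, tag in zip(example["tokens"], example["tags"])])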
Then load the seqeval framework, which includes several metrics - precision, accuracy, F1, and recall - for evaluating sequence labeling tasks.

seqeval = evaluate.load("seqeval")

Now you can write an evaluation function to compute the metrics from the model predictions and labels, and return the precision, recall, F1, and accuracy scores:

label_list = [
    "O",
    "B-DNA",
    "I-DNA",
    "B-protein",
    "I-protein",
    "B-cell_type",
    "I-cell_type",
    "B-cell_line",
    "I-cell_line",
    "B-RNA",
    "I-RNA",
]


def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
Preprocess dataset

Initialize a tokenizer and make sure you set is_split_into_words=True because the text sequence has already been split into words. However, this doesn’t mean it is tokenized yet (even though it may look like it!), and you’ll need to further tokenize the words into subwords.

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, add_prefix_space=True)

You’ll also need to write a function to:

Map each token to their respective word with the word_ids method.
Ignore the special tokens by setting them to -100.
Label the first token of a given entity.

def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs

Use map to apply the tokenize_and_align_labels function to the dataset:

tokenized_bionlp = bionlp.map(tokenize_and_align_labels, batched=True)

Finally, create a data collator to pad the examples to the longest length in a batch:

data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
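If you want to sanity-check the alignment (an optional step, not part of the original guide), you can print the subword tokens of one processed example next to its labels; only the first subword of each word keeps a real label, and everything else is -100:

sample = tokenized_bionlp["train"][0]
tokens = tokenizer.convert_ids_to_tokens(sample["input_ids"])

# Special tokens and non-initial subwords are labeled -100 so they are ignored by the loss.
for token, label in zip(tokens, sample["labels"]):
    print(token, label)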
Train

Now you’re ready to create a PeftModel. Start by loading the base roberta-large model, the number of expected labels, and the id2label and label2id dictionaries:

id2label = {
    0: "O",
    1: "B-DNA",
    2: "I-DNA",
    3: "B-protein",
    4: "I-protein",
    5: "B-cell_type",
    6: "I-cell_type",
    7: "B-cell_line",
    8: "I-cell_line",
    9: "B-RNA",
    10: "I-RNA",
}
label2id = {
    "O": 0,
    "B-DNA": 1,
    "I-DNA": 2,
    "B-protein": 3,
    "I-protein": 4,
    "B-cell_type": 5,
    "I-cell_type": 6,
    "B-cell_line": 7,
    "I-cell_line": 8,
    "B-RNA": 9,
    "I-RNA": 10,
}

model = AutoModelForTokenClassification.from_pretrained(
    model_checkpoint, num_labels=11, id2label=id2label, label2id=label2id
)

Define the LoraConfig with:

task_type, token classification (TaskType.TOKEN_CLS)
r, the dimension of the low-rank matrices
lora_alpha, scaling factor for the weight matrices
lora_dropout, dropout probability of the LoRA layers
bias, set to all to train all bias parameters

💡 The weight matrix is scaled by lora_alpha/r, and a higher lora_alpha value assigns more weight to the LoRA activations. For performance, we recommend setting bias to None first, and then lora_only, before trying all.
peft_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS, inference_mode=False, r=16, lora_alpha=16, lora_dropout=0.1, bias="all"
)

Pass the base model and peft_config to the get_peft_model() function to create a PeftModel. You can check out how much more efficient training the PeftModel is compared to fully training the base model by printing out the trainable parameters:

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
"trainable params: 1855499 || all params: 355894283 || trainable%: 0.5213624069370061"

From the 🤗 Transformers library, create a TrainingArguments class and specify where you want to save the model to, the training hyperparameters, how to evaluate the model, and when to save the checkpoints:

training_args = TrainingArguments(
    output_dir="roberta-large-lora-token-classification",
    learning_rate=lr,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_epochs,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

Pass the model, TrainingArguments, datasets, tokenizer, data collator and evaluation function to the Trainer class. The Trainer handles the training loop for you, and when you’re ready, call train to begin!

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_bionlp["train"],
    eval_dataset=tokenized_bionlp["validation"],
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
Share model

Once training is complete, you can store and share your model on the Hub if you’d like. Log in to your Hugging Face account and enter your token when prompted:

from huggingface_hub import notebook_login

notebook_login()

Upload the model to a specific model repository on the Hub with the push_to_hub method:

model.push_to_hub("your-name/roberta-large-lora-token-classification")

Inference

To use your model for inference, load the configuration and model:

peft_model_id = "stevhliu/roberta-large-lora-token-classification"
config = PeftConfig.from_pretrained(peft_model_id)
inference_model = AutoModelForTokenClassification.from_pretrained(
    config.base_model_name_or_path, num_labels=11, id2label=id2label, label2id=label2id
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(inference_model, peft_model_id)
Get some text to tokenize:

text = "The activation of IL-2 gene expression and NF-kappa B through CD28 requires reactive oxygen production by 5-lipoxygenase."
inputs = tokenizer(text, return_tensors="pt")

Pass the inputs to the model, and print out the model prediction for each token:

with torch.no_grad():
    logits = model(**inputs).logits

tokens = inputs.tokens()
predictions = torch.argmax(logits, dim=2)

for token, prediction in zip(tokens, predictions[0].numpy()):
    print((token, model.config.id2label[prediction]))
("<s>", "O")
("The", "O")
("Ġactivation", "O")
("Ġof", "O")
("ĠIL", "B-DNA")
("-", "O")
("2", "I-DNA")
("Ġgene", "O")
("Ġexpression", "O")
("Ġand", "O")
("ĠNF", "B-protein")
("-", "O")
("k", "I-protein")
("appa", "I-protein")
("ĠB", "I-protein")
("Ġthrough", "O")
("ĠCD", "B-protein")
("28", "I-protein")
("Ġrequires", "O")
("Ġreactive", "O")
("Ġoxygen", "O")
("Ġproduction", "O")
("Ġby", "O")
("Ġ5", "B-protein")
("-", "O")
("lip", "I-protein")
("oxy", "I-protein")
("gen", "I-protein")
("ase", "I-protein")
(".", "O")
("</s>", "O")
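The output above is per subword token. As an optional post-processing sketch (not part of the original guide), you could merge consecutive non-O predictions back into entity spans; this assumes the RoBERTa BPE convention where "Ġ" marks the start of a new word:

entities = []
current_text, current_label = "", None

for token, prediction in zip(tokens, predictions[0].numpy()):
    label = model.config.id2label[prediction]
    if label == "O" or token in ("<s>", "</s>"):
        # Close any entity that was being built.
        if current_label is not None:
            entities.append((current_text.strip(), current_label))
            current_text, current_label = "", None
        continue
    prefix, entity_type = label.split("-", maxsplit=1)
    word = token.replace("Ġ", " ")
    if prefix == "B" or entity_type != current_label:
        if current_label is not None:
            entities.append((current_text.strip(), current_label))
        current_text, current_label = word, entity_type
    else:
        current_text += word

if current_label is not None:
    entities.append((current_text.strip(), current_label))

print(entities)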
Configuration

The configuration classes store the configuration of a PeftModel, PEFT adapter models, and the configurations of PrefixTuning, PromptTuning, and PromptEncoder. They contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, the type of task to perform, and model configurations like the number of layers and number of attention heads.

PeftConfigMixin

class peft.utils.config.PeftConfigMixin
( peft_type: typing.Optional[peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None )

Parameters

peft_type (Union[PeftType, str]) — The type of Peft method to use.

This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all PEFT adapter models. This class inherits from PushToHubMixin which contains the methods to push your model to the Hub. The method save_pretrained will save the configuration of your adapter model in a directory. The method from_pretrained will load the configuration of your adapter model from a directory.

from_json_file
( path_json_file, **kwargs )

Parameters

path_json_file (str) — The path to the json file.

Loads a configuration file from a json file.
from_pretrained
( pretrained_model_name_or_path, subfolder = None, **kwargs )

Parameters

pretrained_model_name_or_path (str) — The directory or the Hub repository id where the configuration is saved.
kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the child class initialization.

This method loads the configuration of your adapter model from a directory.

save_pretrained
( save_directory, **kwargs )

Parameters

save_directory (str) — The directory where the configuration will be saved.
kwargs (additional keyword arguments, optional) — Additional keyword arguments passed along to the push_to_hub method.

This method saves the configuration of your adapter model in a directory.

PeftConfig

class peft.PeftConfig
( peft_type: typing.Union[str, peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.config.TaskType] = None, inference_mode: bool = False )

Parameters

peft_type (Union[PeftType, str]) — The type of Peft method to use.
task_type (Union[TaskType, str]) — The type of task to perform.
inference_mode (bool, defaults to False) — Whether to use the Peft model in inference mode.

This is the base configuration class to store the configuration of a PeftModel.
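For orientation (a rough usage sketch, not part of the reference itself), these methods are typically exercised through a concrete subclass such as LoraConfig; the directory name below is a placeholder:

from peft import LoraConfig, PeftConfig

# Any concrete PEFT config (here LoRA) inherits save_pretrained/from_pretrained from PeftConfigMixin.
config = LoraConfig(task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=32, lora_dropout=0.1)
config.save_pretrained("my_adapter_config")  # writes adapter_config.json into the directory

# Reload it later from the same directory (or a Hub repository id).
reloaded = PeftConfig.from_pretrained("my_adapter_config")
print(reloaded.peft_type, reloaded.task_type)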
PromptLearningConfig

class peft.PromptLearningConfig
( peft_type: typing.Union[str, peft.utils.config.PeftType] = None, auto_mapping: typing.Optional[dict] = None, base_model_name_or_path: str = None, revision: str = None, task_type: typing.Union[str, peft.utils.config.TaskType] = None, inference_mode: bool = False, num_virtual_tokens: int = None, token_dim: int = None, num_transformer_submodules: typing.Optional[int] = None, num_attention_heads: typing.Optional[int] = None, num_layers: typing.Optional[int] = None )

Parameters

num_virtual_tokens (int) — The number of virtual tokens to use.
token_dim (int) — The hidden embedding dimension of the base transformer model.
num_transformer_submodules (int) — The number of transformer submodules in the base transformer model.
num_attention_heads (int) — The number of attention heads in the base transformer model.
num_layers (int) — The number of layers in the base transformer model.

This is the base configuration class to store the configuration of PrefixTuning, PromptEncoder, or PromptTuning.
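As a quick illustration (an assumption-laden sketch, not part of the reference), the prompt-based configs are subclasses of PromptLearningConfig, so shared fields like num_virtual_tokens are available on all of them:

from peft import PrefixTuningConfig, PromptLearningConfig

config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=20)

# PrefixTuningConfig inherits the shared prompt-learning fields.
print(isinstance(config, PromptLearningConfig))  # True
print(config.num_virtual_tokens, config.peft_type, config.task_type)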
LoRA for semantic similarity tasks

Low-Rank Adaptation (LoRA) is a reparametrization method that aims to reduce the number of trainable parameters with low-rank representations. The weight matrix is broken down into low-rank matrices that are trained and updated. All the pretrained model parameters remain frozen. After training, the low-rank matrices are added back to the original weights. This makes it more efficient to store and train a LoRA model because there are significantly fewer parameters.

💡 Read LoRA: Low-Rank Adaptation of Large Language Models to learn more about LoRA.

In this guide, we’ll be using a LoRA script to fine-tune an intfloat/e5-large-v2 model on the smangrul/amazon_esci dataset for semantic similarity tasks. Feel free to explore the script to learn how things work in greater detail!

Setup

Start by installing 🤗 PEFT from source, and then navigate to the directory containing the training script for fine-tuning the embedding model with LoRA:

cd peft/examples/feature_extraction
Install all the necessary libraries with:

pip install -r requirements.txt

Setup

Let’s start by importing all the necessary libraries you’ll need:

🤗 Transformers for loading the intfloat/e5-large-v2 model and tokenizer
🤗 Accelerate for the training loop
🤗 Datasets for loading and preparing the smangrul/amazon_esci dataset for training and inference
🤗 Evaluate for evaluating the model’s performance
🤗 PEFT for setting up the LoRA configuration and creating the PEFT model
🤗 huggingface_hub for uploading the trained model to the HF hub
hnswlib for creating the search index and doing fast approximate nearest neighbor search

It is assumed that PyTorch with CUDA support is already installed.
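Since the guide relies on hnswlib for the search index, here is a minimal sketch of how such an index is typically built and queried (illustrative only; the embeddings, dimension, and index parameters are placeholder assumptions, not values taken from the script):

import numpy as np
import hnswlib

dim = 1024  # e5-large-v2 produces 1024-dim embeddings
product_embeddings = np.random.rand(10_000, dim).astype(np.float32)  # stand-in for real embeddings

# Build an approximate nearest neighbor index over the product embeddings.
index = hnswlib.Index(space="cosine", dim=dim)
index.init_index(max_elements=product_embeddings.shape[0], ef_construction=100, M=16)
index.add_items(product_embeddings, np.arange(product_embeddings.shape[0]))
index.set_ef(50)  # trade-off between recall and query speed

# Query with a (placeholder) query embedding; returns neighbor ids and cosine distances.
query_embedding = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(query_embedding, k=5)
print(labels, distances)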
Train

Launch the training script with accelerate launch and pass your hyperparameters along with the --use_peft argument to enable LoRA.

This guide uses the following LoraConfig:

peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    bias="none",
    task_type=TaskType.FEATURE_EXTRACTION,
    target_modules=["key", "query", "value"],
)

Here’s what a full set of script arguments may look like when running in Colab on a V100 GPU with standard RAM:

accelerate launch \
  --mixed_precision="fp16" \
  peft_lora_embedding_semantic_search.py \
  --dataset_name="smangrul/amazon_esci" \
  --max_length=70 \
  --model_name_or_path="intfloat/e5-large-v2" \
  --per_device_train_batch_size=64 \
  --per_device_eval_batch_size=128 \
  --learning_rate=5e-4 \
  --weight_decay=0.0 \
  --num_train_epochs 3 \
  --gradient_accumulation_steps=1 \
  --output_dir="results/peft_lora_e5_ecommerce_semantic_search_colab" \
  --s