HuggingFace-OLMo Model Guide

Overview:
OLMo (Open Language Models) is a series of models designed to enable the science of language models. The models are trained on the Dolma dataset and are released together with their training code, checkpoints, logs, and training details.

Model Details:
The core models in this batch include OLMo 1B, OLMo 7B, and OLMo 7B Twin 2T. Each model has specific training token counts, layers, hidden sizes, attention heads, and context lengths.

Loading Specific Model Revision:
To load a specific model revision with HuggingFace, use the `revision` argument when loading the model.

Accessing Revisions for Models:
Revisions and branches for the models can also be accessed using the huggingface_hub library. The revisions.txt file lists all revisions/branches for the models.

Uses:
The OLMo models can be used for various natural language tasks and applications. Inference with OLMo can be set up quickly by installing the ai2-olmo package.

Fine-Tuning:
Model fine-tuning can be done from the final checkpoint or intermediate checkpoints using available recipes in the OLMo repository or AI2’s Open Instruct repository.

Evaluation:
Results for core model tasks and full average evaluation scores are available for the 7B and 1B models.

Model Details:
For architectural and hyperparameter details, the OLMo 7B and OLMo 1B models are compared with peer models.

Environmental Impact:
Information on the environmental impact of training OLMo 7B and OLMo 7B Twin models is detailed, including GPU type, power consumption, carbon intensity, and carbon emissions.

Bias, Risks, and Limitations:
The guide highlights the potential biases, risks, and limitations associated with using the OLMo models and the importance of considering these factors.

Citation:
The model’s citation in BibTeX and APA formats is provided for reference.

Model Card Contact:
For errors in the model card, the contact information for Nathan or Akshita is provided.

For any further updates and model specifications, please refer to the official website of HuggingFace.

# HuggingFace OLMo Model Manual

## Introduction
OLMo is a series of Open Language Models designed by the Allen Institute for AI (AI2) to facilitate the study and development of language models. The OLMo models are trained on the Dolma dataset and have different sizes and configurations to suit various applications.

## Model Details
The core models released in this batch include OLMo 1B, OLMo 7B, and OLMo 7B Twin 2T. These models differ in terms of the number of training tokens, layers, hidden size, attention heads, and context length. Checkpoints are released for these models for every 1000 training steps.

## Model Revisions
The OLMo 7B model has several revisions released in the HuggingFace repository, each corresponding to a different number of training tokens. These include the base OLMo 7B model, the non-annealed OLMo 7B model, OLMo 7B-2T, and OLMo-7B-Twin-2T. You can load a specific model revision with HuggingFace by adding the `revision` argument.

## Model Description
OLMo models are developed by the Allen Institute for AI (AI2) and are supported by various organizations including Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), and UW. The models are autoregressive language models designed for English language processing and are released under the Apache 2.0 license.

## Model Sources
The model sources are available in the HuggingFace repository for OLMo 1B and OLMo 7B. Users can access all revisions for the models using code snippets provided in this manual.

## Uses
OLMo models can be used for various NLP tasks including language modeling, text generation, and more.

## Inference
To quickly start using OLMo for inference, install the ai2-olmo package and use HuggingFace’s transformers library to load the desired model and generate text.

## Fine-tuning
To fine-tune the OLMo models, scripts and tools are provided in the OLMo repository and in AI2’s Open Instruct repository.

## Evaluation
The performance of core OLMo models in various tasks is evaluated with comparison to other language models.

## Model Details (Data, Architecture, Hyperparameters)
Detailed information about the training data, architecture, and hyperparameters of OLMo models is provided in the HuggingFace repository.

## Environmental Impact
Information about the environmental impact of training OLMo models on different GPU types is included in the repository.

## Bias, Risks, and Limitations
This section highlights the importance of considering the risks and biases associated with using large language models like OLMo.

## Citation
The provided BibTeX and APA citations can be used to cite the OLMo models in research or publications.

## Model Card Contact
For errors in the model card or any other inquiries, contact the provided email addresses.

For detailed model usage, installation, and code examples, refer to the official [OLMo HuggingFace repository](https://huggingface.co/allenai/OLMo-7B) and documentation.

[OLMo logo]

OLMo is a series of Open Language Models designed to enable the science of language models.
The OLMo models are trained on the Dolma dataset.
We release all code, checkpoints, logs (coming soon), and details involved in training these models.



Model Details

The core models released in this batch are the following:

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|---|---|---|---|---|---|
| OLMo 1B | 3 Trillion | 16 | 2048 | 16 | 2048 |
| OLMo 7B | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
| OLMo 7B Twin 2T | 2 Trillion | 32 | 4096 | 32 | 2048 |

We are releasing many checkpoints for these models, for every 1000 training steps.
The naming convention is `step1000-tokens4B`.
In particular, we focus on four revisions of the 7B models:

| Name | HF Repo | Model Revision | Tokens | Note |
|---|---|---|---|---|
| OLMo 7B | allenai/OLMo-7B | main | 2.5T | The base OLMo 7B model |
| OLMo 7B (not annealed) | allenai/OLMo-7B | step556000-tokens2460B | 2.5T | Learning rate not annealed to 0 |
| OLMo 7B-2T | allenai/OLMo-7B | step452000-tokens2000B | 2T | OLMo checkpoint at 2T tokens |
| OLMo-7B-Twin-2T | allenai/OLMo-7B-Twin-2T | main | 2T | Twin version on different hardware |

To load a specific model revision with HuggingFace, simply add the `revision` argument:

import hf_olmo  # registers the OLMo architecture with transformers
from transformers import AutoModelForCausalLM

# Load the intermediate checkpoint saved at step 20,000 (~84B tokens).
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", revision="step20000-tokens84B")

All revisions/branches are listed in the file revisions.txt.
Or, you can access all the revisions for the models via the following code snippet:

from huggingface_hub import list_repo_refs

# Each branch corresponds to one released revision/checkpoint.
out = list_repo_refs("allenai/OLMo-1B")
branches = [b.name for b in out.branches]

A few revisions were lost due to an error, but the vast majority are present.
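For example, building on the step/token naming convention above, a small helper (hypothetical, not part of the official tooling) could pick the released checkpoint closest to a target token budget:

import re
from huggingface_hub import list_repo_refs

def closest_checkpoint(repo_id: str, target_tokens_b: int) -> str:
    """Return the branch whose 'stepN-tokensMB' token count is closest to target_tokens_b (in billions)."""
    best = None
    for branch in list_repo_refs(repo_id).branches:
        m = re.fullmatch(r"step(\d+)-tokens(\d+)B", branch.name)
        if m is None:
            continue  # skip 'main' and any non-checkpoint branches
        tokens_b = int(m.group(2))
        if best is None or abs(tokens_b - target_tokens_b) < abs(best[0] - target_tokens_b):
            best = (tokens_b, branch.name)
    if best is None:
        raise ValueError(f"no step*-tokens*B branches found in {repo_id}")
    return best[1]

print(closest_checkpoint("allenai/OLMo-1B", 500))  # revision closest to ~500B tokens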



Model Description

  • Developed by: Allen Institute for AI (AI2)
  • Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW
  • Model type: a Transformer style autoregressive language model.
  • Language(s) (NLP): English
  • License: The code and model are released under Apache 2.0.
  • Contact: Technical inquiries: olmo at allenai dot org. Press: press at allenai dot org
  • Date cutoff: Feb./March 2023 based on Dolma dataset version.



Model Sources



Uses



Inference

Quickly get inference running with the following required installation:

pip install ai2-olmo

Now, proceed as usual with HuggingFace:

import hf_olmo  # registers the OLMo architecture with transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)

# Sample up to 100 new tokens with top-k / nucleus sampling.
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'

Alternatively, with the pipeline abstraction:

import hf_olmo

from transformers import pipeline
olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'

Or, you can make this slightly faster by quantizing the model, e.g. AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True) (requires bitsandbytes).
The quantized model is more sensitive to input data types and CUDA handling, so it is recommended to pass the inputs explicitly, e.g. as inputs.input_ids.to('cuda'), to avoid potential issues.
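Putting the two tips above together, a minimal sketch (assuming a CUDA device is available and bitsandbytes is installed) might look like:

import hf_olmo  # registers the OLMo architecture with transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit quantized load, as suggested above (requires bitsandbytes).
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")

inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
# Pass only the token ids, moved to the GPU, as recommended for the quantized model.
response = olmo.generate(inputs.input_ids.to('cuda'), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])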

Note, you may see the following error if ai2-olmo is not installed correctly; it is caused by an internal package-name check. We’ll update the code soon to make this error clearer.

    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`



Fine-tuning

Model fine-tuning can be done from the final checkpoint (the main revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.

  1. Fine-tune with the OLMo repository:
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state

For more documentation, see the GitHub readme. A rough sketch of preparing the expected data files follows this list.

  2. Further fine-tuning support is being developed in AI2’s Open Instruct repository. Details are here.
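As noted above, the torchrun command in recipe (1) expects pre-tokenized data files. Purely as an illustrative sketch, assuming (not verified against the repository) that input_ids.npy holds flat token IDs and label_mask.npy holds a boolean mask marking which positions contribute to the loss, preparing a single example might look roughly like this; for real runs, use the data-preparation tooling documented in the OLMo repository:

# Hypothetical data-preparation sketch; the exact on-disk format expected by
# scripts/train.py is defined in the OLMo repository and may differ.
import numpy as np
import hf_olmo  # registers the OLMo tokenizer with transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B")

prompt = "Question: What is language modeling?\nAnswer:"
completion = " Predicting the next token given the previous ones."

prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
completion_ids = tokenizer(completion, add_special_tokens=False).input_ids

input_ids = np.array(prompt_ids + completion_ids, dtype=np.int64)
# Mask out the prompt so only the completion contributes to the loss.
label_mask = np.array([False] * len(prompt_ids) + [True] * len(completion_ids), dtype=bool)

np.save("input_ids.npy", input_ids)
np.save("label_mask.npy", label_mask)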



Evaluation

Core model results for the 7B model are found below.

| | Llama 7B | Llama 2 7B | Falcon 7B | MPT 7B | OLMo 7B (ours) |
|---|---|---|---|---|---|
| arc_challenge | 44.5 | 39.8 | 47.5 | 46.5 | 48.5 |
| arc_easy | 57.0 | 57.7 | 70.4 | 70.5 | 65.4 |
| boolq | 73.1 | 73.5 | 74.6 | 74.2 | 73.4 |
| copa | 85.0 | 87.0 | 86.0 | 85.0 | 90 |
| hellaswag | 74.5 | 74.5 | 75.9 | 77.6 | 76.4 |
| openbookqa | 49.8 | 48.4 | 53.0 | 48.6 | 50.2 |
| piqa | 76.3 | 76.4 | 78.5 | 77.3 | 78.4 |
| sciq | 89.5 | 90.8 | 93.9 | 93.7 | 93.8 |
| winogrande | 68.2 | 67.3 | 68.9 | 69.9 | 67.9 |
| Core tasks average | 68.7 | 68.4 | 72.1 | 71.5 | 71.6 |
| truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33 | 36.0 |
| MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 |
| GSM8k (mixed eval.) | 10.0 (8shot CoT) | 12.0 (8shot CoT) | 4.0 (5 shot) | 4.5 (5 shot) | 8.5 (8shot CoT) |
| Full average | 57.8 | 59.3 | 59.2 | 59.3 | 59.8 |

And for the 1B model:

| task | random | StableLM 2 1.6b* | Pythia 1B | TinyLlama 1.1B | OLMo 1B (ours) |
|---|---|---|---|---|---|
| arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 |
| arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 |
| boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 |
| copa | 50 | 84 | 72 | 78 | 79 |
| hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 |
| openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 |
| piqa | 50 | 74 | 69.1 | 71.1 | 73.7 |
| sciq | 25 | 94.7 | 86 | 90.5 | 88.1 |
| winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 |
| Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 |

*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not yet disclosed the data StableLM was trained on, making comparisons with other efforts challenging.



Model Details



Data

For training data details, please see the Dolma documentation.



Architecture

OLMo 7B architecture with peer models for comparison.

| | OLMo 7B | Llama 2 7B | OpenLM 7B | Falcon 7B | PaLM 8B |
|---|---|---|---|---|---|
| d_model | 4096 | 4096 | 4096 | 4544 | 4096 |
| num heads | 32 | 32 | 32 | 71 | 16 |
| num layers | 32 | 32 | 32 | 32 | 32 |
| MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 |
| LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN |
| pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE |
| attention variant | full | GQA | full | MQA | MQA |
| biases | none | none | in LN only | in LN only | none |
| block type | sequential | sequential | sequential | parallel | parallel |
| activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU |
| sequence length | 2048 | 4096 | 2048 | 2048 | 2048 |
| batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 |
| batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M |
| weight tying | no | no | no | no | yes |



Hyperparameters

AdamW optimizer parameters are shown below.

| Size | Peak LR | Betas | Epsilon | Weight Decay |
|---|---|---|---|---|
| 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 |
| 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 |

Optimizer settings comparison with peer models.

| | OLMo 7B | Llama 2 7B | OpenLM 7B | Falcon 7B |
|---|---|---|---|---|
| warmup steps | 5000 | 2000 | 2000 | 1000 |
| peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 |
| minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 |
| weight decay | 0.1 | 0.1 | 0.1 | 0.1 |
| beta1 | 0.9 | 0.9 | 0.9 | 0.99 |
| beta2 | 0.95 | 0.95 | 0.95 | 0.999 |
| epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 |
| LR schedule | linear | cosine | cosine | cosine |
| gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 |
| gradient reduce dtype | FP32 | FP32 | FP32 | BF16 |
| optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 |
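As an illustration of the OLMo 7B settings from the tables above, here is a minimal PyTorch sketch (the betas, epsilon, weight decay, warmup length, peak/minimum LR, and linear schedule come from the comparison table; the total step count and the dummy parameter are assumptions for illustration only):

import torch

# Dummy parameter standing in for the model weights.
params = [torch.nn.Parameter(torch.zeros(10))]

peak_lr, min_lr = 3.0e-4, 3.0e-5           # OLMo 7B values from the table above
warmup_steps, total_steps = 5000, 500_000  # total_steps is an illustrative assumption

optimizer = torch.optim.AdamW(params, lr=peak_lr, betas=(0.9, 0.95), eps=1.0e-5, weight_decay=0.1)

def linear_warmup_then_linear_decay(step: int) -> float:
    """Multiplier on peak_lr: linear warmup, then linear decay toward min_lr."""
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return max(min_lr / peak_lr, 1.0 - progress * (1.0 - min_lr / peak_lr))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, linear_warmup_then_linear_decay)
# During training, gradients would also be clipped to a global norm of 1.0, per the table.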



Environmental Impact

OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact is given below. Further details are available in the paper.

| | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/kWh) | Carbon Emissions (tCO₂eq) |
|---|---|---|---|---|
| OLMo 7B Twin | MI250X (LUMI supercomputer) | 135 MWh | 0* | 0* |
| OLMo 7B | A100-40GB (MosaicML) | 104 MWh | 0.656 | 75.05 |
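As a rough sanity check on the OLMo 7B row (the ~1.1 overhead factor is our assumption, roughly a datacenter PUE, introduced only to reconcile the reported numbers; it is not stated in this card):

# Reconstructing the reported OLMo 7B emissions from the table above.
gpu_energy_mwh = 104        # MWh drawn by the GPUs
carbon_intensity = 0.656    # kg CO2e per kWh
assumed_overhead = 1.1      # assumed datacenter overhead (PUE); not stated in this card

emissions_t = gpu_energy_mwh * 1000 * carbon_intensity * assumed_overhead / 1000
print(f"{emissions_t:.2f} tCO2eq")  # ~75.05, matching the reported value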



Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

In addition, many statements generated by OLMo, as with any LLM, will often be inaccurate, so facts should be verified.



Citation

BibTeX:

@article{Groeneveld2023OLMo,
  title={OLMo: Accelerating the Science of Language Models},
  author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh},
  journal={Preprint},
  year={2024}
}

APA:

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.



Model Card Contact

For errors in this model card, contact Nathan or Akshita, {nathanl, akshitab} at allenai dot org.

The `<div>` (division) tag in HTML is a container tag which defines a division or section in an HTML document. It is a versatile and commonly used tag which can be used to create space between different sections of a document or to group elements together.

In the given HTML, the `<div>` tag is used for various purposes in the model card for OLMo (Open Language Models). Let’s break down the use cases of the `<div>` tag in the provided HTML:

1. Organizing Content: The `<div>` tag is used to organize the content of the model card. It provides a way to group related elements together, making the HTML document more structured and organized.

2. Creating Tables: `<div>` containers wrap the tables within the document, allowing tabular data to be presented in a readable and structured manner.

3. Displaying Code Snippets: A `<div>` with the class “max-w-full overflow-auto” wraps code snippets and programming examples, allowing horizontal scrolling of content within the container.

4. Model Details: A `<div>` wraps the model details such as size, training tokens, layers, hidden size, attention heads, and context length presented in tabular form.

5. Loading a Specific Model Revision: A `<div>` wraps the code snippets for loading a specific model revision with HuggingFace, including a demonstration of usage with Python code.

6. Environmental Impact: A `<div>` wraps the environmental impact details such as GPU type, power consumption from GPUs, carbon intensity, and carbon emissions in tabular form.

7. Citation: A `<div>` wraps the citations for the model in various formats, including BibTeX and APA.

8. Model Card Contact: A `<div>` holds the contact information for reporting errors or issues with the model card.

This use of the `<div>` tag demonstrates its role in creating a structured, well-organized, and easily navigable document that presents detailed information related to the OLMo language models. It is a key element in defining the layout and structure of the content presented in the model card.
