tiiuae-falcon-7b
Version: 10
Last updated: August 2024

Description

Falcon-7B is a 7-billion-parameter causal decoder-only language model developed by TII. It was trained on 1,500 billion tokens of the RefinedWeb dataset, enhanced with curated corpora, and is made available under the Apache 2.0 license. It outperforms comparable open-source models and features an architecture optimized for inference.

However, Falcon-7B is a raw, pretrained model that should be further finetuned for most use cases. It is recommended for research on large language models and as a foundation for further specialization and finetuning for specific tasks. It should not be used in production without an adequate assessment of risks and mitigations. The model carries the biases commonly encountered online and was trained on English and French data only.

The training details below cover the training data, training procedure, and hyperparameters used. Falcon-7B was trained on 384 A100 40GB GPUs using a 2D parallelism strategy combined with ZeRO. Its architecture adapts the GPT-3 design with rotary positional embeddings, multiquery attention, and FlashAttention.
The above summary was generated using ChatGPT. Review the original model card to understand the data used to train the model, evaluation metrics, license, intended uses, limitations and bias before using the model. Some of the content has been made available below.

Training Details

Training Data

Falcon-7B was trained on 1,500B tokens of RefinedWeb, a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile (Gao et al., 2020).
| Data source | Fraction | Tokens | Sources |
| --- | --- | --- | --- |
| RefinedWeb-English | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-7B/40B tokenizer.

Training Procedure

Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
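The 2D layout above can be sketched with a little arithmetic: 2 pipeline stages times 192 data-parallel replicas accounts for all 384 GPUs. The mapping below from a flat rank to (pipeline stage, data-parallel rank) is illustrative only; the actual rank layout used in training is not stated in this card.

```python
PP, DP = 2, 192         # pipeline- and data-parallel degrees from the card
WORLD_SIZE = PP * DP    # 384 A100 GPUs in total

def rank_to_coords(rank, pp=PP, dp=DP):
    """Map a flat GPU rank to (pipeline_stage, data_parallel_rank),
    assuming data-parallel ranks vary fastest (an assumed layout)."""
    assert 0 <= rank < pp * dp
    return rank // dp, rank % dp

print(WORLD_SIZE)           # 384
print(rank_to_coords(0))    # (0, 0)
print(rank_to_coords(383))  # (1, 191)
```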
| Hyperparameter | Value | Comment |
| --- | --- | --- |
| Precision | bfloat16 | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
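The learning-rate schedule (4B-token warm-up to 6e-4, then cosine decay to 1.2e-5 over the 1,500B-token run) can be sketched as below. The card does not state the warm-up shape, so a linear warm-up is assumed here.

```python
import math

def falcon_lr(tokens_seen, peak_lr=6e-4, min_lr=1.2e-5,
              warmup_tokens=4e9, total_tokens=1.5e12):
    """Illustrative schedule: linear warm-up over the first 4B tokens
    (assumed shape), then cosine decay from 6e-4 down to 1.2e-5."""
    if tokens_seen < warmup_tokens:
        return peak_lr * tokens_seen / warmup_tokens
    progress = min((tokens_seen - warmup_tokens) / (total_tokens - warmup_tokens), 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(falcon_lr(4e9))     # 0.0006 (peak, right after warm-up)
print(falcon_lr(1.5e12))  # 1.2e-05 (floor at end of training)
```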

Speeds, Sizes, Times

Training happened in early March 2023 and took about two weeks.

Evaluation

Paper coming soon. See the OpenLLM Leaderboard for early results.

Technical Specifications

Model Architecture and Objective

Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
| Hyperparameter | Value | Comment |
| --- | --- | --- |
| Layers | 32 | |
| d_model | 4544 | Increased to compensate for multiquery |
| head_dim | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
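A quick sanity check shows how these hyperparameters land near the advertised 7B parameters. The accounting below assumes a GPT-style MLP with a 4x hidden expansion and multiquery attention with a single shared key/value head of width head_dim; the exact Falcon layer layout may differ slightly.

```python
d_model, n_layers, vocab, head_dim = 4544, 32, 65024, 64

# Multiquery attention: full Q and output projections, but one
# shared key/value head of width head_dim (illustrative accounting).
attn = d_model * d_model         # Q projection
attn += 2 * d_model * head_dim   # shared K and V heads
attn += d_model * d_model        # output projection

# GPT-style MLP with a 4x hidden expansion (assumed)
mlp = 2 * d_model * (4 * d_model)

embeddings = vocab * d_model     # tied input/output embeddings
total = n_layers * (attn + mlp) + embeddings
print(f"{total / 1e9:.2f}B parameters")  # close to 7B
```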

Compute Infrastructure

Hardware

Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.

Software

Falcon-7B was trained with a custom distributed training codebase, Gigatron, which uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).

License

Falcon-7B is made available under the Apache 2.0 license.

Finetuning samples

| Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- | --- | --- |
| Text Classification | Emotion Detection | Emotion | emotion-detection.ipynb | emotion-detection.sh |

Model Evaluation Sample

| Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- | --- | --- |
| Text generation | Text generation | cnn_dailymail | evaluate-model-text-generation.ipynb | evaluate-model-text-generation.yml |

Inference samples

| Inference type | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- |
| Real time | text-generation-online-endpoint.ipynb | text-generation-online-endpoint.sh |
| Batch | text-generation-batch-endpoint.ipynb | coming soon |

Sample input (for real-time inference)

{
  "input_data": {
      "input_string":["the meaning of life is"]
  }
}

Sample output

[
  {
    "0": "the meaning of life is to find your gift. the purpose of life is to give it away."
  }
]
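The request and response shapes above can be exercised in Python. This is a minimal sketch: the scoring URL and API key named in the comment are placeholders, not values from this model card, and only the JSON handling is shown.

```python
import json

# Build the request body in the documented shape.
payload = {"input_data": {"input_string": ["the meaning of life is"]}}
body = json.dumps(payload)

# A real-time call would look roughly like (SCORING_URL and API_KEY
# are hypothetical placeholders):
#   requests.post(SCORING_URL, data=body,
#                 headers={"Authorization": f"Bearer {API_KEY}",
#                          "Content-Type": "application/json"})

# Parse a response in the documented shape: a list of objects keyed
# by prompt index, each value being the generated text.
response_text = '[{"0": "the meaning of life is to find your gift."}]'
completions = [item[key] for item in json.loads(response_text) for key in item]
print(completions[0])
```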
Model Specifications

License: Apache-2.0
Last Updated: August 2024
Provider: tiiuae
Languages: 1 Language