JAIS 30b Chat
Version: 3
Core42 · Last updated February 2025
JAIS 30b Chat from Core42 is an auto-regressive bilingual LLM for Arabic and English with state-of-the-art capabilities in Arabic.
Conversation
Multilingual
RAG

Model Architecture

The model is based on a decoder-only transformer (GPT-3) architecture and uses the SwiGLU non-linearity. It uses ALiBi position embeddings, which enable the model to extrapolate to long sequence lengths and provide improved context-length handling. The tuned versions use supervised fine-tuning (SFT).
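The ALiBi scheme mentioned above can be illustrated with a short sketch. This is not Core42's implementation, just a minimal NumPy illustration of the linear distance penalty that ALiBi adds to attention logits (the head count and sequence length are arbitrary example values):

```python
import numpy as np

def alibi_slopes(n_heads):
    """Per-head slopes from the ALiBi scheme: a geometric sequence
    starting at 2**(-8/n) (assumes n_heads is a power of two)."""
    start = 2 ** (-8 / n_heads)
    return np.array([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads, seq_len):
    """Bias added to causal attention logits: each query position is
    penalized in proportion to its distance from the key position.
    Because nothing is learned per position, the same rule applies at
    any length, which is what allows length extrapolation."""
    slopes = alibi_slopes(n_heads)                          # (H,)
    # distance[i, j] = j - i: negative for past keys, clipped to 0 for future
    distance = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    distance = np.minimum(distance, 0)
    return slopes[:, None, None] * distance[None, :, :]     # (H, L, L)

bias = alibi_bias(n_heads=8, seq_len=4)
# bias[0] penalizes the key one step back by the first head's slope (0.5)
```

Since the bias depends only on relative distance, no position-embedding table caps the usable sequence length, which is why the model can be evaluated at context lengths longer than those dominant in training.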

Training Datasets

Overview: The pretraining data for Jais-30b totals 1.63T tokens of English, Arabic, and code. The Jais-30b-chat model is fine-tuned with both Arabic and English prompt-response pairs. We extended the fine-tuning datasets used for jais-13b-chat, which included a wide range of instructional data across various domains, covering common tasks such as question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, we developed an in-house Arabic dataset and translated some open-source English instructions into Arabic.

Data Freshness: The pretraining data has a cutoff of December 2022, with some tuning data being more recent, up to October 2023.

Contact model provider

To submit any feedback, go to the original model card and open an issue.

Responsible AI Considerations

The model is trained on publicly available data which was in part curated by Inception. We have employed different techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, like all LLMs, will exhibit some bias.

The model is trained as an AI assistant for Arabic and English speakers. It is limited to producing responses for queries in these two languages and may not produce appropriate responses to queries in other languages.

By using JAIS, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The content generated by JAIS is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and as such welcome any feedback on the model.

Evaluation

NOTE: The evaluation report below was provided by the model publisher.

Core42 conducted a comprehensive evaluation of Jais-30b-chat and benchmarked it against other leading base and instruction-finetuned language models, focusing on both English and Arabic. The benchmarks used overlap significantly with the widely used OpenLLM Leaderboard tasks. The evaluation criteria span several dimensions, including:
  • Knowledge: How well the model answers factual questions.
  • Reasoning: The model's ability to answer questions that require reasoning.
  • Misinformation/Bias: Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
The following results report F1 or accuracy (depending on the task) of the evaluated models on the benchmarked tasks. For both metrics, higher is better.

Arabic Benchmark Results

| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jais-30b-chat | 51.3 | 40.7 | 35.1 | 57.1 | 59.3 | 64.1 | 81.6 | 52.9 | 39.1 | 29.6 | 53.1 | 52.5 |
| Jais-chat (13B) | 49.22 | 39.7 | 34 | 52.6 | 61.4 | 67.5 | 65.7 | 47 | 40.7 | 31.6 | 44.8 | 56.4 |
| acegpt-13b-chat | 45.94 | 38.6 | 31.2 | 42.3 | 49.2 | 60.2 | 69.7 | 39.5 | 35.1 | 35.4 | 48.2 | 55.9 |
| BLOOMz (7.1B) | 43.65 | 34.9 | 31 | 44 | 38.1 | 59.1 | 66.6 | 42.8 | 30.2 | 29.2 | 48.4 | 55.8 |
| acegpt-7b-chat | 43.36 | 37 | 29.6 | 39.4 | 46.1 | 58.9 | 55 | 38.8 | 33.1 | 34.6 | 50.1 | 54.4 |
| aya-101-13b-chat | 41.92 | 29.9 | 32.5 | 38.3 | 35.6 | 55.7 | 76.2 | 42.2 | 28.3 | 29.4 | 42.8 | 50.2 |
| mT0-XXL (13B) | 41.41 | 31.5 | 31.2 | 36.6 | 33.9 | 56.1 | 77.8 | 44.7 | 26.1 | 27.8 | 44.5 | 45.3 |
| Llama2-70b-chat | 39.4 | 29.7 | 29.3 | 33.7 | 34.3 | 52 | 67.3 | 36.4 | 26.4 | 28.4 | 46.3 | 49.6 |
| Llama2-13b-chat | 38.73 | 26.3 | 29.1 | 33.1 | 32 | 52.1 | 66 | 36.3 | 24.1 | 28.4 | 48.6 | 50 |
For evaluations, the focus is on LLMs that are multilingual or Arabic-centric, with the exception of the Llama2-13b-chat and Llama2-70b-chat models. Among Arabic-centric models such as AceGPT and multilingual models such as Aya, both Jais models outperform all other models by 4+ points. That the Jais models also outperform English-only LLMs such as Llama2-13b-chat and Llama2-70b-chat demonstrates the obvious: although those models are trained on more tokens (2T), and one is much larger, Jais's Arabic-centric training gives it a dramatic advantage on Arabic linguistic tasks. Note that Llama2's pretraining may include traces of Arabic, as evidenced by its limited yet observable ability to understand Arabic, but this is insufficient to obtain an LLM capable of conversing in Arabic as expected.

English Benchmark Results

| Models | Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jais-30b-chat | 59.59 | 36.5 | 45.6 | 78.9 | 73.1 | 90 | 56.7 | 51.2 | 44.4 | 70.2 | 42.3 | 66.6 |
| Jais-chat (13B) | 57.45 | 37.7 | 40.8 | 77.6 | 78.2 | 75.8 | 57.8 | 46.8 | 41 | 68.6 | 39.7 | 68 |
| acegpt-13b-chat | 57.84 | 34.4 | 42.7 | 76 | 78.8 | 81.9 | 45.4 | 45 | 41.6 | 71.3 | 45.7 | 73.4 |
| BLOOMz (7.1B) | 57.81 | 36.7 | 45.6 | 63.1 | 77.4 | 91.7 | 59.7 | 43.6 | 42 | 65.3 | 45.2 | 65.6 |
| acegpt-7b-chat | 54.25 | 30.9 | 40.1 | 67.6 | 75.4 | 75.3 | 44.2 | 38.8 | 39.6 | 66.3 | 49.3 | 69.3 |
| aya-101-13b-chat | 49.55 | 36.6 | 41.3 | 46 | 65.9 | 81.9 | 53.5 | 31.2 | 33 | 56.2 | 42.5 | 57 |
| mT0-XXL (13B) | 50.21 | 34 | 43.6 | 42.2 | 67.6 | 87.6 | 55.4 | 29.4 | 35.2 | 54.9 | 43.4 | 59 |
| Llama2-70b-chat | 61.25 | 43 | 45.2 | 80.3 | 80.6 | 86.5 | 46.5 | 49 | 43.8 | 74 | 52.8 | 72.1 |
| Llama2-13b-chat | 58.05 | 36.9 | 45.7 | 77.6 | 78.8 | 83 | 47.4 | 46 | 42.4 | 71 | 44.1 | 65.7 |
Jais-30b-chat outperforms the best of the other multilingual/Arabic-centric models in English-language capabilities by ~2 points. Note that the best model among the other Arabic-centric models is AceGPT, which is finetuned from Llama2-13b. The Llama2 models (13B and 70B) are both pretrained on far more English tokens (2T) than were used to pretrain Jais-30b (0.97T). At less than half the model size and pretraining data, Jais models come within 2 points of the English capabilities of Llama2-70b-chat.

Cultural/Local Context Knowledge

One of the key motivations for training an Arabic LLM is to include knowledge specific to the local context. In training Jais-30b-chat, we invested considerable effort to include data that reflects high-quality knowledge of the UAE and regional domains in both languages. To evaluate the impact of this training, in addition to LM-Harness evaluations in the general language domain, we also evaluated Jais models on a dataset testing knowledge pertaining to the UAE/regional domain. We curated ~320 UAE- and region-specific factual questions in both English and Arabic. Each question has four answer choices, and, as in LM Harness, the task for the LLM is to choose the correct one. The following table shows accuracy for the Arabic and English subsets of this test set.
| Model | Arabic | English |
|---|---|---|
| Jais-30b-chat | 57.2 | 55 |
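The multiple-choice setup described above can be sketched as follows. This is an illustrative outline only, not Core42's harness code: the `score_fn` callback standing in for the model's per-choice score is hypothetical. The choice with the highest score is taken as the prediction, and accuracy is the fraction of questions answered correctly.

```python
def evaluate_mcq(questions, score_fn):
    """questions: dicts with 'question', 'choices' (4 strings), 'answer' (index).
    score_fn(question, choice) -> float: the model's score for that choice
    (e.g. the log-likelihood of the choice text given the question)."""
    correct = 0
    for q in questions:
        scores = [score_fn(q["question"], c) for c in q["choices"]]
        predicted = scores.index(max(scores))   # argmax over the four choices
        correct += (predicted == q["answer"])
    return correct / len(questions)

# Toy check with a dummy scorer that happens to prefer the longest choice.
toy = [
    {"question": "Capital of the UAE?",
     "choices": ["Dubai", "Abu Dhabi", "Sharjah", "Ajman"], "answer": 1},
]
acc = evaluate_mcq(toy, lambda q, c: len(c))
```

In a real harness, `score_fn` would query the model for each question-choice pair; the toy scorer here only exists to make the sketch runnable.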

Long Context Evaluations

We adopted the needle-in-a-haystack approach to assess the model's capability to handle long contexts. In this evaluation setup, we input a lengthy irrelevant text (the haystack) together with a fact required to answer a question (the needle), which is embedded within this text. The model's task is to answer the question by locating and extracting the needle from the text.

We plot the model's accuracy at retrieving the needle from the given context. We conducted evaluations for both Arabic and English; for brevity, we present the plot for Arabic only. We observe that jais-30b-chat-v3 improves over jais-30b-chat-v1, as it can answer the question at context lengths of up to 8k.
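A needle-in-a-haystack test case can be sketched as below. This is a hypothetical illustration of how such prompts are typically constructed, not the publisher's evaluation code: the needle sentence is spliced into the distractor text at a chosen depth, and the question is appended at the end.

```python
def build_needle_prompt(haystack_sentences, needle, depth, question):
    """haystack_sentences: list of irrelevant sentences (the haystack).
    needle: the single sentence containing the answer.
    depth: float in [0, 1], how far into the haystack the needle goes.
    Returns the full prompt: haystack with the needle spliced in,
    followed by the question the model must answer."""
    pos = round(depth * len(haystack_sentences))
    sentences = haystack_sentences[:pos] + [needle] + haystack_sentences[pos:]
    return " ".join(sentences) + "\n\nQuestion: " + question

# Example: a 10-sentence haystack with the needle buried halfway in.
filler = ["Filler sentence number %d." % i for i in range(10)]
prompt = build_needle_prompt(
    filler,
    needle="The secret code is 4217.",
    depth=0.5,
    question="What is the secret code?",
)
```

Sweeping `depth` from 0 to 1 while growing the haystack toward the context limit yields the accuracy-vs-depth-and-length grid that such plots report.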
Model Specifications
| Specification | Value |
|---|---|
| Context Length | 8192 |
| License | Custom |
| Training Data | Dec 2022 |
| Last Updated | February 2025 |
| Input Type | Text |
| Output Type | Text |
| Publisher | Core42 |
| Languages | 2 Languages |