JAIS 30b Chat
Version: 3
Core42
Last updated: October 2025
JAIS 30b Chat is an auto-regressive bilingual LLM for Arabic & English with state-of-the-art capabilities in Arabic.
Conversation
Multilingual
RAG

Models from Microsoft, Partners, and Community

1P and 3P MaaS models are a select portfolio of curated models, both general-purpose and niche, spanning diverse scenarios, developed by Microsoft teams, partners, and community contributors.
  • Managed by Microsoft: Purchase and manage models directly through Azure with a single license, world-class support, and enterprise-grade Azure infrastructure.
  • Validated by providers: Each model is validated and maintained by its respective provider, with Azure offering integration and deployment guidance.
  • Innovation and agility: Combines Microsoft research models with rapid, community-driven advancements.
  • Seamless Azure integration: Standard Azure AI Foundry experience, with support managed by the model provider.
  • Flexible deployment: Deployable as Managed Compute or Serverless API, based on provider preference.
Learn more about models from Microsoft, Partners, and Community

Key capabilities

About this model

JAIS 30b Chat from Core42 is an auto-regressive bilingual LLM for Arabic & English with state-of-the-art capabilities in Arabic.

Key model capabilities

Core42 conducted a comprehensive evaluation of Jais-30b-chat and benchmarked it against other leading base and instruction finetuned language models, focusing on both English and Arabic. The evaluation criteria span various dimensions, including:
  • Knowledge: How well the model answers factual questions.
  • Reasoning: The model's ability to answer questions that require reasoning.
  • Misinformation/Bias: Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
One of the key motivations for training an Arabic LLM is to include knowledge specific to the local context. In training Jais-30b-chat, we invested considerable effort to include data reflecting high-quality knowledge of the UAE and the wider region in both languages.

Use cases

See Responsible AI for additional considerations for responsible use.

Key use cases

The model is trained as an AI assistant for Arabic and English speakers.

Out of scope use cases

The model is limited to producing responses for queries in these two languages and may not produce appropriate responses to other language queries. By using JAIS, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The content generated by JAIS is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use.

Pricing

Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.

Technical specs

The provider has not supplied this information.

Training cut-off date

The pretraining data has a cutoff of December 2022, with some tuning data being more recent, up to October 2023.

Training time

The provider has not supplied this information.

Input formats

The provider has not supplied this information.

Output formats

The provider has not supplied this information.

Supported languages

Arabic & English

Sample JSON response

The provider has not supplied this information.

Model architecture

The model is based on a transformer decoder-only (GPT-3-style) architecture and uses the SwiGLU non-linearity. It uses ALiBi (Attention with Linear Biases) position embeddings, which enable the model to extrapolate to long sequence lengths and provide improved context-length handling. The tuned versions use supervised fine-tuning (SFT).
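ALiBi adds a static, head-specific linear penalty to the attention logits instead of adding positional vectors to the token embeddings, which is what lets the model extrapolate beyond its training length. A minimal pure-Python sketch of the standard slope schedule and bias matrix (for illustration only; this is not Core42's implementation):

```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Standard ALiBi slopes for a power-of-two head count:
    # a geometric sequence starting at 2^(-8/n_heads) with the same ratio.
    start = 2.0 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

def alibi_bias(seq_len: int, slope: float) -> list[list[float]]:
    # Bias added to causal attention logits: 0 on the diagonal,
    # increasingly negative for more distant (earlier) keys,
    # and -inf for future positions (masked out).
    return [[-slope * (q - k) if k <= q else float("-inf")
             for k in range(seq_len)]
            for q in range(seq_len)]
```

Because the penalty grows linearly with query-key distance at a fixed per-head rate, no learned position table caps the usable sequence length.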

Long context

We adopted the needle-in-a-haystack approach to assess the model's ability to handle long contexts. In this evaluation setup, we input a lengthy irrelevant text (the haystack) along with a fact required to answer a question (the needle), which is embedded within this text. The model's task is to answer the question by locating and extracting the needle from the text.
We plot the model's accuracy at retrieving the needle from the given context. We conducted evaluations for both Arabic and English; for brevity, we present the plot for Arabic only. We observe that jais-30b-chat-v3 improves over jais-30b-chat-v1, answering the question at context lengths of up to 8K tokens.
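The setup described above can be sketched as a small harness: splice the needle into filler text at a chosen depth, then check whether the model's answer contains the expected fact. A hedged illustration (the filler, depth parameter, and containment check are assumptions for this sketch; Core42's actual evaluation may differ):

```python
def build_haystack_prompt(filler: str, needle: str, question: str,
                          depth: float, target_len: int) -> str:
    # Repeat irrelevant filler text up to roughly target_len characters,
    # then splice the needle fact in at a relative depth
    # (0.0 = start of context, 1.0 = end of context).
    haystack = (filler + " ") * (target_len // (len(filler) + 1) + 1)
    haystack = haystack[:target_len]
    pos = int(len(haystack) * depth)
    with_needle = haystack[:pos] + " " + needle + " " + haystack[pos:]
    return f"{with_needle}\n\nQuestion: {question}\nAnswer:"

def needle_recall(answer: str, expected: str) -> bool:
    # Simple containment check; real harnesses may use exact match
    # or a judge model instead.
    return expected.lower() in answer.lower()
```

Sweeping `target_len` and `depth` produces the accuracy grid that such plots visualize.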

Optimizing model performance

The provider has not supplied this information.

Additional assets

The provider has not supplied this information.

Training disclosure

Training, testing and validation

The pretraining data for Jais-30b totals 1.63T tokens of English, Arabic, and code. The Jais-30b-chat model is finetuned on both Arabic and English prompt-response pairs. We extended the finetuning datasets used for jais-13b-chat, which included a wide range of instructional data across various domains, covering common tasks such as question answering, code generation, and reasoning over textual content. To enhance performance in Arabic, we developed an in-house Arabic dataset and translated some open-source English instructions into Arabic.
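A prompt-response pair of the kind described can be stored as one JSON line in an instruction-tuning corpus. The field names below are illustrative assumptions for this sketch, not Core42's actual schema:

```python
import json

# One illustrative Arabic SFT record; the "lang", "prompt", and "response"
# field names are assumptions, not Core42's dataset format.
record = {
    "lang": "ar",
    "prompt": "ما هي عاصمة الإمارات العربية المتحدة؟",   # What is the capital of the UAE?
    "response": "عاصمة الإمارات هي أبوظبي.",              # The capital of the UAE is Abu Dhabi.
}

# ensure_ascii=False keeps the Arabic text human-readable in the JSONL file.
line = json.dumps(record, ensure_ascii=False)
```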

Distribution

Distribution channels

The provider has not supplied this information.

More information

Source: Core42

Responsible AI considerations

Safety techniques

The model is trained on publicly available data, which was in part curated by Inception. We have employed different techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, like all LLMs, will exhibit some bias.

Safety evaluations

Core42 conducted a comprehensive evaluation of Jais-30b-chat and benchmarked it against other leading base and instruction finetuned language models, focusing on both English and Arabic. The evaluation criteria span various dimensions, including:
  • Knowledge: How well the model answers factual questions.
  • Reasoning: The model's ability to answer questions that require reasoning.
  • Misinformation/Bias: Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.

Known limitations

The model is trained as an AI assistant for Arabic and English speakers. The model is limited to producing responses for queries in these two languages and may not produce appropriate responses to other language queries. By using JAIS, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading and/or offensive information or content. The content generated by JAIS is not intended as advice and should not be relied upon in any way, nor are we responsible for any of the content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and as such, welcome any feedback on the model.

Acceptable use

Acceptable use policy

The provider has not supplied this information.

Quality and performance evaluations

Source: Core42
Core42 conducted a comprehensive evaluation of Jais-30b-chat and benchmarked it against other leading base and instruction finetuned language models, focusing on both English and Arabic. Benchmarks used have a significant overlap with the widely used OpenLLM Leaderboard tasks. The evaluation criteria span various dimensions, including:
  • Knowledge: How well the model answers factual questions.
  • Reasoning: The model's ability to answer questions that require reasoning.
  • Misinformation/Bias: Assessment of the model's susceptibility to generating false or misleading information, and its neutrality.
The following results report F1 or accuracy (depending on the task) of the evaluated models on the benchmarked tasks. For both metrics, higher is better.

Arabic Benchmark Results

| Models | Avg | EXAMS | MMLU (M) | LitQA | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | TruthfulQA | CrowS-Pairs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jais-30b-chat | 51.3 | 40.7 | 35.1 | 57.1 | 59.3 | 64.1 | 81.6 | 52.9 | 39.1 | 29.6 | 53.1 | 52.5 |
| Jais-chat (13B) | 49.22 | 39.7 | 34 | 52.6 | 61.4 | 67.5 | 65.7 | 47 | 40.7 | 31.6 | 44.8 | 56.4 |
| acegpt-13b-chat | 45.94 | 38.6 | 31.2 | 42.3 | 49.2 | 60.2 | 69.7 | 39.5 | 35.1 | 35.4 | 48.2 | 55.9 |
| BLOOMz (7.1B) | 43.65 | 34.9 | 31 | 44 | 38.1 | 59.1 | 66.6 | 42.8 | 30.2 | 29.2 | 48.4 | 55.8 |
| acegpt-7b-chat | 43.36 | 37 | 29.6 | 39.4 | 46.1 | 58.9 | 55 | 38.8 | 33.1 | 34.6 | 50.1 | 54.4 |
| aya-101-13b-chat | 41.92 | 29.9 | 32.5 | 38.3 | 35.6 | 55.7 | 76.2 | 42.2 | 28.3 | 29.4 | 42.8 | 50.2 |
| mT0-XXL (13B) | 41.41 | 31.5 | 31.2 | 36.6 | 33.9 | 56.1 | 77.8 | 44.7 | 26.1 | 27.8 | 44.5 | 45.3 |
| LLama2-70b-chat | 39.4 | 29.7 | 29.3 | 33.7 | 34.3 | 52 | 67.3 | 36.4 | 26.4 | 28.4 | 46.3 | 49.6 |
| Llama2-13b-chat | 38.73 | 26.3 | 29.1 | 33.1 | 32 | 52.1 | 66 | 36.3 | 24.1 | 28.4 | 48.6 | 50 |
For these evaluations, the focus is on LLMs that are multilingual or Arabic-centric, except for the Llama2-13B-chat and Llama2-70B-chat models. Among Arabic-centric models such as AceGPT and multilingual models such as Aya, both Jais models outperform all other models by 4+ points. That the Jais models also outperform English-only LLMs such as Llama2-13B-chat and Llama2-70B-chat demonstrates the expected result: although those models are trained on more tokens (2T), and one of them is much larger, Jais' Arabic-centric training gives it a dramatic advantage on Arabic linguistic tasks. Note that Llama's pretraining may include traces of Arabic, as evidenced by its limited yet observable ability to understand Arabic, but this is insufficient to obtain an LLM capable of conversing in Arabic as expected.
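As a sanity check on the Arabic table above, the Avg column appears to be the unweighted mean of the eleven task scores; small discrepancies come from rounding in the published figures. A quick sketch for the top row:

```python
# Jais-30b-chat's eleven Arabic task scores, read off the table above
# (EXAMS through CrowS-Pairs).
jais_30b_arabic = [40.7, 35.1, 57.1, 59.3, 64.1, 81.6,
                   52.9, 39.1, 29.6, 53.1, 52.5]

# Unweighted mean; agrees with the reported Avg of 51.3 to within rounding.
avg = sum(jais_30b_arabic) / len(jais_30b_arabic)
```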

English Benchmark Results

| Models | Avg | MMLU | RACE | Hellaswag | PIQA | BoolQA | SituatedQA | ARC-C | OpenBookQA | Winogrande | TruthfulQA | CrowS-Pairs |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Jais-30b-chat | 59.59 | 36.5 | 45.6 | 78.9 | 73.1 | 90 | 56.7 | 51.2 | 44.4 | 70.2 | 42.3 | 66.6 |
| Jais-chat (13B) | 57.45 | 37.7 | 40.8 | 77.6 | 78.2 | 75.8 | 57.8 | 46.8 | 41 | 68.6 | 39.7 | 68 |
| acegpt-13b-chat | 57.84 | 34.4 | 42.7 | 76 | 78.8 | 81.9 | 45.4 | 45 | 41.6 | 71.3 | 45.7 | 73.4 |
| BLOOMz (7.1B) | 57.81 | 36.7 | 45.6 | 63.1 | 77.4 | 91.7 | 59.7 | 43.6 | 42 | 65.3 | 45.2 | 65.6 |
| acegpt-7b-chat | 54.25 | 30.9 | 40.1 | 67.6 | 75.4 | 75.3 | 44.2 | 38.8 | 39.6 | 66.3 | 49.3 | 69.3 |
| aya-101-13b-chat | 49.55 | 36.6 | 41.3 | 46 | 65.9 | 81.9 | 53.5 | 31.2 | 33 | 56.2 | 42.5 | 57 |
| mT0-XXL (13B) | 50.21 | 34 | 43.6 | 42.2 | 67.6 | 87.6 | 55.4 | 29.4 | 35.2 | 54.9 | 43.4 | 59 |
| LLama2-70b-chat | 61.25 | 43 | 45.2 | 80.3 | 80.6 | 86.5 | 46.5 | 49 | 43.8 | 74 | 52.8 | 72.1 |
| Llama2-13b-chat | 58.05 | 36.9 | 45.7 | 77.6 | 78.8 | 83 | 47.4 | 46 | 42.4 | 71 | 44.1 | 65.7 |
Jais-30b-chat outperforms the best other multilingual/Arabic-centric model in English-language capabilities by ~2 points. Note that the best model among the other Arabic-centric models is AceGPT, which is finetuned from Llama2-13B. The Llama2 models (13B and 70B) are both pretrained on far more English tokens (2T) than were used for the pretrained Jais-30b (0.97T). At less than half the model and pretraining-data size, the Jais models come within 2 points of the English capabilities of Llama2-70B-chat.

Cultural/Local Context Knowledge

One of the key motivations for training an Arabic LLM is to include knowledge specific to the local context. In training Jais-30b-chat, we invested considerable effort to include data reflecting high-quality knowledge of the UAE and the wider region in both languages. To evaluate the impact of this training, in addition to LM Harness evaluations in the general language domain, we also evaluate the Jais models on a dataset testing knowledge of the UAE/regional domain. We curated ~320 UAE/region-specific factual questions in both English and Arabic. Each question has four answer choices, and as in the LM Harness, the task for the LLM is to choose the correct one. The following table shows accuracy on both the Arabic and English subsets of this test set.
| Model | Arabic | English |
|---|---|---|
| Jais-30b-chat | 57.2 | 55 |


Benchmarking methodology

Source: Core42
We adopted the needle-in-a-haystack approach to assess the model's ability to handle long contexts: we input a lengthy irrelevant text (the haystack) along with a fact required to answer a question (the needle), embedded within this text, and the model must answer the question by locating and extracting the needle.
For cultural/local-context knowledge, we curated ~320 UAE/region-specific factual questions in both English and Arabic. Each question has four answer choices, and as in the LM Harness, the task for the LLM is to choose the correct one.
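The four-choice selection step works like LM Harness-style multiple-choice scoring: the model's log-likelihood for each answer option is compared and the highest-scoring option is taken as the prediction. A minimal sketch (the per-option log-likelihoods are supplied externally; this is not Core42's harness code):

```python
def pick_choice(option_loglikelihoods: dict[str, float]) -> str:
    # LM-Harness-style multiple choice: select the answer option to which
    # the model assigns the highest log-likelihood.
    return max(option_loglikelihoods, key=option_loglikelihoods.get)

def accuracy(predictions: list[str], golds: list[str]) -> float:
    # Fraction of questions where the chosen option matches the gold answer.
    return sum(p == g for p, g in zip(predictions, golds)) / len(golds)
```

With four choices, random guessing would score 25%, which puts the reported accuracies in context.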

Public data summary

The provider has not supplied this information.
Model Specifications
Context Length: 8192
License: Custom
Training Data: December 2022
Last Updated: October 2025
Input Type: Text
Output Type: Text
Provider: Core42
Languages: 2 (Arabic & English)