
Qwen 1.5 (72B)

Qwen 1.5-72B: Transformer-based language model with multilingual support, 32K context, and strong performance in text completion and reasoning.
Try it now

AI Playground

Test any of our API models in the sandbox environment before you integrate. We provide more than 200 models ready to plug into your app.

Qwen 1.5 (72B)

Cutting-edge AI model capable of solving complex problems efficiently.

Introduction

Qwen1.5-72B is a base language model and the beta iteration of Qwen2, an advanced transformer-based language model pre-trained on a vast corpus of data. Noteworthy improvements over its predecessor, Qwen, include multilingual support for both base and chat models, stable support for a 32K context length, and the removal of the need for trust_remote_code.
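
To illustrate, here is a minimal sketch of loading the model with Hugging Face transformers. It assumes the official "Qwen/Qwen1.5-72B" repository and a recent transformers release (4.37 or later, which ships native Qwen2 support), so no trust_remote_code flag is needed:

```python
# Minimal sketch: loading Qwen1.5-72B without trust_remote_code.
# Assumes transformers >= 4.37 and the official model repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-72B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard across available GPUs (requires accelerate)
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```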

The Model

Qwen1.5-72B belongs to the Qwen1.5 series, which comprises decoder-only language models in six sizes ranging from 0.5B to 72B parameters. Each size is available as both a base language model and an aligned chat model, and Qwen1.5-72B is the largest base model in the series. The series is built on the Transformer architecture with SwiGLU activation, attention QKV bias, grouped-query attention (GQA), and a mixture of sliding-window attention (SWA) and full attention, among other features, and it ships an improved tokenizer that adapts to multiple natural languages and code. Note, however, that this beta version temporarily omits GQA and the SWA/full-attention mixture.
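
As a rough illustration of one of these components, the sketch below implements a SwiGLU feed-forward block in PyTorch; the layer dimensions are hypothetical and chosen only for demonstration:

```python
# Illustrative SwiGLU feed-forward block, as used in Qwen-style
# Transformers. The dimensions below are hypothetical examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit: silu(gate(x)) * up(x), projected back down.
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

block = SwiGLU(hidden_size=1024, intermediate_size=2816)
print(block(torch.randn(2, 8, 1024)).shape)  # torch.Size([2, 8, 1024])
```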

Use Cases for the Model

The Qwen1.5-72B model's versatility lends itself to a range of applications, including but not limited to text completion, content moderation, and translation. Its robust architecture and multilingual support make it a valuable asset across domains that require sophisticated language processing.
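
As a hypothetical example of a text-completion call, the snippet below assumes an OpenAI-compatible completions endpoint; the base URL and model identifier are illustrative placeholders, so check the API documentation for the exact values:

```python
# Hypothetical sketch of calling Qwen 1.5 (72B) for text completion
# through an OpenAI-compatible endpoint. The base URL and model
# identifier are assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.completions.create(
    model="Qwen/Qwen1.5-72B",  # assumed model identifier
    prompt="Translate to French: The weather is nice today.",
    max_tokens=64,
)
print(response.choices[0].text)
```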

How It Compares to Competitors

Qwen1.5-72B demonstrates strong performance across diverse evaluation benchmarks, showcasing its capabilities in language understanding, reasoning, and math. Specifically, it outperforms Llama2-70B across all of these benchmarks, solidifying its position as a top-performing language model in its class. Its stable handling of the 32K context length also sets it apart, ensuring reliable performance in diverse scenarios without compromising efficiency.

It also proves highly competitive with other leading models in the community, such as Mixtral 8x7B. The benchmark results affirm Qwen1.5-72B's prowess in handling complex linguistic tasks with precision and efficiency, positioning it as a significant player in the landscape of transformer-based language models.

Tips

Even though it is generally advised to refrain from using base language models for text generation (use the chat versions instead), they can be useful for a number of experiments and evaluations, since they exhibit minimal bias when performing text completion. You can access this model through the AI/ML API by signing up on this website.

If you deploy this model locally, you can also apply post-training techniques such as SFT (supervised fine-tuning), RLHF (reinforcement learning from human feedback), or continued pretraining to enhance model performance and tailor outputs to specific requirements.
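
For instance, a bare-bones SFT loop with Hugging Face transformers might look like the sketch below. The training examples are toy placeholders, and a model of this size would in practice call for parameter-efficient methods (such as LoRA) and multi-GPU sharding rather than this naive loop:

```python
# Minimal sketch of supervised fine-tuning (SFT) for a locally
# deployed causal LM. The model ID and data are placeholders; use a
# smaller Qwen1.5 variant to test this on a single GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-72B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
model.train()

examples = [  # toy instruction/response pairs
    "Instruction: Summarize SwiGLU.\nResponse: A gated feed-forward activation.",
]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # For causal-LM SFT, labels are the input ids themselves; the model
    # computes the shifted next-token cross-entropy loss internally.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```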

License Agreement

The Qwen1.5-72B model is governed by the Tongyi Qianwen license agreement, which can be found in the model's repository on GitHub or Hugging Face. You don't need to submit a request for commercial use unless your product or service has more than 100 million monthly active users.

Conclusion

Qwen1.5-72B represents a significant advancement in open-source foundational language models, offering improved capabilities in text completion, multilingual support, and context handling.

Try it now
