# Guide to Hugging Face DeepSeekMath

## Introduction to DeepSeekMath

DeepSeekMath is a powerful language model that can generate responses to mathematical questions in both English and Chinese. The model is designed to provide step-by-step reasoning and final answers for a wide range of mathematical problems.

For more details about DeepSeekMath, you can visit the [Introduction](https://github.com/deepseek-ai/DeepSeek-Math) page.

## How to Use

### Chat Completion

To use DeepSeekMath for chat completion, follow the steps below:

1. Use the provided Python code snippet to import the necessary libraries and initialize the model.
2. Prepare a user input message in the chain-of-thought format: `{question}\nPlease reason step by step, and put your final answer within \boxed{}.` for English questions, and the Chinese equivalent `{question}\n请通过逐步推理来解答问题,并把最终答案放置于\boxed{}中。` for Chinese questions (a small formatting helper is sketched after this list).
3. Provide the input message to the model and generate the response.
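
For illustration, here is a minimal sketch of such a formatting helper (the `format_cot_prompt` name is hypothetical, not part of the DeepSeekMath release):

```python
def format_cot_prompt(question: str, lang: str = "en") -> str:
    """Append the chain-of-thought instruction expected by DeepSeekMath-Instruct/RL."""
    if lang == "zh":
        return f"{question}\n请通过逐步推理来解答问题,并把最终答案放置于\\boxed{{}}中。"
    return f"{question}\nPlease reason step by step, and put your final answer within \\boxed{{}}."

# Example: format an English question.
print(format_cot_prompt("what is the integral of x^2 from 0 to 2?"))
```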

You can use the tokenizer's `apply_chat_template` function, as in the sample code below, or interact with the model directly by following the raw chat template shown in the Sample Template section. Either way, avoid including a system prompt in your input, as it is not compatible with this version of the model.

### Sample Python Code

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-math-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

# The chain-of-thought instruction is appended to the question, as described above.
messages = [
    {"role": "user", "content": "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt.
result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```
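
### Sample Template

If you prefer not to use `apply_chat_template`, you can interact with the model by filling in the sample template below, replacing the `messages` placeholders with your own conversation turns:

```
User: {messages[0]['content']}

Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}

Assistant:
```

**Note:**
- By default (`add_special_tokens=True`), the tokenizer automatically adds a bos_token (`<|begin▁of▁sentence|>`) before the input text.
- The system prompt is not compatible with this version of the model, so do not include it in your input.

As a minimal sketch (assuming a single user turn, reusing the `tokenizer` and `model` from the snippet above, and relying on the tokenizer's default `add_special_tokens=True` to add the bos_token), the filled-in template can be passed to the tokenizer directly:

```python
# Fill in the template for a single user turn with the chain-of-thought prompt.
question = "what is the integral of x^2 from 0 to 2?\nPlease reason step by step, and put your final answer within \\boxed{}."
prompt = f"User: {question}\n\nAssistant:"

# The tokenizer prepends the bos_token automatically; no need to add it yourself.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

# Strip the prompt tokens before decoding the answer.
result = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(result)
```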

### Contact Support

If you have any questions or need support, you can raise an issue or contact the DeepSeek team at [service@deepseek.com](mailto:service@deepseek.com).

## License

The code repository for DeepSeekMath is licensed under the MIT License. The use of DeepSeekMath models is subject to the Model License, and DeepSeekMath supports commercial use. For more details, see the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL) page.

For more information and updates, you can visit the [DeepSeek homepage](https://www.deepseek.com/) or chat with the DeepSeek LLM on [Chat with DeepSeek LLM](https://chat.deepseek.com/).

Join the DeepSeek community on [Discord](https://discord.gg/Tc7c45Zzu5) or connect via WeChat by scanning the QR code [here](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg).

To learn more about the model, see the DeepSeekMath paper: [Paper Link](https://arxiv.org/pdf/2402.03300.pdf).

