This guide covers OpenChat 3.5, an open-source language model published on Hugging Face that aims to provide a user-friendly AI experience. It is organized into the following sections:

1. **Usage**: This section provides guidance on how to use the OpenChat package and the OpenChat OpenAI-compatible API server. It includes instructions for installing the package, running the serving command, and deploying the server as an online service.
2. **(Experimental) Evaluator / Feedback Capabilities**: This section introduces evaluator capabilities for open-source models and provides a prompt for evaluating a response using Default Mode.
3. **Benchmarks**: Here, the model's performance is evaluated on various benchmarks, including MT-Bench, HumanEval, BBH MC, AGIEval, TruthfulQA, MMLU, GSM8K, and BBH CoT, along with an average score. Evaluation details are also included for reference.
4. **HumanEval+**: This section presents the performance of the model in HumanEval+ benchmarks compared to other models.
5. **OpenChat-3.5 vs. Grok**: A comparison of OpenChat-3.5 with the Grok model is provided, showing the performance of each model in different benchmarks.
6. **Limitations**: This section outlines the limitations of the foundation model, including areas such as complex reasoning, mathematical and arithmetic tasks, programming and coding challenges, hallucination of non-existent information, and safety concerns.
7. **License**: The guide states that the OpenChat 3.5 code and models are distributed under the Apache License 2.0.
8. **Citation**: Guidance on how to cite the OpenChat model in research or publications is provided in this section.
9. **💌 Main Contributor**: The guide concludes with a section acknowledging the main contributor to the OpenChat model.

Overall, the guide provides comprehensive information on the usage, benchmarks, limitations, and licensing of the Hugging Face model, making it a valuable resource for users and developers looking to leverage the model for various applications.

# Huggingface OpenChat Manual

## Table of Contents
1. [Usage](#usage)
2. [Benchmarks](#benchmarks)
3. [Limitations](#limitations)
4. [License](#license)
5. [Citation](#citation)
6. [Acknowledgements](#acknowledgements)

## Usage
To use the Huggingface OpenChat model, follow the steps below:

1. Install the OpenChat package. Use the installation guide in the [GitHub repository](https://github.com/imoneoi/openchat#installation).

2. Start the OpenChat OpenAI-compatible API server by running the serving command:

```
python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-0106 --engine-use-ray --worker-use-ray
```

3. The server will listen at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat); see the example client call after these steps.

4. Optionally, you can deploy the server as an online service by using API keys and disabling log requests and stats for security purposes.

For a user-friendly experience, use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui).
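
Because the server speaks the OpenAI protocol, the official `openai` Python client can be pointed at it directly. Below is a minimal sketch, not from the model card; the `api_key` value is a placeholder, since the server does not check keys unless `--api-keys` is set:

```
from openai import OpenAI

# Point the standard OpenAI client at the local OpenChat server.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="none")

response = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": "Summarize what OpenChat is in one sentence."}],
)
print(response.choices[0].message.content)
```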

## Benchmarks
The Huggingface OpenChat model has been benchmarked on a range of metrics, including MT-Bench, HumanEval, BBH MC, AGIEval, TruthfulQA, MMLU, and GSM8K, along with an average score. See the benchmark tables later in this manual for detailed results.

## Limitations
The OpenChat model has certain limitations, including weaknesses in complex reasoning, mathematical tasks, and programming challenges, as well as hallucination of non-existent information and safety issues. Users should be aware of these limitations and take appropriate precautions.

## License
The OpenChat 3.5 code and models are distributed under the Apache License 2.0.

## Citation
If you use the OpenChat model, please cite the following article:
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```

## Acknowledgements
The Huggingface OpenChat project is sponsored by RunPod.
For more information, visit the [OpenChat team website](https://openchat.team).

![OpenChat Logo](https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true)

### Additional Resources
- [Online Demo](https://openchat.team)
- [GitHub Repository](https://github.com/imoneoi/openchat)
- [Research Paper](https://arxiv.org/pdf/2309.11235.pdf)
- [Discord Community](https://discord.gg/pQjnXvNKHY)


This manual is intended to provide a detailed overview and guide for using the Huggingface OpenChat language model. For any further assistance or inquiries, please refer to the official resources provided.



## Table of Contents

1. Usage
2. Benchmarks
3. Limitations
4. License
5. Citation
6. Acknowledgements

## Usage

To use this model, we highly recommend installing the OpenChat package by following the installation guide in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using vLLM and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.

Once started, the server listens at `localhost:18888` for requests and is compatible with the OpenAI ChatCompletion API specifications. Please refer to the example request below for reference. Additionally, you can use the OpenChat Web UI for a user-friendly experience.

If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an HTTPS gateway in front of the server.
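
For reference, a client call against such a deployment might look like the following sketch with the `openai` Python package; the gateway hostname and key below are placeholders, not values from the model card:

```
from openai import OpenAI

# Placeholder hostname and key: substitute your own HTTPS gateway and one of
# the keys passed to --api-keys.
client = OpenAI(base_url="https://your-gateway.example.com/v1", api_key="sk-KEY1")

response = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```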

| Model | Size | Context | Weights | Serving |
|-------|------|---------|---------|---------|
| OpenChat-3.5-0106 | 7B | 8192 | Huggingface | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-0106 --engine-use-ray --worker-use-ray` |
### Example request

💡 Default Mode (GPT4 Correct): Best for coding, chat and general tasks

```
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openchat_3.5",
    "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
  }'
```

🧮 Mathematical Reasoning Mode: Tailored for solving math problems

```
curl http://localhost:18888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openchat_3.5",
    "condition": "Math Correct",
    "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}]
  }'
```
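
The `condition` field is an OpenChat-specific extension to the ChatCompletion payload, so a plain HTTP POST shows it most directly. Here is a minimal Python sketch, equivalent to the curl call above, using the `requests` package (assumed installed):

```
import requests

# Same payload as the curl example; "condition" selects Mathematical Reasoning Mode.
resp = requests.post(
    "http://localhost:18888/v1/chat/completions",
    json={
        "model": "openchat_3.5",
        "condition": "Math Correct",
        "messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```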



### Conversation templates

💡 Default Mode (GPT4 Correct): Best for coding, chat and general tasks

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

🧮 Mathematical Reasoning Mode: Tailored for solving math problems

```
Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant:
```

⚠️ Notice: Remember to set `<|end_of_turn|>` as the end-of-generation token.

The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
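
The snippet above stops at tokenization. As a hypothetical continuation (not from the model card, and assuming `transformers` and `accelerate` are installed), the tokens can be fed to the model with `<|end_of_turn|>` set as the end-of-generation token, per the notice above:

```
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat-3.5-0106", torch_dtype=torch.bfloat16, device_map="auto"
)

# Stop generation at <|end_of_turn|>, as the notice above requires.
eot_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")

input_ids = torch.tensor([tokens], device=model.device)
output = model.generate(input_ids, max_new_tokens=128, eos_token_id=eot_id)

# Decode only the newly generated assistant turn.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```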

## (Experimental) Evaluator / Feedback Capabilities

We’ve included evaluator capabilities in this release to advance open-source models as evaluators. You can use Default Mode (GPT4 Correct) with the following prompt (same as Prometheus) to evaluate a response.

```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{orig_instruction}

###Response to evaluate:
{orig_response}

###Reference Answer (Score 5):
{orig_reference_answer}

###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}

###Feedback: 
```
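
To use this programmatically, the prompt can be treated as a Python format string. In the sketch below, `evaluator_prompt.txt` is a hypothetical file holding the exact template above, and all filled-in values are illustrative, not from the model card:

```
from openai import OpenAI

# Hypothetical file containing the exact template above, {orig_*} placeholders intact.
template = open("evaluator_prompt.txt").read()

# Illustrative placeholder values for the rubric and texts under evaluation.
filled = template.format(
    orig_instruction="Explain what a hash table is.",
    orig_response="A hash table maps keys to values using a hash function.",
    orig_reference_answer="A hash table stores key-value pairs in an array, "
                          "using a hash function to map each key to an index.",
    orig_criteria="Is the explanation technically accurate and complete?",
    orig_score1_description="Mostly incorrect.",
    orig_score2_description="Significant errors or omissions.",
    orig_score3_description="Partially correct.",
    orig_score4_description="Accurate with minor omissions.",
    orig_score5_description="Accurate and complete.",
)

# Send the filled prompt through Default Mode on the local API server.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="none")
result = client.chat.completions.create(
    model="openchat_3.5",
    messages=[{"role": "user", "content": filled}],
)
print(result.choices[0].message.content)  # "Feedback: ... [RESULT] n"
```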

## Benchmarks

| Model | # Params | Average | MT-Bench | HumanEval | BBH MC | AGIEval | TruthfulQA | MMLU | GSM8K | BBH CoT |
|-------|----------|---------|----------|-----------|--------|---------|------------|------|-------|---------|
| OpenChat-3.5-0106 | 7B | 64.5 | 7.8 | 71.3 | 51.5 | 49.1 | 61.0 | 65.8 | 77.4 | 62.2 |
| OpenChat-3.5-1210 | 7B | 63.8 | 7.76 | 68.9 | 49.5 | 48.0 | 61.8 | 65.3 | 77.3 | 61.8 |
| OpenChat-3.5 | 7B | 61.6 | 7.81 | 55.5 | 47.6 | 47.4 | 59.1 | 64.3 | 77.3 | 63.5 |
| ChatGPT (March)* | ???B | 61.5 | 7.94 | 48.1 | 47.6 | 47.1 | 57.7 | 67.3 | 74.9 | 70.1 |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 48.2 | 49.4 | 46.5 | 57.5 | 63.8 | 73.5 | 59.9 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 38.4 | 49.4 | 42.9 | 45.9 | 59.3 | 59.1 | 58.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 22.0 | 40.6 | 39.0 | 40.8 | 39.8 | 5.1 | 16.0 |
| Mistral** | 7B | - | 6.84 | 30.5 | 39.0 | 38.0 | - | 60.1 | 52.2 | - |
### Evaluation Details

*: ChatGPT (March) results are from GPT-4 Technical Report, Chain-of-Thought Hub, and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.

^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.

**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.

All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in our repository.

## HumanEval+

| Model | Size | HumanEval+ pass@1 |
|-------|------|-------------------|
| OpenChat-3.5-0106 | 7B | 65.9 |
| ChatGPT (December 12, 2023) | ???B | 64.6 |
| WizardCoder-Python-34B-V1.0 | 34B | 64.6 |
| OpenChat 3.5 1210 | 7B | 63.4 |
| OpenHermes 2.5 | 7B | 41.5 |

## OpenChat-3.5 vs. Grok

🔥 OpenChat-3.5-0106 (7B) now outperforms Grok-0 (33B) on all 4 benchmarks and Grok-1 (???B) on average and 3/4 benchmarks.

| Model | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|-------|---------|---------|---------|------|-----------|------|-------|
| OpenChat-3.5-0106 | Apache-2.0 | 7B | 61.0 | 65.8 | 71.3 | 29.3 | 77.4 |
| OpenChat-3.5-1210 | Apache-2.0 | 7B | 60.1 | 65.3 | 68.9 | 28.9 | 77.3 |
| OpenChat-3.5 | Apache-2.0 | 7B | 56.4 | 64.3 | 55.5 | 28.6 | 77.3 |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ???B | 55.8 | 73 | 63.2 | 23.9 | 62.9 |

*: Grok results are reported by X.AI.

## Limitations

### Foundation Model Limitations
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model’s performance in areas such as:

- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges

### Hallucination of Non-existent Information
OpenChat may sometimes generate information that does not exist or is not accurate, also known as “hallucination”. Users should be aware of this possibility and verify any critical information obtained from the model.

### Safety
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.

## License

Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.

## Citation

```
@article{wang2023openchat,
  title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
  author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
  journal={arXiv preprint arXiv:2309.11235},
  year={2023}
}
```

## 💌 Main Contributor

