SeaLLM-7B-chat

Anonymous user, 2024-07-31
Categories: ai, llama, PyTorch
Open-source repository: https://modelscope.cn/models/AI-ModelScope/SeaLLM-7B-chat
License: other

Details

SeaLLMs - Large Language Models for Southeast Asia

Tech Memo    DEMO    Github    Technical Report

SeaLLM-chat-7B

This is the 7B chat version of SeaLLMs. It supports Vietnamese, Indonesian, Thai, Malay, Khmer, Lao, Tagalog and Burmese. It may have lower capability than the 13B models, but it is much more memory-efficient and faster.

Visit our Technical Report and Tech Memo for more details.

Terms of Use and License: By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our SeaLLMs Terms Of Use.

Disclaimer: We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation. Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

The logo was generated by DALL-E 3.

Example code:

SeaLLM models work the same way as Llama-2, so the Llama-2 generation codebase should be sufficient to run them. However, as this is a chat model, you should wrap the prompt/instruction using the following format function.

You should also turn off add_special_tokens (e.g. pass add_special_tokens=False when encoding), because the format function already adds the BOS token itself.

from modelscope import AutoTokenizer, Model
from modelscope import snapshot_download
import torch 
from typing import List, Tuple

BOS_TOKEN = '<s>'
EOS_TOKEN = '</s>'

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

SYSTEM_PROMPT = """You are a multilingual, helpful, respectful and honest assistant. \
Please always answer as helpfully as possible, while being safe. Your \
answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure \
that your responses are socially unbiased and positive in nature.

If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \
correct. If you don't know the answer to a question, please don't share false information.

As a multilingual assistant, you must respond and follow instructions in the native language of the user by default, unless told otherwise. \
Your response should adapt to the norms and customs of the respective language and culture.
"""

def chat_multiturn_seq_format(
    message: str,
    history: List[Tuple[str, str]] = [], 
):
    """
    ```
        <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos>
        <bos>[INST] Prompt [/INST] Answer <eos>
        <bos>[INST] Prompt [/INST]
    ```
    As the format auto-adds <bos>, please turn off add_special_tokens with `tokenizer.add_special_tokens = False`
    Inputs:
      message: the current prompt
      history: list of (message, response) pairs from the previous conversation, e.g. [(message1, response1), (message2, response2)]
    Outputs:
      full_prompt: the prompt that should go into the chat model

    e.g:
      full_prompt = chat_multiturn_seq_format("Hello world")
      output = model.generate(tokenizer.encode(full_prompt, add_special_tokens=False), ...)
    """
    text = ''
    for i, (prompt, res) in enumerate(history):
        if i == 0:
            text += f"{BOS_TOKEN}{B_INST} {B_SYS} {SYSTEM_PROMPT} {E_SYS} {prompt} {E_INST}"
        else:
            text += f"{BOS_TOKEN}{B_INST} {prompt}{end_instr}"
        if res is not None:
            text += f" {res} {EOS_TOKEN} "
    if len(history) == 0 or text.strip() == '':
        text = f"{BOS_TOKEN}{B_INST} {B_SYS} {SYSTEM_PROMPT} {E_SYS} {message} {E_INST}"
    else:
        text += f"{BOS_TOKEN}{B_INST} {message} {E_INST}"
    return text
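# Sanity check of the format (illustrative only; the history values below are
# hypothetical and not part of the original example):
#   chat_multiturn_seq_format("How about in Thai?",
#                             history=[("Say hello in Vietnamese.", "Xin chào!")])
# returns roughly:
#   <s>[INST] <<SYS>> ...SYSTEM_PROMPT... <</SYS>> Say hello in Vietnamese. [/INST] Xin chào! </s> <s>[INST] How about in Thai? [/INST]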

local_dir = snapshot_download("AI-ModelScope/SeaLLM-7B-chat", revision='master')

model = Model.from_pretrained(local_dir, revision='master', device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(local_dir, revision='master')

full_prompt = chat_multiturn_seq_format("Hello world")
inputs = tokenizer(full_prompt, add_special_tokens=False, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids.to(model.device), max_length=512, do_sample=True, top_k=10, num_return_sequences=1)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
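
For multi-turn conversations, re-run the same pipeline with the accumulated history. The sketch below is our own illustration rather than part of the official example: it assumes the ModelScope Model wrapper exposes a Hugging Face-style generate, and it recovers only the new answer by slicing off the echoed prompt tokens.

def chat_once(message: str, history: List[Tuple[str, str]], max_new_tokens: int = 256) -> str:
    # Format the running conversation into a single prompt
    full_prompt = chat_multiturn_seq_format(message, history)
    inputs = tokenizer(full_prompt, add_special_tokens=False, return_tensors="pt")
    input_ids = inputs.input_ids.to(model.device)
    generate_ids = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_k=10)
    # Keep only the tokens generated after the prompt (assumed slicing, not from the original example)
    response = tokenizer.decode(generate_ids[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
    history.append((message, response))
    return response

history = []
print(chat_once("Xin chào!", history))            # first turn
print(chat_once("Bạn có khỏe không?", history))   # second turn reuses the history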

Citation

If you find our project useful, we hope you will kindly star our repo and cite our work as follows (corresponding author: l.bing@alibaba-inc.com):

@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
            Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
            Chaoqun Liu, Hang Zhang, Lidong Bing},
  title = {SeaLLMs - Large Language Models for Southeast Asia},
  year = 2023,
  eprint = {arXiv:2312.00738},
}