Deploying Stable Code Instruct 3B Locally

Official announcement: https://stability.ai/news/introducing-stable-code-instruct-3b

Overall, the process is much the same as the local deployment of ChatGLM2-6B-INT4: you need transformers and torch installed.
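If they are not installed yet, something along the lines of pip install torch transformers in the Python environment you plan to use should be enough; exact version requirements may vary with your CUDA setup.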

Streaming output example:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("stabilityai/stable-code-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.eval()
model = model.cuda()  # move the model to the GPU

# Chat-style prompt: a system instruction plus one user request.
messages = [
    {
        "role": "system",
        "content": "You are a helpful and polite assistant",
    },
    {
        "role": "user",
        "content": "show me a demo code to connect to aws lambda using golang"
    },
]

# Render the messages into the model's chat template as a plain string.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Tokenize the prompt and move the tensors to the same device as the model.
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# Generate with sampling; TextStreamer prints tokens to stdout as they are produced.
tokens = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.5,
    top_p=0.95,
    top_k=100,
    do_sample=True,
    use_cache=True,
    pad_token_id=tokenizer.eos_token_id,
    streamer=TextStreamer(tokenizer=tokenizer, skip_prompt=True)
)

# Slice off the prompt tokens and decode only the newly generated part.
output = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
print(output)
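
If there is no CUDA GPU available, the same script can also run on the CPU, just much more slowly. A minimal sketch of the changes, assuming float32 is used because bfloat16 support on CPUs varies:

# Assumption: no CUDA GPU, so load in float32 and keep everything on the CPU.
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-instruct-3b",
    torch_dtype=torch.float32,   # bfloat16 is slow or unsupported on many CPUs
    trust_remote_code=True,
)
model.eval()
# Skip model.cuda(); the tokenized inputs then stay on the CPU as well.
inputs = tokenizer([prompt], return_tensors="pt")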