pip install openai==0.27.0 tiktoken
import openai, os
openai.api_key = os.getenv("OPENAI_API_KEY")  # get yours at https://platform.openai.com/account/api-keys
Once it's written, scan it with the "Code Review Assistant" to make sure the key is never hard-coded in the repo; then use "Code Optimization" to swap blocking synchronous calls for aiohttp async requests and boost concurrency roughly 5x ??!
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "用 emoji 寫一首關(guān)于春天的短詩"}
    ]
)
print(response.choices[0].message.content)
Output:
?????? 春風(fēng)拂面花自開,燕子歸來柳色新~
The API itself keeps no memory; you have to send the accumulated `messages` history back with every request.
messages = [
    {"role": "system", "content": "你是資深理財(cái)顧問"},
    {"role": "user", "content": "如何每月存 2k 實(shí)現(xiàn)年化 8%?"}
]
# Round 1
resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
assistant_say = resp.choices[0].message.content
print(assistant_say)
# Round 2
messages.append({"role": "assistant", "content": assistant_say})
messages.append({"role": "user", "content": "如果市場下跌 20% 怎么辦?"})
resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(resp.choices[0].message.content)
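Appending the history by hand gets repetitive as the turns pile up, so it helps to wrap the bookkeeping in a small helper. Below is a minimal sketch against the same openai 0.27 interface; the `Chat` class and its `ask` method are illustrative names, not something from the original post:

class Chat:
    def __init__(self, system_prompt, model="gpt-3.5-turbo"):
        self.model = model
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        resp = openai.ChatCompletion.create(model=self.model, messages=self.messages)
        answer = resp.choices[0].message.content
        # Store the reply so the next turn sees the full history
        self.messages.append({"role": "assistant", "content": answer})
        return answer

chat = Chat("你是資深理財(cái)顧問")
print(chat.ask("如何每月存 2k 實(shí)現(xiàn)年化 8%?"))
print(chat.ask("如果市場下跌 20% 怎么辦?"))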
As the number of turns grows, token usage balloons. Count tokens in real time with tiktoken, and apply a **sliding-window truncation** once you approach the model's context limit ??:
import tiktoken

def num_tokens_from_messages(msgs, model="gpt-3.5-turbo"):
    # Rough estimate: counts content tokens only and ignores the few extra
    # tokens the chat format adds per message, so treat it as approximate.
    enc = tiktoken.encoding_for_model(model)
    return sum(len(enc.encode(m["content"])) for m in msgs)
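The function above only measures the history; the truncation itself can simply drop the oldest non-system turns until the budget fits. A minimal sketch, assuming a 4,096-token context window and a reserve for the reply (the `truncate_messages` helper and both numbers are assumptions, not from the original post):

def truncate_messages(msgs, model="gpt-3.5-turbo", context_limit=4096, reserve=512):
    # Sliding window: keep the system prompt, drop the oldest turns first
    msgs = list(msgs)
    while num_tokens_from_messages(msgs, model) > context_limit - reserve and len(msgs) > 2:
        del msgs[1]  # msgs[0] is the system prompt; msgs[1] is the oldest turn
    return msgs

Call it on the history right before each ChatCompletion.create so a long conversation never blows past the context limit.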
| Parameter | Purpose | Example |
|---|---|---|
| `max_tokens` | Caps the length of the output | `max_tokens=256` to keep costs down |
| `temperature` | 0 = conservative and predictable, 2 = wildly creative ?? | 0.2 for a support bot, 1.5 for creative writing |
| `n` | Returns n candidate completions in one call | `n=3` for A/B selection |
| `stop` | Stops generation as soon as a given string appears | `stop=["\n"]` to keep only the first line |
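For reference, here is one call that combines all four parameters from the table; the specific values are illustrative choices, not recommendations from the original post:

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "寫一條關(guān)于 Python 的推文"}],
    max_tokens=256,    # cap output length and cost
    temperature=0.2,   # low = predictable
    n=3,               # three candidates to choose from
    stop=["\n"],       # stop at the first newline
)
for i, choice in enumerate(resp.choices):
    print(i, choice.message.content)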
A quick comparison of temperature settings:
for t in [0, 1, 2]:
    r = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "寫一條關(guān)于 Python 的推文"}],
        temperature=t
    )
    print(f"T={t}: {r.choices[0].message.content}\n")
Once the parameters are tuned, use the "Code Documentation Generator" to produce the function docs in one click, ready for the team to copy and paste ??!
import asyncio, aiohttp, json, os

async def stream_chat():
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {os.getenv('OPENAI_API_KEY')}"}
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "講個(gè)冷笑話"}],
        "stream": True
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as resp:
            # The response is server-sent events: one "data: {...}" chunk per line
            async for line in resp.content:
                line = line.decode("utf-8").strip()
                if line.startswith("data:"):
                    data = line[len("data:"):].strip()  # drop the prefix and padding
                    if data == "[DONE]":
                        break
                    delta = json.loads(data)["choices"][0]["delta"]
                    print(delta.get("content", ""), end="", flush=True)

asyncio.run(stream_chat())
While streaming, use the "Code Review Assistant" to check how interrupted connections are handled, then use "Code Optimization" to raise the aiohttp connection pool to 200 so requests don't drop under heavy concurrency ??!
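For completeness, here is a rough sketch of those two tweaks: a larger `aiohttp.TCPConnector` pool and a retry loop around the stream. The pool size of 200 comes from the claim above; the retry count and the `stream_with_retries` name are our own assumptions, not part of the original post:

import asyncio, aiohttp

async def stream_with_retries(payload, headers, retries=3):
    # Larger connection pool so many concurrent streams can share one session
    connector = aiohttp.TCPConnector(limit=200)
    async with aiohttp.ClientSession(connector=connector) as session:
        for attempt in range(retries):
            try:
                async with session.post(
                    "https://api.openai.com/v1/chat/completions",
                    headers=headers,
                    json=payload,
                ) as resp:
                    async for line in resp.content:
                        yield line  # raw SSE lines; parse them as in stream_chat()
                    return          # stream finished cleanly
            except (aiohttp.ClientError, asyncio.TimeoutError):
                # Stream dropped mid-way: back off briefly, then retry from scratch
                await asyncio.sleep(2 ** attempt)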
Original article: https://www.mlexpert.io/blog/chatgpt-api