Environment Setup
Python version: 3.10.11
Preparation
If you have already installed the openai library, run the following command to check which version of the openai SDK you are using:

```
pip show openai
```

The output looks like this:

```
Name: openai
Version: 1.95.0
Summary: The official Python library for the openai API
Home-page:
Author:
Author-email: OpenAI <support@openai.com>
License: Apache-2.0
Location: d:\code\爬虫\.venv\lib\site-packages
Requires: anyio, distro, httpx, jiter, pydantic, sniffio, tqdm, typing-extensions
Required-by:
```
Make sure the version is >= 1.0, so that the calls shown below match the current (v1) SDK interface.
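If you would rather check from inside Python, here is a quick sketch (my addition, not from the original post):

```python
import openai

# The installed package exposes its version string
print(openai.__version__)  # e.g. "1.95.0"

# Crude check: a pre-1.0 install starts with "0."
assert not openai.__version__.startswith("0."), "please upgrade to openai >= 1.0"
```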
By default, the client reads the API key from the OPENAI_API_KEY environment variable:

```python
client = OpenAI()  # by default, reads the API key from the OPENAI_API_KEY environment variable
```

Set the variable in your shell:

```
set OPENAI_API_KEY=your-key          # Windows CMD
$env:OPENAI_API_KEY="your-key"       # PowerShell
```

You can also pass the key directly in Python code:

```python
client = OpenAI(api_key="sk-your-key")
```
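Another option, not shown above, is to set the environment variable from inside Python before creating the client; a minimal sketch (the hard-coded key is only a placeholder for illustration):

```python
import os
from openai import OpenAI

# Placeholder key for illustration only; in real code load it from a config file or secret store
os.environ["OPENAI_API_KEY"] = "sk-your-key"

client = OpenAI()  # now picks up the key from the environment variable set above
```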
If you have never installed the openai package, simply run:

```
pip install openai
```
Creating a client instance
```python
from openai import OpenAI

client = OpenAI()  # create the client instance
```
chat.completions
Non-streaming output
```python
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
```
The structure of `response` is:
```
{
  "id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
  "object": "chat.completion",
  "created": 1741569952,
  "model": "gpt-4.1-2025-04-14",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default"
}
```
Therefore, to get the content of the AI's reply:
```python
response_content = response.choices[0].message.content
```
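The usage block of the same response is handy for cost tracking; the field names below are taken directly from the structure shown above:

```python
# Token accounting, using the usage fields from the response structure above
usage = response.usage
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")
```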
Streaming output
```python
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ],
    stream=True
)
```
Each chunk of the streaming output has this format:
```
ChatCompletionChunk(
    id='chatcmpl-Bs8I20ozgWnW9xUT4h1wE7Upb6Y4j',
    choices=[
        Choice(
            delta=ChoiceDelta(
                content='Hello! How can I assist you today?',
                function_call=None,
                refusal=None,
                role=None,
                tool_calls=None
            ),
            finish_reason=None,
            index=0,
            logprobs=None
        )
    ],
    created=1752241278,
    model='gpt-4.1-2025-04-14',
    object='chat.completion.chunk',
    service_tier='default',
    system_fingerprint='fp_38343a2f8f',
    usage=None
)
```
Therefore, to get the final reply content, accumulate the chunks in a loop:
```python
response_content = ""  # accumulator for the full reply
for chunk in response:
    res_patch = chunk.choices[0].delta.content or ""
    response_content += res_patch
    print(res_patch, end="", flush=True)
```
Note that the `or ""` is required; otherwise the final chunk, whose `delta.content` may be None, will raise an error during concatenation.
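If you prefer an explicit check over `or ""`, an equivalent sketch (my variation, not from the original post):

```python
response_content = ""
for chunk in response:
    delta = chunk.choices[0].delta
    if delta.content is not None:  # skip chunks that carry no text, such as the final stop chunk
        response_content += delta.content
        print(delta.content, end="", flush=True)
print()
```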
Chat memory
Keep in mind that the model only produces a reply when the last element of `messages` is `{"role": "user", "content": contents}`. Therefore, to implement chat memory, we simply append each user input to `messages`, and then append the returned reply as well.
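Concretely, after one full round-trip the list might look like this (a sketch; the developer prompt and the assistant text are just the example values used earlier):

```python
messages = [
    {"role": "developer", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},  # turn 1: user input, appended before the API call
    {"role": "assistant", "content": "Hello! How can I assist you today?"},  # turn 1: reply, appended after the call
    # turn 2: the next {"role": "user", ...} entry goes here, and so on
]
```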
First, define the input-listening function:
```python
import threading

chat_running = True  # loop flag (bool) controlling whether the chat session keeps running
messages = [{"role": "developer", "content": "You are a helpful assistant."}]  # shared conversation history

def input_listening():
    global chat_running
    while chat_running:
        try:
            user_input = input("Enter your question: ")
            if user_input.strip().lower() == "exit":
                chat_running = False
                break
            elif user_input.strip():
                chat(user_input)  # chat function, defined below
        except Exception as e:
            print(f"An error occurred: {e}")
            break
```
Here `chat()` is the chat function that obtains the reply. To give the reply a "typing" effect, streaming output is used.
```python
def chat(user_input):
    global messages
    new_input = {"role": "user", "content": user_input}
    messages.append(new_input)  # must be appended so the model sees the new turn
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=messages,
        stream=True
    )
    response_content = ""
    for chunk in response:
        response_content += chunk.choices[0].delta.content or ""
        print(chunk.choices[0].delta.content or "", end="", flush=True)  # flush forces the output to appear immediately
    print()
    messages.append({"role": "assistant", "content": response_content})  # must be appended so the reply becomes part of the memory
    return response_content
```
Finally, run it on a thread:
```python
if __name__ == "__main__":
    chat_thread = threading.Thread(target=input_listening)
    chat_thread.start()
    chat_thread.join()  # make sure the main thread waits for the child thread to finish
```
Common pitfall:

```python
chat_thread = threading.Thread(target=input_listening).start()
# threading.Thread(target=input_listening).start() returns None, so chat_thread is None
# and the join() call below raises an error
chat_thread.join()  # intended to make the main thread wait for the child thread
```