OpenAI compatibility

This page is translated from the official Ollama documentation for personal study; if anything differs, the official documentation (https://ollama.com) takes precedence.

[!NOTE] OpenAI compatibility is experimental and subject to major adjustments, including breaking changes. For fully-featured access to the Ollama API, see the Ollama Python library, the JavaScript library, and the REST API.

Ollama provides experimental compatibility with parts of the OpenAI API to help connect existing applications to Ollama.

Usage

OpenAI Python library

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',

    # required but ignored
    api_key='ollama',
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            'role': 'user',
            'content': 'Say this is a test',
        }
    ],
    model='llama3.2',
)

response = client.chat.completions.create(
    model="llava",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": "data:image/png;base64,******",
                },
            ],
        }
    ],
    max_tokens=300,
)
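Ollama accepts only Base64-encoded images here (plain image URLs are not fetched, per the supported-fields list below), so the image_url value must be a data URL. A small helper for building one; the file path and MIME type are up to you:

```python
import base64

def to_data_url(path: str, mime: str = "image/png") -> str:
    """Read an image file and wrap it in a Base64 data: URL for the API."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

Pass the result as the `image_url` value in the message content above.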

completion = client.completions.create(
    model="llama3.2",
    prompt="Say this is a test",
)

list_completion = client.models.list()

model = client.models.retrieve("llama3.2")

embeddings = client.embeddings.create(
    model="all-minilm",
    input=["why is the sky blue?", "why is the grass green?"],
)
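Each returned embedding is a list of floats, typically compared with cosine similarity. A minimal, server-free sketch of that comparison:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product divided by the product of Euclidean norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# e.g. cosine_similarity(embeddings.data[0].embedding,
#                        embeddings.data[1].embedding)
```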

Structured outputs

from pydantic import BaseModel
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Define the schema for the response
class FriendInfo(BaseModel):
    name: str
    age: int 
    is_available: bool

class FriendList(BaseModel):
    friends: list[FriendInfo]

try:
    completion = client.beta.chat.completions.parse(
        temperature=0,
        model="llama3.1:8b",
        messages=[
            {"role": "user", "content": "I have two friends. The first is Ollama 22 years old busy saving the world, and the second is Alonso 23 years old and wants to hang out. Return a list of friends in JSON format"}
        ],
        response_format=FriendList,
    )

    friends_response = completion.choices[0].message
    if friends_response.parsed:
        print(friends_response.parsed)
    elif friends_response.refusal:
        print(friends_response.refusal)
except Exception as e:
    print(f"Error: {e}")
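What the client sends as response_format is the JSON schema derived from the Pydantic model (assuming Pydantic v2); it can be inspected without calling the server:

```python
from pydantic import BaseModel

class FriendInfo(BaseModel):
    name: str
    age: int
    is_available: bool

class FriendList(BaseModel):
    friends: list[FriendInfo]

# The schema the OpenAI client derives from the model class;
# all fields lack defaults, so all three are required.
schema = FriendList.model_json_schema()
print(sorted(schema["$defs"]["FriendInfo"]["required"]))
# ['age', 'is_available', 'name']
```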

OpenAI JavaScript library

import OpenAI from 'openai'

const openai = new OpenAI({
  baseURL: 'http://localhost:11434/v1/',

  // required but ignored
  apiKey: 'ollama',
})

const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'llama3.2',
})

const response = await openai.chat.completions.create({
    model: "llava",
    messages: [
        {
        role: "user",
        content: [
            { type: "text", text: "What's in this image?" },
            {
            type: "image_url",
            image_url: "data:image/png;base64,******",
            },
        ],
        },
    ],
})

const completion = await openai.completions.create({
    model: "llama3.2",
    prompt: "Say this is a test.",
})

const listCompletion = await openai.models.list()

const model = await openai.models.retrieve("llama3.2")

const embedding = await openai.embeddings.create({
  model: "all-minilm",
  input: ["why is the sky blue?", "why is the grass green?"],
})

curl

curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3.2",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What'\''s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
               "url": "data:image/png;base64,******"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
  }'

curl http://localhost:11434/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3.2",
        "prompt": "Say this is a test"
    }'

curl http://localhost:11434/v1/models

curl http://localhost:11434/v1/models/llama3.2

curl http://localhost:11434/v1/embeddings \
    -H "Content-Type: application/json" \
    -d '{
        "model": "all-minilm",
        "input": ["why is the sky blue?", "why is the grass green?"]
    }'

Endpoints

/v1/chat/completions

Supported features

  • [x] Chat completions
  • [x] Streaming
  • [x] JSON mode
  • [x] Reproducible outputs
  • [x] Vision
  • [x] Tools
  • [ ] Logprobs

Supported request fields

  • [x] model
  • [x] messages
    • [x] Text content
    • [x] Image content
      • [x] Base64 encoded image
      • [ ] Image URL
    • [x] Array of content parts
  • [x] frequency_penalty
  • [x] presence_penalty
  • [x] response_format
  • [x] seed
  • [x] stop
  • [x] stream
  • [x] stream_options
    • [x] include_usage
  • [x] temperature
  • [x] top_p
  • [x] max_tokens
  • [x] tools
  • [ ] tool_choice
  • [ ] logit_bias
  • [ ] user
  • [ ] n

/v1/completions

Supported features

  • [x] Completions
  • [x] Streaming
  • [x] JSON mode
  • [x] Reproducible outputs
  • [ ] Logprobs

Supported request fields

  • [x] model
  • [x] prompt
  • [x] frequency_penalty
  • [x] presence_penalty
  • [x] seed
  • [x] stop
  • [x] stream
  • [x] stream_options
    • [x] include_usage
  • [x] temperature
  • [x] top_p
  • [x] max_tokens
  • [x] suffix
  • [ ] best_of
  • [ ] echo
  • [ ] logit_bias
  • [ ] user
  • [ ] n

Notes

  • prompt currently only accepts a string

/v1/models

Notes

  • created corresponds to when the model was last modified
  • owned_by corresponds to the ollama username, defaulting to "library"

/v1/models/{model}

Notes

  • created corresponds to when the model was last modified
  • owned_by corresponds to the ollama username, defaulting to "library"

/v1/embeddings

Supported request fields

  • [x] model
  • [x] input
    • [x] Strings
    • [x] Array of strings
    • [ ] Array of tokens
    • [ ] Array of token arrays
  • [ ] encoding_format
  • [ ] dimensions
  • [ ] user

Models

Before using a model, pull it locally with ollama pull:

ollama pull llama3.2

Default model names

For tooling that relies on default OpenAI model names such as gpt-3.5-turbo, use ollama cp to copy an existing model name to a temporary name:

ollama cp llama3.2 gpt-3.5-turbo

Afterwards, this new model name can be specified in the model field:

curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'

Setting the context size

The OpenAI API does not have a way of setting the context size for a model. If you need to change the context size, create a Modelfile that looks like this:

FROM <some model>
PARAMETER num_ctx <context size>
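For example, assuming llama3.2 as the base model and an 8192-token context window, the Modelfile would be:

```
FROM llama3.2
PARAMETER num_ctx 8192
```

It can then be built with `ollama create mymodel -f Modelfile`, where `-f` points at the Modelfile.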

Use the ollama create mymodel command to create a new model with the updated context size, then call the API with the updated model name:

curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "mymodel",
        "messages": [
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
