From 210de0a1c65bce57cd3b29cf22ead402386a01ea Mon Sep 17 00:00:00 2001
From: JMARyA
Date: Mon, 18 Mar 2024 10:18:06 +0100
Subject: [PATCH] update ollama

---
 technology/applications/utilities/Ollama.md | 86 ++++++++++++++++++++-
 1 file changed, 85 insertions(+), 1 deletion(-)

diff --git a/technology/applications/utilities/Ollama.md b/technology/applications/utilities/Ollama.md
index a046e38..006307e 100644
--- a/technology/applications/utilities/Ollama.md
+++ b/technology/applications/utilities/Ollama.md
@@ -2,7 +2,7 @@
 obj: application
 repo: https://github.com/ollama/ollama
 website: https://ollama.ai
-rev: 2024-02-19
+rev: 2024-03-18
 ---
 
 # Ollama
@@ -1047,6 +1047,90 @@
 MESSAGE user Is Ontario in Canada?
 MESSAGE assistant yes
 ```
+## OpenAI Compatibility
+Ollama now has built-in compatibility with the OpenAI Chat Completions API, making it possible to use more tooling and applications with Ollama locally.
+
+### Usage
+#### OpenAI [Python](../../dev/programming/languages/Python.md) library
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    base_url='http://localhost:11434/v1/',
+
+    # required but ignored
+    api_key='ollama',
+)
+
+chat_completion = client.chat.completions.create(
+    messages=[
+        {
+            'role': 'user',
+            'content': 'Say this is a test',
+        }
+    ],
+    model='llama2',
+)
+```
+
+#### OpenAI JavaScript library
+```js
+import OpenAI from 'openai'
+
+const openai = new OpenAI({
+  baseURL: 'http://localhost:11434/v1/',
+
+  // required but ignored
+  apiKey: 'ollama',
+})
+
+const chatCompletion = await openai.chat.completions.create({
+  messages: [{ role: 'user', content: 'Say this is a test' }],
+  model: 'llama2',
+})
+```
+
+#### [curl](../cli/network/curl.md)
+```sh
+curl http://localhost:11434/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "llama2",
+        "messages": [
+            {
+                "role": "system",
+                "content": "You are a helpful assistant."
+            },
+            {
+                "role": "user",
+                "content": "Hello!"
+            }
+        ]
+    }'
+```
+
+### Model names
+For tooling that relies on default OpenAI model names such as `gpt-3.5-turbo`, use `ollama cp` to copy an existing model name to a temporary name:
+
+```sh
+ollama cp llama2 gpt-3.5-turbo
+```
+
+Afterwards, this new model name can be specified in the `model` field:
+```shell
+curl http://localhost:11434/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "gpt-3.5-turbo",
+        "messages": [
+            {
+                "role": "user",
+                "content": "Hello!"
+            }
+        ]
+    }'
+```
+
 ## Libraries & Applications
 - [ollama-rs](https://github.com/pepperoni21/ollama-rs)
 - [ollama-webui](https://github.com/ollama-webui/ollama-webui)
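
Beyond the examples embedded in the patch, here is a minimal sketch of how the `ollama cp` alias combines with the OpenAI Python client from the Usage section above; it assumes a local Ollama server on the default port 11434 and that `ollama cp llama2 gpt-3.5-turbo` has already been run.

```python
# Sketch: not part of the patch above. Uses the same base_url and dummy
# api_key as the patch's Python example; assumes `ollama cp llama2 gpt-3.5-turbo`
# has already created the alias on the local Ollama server.
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama',  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    # 'gpt-3.5-turbo' now resolves to the copied llama2 model
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
print(response.choices[0].message.content)
```

The point of the alias is that tooling hard-coded to OpenAI's default model names needs no code changes; only the `base_url` is redirected to the local Ollama server.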