Running Ollama + LangChain Locally with Docker

Ollama

Docker image

compose.yml
version: '3'

services:

  ollama:
    image: ollama/ollama:0.1.47
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama
    tty: true
    
volumes:
  ollama:
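
Start the container in the background first; the exec command in the next step assumes it is already running:

docker compose up -d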

Run the model

docker compose exec ollama ollama run llama3
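
Once the model is up, the Ollama HTTP API is reachable from the host on port 11434. A minimal sanity check, assuming the llama3 pull has finished (the prompt is just an example):

import json
import urllib.request

# Non-streaming request to Ollama's /api/generate endpoint.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(
        {"model": "llama3", "prompt": "Say hello.", "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as res:
    print(json.loads(res.read())["response"])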

LangChain

requirements.txt

fastapi==0.111.0
uvicorn==0.30.1
langchain==0.2.6
langchain-community==0.2.6
langserve[all]  # provides add_routes imported in main.py
sse-starlette==2.1.2
pydantic==1.10.13
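
langserve is added above because main.py imports add_routes from it; the pydantic 1.x pin presumably works around langserve's limited pydantic v2 support in these versions. Install everything with:

pip install -r requirements.txt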

Implementation code

main.py
from fastapi import FastAPI
from langserve import add_routes
from langchain_community.chat_models import ChatOllama

# base_url uses the Compose service name "ollama", so this app is
# expected to run on the same Compose network as the Ollama container
# (from the host it would be http://localhost:11434).
model = ChatOllama(model="llama3", base_url="http://ollama:11434")

app = FastAPI()

# Expose the model under /chat; langserve adds /chat/invoke,
# /chat/stream, and the /chat/playground UI.
add_routes(
    app,
    model,
    path="/chat",
)
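
Start the API with uvicorn (already pinned in requirements.txt; binding to 0.0.0.0 matters when this runs inside a container):

uvicorn main:app --host 0.0.0.0 --port 8000

langserve also ships a client-side RemoteRunnable for calling the served model. A minimal sketch, assuming the server is reachable at localhost:8000 (the prompt is just an example):

from langserve import RemoteRunnable

# Points at the path registered with add_routes in main.py.
chat = RemoteRunnable("http://localhost:8000/chat")

# A plain string is accepted as chat-model input; a list of
# messages works as well.
print(chat.invoke("Hello, llama3!"))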