
Playing with Azure OpenAI Service



"I want to improve development productivity with AI, but how should I go about it?"
"I'm worried about information leaks if we just use ChatGPT casually, so maybe we should call our internal Azure OpenAI Service instead..."
Having received a suggestion along those lines, I decided to give it a try.

Summarizing the page open in the browser

Extracting data from the company-wide Wiki to train a model feels too big for a starting point, so I'll start small.
Therefore, I thought about having it summarize the page currently open in the browser.
I considered making it a Chrome extension, but installing unofficial extensions is a hassle, so I'll go with a bookmarklet.

javascript:(()=>{
  // Replace with your Azure OpenAI API key
  const apiKey = "OPENAI_API_KEY";
  // Endpoint format: https://<resource>.openai.azure.com/openai/deployments/<deployment-name>/chat/completions
  const url = 'https://example.openai.azure.com/openai/deployments/gpt4-o-model/chat/completions?api-version=2024-02-15-preview';
  // Use the page title and visible text as the summarization input
  const title = document.title;
  const contents = document.body.innerText;
  const data = {
    messages: [
      { role: 'system', content: 'You are an AI assistant that helps us. Please write in Japanese.' },
      // "Summarize the following text and list the most important points in Markdown"
      { role: 'user', content: `以下の文章を要約し、最も重要なポイントをMarkdownで示してください\n\nTitle: ${title}\nText: """\n${contents}\n"""`}
    ],
    max_tokens: 2000,
    temperature: 0.7,
    frequency_penalty: 0,
    presence_penalty: 0,
    top_p: 0.95,
    stop: null
  };

  fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'api-key': apiKey // Azure OpenAI expects an api-key header, not Authorization: Bearer
    },
    body: JSON.stringify(data)
  })
  .then(response => response.json())
  .then(result => {
    // Show the summary in a new window
    const summary = result.choices[0].message.content;
    const newWindow = window.open();
    newWindow.document.write(`<html><head><title>${title} Summary</title></head><body><h1>${title} Summary</h1><pre>${summary}</pre></body></html>`);
    newWindow.document.close();
  })
  .catch(error => {
    console.error('Error:', error);
    alert(`Error: ${error}`);
  });
})();

Replace apiKey and url with your own values, then save it as a bookmark.
Open https://zenn.dev/watarukura/articles/20240509-fqlkbjdnccw8lmcx1raljdwn3rmz0 in your browser and run the bookmarklet, and you get something like this.
(A popup blocker may block the new window; if that bothers you, output to console.log instead.)
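One thing to watch for: the bookmarklet passes the model output to document.write as raw HTML, so a summary containing < or & can render oddly. A minimal escaping helper (my own sketch, not part of the bookmarklet above) would look like this:

```javascript
// Escape HTML-special characters so the summary renders as plain text
// inside the <pre> element instead of being parsed as markup.
const escapeHtml = (s) =>
  s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  }[c]));
```

You would then write escapeHtml(summary) (and escapeHtml(title)) into the new window.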


Preparing a ChatGPT environment that can be used safely within the company

A senior colleague introduced me to https://github.com/open-webui/open-webui, so I decided to set it up to call Azure OpenAI Service.
It turns out LiteLLM can simply proxy the requests.
Most of the steps are covered here: https://zenn.dev/kun432/scraps/e1ff3ebfb97177#comment-5af53b5d8ff0dd.

I prepared compose.yml and config.yml as follows.

compose.yml:

services:
  openai-proxy:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "8001:8000"
    volumes:
      - ./config.yml:/app/config.yml
    command: [ "--config", "/app/config.yml", "--port", "8000", "--num_workers", "8" ]
    environment:
      - "MASTER_KEY=${LITELLM_API_KEY}"
      - "OPENAI_API_KEY=${OPENAI_API_KEY}"
  open-webui:
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    volumes:
      - open-webui:/app/backend/data
    ports:
      - ${OPEN_WEBUI_PORT-3000}:8080
    environment:
      - "OLLAMA_BASE_URL=http://ollama:11434"
      - "OPENAI_API_BASE_URL=http://openai-proxy:8000/v1"
      - "OPENAI_API_KEY=${LITELLM_API_KEY}"
    restart: unless-stopped
volumes:
  open-webui: { }

config.yml:
# https://zenn.dev/link/comments/5af53b5d8ff0dd
model_list:
  - model_name: gpt4o
    litellm_params:
      api_base: https://example.openai.azure.com/
      model: azure/gpt4-o-model # "azure/[deployment_name]"
      api_key: "os.environ/OPENAI_API_KEY"
      api_version: "2024-02-15-preview"
litellm_settings:
  num_retries: 3
  request_timeout: 60
  timeout: 60
  set_verbose: True
general_settings:
  master_key: sk-1111
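To check that the LiteLLM proxy is up before pointing Open WebUI at it, you can hit its OpenAI-compatible endpoint directly. This is a sketch assuming the compose file above: host port 8001, the model_name gpt4o, and sk-1111 as the master key.

```shell
# Query the LiteLLM proxy via its OpenAI-compatible chat endpoint.
# "gpt4o" and "sk-1111" come from config.yml above.
curl http://localhost:8001/v1/chat/completions \
  -H "Authorization: Bearer sk-1111" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt4o", "messages": [{"role": "user", "content": "Hello"}]}'
```

If the proxy can reach Azure OpenAI, this returns a normal chat completion response.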

After that, just have direnv load OPENAI_API_KEY (and LITELLM_API_KEY) as environment variables and run docker compose up.
Then open localhost:3000 in your browser.
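For reference, the .envrc for direnv might look like this (a sketch; the key values are placeholders):

```shell
# .envrc -- loaded automatically by direnv when entering the directory
export OPENAI_API_KEY=your-azure-openai-key   # placeholder: your Azure OpenAI key
export LITELLM_API_KEY=sk-1111                # should match master_key in config.yml
```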


Summary

I'm satisfied that I was able to create something functional in such a short amount of time.
