
k8sgpt: following a Slack thread (interactive mode, custom analyzers, etc.)

tozastation

From Bartłomiej Płotka's thread:

Interactive mode of Analysis command. Generative APIs these days support continuous “chat” interaction (perhaps only some APIs?). It would be epic if user could pick one “failure” analysis item and continue to chat about it with LLM to learn more, without getting out of terminal.

A proposal that it would be great to be able to dig deeper into an Analyze result interactively, in a chat with the LLM.

Custom analyzers. What’s the story if we would like to add some for OSS Prometheus? I guess it could be considered to have it built-in. But what about things specific to our managed service? That sounds like it requires some kind of plugin system (or fork). Any thoughts around it?

They also want to implement analysis for Prometheus and for Google Cloud managed services, and are discussing whether that should be built in or added through an integration/plugin mechanism.
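A plugin-style analyzer system could be sketched as a shared interface plus a registry that external analyzers register into, instead of compiling everything into the core. This is a minimal illustration only; the type and method names below are invented and do not match k8sgpt's actual API.

```go
package main

import "fmt"

// Result is a simplified stand-in for one analysis finding.
type Result struct {
	Kind  string
	Name  string
	Error string
}

// Analyzer is a hypothetical plugin interface: built-in analyzers
// (Pod, Service, ...) and external ones (Prometheus, a managed
// service) would all satisfy the same contract.
type Analyzer interface {
	Analyze() ([]Result, error)
}

// prometheusAnalyzer is a toy example of a non-core analyzer.
type prometheusAnalyzer struct{}

func (prometheusAnalyzer) Analyze() ([]Result, error) {
	return []Result{{
		Kind:  "PrometheusTarget",
		Name:  "pod-y",
		Error: "scrape target down",
	}}, nil
}

// registry maps analyzer names to implementations; a plugin system
// would populate this at startup rather than at compile time.
var registry = map[string]Analyzer{
	"prometheus": prometheusAnalyzer{},
}

func main() {
	// Run every registered analyzer and print its findings.
	for name, a := range registry {
		results, err := a.Analyze()
		if err != nil {
			continue
		}
		for _, r := range results {
			fmt.Println(name, r.Kind, r.Name, r.Error)
		}
	}
}
```

The registry indirection is what makes the built-in vs. external question mostly a packaging decision: both kinds of analyzer look identical to the caller.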

Custom prompt prefix for specific problem / analysis. Default prompt template is great, but I wonder if analyzer could.. propose additional prompt in the code? e.g. propose what solution could be explained or what specific product was involved :thinking_face:

The idea is to be able to feed extra information on top of the existing prompt,
e.g. also passing along additional solution hints or the specific product involved.
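The "custom prompt prefix" proposal boils down to letting an analyzer prepend its own context before the shared template. A minimal sketch, assuming a single default template; the wording of `defaultTemplate` and the function name `buildPrompt` are invented here, not taken from k8sgpt.

```go
package main

import (
	"fmt"
	"strings"
)

// defaultTemplate mimics the idea of one shared prompt used for
// every failure; the real template's wording differs.
const defaultTemplate = "Simplify the following Kubernetes error message and suggest a fix: %s"

// buildPrompt lets an analyzer contribute a problem-specific prefix
// (e.g. naming the product involved) ahead of the default template.
// An empty prefix falls back to the default behaviour.
func buildPrompt(analyzerPrefix, errMsg string) string {
	parts := []string{}
	if analyzerPrefix != "" {
		parts = append(parts, analyzerPrefix)
	}
	parts = append(parts, fmt.Sprintf(defaultTemplate, errMsg))
	return strings.Join(parts, "\n")
}

func main() {
	fmt.Println(buildPrompt(
		"Context: this error comes from a Prometheus scrape config.",
		"target connection refused"))
}
```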

Expectation input for Analysis. That is a pandora box, but maybe there is a way to scope this down. If the pod is crashlooping, that’s likely it’s expected to be actually running, thus analysis covering that problem makes sense. But there are tons of cluster issues that depends on what user expected.
For example, (maybe silly example) user might expect 2 replicas to be running. Why it’s only one replica running now? (perhaps replica field is set to 1?).
More specific example we would love to aim for is for user to be able to say e.g. “I expect Prometheus metric X for pod Y to be returned on my PromQL endpoint. What’s wrong?“. Some Prometheus analysis could do very specific analysis for that particular pod, scrape target and metric. (It could be as simple as --pod=… --metric=… flags)

Similar to the custom prompt idea above, but this one is about letting the user pass along the context of what outcome they actually expect from the analysis.
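The thread's concrete suggestion is as simple as `--pod=… --metric=…` flags that turn the user's expectation into a scoped analysis. A sketch of how that expectation could be captured; the flag names come from the thread, but they are a proposal, not shipped k8sgpt options, and `expectation` is a made-up helper.

```go
package main

import (
	"flag"
	"fmt"
)

// expectation renders the user's stated expectation as text that a
// Prometheus-specific analysis (and the LLM prompt) could consume.
func expectation(pod, metric string) string {
	return fmt.Sprintf(
		"I expect metric %q for pod %q to be returned on my PromQL endpoint.",
		metric, pod)
}

func main() {
	// Proposed flags from the thread: scope the analysis to one
	// pod / scrape target / metric instead of the whole cluster.
	pod := flag.String("pod", "", "pod whose metric is expected")
	metric := flag.String("metric", "", "metric expected to be scraped")
	flag.Parse()
	fmt.Println(expectation(*pod, *metric))
}
```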

(wild) Expectation input for Analysis.. using natural language. I guess that futuristic, but controlling what is analysed and how.. with LLM help.. that would be nice one day.. :sunglasses: I imagine this is ultra hard, required LLM model to trigger or suggest actions etc, but how knows, worth experimenting?

This one I don't quite follow.

tozastation

The thread has grown to around 40 replies.

On custom analyzers, building analysis for Argo and other projects has also come up.

tozastation

These are great points - what I will do this morning is capture them in the project board. After that I think we need a community meeting to drill into some of the depth and understand the features/mechanics and desires of them

The plan is to hold a community meeting and talk these ideas through.

tozastation

Hi,
as mentioned few days ago, a solution that could suit the need is to reuse the VAP concept from kubernetes itself (https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/) and define a k8sgpt CRD based on it:
IMHO, no need to re-invent the wheel
CRD is life, it could be used by the k8sgpt cli simply by loading yaml definitions from the current FS (or the one embedded directly in the k8sgpt binary) and easily handled by the operator
Any project (Argo, Keda, ...) can set its own CR in their deployment method (helm, kustomize, ...)
last but not least, the translation to the kubernetes VAP definition should easy and k8sgpt could then reconfigure the k8s cluster with accurate VAP config to avoid mistake in a pro-active way

Apparently IMHO = "in my humble opinion", i.e. "in my view".
Roughly, the proposal is to go from built-in analyses to analyses defined in files and loaded at runtime.
They point out that this is similar in spirit to Kubernetes' ValidatingAdmissionPolicy.
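Modeled on ValidatingAdmissionPolicy, such a k8sgpt CRD would pair match criteria with CEL-style validation expressions whose failures become analysis items. A sketch of what the Go types behind that CRD might look like; every field name here is invented for illustration, and no such CRD exists in k8sgpt today.

```go
package main

import "fmt"

// Validation mirrors VAP's validations[] entries: a CEL-like
// expression plus the message emitted when it fails.
type Validation struct {
	Expression string
	Message    string
}

// CustomAnalyzerSpec is a hypothetical CRD spec for a file-loadable
// analyzer, loosely modeled on ValidatingAdmissionPolicy.
type CustomAnalyzerSpec struct {
	// MatchKinds selects which resource kinds the analyzer inspects,
	// analogous to VAP's matchConstraints.
	MatchKinds []string
	// Validations hold the expressions to evaluate; each failure
	// would become a finding handed to the LLM.
	Validations []Validation
}

func main() {
	// An Argo or Keda chart could ship a CR like this alongside its
	// deployment manifests, as suggested in the thread.
	spec := CustomAnalyzerSpec{
		MatchKinds: []string{"Deployment"},
		Validations: []Validation{{
			Expression: "object.spec.replicas >= 2",
			Message:    "deployment should run at least two replicas",
		}},
	}
	fmt.Println(spec.MatchKinds[0], len(spec.Validations), spec.Validations[0].Message)
}
```

Because the shape tracks VAP closely, translating such a CR into a real ValidatingAdmissionPolicy (the proactive reconfiguration mentioned above) would be mostly a field-by-field mapping.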