Trying Amazon Bedrock with the AWS CLI
As of today (2023/10/01), the latest AWS CLI v1 (1.29.57)
has gained Bedrock-related commands.
.aws-cli-v1 ❯ aws help | grep bedrock
o bedrock
o bedrock-runtime
AWS CLI v2 (2.13.22) does not have them yet.
The bedrock command lets you get and delete models and model lists, retrieve the invocation logging configuration for models, and create, stop, and inspect model customization jobs.
NAME
bedrock -
DESCRIPTION
Describes the API operations for creating and managing Bedrock models.
AVAILABLE COMMANDS
o create-model-customization-job
o delete-custom-model
o delete-model-invocation-logging-configuration
o get-custom-model
o get-foundation-model
o get-model-customization-job
o get-model-invocation-logging-configuration
o help
o list-custom-models
o list-foundation-models
o list-model-customization-jobs
o list-tags-for-resource
o put-model-invocation-logging-configuration
o stop-model-customization-job
o tag-resource
o untag-resource
The "customization" referred to here is so-called fine-tuning. create-model-customization-job:
.aws-cli-v1 ❯ aws bedrock create-model-customization-job help
NAME
create-model-customization-job -
DESCRIPTION
Creates a fine-tuning job to customize a base model.
You specify the base foundation model and the location of the training
data. After the model-customization job completes successfully, your
custom model resource will be ready to use. Training data contains
input and output text for each record in a JSONL format. Optionally,
you can specify validation data in the same format as the training
data. Bedrock returns validation loss metrics and output generations
after the job completes.
Model-customization jobs are asynchronous and the completion time
depends on the base model and the training/validation data size. To
monitor a job, use the GetModelCustomizationJob operation to retrieve
the job status.
For more information, see Custom models in the Bedrock User Guide.
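As a sketch, creating a fine-tuning job might look like the following. The job name, custom model name, role ARN, and S3 URIs are all placeholders, and the flag names are my reading of the CreateModelCustomizationJob API parameters, so verify them against `aws bedrock create-model-customization-job help` before use.

```shell
# Hypothetical values throughout: account ID, role, bucket, and names are placeholders.
aws bedrock create-model-customization-job \
  --job-name 'titan-ft-job' \
  --custom-model-name 'titan-ft-custom' \
  --role-arn 'arn:aws:iam::123456789012:role/BedrockFineTuningRole' \
  --base-model-identifier 'arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-tg1-large' \
  --training-data-config '{"s3Uri": "s3://my-bucket/train.jsonl"}' \
  --output-data-config '{"s3Uri": "s3://my-bucket/output/"}' \
  --region us-east-1

# The job is asynchronous, so poll its status as the help text suggests.
aws bedrock get-model-customization-job \
  --job-identifier 'titan-ft-job' \
  --region us-east-1 \
  --query 'status'
```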
Let's fetch the list of foundation models (FMs).
.aws-cli-v1 ❯ aws bedrock list-foundation-models --region us-east-1
{
"modelSummaries": [
{
"customizationsSupported": [
"FINE_TUNING"
],
"inferenceTypesSupported": [
"ON_DEMAND"
],
"inputModalities": [
"TEXT"
],
"modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-tg1-large",
"modelId": "amazon.titan-tg1-large",
"modelName": "Titan Text Large",
"outputModalities": [
"TEXT"
],
"providerName": "Amazon",
"responseStreamingSupported": true
},
{
"customizationsSupported": [],
"inferenceTypesSupported": [
"ON_DEMAND"
],
"inputModalities": [
"TEXT"
],
"modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-e1t-medium",
"modelId": "amazon.titan-e1t-medium",
"modelName": "Titan Text Embeddings",
"outputModalities": [
"EMBEDDING"
],
"providerName": "Amazon"
},
...
The models usable for text generation:
.aws-cli-v1 ❯ aws bedrock list-foundation-models --region us-east-1 \
--query 'modelSummaries[?contains(outputModalities[0], `TEXT`)].modelName'
[
"Titan Text Large",
"Titan Text G1 - Express",
"J2 Grande Instruct",
"J2 Jumbo Instruct",
"Jurassic-2 Mid",
"Jurassic-2 Mid",
"Jurassic-2 Ultra",
"Jurassic-2 Ultra",
"Claude Instant",
"Claude",
"Claude",
"Command"
]
The models usable for embedding:
.aws-cli-v1 ❯ aws bedrock list-foundation-models --region us-east-1 \
--query 'modelSummaries[?contains(outputModalities[0], `EMBEDDING`)].modelName'
[
"Titan Text Embeddings",
"Titan Text Embeddings v2",
"Titan Embeddings G1 - Text"
]
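The client-side JMESPath filtering above can probably also be done server-side. Assuming the `--by-output-modality` (and, analogously, `--by-provider`) flags that correspond to the ListFoundationModels API's filter parameters:

```shell
# Server-side filter instead of a client-side --query filter
# (flag names assumed from the ListFoundationModels API parameters)
aws bedrock list-foundation-models --region us-east-1 \
  --by-output-modality EMBEDDING \
  --query 'modelSummaries[].modelId' \
  --output text
```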
Fetch the details of an FM:
.aws-cli-v1 ❯ aws bedrock get-foundation-model --model-identifier 'amazon.titan-tg1-large' --region 'us-east-1'
{
"modelDetails": {
"customizationsSupported": [
"FINE_TUNING"
],
"inferenceTypesSupported": [
"ON_DEMAND"
],
"inputModalities": [
"TEXT"
],
"modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-tg1-large",
"modelId": "amazon.titan-tg1-large",
"modelName": "Titan Text Large",
"outputModalities": [
"TEXT"
],
"providerName": "Amazon",
"responseStreamingSupported": true
}
}
The bedrock-runtime command has only one subcommand, invoke-model, which actually runs inference against a specified model.
.aws-cli-v1 ❯ aws bedrock-runtime help
NAME
bedrock-runtime -
DESCRIPTION
Describes the API operations for running inference using Bedrock
models.
AVAILABLE COMMANDS
o help
o invoke-model
NAME
invoke-model -
DESCRIPTION
Invokes the specified Bedrock model to run inference using the input
provided in the request body. You use InvokeModel to run inference for
text models, image models, and embedding models.
For more information about invoking models, see Using the API in the
Bedrock User Guide .
For example requests, see Examples (after the Errors section).
See also: AWS API Documentation
SYNOPSIS
invoke-model
[--accept <value>]
--body <value>
[--content-type <value>]
--model-id <value>
<outfile>
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
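Per the synopsis, --model-id, --body, and an outfile are the required pieces. As a sketch of a text-generation call, assuming Claude Instant's 2023-era request body format (prompt / max_tokens_to_sample) — treat the body shape as an assumption and check the model provider's documented format:

```shell
# Assumed request body format for anthropic.claude-instant-v1 (2023-era text completions)
aws bedrock-runtime invoke-model \
  --model-id 'anthropic.claude-instant-v1' \
  --body '{"prompt": "\n\nHuman: Say hello.\n\nAssistant:", "max_tokens_to_sample": 100}' \
  --region us-east-1 \
  out.json

# The generated text should land in the outfile
jq -r '.completion' out.json
```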
For example, to compute an embedding, run:
aws bedrock-runtime invoke-model \
--model-id 'amazon.titan-embed-text-v1' \
--body '{"inputText": "Hello Bedrock"}' \
--region us-east-1 \
embd
❯ cat embd | jq '.inputTextTokenCount'
3
The dimensionality is 1536 (the same as OpenAI's embeddings).
❯ cat embd | jq '.embedding | length'
1536
❯ cat embd | jq '.embedding' | head -n 10
[
-0.73046875,
0.390625,
0.24511719,
0.111816406,
0.83203125,
0.79296875,
0.53515625,
-0.0009841919,
0.82421875,
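The same jq pipelines are handy for quick sanity checks on any embedding JSON of this shape. Here a tiny two-dimensional stand-in replaces the real `embd` file, but the expressions apply unchanged to the real 1536-dimensional output:

```shell
# Fabricated stand-in for the real `embd` file written by invoke-model
printf '{"inputTextTokenCount": 3, "embedding": [3.0, 4.0]}' > embd_sample

# Number of dimensions (1536 for the real Titan output)
jq '.embedding | length' embd_sample
# → 2

# L2 norm of the vector: square root of the sum of squares
jq '.embedding | map(. * .) | add | sqrt' embd_sample
# → 5
```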