Healthcare AI Agent - Guide to Building with Amazon Bedrock AgentCore
Introduction
AWS Samples
This Healthcare AI Agent project is sample code included in AWS Samples (github.com/aws-samples).
About this repository:
- GitHub: amazon-bedrock-agents-healthcare-lifesciences
- License: MIT License
- Purpose: Demonstrate AI agent implementation examples in healthcare and life sciences
Overall Project Structure

Diagram Explanation
This repository consists of four major components:
amazon-bedrock-agents-healthcare-lifesciences/
├── agents_catalog/              ← ① Ready-to-use specialized agents (28+ types)
│   ├── 01-biomarker-discovery/
│   ├── 02-clinical-trial/
│   └── ...
│
├── multi_agent_collaboration/   ← ② Multi-agent collaboration
│   ├── cancer_biomarker_discovery/
│   └── clinical_trial_protocol/
│
├── evaluations/                 ← ④ Agent performance evaluation framework
│
├── ui/                          ← ③ Web UI (for the legacy method)
│
└── agentcore_template/          ← Deployment target in this article
    └── Empty template for building your own agent (not directly shown in the diagram)
② Multi-Agent Collaboration Example
Cancer Biomarker Discovery Agent
Overview:
The success rate of clinical trials in cancer treatment is very low at approximately 5%. Patient stratification using biomarkers (which treatment works for which patients) can improve clinical development success probability by 2-5 times.
Biomarkers (biological indicators) are biological molecules that indicate the presence of cancer, also called tumor markers. They are found in blood, body fluids, tissues, etc.
Three types of biomarkers in this project:
- Genomic: Gene expression data (RNA expression levels of 21 genes)
- Clinical: Demographics, tumor stage, survival status
- Radiology/Imaging: Tumor characteristics obtained from CT/PET-CT scans

Architecture Components:
Center: Strands Orchestrator Agent analyzes user questions, assigns tasks to four specialized agents, and integrates results
Four Specialized Agents:
- Agent 1: Database specialist - Natural language→SQL conversion, Redshift search
- Agent 2: Clinical evidence researcher - Literature search via PubMed (biomedical literature database provided by the U.S. National Library of Medicine) API
- Agent 3: Medical Imaging expert - CT scan analysis via SageMaker
- Agent 4: Statistician - Statistical analysis, Kaplan-Meier chart (survival rate changes over time) generation
Example Operation: "What are the top 5 biomarkers correlated with survival in chemotherapy patients? Generate a bar chart too"
1. User → Question
2. Orchestrator → LLM analysis
Decision: Database specialist + Statistician needed
3. Database specialist
• Natural language→SQL conversion
• Retrieve biomarker data for chemotherapy patients from Redshift
• Result: Gene expression values (numerical values indicating RNA quantity for each gene) and survival data (Dead/Alive)
4. Statistician
• Execute survival regression analysis
• Identify top 5 with lowest p-values
• Generate bar chart
5. Orchestrator → Integrate results
Generate final answer with LLM
6. User ← Receives "Top 5 biomarkers: GDF15 (p=0.001), POSTN (p=0.003)..."
+ Bar chart image
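The routing decision in step 2 can be sketched in plain Python. This is an illustrative stand-in only: the real orchestrator delegates the decision to an LLM, and the specialist functions and keyword heuristic below are hypothetical.

```python
# Illustrative sketch: the real orchestrator uses an LLM to decide which
# specialists to involve; a keyword heuristic stands in for that decision here.
from typing import Callable, Dict, List

# Hypothetical specialist "agents" as plain functions
def database_specialist(question: str) -> str:
    return "rows: gene expression + survival status for chemotherapy patients"

def statistician(question: str) -> str:
    return "top-5 biomarkers by p-value + bar chart"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "database": database_specialist,
    "statistician": statistician,
}

def route(question: str) -> List[str]:
    """Decide which specialists a question needs (LLM stand-in)."""
    chosen = []
    q = question.lower()
    if "biomarker" in q or "patients" in q:
        chosen.append("database")
    if "correlated" in q or "chart" in q or "top" in q:
        chosen.append("statistician")
    return chosen

def orchestrate(question: str) -> Dict[str, str]:
    """Run each selected specialist and collect results for final synthesis."""
    return {name: SPECIALISTS[name](question) for name in route(question)}

results = orchestrate(
    "What are the top 5 biomarkers correlated with survival in chemotherapy patients?"
)
```

In the real system, the orchestrator would then pass these intermediate results back to the LLM to generate the final integrated answer.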
System Features:
- Multimodal: Integrates clinical, genomic, and CT scan image data
- Diverse Tools: Text2SQL, statistical models, literature search, image processing
- Transparency: Displays chain of thought to improve reliability
Relationship with agentcore_template:
This system uses the same technology stack as agentcore_template:
- Runs on AgentCore Runtime
- Uses Strands Agents framework
- Each specialized agent has its own tools
- Orchestrator coordinates everything
Using agentcore_template, you can build such advanced multi-agent systems.
Reference: multi_agent_collaboration/cancer_biomarker_discovery
Position of This Article
Content covered in this article:
1. Overall Picture Explanation (above section): Project structure and multi-agent collaboration example
2. AgentCore Template Explanation (this section onwards): Architecture explanation and deployment procedures
AgentCore Template
"Starter kit with infrastructure for creating enterprise-grade AI agents"
This template uses Amazon Bedrock AgentCore.
What is Amazon Bedrock AgentCore:
• Fully managed agent execution platform provided by AWS
• Five components: Runtime, Gateway, Memory, Identity, Observability
• Can be used in combination with Strands Agents framework
- Existing Agents (agents_catalog/): Ready-to-use specialized agents
- AgentCore Template: Foundation for creating your own (empty, for customization to your use case)
Immediately After Deployment:
- ✅ Infrastructure complete (Runtime, Gateway, Memory, Identity, Observability)
- ✅ Can converse with Claude
- ❌ No practical tools (samples only. Customize yourself.)
Strands Agents Framework
This template uses Strands Agents, an open-source SDK developed by AWS.
Key Features:
- Model-first design: The Foundation Model (LLM) is the core of agent intelligence, enabling advanced autonomous reasoning
- MCP Integration: Native support for the Model Context Protocol (MCP)
- Flexible Model Selection: Anthropic Claude, Amazon Nova (Premier, Pro, Lite, Micro), etc.
Agentic Loop

The core concept of Strands Agents is the Agentic Loop. Agents repeat Model Loop (reasoning/tool selection) and Tool Loop (tool execution) to solve complex tasks step by step.
Why it matters: autonomy, iteration, and flexibility (combining tools such as MCP tools and program functions)
Concrete Example: Patient Information Retrieval Agent
User input
user_input = "Tell me the latest test results for patient ID 12345"
Agentic Loop (internal processing flow)
1. Prompt → Start Agent
2. [Model Loop 1]
LLM: "Need to retrieve patient information"
→ Tool selection: search_patient
3. [Tool Loop 1]
Execute search_patient(12345)
→ Result: {name: "Taro Yamada", age: 45}
4. [Model Loop 2]
LLM: "Next need to retrieve test results"
→ Tool selection: get_lab_results
5. [Tool Loop 2]
Execute get_lab_results(12345)
→ Result: {blood_pressure: "140/90"}
6. [Model Loop 3]
LLM: "Information complete, generate final answer"
→ Tool selection: None (end)
7. Return final response
"Latest test results for Taro Yamada (45 years old): Blood pressure 140/90"
This way, the Agentic Loop enables agents to autonomously select and execute necessary tools, solving complex tasks step by step.
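The seven steps above can be condensed into a runnable sketch. The scripted `fake_model` below is a stand-in for the LLM's reasoning; the tool names follow the example, but everything else is illustrative.

```python
# Minimal simulation of the Agentic Loop: a scripted "model" alternates with
# tool execution until it decides no more tools are needed. Real Strands agents
# delegate this decision to an LLM; fake_model is a stand-in for that.
TOOLS = {
    "search_patient": lambda pid: {"name": "Taro Yamada", "age": 45},
    "get_lab_results": lambda pid: {"blood_pressure": "140/90"},
}

def fake_model(history):
    """Stand-in for the LLM: picks the next tool based on what it has so far."""
    seen = {h["tool"] for h in history}
    if "search_patient" not in seen:
        return {"tool": "search_patient", "args": ["12345"]}
    if "get_lab_results" not in seen:
        return {"tool": "get_lab_results", "args": ["12345"]}
    return None  # enough information: stop and answer

def agentic_loop(prompt):
    history = []
    while True:
        decision = fake_model(history)  # Model Loop: reason, select a tool
        if decision is None:
            break
        result = TOOLS[decision["tool"]](*decision["args"])  # Tool Loop: execute
        history.append({"tool": decision["tool"], "result": result})
    patient = history[0]["result"]
    labs = history[1]["result"]
    return (f"Latest test results for {patient['name']} ({patient['age']}): "
            f"blood pressure {labs['blood_pressure']}")

answer = agentic_loop("Tell me the latest test results for patient ID 12345")
```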
AgentCore Template Overall Architecture

Amazon Bedrock AgentCore used in the template consists of five components.
By using Amazon Bedrock AgentCore, developers can rapidly deploy AI agents to production with scale, reliability, and security essential for real-world deployment.
1. Amazon Bedrock AgentCore Runtime
Role: Secure, serverless dedicated hosting environment (Docker container-based)
Key Benefits:
- Pay-as-you-go: Dynamic provisioning, typically no charges while agent waits for LLM response
- Long-running (up to 8 hours): Supports complex multi-step tasks, agent collaboration
- Session isolation: Isolate each user in dedicated microVM
- Large payload (up to 100MB): Supports multimodal data like images, audio, video
- Framework/model agnostic: Strands, LangGraph, CrewAI / Bedrock, OpenAI, Gemini, etc.
2. Amazon Bedrock AgentCore Gateway
Role: Standardized connection layer between agents and external tools
Key Benefits:
- Functions as MCP server: Single access point for agents to interact with tools
- Supports multiple tool types: Lambda functions, OpenAPI specifications, MCP servers, etc.
- Automatic conversion: Automatically converts REST APIs to MCP-compatible tools
- Secure authentication management: OAuth authentication (Gateway Authorizer), etc.
3. Amazon Bedrock AgentCore Memory
Role: Agent state management and context persistence
Key Benefits:
- Supports both short-term and long-term memory: Short-term (immediate) + Long-term (learning)
- User understanding: Mechanism to resume interrupted conversations and remember user preferences/tendencies
- Easy construction: Eliminates complex memory infrastructure management
4. Amazon Bedrock AgentCore Identity
Role: Comprehensive authentication and authorization management for agents
Key Benefits:
- Bidirectional authentication: Integrated management of inbound (user→agent) + outbound (agent→resource)
- Enterprise IdP integration: Seamless integration with Okta, Microsoft Entra ID, Cognito, etc.
- Audit trail and compliance support: Log all agent actions
- Hierarchical access control: Unique ARN per agent, apply policies at hierarchical levels
5. Amazon Bedrock AgentCore Observability
Role: Visualize and analyze agent-specific behavior
Key Benefits:
- Integrated operations dashboard: Tracing, debugging, monitoring in production
- Agent workflow: Detailed visualization of each step
- Comprehensive metrics: Latency, resource usage, error rates, request patterns, etc.
AgentCore Template Deployment Procedures
Position of This Section
In addition to the procedures in the README, this section documents problems actually encountered and their solutions.
Prerequisites
1. AWS Account
Requirements:
- Active AWS account
- Appropriate permissions (able to create CloudFormation, Lambda, Cognito, Bedrock, etc.)
2. Install and Configure AWS CLI
Refer to these procedures
Trial settings in this article:
- Default region: us-east-1
- Deployment destination: us-west-2 (overridden with environment variable)
3. Enable Bedrock Model Access
Access Amazon Bedrock Console, select "Model access" from left menu. Request access to the following models:
- Anthropic Claude 3.7 Sonnet
- Anthropic Claude 3.5 Haiku
Verification method:
aws bedrock list-foundation-models --by-provider Anthropic --region us-west-2
In this trial, Claude 3.7 Sonnet was used
4. Python 3.10 or Higher
Verification method:
python3 --version
Install Python 3.10 (for macOS):
brew install python@3.10
# Verify installation
python3.10 --version
Why 3.10+ is required:
- The agentcore CLI's dependency packages require Python 3.10+
- Some features may not work with 3.9 or below
Deployment Procedures
Step 0: Customize Tools (Pre-deployment Preparation)
Verify Sample Tools
Tools Included by Default:
1. Local Tool: get_system_info
from typing import Dict, Any
from strands import tool

@tool(
    name="get_system_info",
    description="Gets basic system information"
)
def get_system_info() -> Dict[str, Any]:
    """Sample tool that returns Agent system information"""
    return {
        "status": "operational",
        "version": "1.0.0",
        "capabilities": ["text_processing", "memory_storage", "tool_execution"]
    }
Role of this tool:
- Demo tool to verify Agent can call tools
- Executes within Agent container (no external access needed)
- Minimal implementation example
2. Gateway Lambda Tool: echo_message
def echo_message(message: str) -> str:
    """Sample tool that returns received message as-is"""
    return f"Echo: {message}"

def lambda_handler(event, context):
    """Lambda entry point"""
    extended_tool_name = context.client_context.custom["bedrockAgentCoreToolName"]
    resource = extended_tool_name.split("___")[1]
    if resource == "echo_message":
        message = event.get("message")
        if not message:
            return {"statusCode": 400, "body": "❌ Please provide message"}
        return {"statusCode": 200, "body": f"👤 Message echo: {echo_message(message)}"}
    return {"statusCode": 400, "body": f"❌ Unknown toolname: {resource}"}
Role of this tool:
- Demo tool to verify Gateway can call Lambda
- Executes as Lambda function (outside Agent container)
- Implementation example of external tool integration
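Since the handler only reads `event` and `context.client_context.custom`, it can be smoke-tested locally before deployment. A minimal sketch (the handler logic is copied from the sample above in simplified form; the tool-name string `myapp-gw___echo_message` is an assumption inferred from the `___` split):

```python
# Local smoke test for a Gateway-style Lambda handler. The Lambda context is
# stubbed with types.SimpleNamespace so no AWS environment is needed.
import types

def echo_message(message: str) -> str:
    return f"Echo: {message}"

def lambda_handler(event, context):
    extended_tool_name = context.client_context.custom["bedrockAgentCoreToolName"]
    resource = extended_tool_name.split("___")[1]
    if resource == "echo_message":
        message = event.get("message")
        if not message:
            return {"statusCode": 400, "body": "Please provide message"}
        return {"statusCode": 200, "body": echo_message(message)}
    return {"statusCode": 400, "body": f"Unknown toolname: {resource}"}

# Stub only the pieces of the Lambda context that the handler reads
fake_context = types.SimpleNamespace(
    client_context=types.SimpleNamespace(
        custom={"bedrockAgentCoreToolName": "myapp-gw___echo_message"}  # assumed format
    )
)

ok = lambda_handler({"message": "hello"}, fake_context)
bad = lambda_handler({}, fake_context)
```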
Add Patient Search Tool
This Implementation: Proceed by adding practical patient search tool for customization
1. Create healthcare_tools.py:
from typing import Dict, Any
from strands import tool

# Mock database for 5 patients
PATIENT_DATABASE = {
    "12345": {
        "patient_id": "12345",
        "name": "Taro Yamada",
        "age": 45,
        "blood_pressure": "140/90",
        "diagnosis": "Hypertension",
        "notes": "Recommend low-salt diet and exercise therapy"
    },
    "12346": {
        "patient_id": "12346",
        "name": "Hanako Sato",
        "age": 38,
        "blood_pressure": "120/80",
        "diagnosis": "Healthy",
        "notes": "No abnormalities in regular checkup"
    },
    # ... (3 more patients)
}

@tool(
    name="search_patient",
    description="Search patient information by patient ID"
)
def search_patient(patient_id: str) -> Dict[str, Any]:
    """Search patient information by patient ID"""
    patient = PATIENT_DATABASE.get(patient_id)
    if patient:
        return {"status": "success", "data": patient}
    else:
        return {
            "status": "error",
            "message": f"Patient ID {patient_id} not found",
            "available_ids": list(PATIENT_DATABASE.keys())
        }
2. Update agent_task.py:
from agent.agent_config.tools.sample_tools import get_system_info
from agent.agent_config.tools.healthcare_tools import search_patient  # ← Add

# List of tools Agent can use
tools=[get_system_info, search_patient]  # ← Add search_patient
For this trial, the tool was implemented as a local tool, like the default sample tool (a mock database is sufficient).
Note: For production with actual databases (RDS/DynamoDB), implement in lambda_function.py as Gateway Lambda tool and add schema to api_spec.json.
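As a hedged sketch of that production path: the table name `patients`, the event shape, and the simplified dispatch below are all assumptions (the template's real Gateway handler routes by the `bedrockAgentCoreToolName` field, as in the echo sample, and the schema would also be registered in api_spec.json).

```python
# Sketch of a DynamoDB-backed patient lookup as a Gateway Lambda tool.
# The table is injectable so the logic can be exercised without AWS.
def search_patient(patient_id: str, table) -> dict:
    """Look up a patient record; `table` is a DynamoDB Table (or a test double)."""
    item = table.get_item(Key={"patient_id": patient_id}).get("Item")
    if item:
        return {"status": "success", "data": item}
    return {"status": "error", "message": f"Patient ID {patient_id} not found"}

def lambda_handler(event, context, table=None):
    if table is None:  # real Lambda path (table name is an assumption)
        import boto3
        table = boto3.resource("dynamodb").Table("patients")
    return search_patient(event.get("patient_id", ""), table)

# In-memory stand-in mimicking the small slice of the Table API used above
class FakeTable:
    def __init__(self, items):
        self.items = items
    def get_item(self, Key):
        item = self.items.get(Key["patient_id"])
        return {"Item": item} if item else {}

result = lambda_handler(
    {"patient_id": "12345"}, None,
    table=FakeTable({"12345": {"patient_id": "12345", "name": "Taro Yamada"}}),
)
```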
Step 1: Deploy Infrastructure
1-1. Setup Python Environment (Local)
Local
├── .venv/                ← Python virtual environment
│   └── agentcore CLI     ← Developer tool calling AWS APIs
│
│   ↓ Operate via AWS API
│
AWS Cloud
├── CloudFormation        ← Create infrastructure
├── Lambda                ← Execute Agent tools
└── AgentCore Runtime     ← Agent itself runs here (Docker container)
- The agentcore CLI is a developer tool (like Terraform or AWS CDK): it calls AWS APIs from your local machine to create resources in the cloud
- The agent itself will later be Dockerized and run in the AWS cloud
Create virtual environment:
cd agentcore_template
# Explicitly specify Python 3.10 (verified/installed in prerequisites)
python3.10 -m venv .venv
source .venv/bin/activate
# Verify Python version
python --version
# → Verify it's Python 3.10.x
# Install dependency packages
uv pip install -r dev-requirements.txt
1-2. Deploy Infrastructure (CloudFormation)
⚠️ Note 1: Verify Deployment Region
prereq.sh automatically reads AWS CLI default region setting (verified in prerequisites).
This trial: Deploy to us-west-2
Verification result in prerequisites: Default region is us-east-1
Action: Specify us-west-2 with environment variables. (The AWS CLI global settings are shared with other projects; environment variables apply only to this terminal session, so each project can use a different region.)
# Valid only during terminal session (doesn't affect other projects)
export AWS_DEFAULT_REGION=us-west-2
export AWS_REGION=us-west-2
# Verify
echo $AWS_DEFAULT_REGION
# → us-west-2
⚠️ Note 2: Change Cognito MFA Settings
What is MFA (Multi-Factor Authentication):
- Two-factor authentication (password + SMS verification code)
Template default setting:
# prerequisite/cognito.yaml (original)
MfaConfiguration: 'OPTIONAL' # ← User can choose
Problem:
- OPTIONAL requires SMS setup (phone number verification)
- Deployment fails without SMS setup
This action: Disable MFA for development environment
How to fix:
# Edit prerequisite/cognito.yaml
MfaConfiguration: 'OFF' # Change OPTIONAL → OFF
Impact:
- ✅ Easier deployment (no SMS setup needed)
- ✅ Sufficient for development/demo environment
- ❌ Two-factor authentication unavailable
Recommendation for production:
- Set MFA to OPTIONAL or ON
- Add SMS setup (Amazon SNS configuration required)
Execute deployment:
chmod +x scripts/prereq.sh
./scripts/prereq.sh myapp # Specify prefix
Processing flow:
1. Create S3 bucket: myapp-<ACCOUNT_ID>
2. ZIP Lambda code: prerequisite/lambda/python
3. Upload Lambda code to S3
4. CloudFormation Stack 1: Infrastructure
   - Lambda Function (for Gateway tool: echo_message)
   - IAM Role (for Gateway)
   - IAM Role (for Runtime)
   - SSM Parameters (save configuration values)
5. CloudFormation Stack 2: Cognito
   - Cognito User Pool
   - Cognito User Pool Client (for Web)
   - Cognito User Pool Client (for Machine: M2M authentication)
   - Cognito Domain
   - SSM Parameters (Cognito configuration values)
6. Create Knowledge Base (optional)
Verify deployment results:
chmod +x scripts/list_ssm_parameters.sh
./scripts/list_ssm_parameters.sh
Displayed content: 18 parameters
Step 2: Create AgentCore Gateway

Gateway role: Bridge connecting Agent and external tools (Lambda)
python scripts/agentcore_gateway.py create --name myapp-gw
Authentication configured in Gateway:
- Inbound authentication (Agent → Gateway)
  - Who can access the Gateway
  - Authentication method: Cognito JWT (JSON Web Token = digital ID)
  - Allowed clients: Machine Client (created in Step 1)
- Outbound authentication (Gateway → Lambda)
  - The Gateway's permission to call Lambda
  - Authentication method: Gateway IAM Role (created in Step 1)
What's created:
- Gateway (MCP protocol compatible)
- Lambda target (connects to Lambda function created in Step 1)
Step 3: Configure AgentCore Identity (Credentials Provider)

Credentials Provider role: Mechanism to automatically obtain JWT tokens needed when Agent accesses Gateway
python scripts/cognito_credentials_provider.py create --name myapp-cp
Why needed:
- Agent Runtime needs JWT token when accessing Gateway
- Credentials Provider automatically obtains token from Cognito
- No need to manually manage tokens
Mechanism:
Agent Runtime
↓ "Want to access Gateway"
Credentials Provider
↓ Automatically access Cognito (Machine Client ID + Secret)
↓ Obtain JWT token
↓ Pass to Agent Runtime
Agent Runtime
↓ Access Gateway using JWT token
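Under the hood this is a standard OAuth 2.0 client-credentials exchange against the Cognito token endpoint. The sketch below only builds the request (no network call is made); the domain and client values are placeholders.

```python
# What the Credentials Provider automates is an OAuth 2.0 client_credentials
# exchange against Cognito's /oauth2/token endpoint. This builds the request
# by hand for illustration; nothing is sent over the network.
import base64

def build_token_request(cognito_domain: str, client_id: str, client_secret: str):
    """Return (url, headers, body) for a client_credentials token request."""
    url = f"https://{cognito_domain}/oauth2/token"
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials"}
    return url, headers, body

url, headers, body = build_token_request(
    "myapp-demo.auth.us-west-2.amazoncognito.com",  # placeholder domain
    "machine-client-id", "machine-client-secret",   # placeholders
)
# The JSON response would contain access_token, which the Agent then sends to
# the Gateway as "Authorization: Bearer <access_token>".
```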
Step 4: Create AgentCore Memory

Purpose: Save Agent's conversation history and learning content
Memory types:
- Short-term Memory: Conversation history within session (used this time)
- Long-term Memory: Learning content across sessions (not used this time)
Three Memory strategies:
- SEMANTIC (fact extraction): "Patient ID 12345 is Taro Yamada"
- SUMMARY (conversation summary): "Asked about blood pressure in previous conversation"
- USER_PREFERENCE (user settings): "Prefers detailed explanations"
Create Memory:
python scripts/agentcore_memory.py create --name myapp
What's created:
- Memory ID: myapp-<RANDOM_ID>
- Event retention period: 30 days (default)
- Three strategies (SEMANTIC, SUMMARY, USER_PREFERENCE)
Example operation:
User: "My name is Taro"
Agent: "I'll remember that your name is Taro"
→ Save to Memory
User: "What is my name?"
Agent: "Your name is Taro"
→ Retrieve from Memory
Step 5: Setup and Deploy Agent Runtime

Purpose: Configure and deploy execution environment where Agent actually runs
⚠️ Note: Agent name must start with prefix (myapp)
5-1. Retrieve Values from SSM Parameters
./scripts/list_ssm_parameters.sh
Required values:
- Required: /app/myapp/agentcore/runtime_iam_role
- Only when using OAuth: /app/myapp/agentcore/cognito_discovery_url, /app/myapp/agentcore/web_client_id
5-2. Configure Agent Runtime
agentcore configure \
--entrypoint agent/main.py \
-er arn:aws:iam::<ACCOUNT_ID>:role/MyAppStackInfra-RuntimeAgentCoreRole-XXXXX \
--name myappHealthAgent
Interactive configuration:
- Requirements file: agent/requirements.txt (auto-detected)
- ECR Repository: select Auto-create
- Authorization: select IAM or OAuth
  - This time: IAM (for development)
  - OAuth: use when publishing to end users
- Memory: select Short-term only
  - Long-term takes time to process (120-180 seconds) and is unnecessary at the development stage
IAM vs OAuth differences:
| Authentication | Users | Credentials | Example |
|---|---|---|---|
| IAM | Developers/administrators | AWS CLI credentials | agentcore invoke command |
| OAuth | End users | Cognito login | Streamlit UI (app_oauth.py) |
This selection: IAM (development/demo environment)
Important note:
- Runtime authentication (IAM/OAuth) and Gateway authentication (M2M) are independent
- Gateway connection is automated with M2M authentication (Credentials Provider)
Created configuration file (.bedrock_agentcore.yaml):
- Agent name: myappHealthAgent
- Platform: linux/arm64 (AWS Graviton)
- Memory: Short-term only (30-day retention)
- Observability: Enabled (CloudWatch Logs, X-Ray)
5-3. Deploy Agent Runtime
⚠️ Important: Delete Old Configuration File
rm -f .agentcore.yaml
Reason:
- .agentcore.yaml = old configuration file
- .bedrock_agentcore.yaml = new configuration file (created in 5-2)
- The new configuration won't be picked up if the old file remains
Execute deployment:
agentcore launch
What happens:
1. Build with CodeBuild (approximately 61 seconds)
   - Docker image build (ARM64, approximately 305MB)
   - Push to ECR
2. Runtime deployment
   - Deploy container to Agent Runtime
   - Connect Memory (created in Step 4)
   - Connect Gateway (created in Step 2)
   - Enable Observability (CloudWatch Logs, X-Ray)
Deployed Agent structure:
agent/
├── main.py              # Entry point
├── requirements.txt     # Dependency packages
└── agent_config/
    ├── agent.py         # Agent definition (Claude 3.7 Sonnet)
    ├── agent_task.py    # Tool configuration
    └── tools/
        ├── sample_tools.py      # Default tool (get_system_info)
        └── healthcare_tools.py  # Added tool (search_patient) ← Added in Step 0
5-4. Verify Agent Operation
For IAM authentication (this time):
agentcore invoke '{"prompt": "Hello"}'
For OAuth authentication:
python tests/test_agent.py myappHealthAgent -p "Hello"
Test results:
- ✅ Agent responds normally
- ✅ Session ID issued
- ✅ Tools available via Gateway
5-5. Observability (Monitoring)
Auto-enabled: Automatically enabled in agentcore configure in Step 5-2
What's enabled:
- CloudWatch Logs: Agent execution logs
- X-Ray tracing: Records Agent reasoning steps, tool calls, model interactions
- GenAI Observability Dashboard: Visualizes Agent behavior
Display dashboard (optional):
- Enable Transaction Search in CloudWatch
- Access GenAI Observability Dashboard
Verify logs:
aws logs tail /aws/bedrock-agentcore/runtimes/myappHealthAgent-XXXXX-DEFAULT --follow
Tools Agent can use:
- Local tools (execute within the Agent container):
  - get_system_info: get system information (default)
  - search_patient: patient search (added in Step 0)
- Gateway tools (execute via Lambda):
  - echo_message: message echo back (sample)
- Strands built-in tools:
  - current_time: get current time
Step 6: Launch Streamlit UI
Purpose: Local UI for calling Agent Runtime
Local
├── Streamlit UI (app.py)
│     ↓ Call AWS API with boto3
│     ↓
AWS Cloud
└── AgentCore Runtime
    └── Agent container (running)
        └── search_patient tool (added in Step 0)
Important: Streamlit UI runs locally, Agent runs on AWS
Launch Streamlit UI:
cd agentcore_template
source .venv/bin/activate
export AWS_DEFAULT_REGION=us-west-2
streamlit run app.py --server.port 8501
Open in browser: http://localhost:8501
How to use:
- Select the Agent in the sidebar (myappHealthAgent)
- Enter a question in the chat input field

What's happening:
1. Streamlit UI sends the question to Agent Runtime (via boto3)
2. Agent calls the search_patient tool (added in Step 0)
3. Patient information is retrieved from the mock database
4. Agent formats and returns the result
5. Result is displayed in the Streamlit UI
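For reference, the call the UI makes can be sketched as follows. The client name `bedrock-agentcore` and the `invoke_agent_runtime` parameters follow the InvokeAgentRuntime API as of writing, the ARN is a placeholder, and the actual call is left commented out because it requires AWS credentials; only the payload builder runs here.

```python
# Sketch of invoking a deployed Agent Runtime from a local client, the same
# path the Streamlit app takes via boto3.
import json
import uuid

def build_invocation(prompt: str):
    """Build the session id and JSON payload bytes for an invocation."""
    # Runtime session ids must be long; two uuid4 hex strings give 64 chars
    session_id = uuid.uuid4().hex + uuid.uuid4().hex
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return session_id, payload

session_id, payload = build_invocation(
    "Tell me the latest test results for patient ID 12345"
)

# The real call (needs AWS credentials and the runtime ARN from deployment):
# import boto3
# client = boto3.client("bedrock-agentcore", region_name="us-west-2")
# response = client.invoke_agent_runtime(
#     agentRuntimeArn="arn:aws:bedrock-agentcore:us-west-2:<ACCOUNT_ID>:runtime/myappHealthAgent-XXXXX",  # placeholder
#     runtimeSessionId=session_id,
#     payload=payload,
# )
```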
Available patient IDs (created in Step 0):
12345: Taro Yamada (45 years old, hypertension)
12346: Hanako Sato (38 years old, healthy)
12347: Ichiro Suzuki (62 years old, hypertension/diabetes)
12348: Misaki Tanaka (29 years old, early pregnancy)
12349: Kenta Takahashi (55 years old, borderline hypertension)
Further Extensions
1. Gateway Lambda Tools (prerequisite/lambda/python/)
Add Lambda tools when external system integration is needed.
# Add to lambda_function.py
import requests

def get_lab_results(patient_id: str) -> dict:
    # Call external API
    response = requests.get(f"https://lab-api.example.com/results/{patient_id}")
    return response.json()
Features:
- Execute as Lambda function
- External system integration
- Database access
Addition steps:
1. Add the function to lambda_function.py
2. Add the schema to api_spec.json
3. Redeploy Lambda (./scripts/prereq.sh)
4. Recreate the Gateway
2. Knowledge Base (prerequisite/docs-rag/)
Add Knowledge Base when referencing medical literature or guidelines.
# Upload medical literature PDF
cp medical_guidelines.pdf prerequisite/docs-rag/
# Create Knowledge Base
python prerequisite/knowledge_base.py --mode create
Features:
- Document search (RAG)
- Medical literature, guideline reference
Important Update Information
⚠️ Important Update (September 17, 2025): The new recommended way to deploy agents is using the Strands framework and Amazon Bedrock AgentCore. This codebase is being iteratively updated to reflect these new best practices. The existing deployment methods for individual agents and the entire stack are still functional, and we also provide the following examples with the new framework.
What's happening?
This repository is in a transition period. Old and new methods use completely different services.
Technical Differences
| Item | Old Method | New Method |
|---|---|---|
| API | Amazon Bedrock Agents | Amazon Bedrock AgentCore |
| Agent entity | AWS resource | Docker container |
| Deployment method | CloudFormation one-click | agentcore CLI (6 steps) |
| CloudFormation | All-in-one (14 agents) | Foundation only (S3, Lambda, IAM) |
| Tool connection | Lambda Action Group | Gateway + MCP |
| Framework | None (boto3 direct) | Strands |
| Execution environment | AWS managed | AgentCore Runtime (microVM) |
| UI | React App (full-featured) | Streamlit (simple) |
| Customization | Difficult | Easy |
Status: ✅ Recommended method (the "recommended way" since September 17, 2025)
- This article uses the new method
Important Notes
Important notes from amazon-bedrock-agents-healthcare-lifesciences README:
Quote from Legal Notes:
"This solution is for demonstrative purposes only. It is not for clinical use and is not a substitute for professional medical advice, diagnosis, or treatment. The associated notebooks, including the trained model and sample data, are not intended for production."
Discussion