Translated by AI
re:Invent 2025: Practical Guide to Building Multi-tenant SaaS Applications with Kiro and MCP
Introduction
By transcribing overseas presentations into Japanese articles, we aim to make valuable but hard-to-find information more accessible. The presentation featured in this installment of the project is here!
For re:Invent 2025 transcription articles, please refer to this Spreadsheet for consolidated information.
📖re:Invent 2025: AWS re:Invent 2025 - Kiro meets SaaS: Generating multi-tenant agentic applications with a GenAI IDE
In this video, Aman and Anthony from AWS showcase a practical example of building multi-tenant SaaS applications using Kiro. They developed a control plane and helpdesk application incorporating AI agents, based on the SaaS Well-Architected Lens and the SaaS Builder Toolkit. By crawling the SaaS Lens with Crawl4AI to create steering documents and integrating the SaaS Builder Toolkit as an MCP server with GitMCP, they imparted multi-tenancy knowledge to Kiro. The detailed process of how one developer built a production-ready SaaS in two and a half weeks without writing a single line of code, utilizing spec-driven development, agent hooks, and MCP servers, is discussed. They share how they achieved a comprehensive solution including 226 test files, 327 documents, and seven independently deployable components, along with practical lessons learned such as unifying the testing framework and clarifying requirements.
- Please note: This article was automatically generated while maintaining the content of the original presentation as much as possible. It may contain typos or inaccuracies.
Main Content
Session Start: The Challenge of Simultaneously Achieving AI Agents and Multi-tenant SaaS
Hello everyone. My name is Aman. I am a Cloud Architect at AWS. I work in Professional Services, and next to me is Anthony, who is a Senior Solutions Architect at AWS. Today, we're going to deliver SAS406, a session titled Kiro meets SaaS. This is listed as a 400-level talk track, so it will include a lot of video walkthroughs, but it does assume you have some level of SaaS knowledge and some level of AI tooling knowledge.
So, before we get started, let me ask you a question. Can I see a show of hands: how many of you have ever been in a meeting where someone has said, "Let's just add an AI agent"? Okay, a lot of you. Fantastic. How many of you, then, have ever been deep into developing features, only to have the business casually ask, "By the way, can we make it multi-tenant?" Anyone experience that? Fewer hands than the first question, which is good.
So, if you haven't been in either of those two meetings, congratulations, because you're in them today. And what we're going to actually try to do is say yes to that. Not only that, we're going to do it the right way. That includes tenant isolation, Well-Architected design, multi-tenancy, all of it.
So, let's talk about what we actually committed to building as we get started. We committed to building a control plane on AWS. Not a prototype, not a demo, but the real deal. Multi-tenant from the very beginning, proper isolation, proper security, all of it. We also committed to building a demo tenant application, which is basically a service management product where tenants can create tickets, they can comment, they can search a knowledge base. And we wanted to embed AI agents into both of these applications. An AI agent for the control plane to help SaaS administrators manage tenant operations, and an AI agent within the tenant app to allow tenants to actually communicate with the AI agent and open tickets in natural language. And we decided to build all of this with the SaaS Builder Toolkit and the Well-Architected Framework. No shortcuts.
Now, you might be thinking this sounds like a massive team, six-month project, and frankly, so did we. But what if there was another way? That leads us to today's session. This is the roadmap. For the next 50 to 55 minutes, this will be our map to navigate this session. We're going to tell you exactly as we experienced it. We'll talk about the goals, dive deeper into the SaaS challenges, and talk about the tools we used: Kiro, the Well-Architected Lens, and the SaaS Builder Toolkit. Then we'll talk about the actual build, what we call our adventure, the true story of how we actually built this, what went well, what actually worked, and then our hard-learned lessons. And then we'll wrap up with where we are today and what this means for building multi-tenant SaaS with AI. Sound good? Let's get started.
So why is this quest so challenging? Because everyone building SaaS today faces a fundamental dilemma. We are asked to deliver features and innovate faster than ever before, all while managing complex architectures. At the same time, we have to manage tenant isolation, data partitioning, and all of those other things. We have to handle multi-tenancy, deliver perfect security, and maintain very high quality without accumulating technical debt. Balancing this speed and quality is the fundamental challenge that we, as builders, face today.
Overview of the Architecture to Build: Control Plane and Tenant Application
So, let's take a look at what we're actually going to build. On the left side, you see the control plane. We'll start with the necessary control plane elements. We need an entry point into the application. In our case, that will be a SaaS administration portal. This will be a React application hosted in an S3 bucket and delivered via CloudFront. The backend services will be placed behind an API authorizer using API Gateway and connected to an identity provider like Amazon Cognito. The backend will also host services for tenant registration, de-registration, user management, and all other SaaS operations.
Next, we need to incorporate some AI agents. To do that, we'll create a Lambda proxy and use the Amazon Bedrock Agent Core runtime.
We'll combine this with Amazon Bedrock AgentCore's short-term memory to achieve session-level persistence. All images will be stored in a trusted ECR repository. This is great. All of this can handle many tenant registration and user registration requests, but how do we actually provision them? We need a provisioning engine. What we'll do is connect the API through an event bus and connect that service to a Step Functions state machine that can trigger CodeBuild jobs to run CDK code and provision tenant infrastructure. This is a snapshot of our control plane services, with tenant registration, de-registration, user management, and AI agents incorporated, connected to the provisioning and de-provisioning service via an event bus.
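As a rough sketch of that handoff, the registration backend can publish an event to the bus, and an EventBridge rule routes it to the provisioning workflow. The bus name, source, detail-type, and payload fields below are illustrative assumptions, not the session's actual contract:

```python
import json
from datetime import datetime, timezone

def build_onboarding_event(tenant_id: str, tier: str) -> dict:
    """Build an EventBridge entry announcing a new tenant to provision.

    The bus name, source, and detail-type are hypothetical placeholders;
    a real deployment would match whatever contract its EventBridge rules expect.
    """
    return {
        "EventBusName": "saas-control-plane-bus",  # assumed bus name
        "Source": "controlplane.tenant-registration",
        "DetailType": "TenantOnboardingRequested",
        "Detail": json.dumps({
            "tenantId": tenant_id,
            "tier": tier,
            "requestedAt": datetime.now(timezone.utc).isoformat(),
        }),
    }

entry = build_onboarding_event("tenant-123", "premium")
# In production this entry would be published with boto3:
#   boto3.client("events").put_events(Entries=[entry])
print(entry["DetailType"])  # TenantOnboardingRequested
```

An EventBridge rule matching that detail-type would then start the Step Functions execution that runs the CodeBuild/CDK provisioning job.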
Next, we need to think about the tenant application. For this demo, we wanted to build a service management product, and to enhance the demo's capabilities, we decided to use serverless technologies. We also added AI agents. As with the control plane, we placed a Lambda proxy in front of the AgentCore runtime with AgentCore's short-term memory, ensuring end-to-end observability through AgentCore Observability. And finally, this is our quest for today: the left side covers all the services required for the control plane, and the right side functions as the tenant application plane.
Enter Kiro: Spec-Driven Development, Agent Hooks, and MCP Server Integration
So, how do we actually tackle all of this without a large team or a six-month timeline? That's where Kiro comes in. Here's the point: many tools can build a To-Do app with AI. But building a secure multi-tenant SaaS platform with integrated AI agents, proper isolation, and well-architected principles is a whole different ballgame. Before we dive into SaaS, let's take a closer look at Kiro's capabilities. For those unfamiliar with Kiro, we'll ramp from a 200-level introduction up to the 400-level material, so let's journey together.
First, there's spec-driven development. You describe what you want to build, and Kiro generates a comprehensive spec including requirements and architecture. We iterate together until it's right, and then we build from that approved blueprint. It's like having a meticulous little project manager right in your IDE. Everything is documented, everything is extremely well-structured. Tasks have links to requirements, and nothing falls through the cracks. Using Kiro for the first time, I felt it was a true AI partner, not a black box, because I could see exactly what Kiro was seeing.
Next, agent hooks. These are automated workflows that can run in the background. You save code, and tests are generated. You update a component, and documentation is refreshed. You change infrastructure, and security tests are run. Quality is automatically maintained. And third, Kiro can connect to any MCP server on the internet. What this actually allows you to do is extend Kiro's context and teach Kiro things it doesn't already know. Spec-driven planning, deep context, and automated quality—this is what makes Kiro so powerful.
Kiro in Action: From Steering Documents to Agent Hooks
Now, let's jump into the demo. I'm going to move to the podium and show it to you. We'll start with a quick overview of the features. The ghost icon on the left opens a comprehensive menu of all Kiro's capabilities. As you can see, this includes specs, hooks, and steering documents, along with advanced context management options such as MCP servers. Currently, the Fetch MCP server is enabled, which allows Kiro to access any website on the internet and learn from it.
The next feature we're going to explore is agent steering. We'll start by generating steering documents. This is a brownfield project. By clicking this button, Kiro begins to reverse engineer the project's source code, creating detailed markdown files. These documents provide essential context for the specs and prompts you run throughout your project. With one click, Kiro has automatically generated these files. For this demo, we'll be using a project based on HTML, Python, and an SQLite database.
The first file Kiro generates is the product.md file. This document provides a clear, human-readable description of the source code's functionality. As a developer, reviewing this file helps you quickly grasp the application's purpose. The next important document is the tech.md file.
This document summarizes all the technical components within the source code. It lists the libraries used in the data layer, specific versions, package details, and some key commands needed to run the project. The final document Kiro creates from the steering docs feature is the structure.md file. This file acts as a roadmap for the AI, allowing it to understand the source code's structure. When you request a new feature spec, Kiro maintains the original project's structure and adheres to its conventions, thanks to this file and its contents.
Users can also create custom steering files to enhance Kiro's capabilities. You can do this manually by creating a new file and writing it yourself, or you can ask Kiro to create it for you. In this example, we're creating a style guide based on PEP 8, the well-known standard for Python code style. Once we give this prompt, Kiro uses the Fetch MCP server to grab the information it needs for the PEP 8 style guide, and it asks for approval before any external calls.
It will then summarize this document and save it as a new style guide.md. As you can see, the steering file includes a detailed understanding of Python's PEP 8 standard and will now be included in every prompt you give Kiro for this project. Within the chat interface, developers can seamlessly switch between Spec for structured development and Vibe for more free-form interactions. In a spec-driven workflow, we'll begin with a simple prompt to add a new feature to this project for managing categories. From there, Kiro builds out detailed requirements, design documents, and a project plan. These can be iterated and refined with Kiro even before writing a single line of code.
To achieve this, Kiro ingests the steering documents we just created. This includes the product document, the structure document, the tech document, and our custom style guide we just made. It analyzes the source code and begins to generate the first part of the spec. This first part is called the requirements.md file, which expands your initial prompt into a fully realized feature concept. It utilizes EARS format, which stands for Easy Approach to Requirements Syntax. It's an industry standard, and you might have seen it before. Each requirement has clear acceptance criteria set.
Once the requirements are set and we're satisfied with them, we then move on to the design phase. Kiro converts the requirements into a technical implementation plan, the design.md file. This requires a deep understanding of the existing source code, and this file is the one I spend the most time with. This file is great for architects, and it gives you an architectural overview of what you're about to build. It also gives you a plan to build it with some specific examples. You'll see that it includes what the data model will be for this category feature, what the error handling approach will be, and all the other important elements.
Once the design is approved, we move to the final step, which breaks the requirements and design documents down into comprehensive project tasks that Kiro can execute and track. Each task includes sub-elements, status trackers, and clear links to the specific requirements it fulfills. This is what I meant earlier by having a meticulous project manager in your project: you actually get to see all the tasks, and as you work through them, they get marked as complete.
And finally, let's look at a key feature, agent hooks. Hooks are easily configured from the Kiro panel using a very simple form, where you can describe in natural language what you want the hook to do. In this example, we ask Kiro to monitor changes in Python or HTML files, and when that file is saved, summarize the changes and append them to a separate file named changelog.md.
What I want is to have a tracking system that can log all my changes. From this natural language, Kiro generates the hook. The hook consists of several key elements: an expanded, more detailed prompt, a trigger event (in this case, on file save), and a file pattern to monitor (in this case, .py or HTML). This hook is actually placed in the .kiro folder within your IDE. It also includes a title, a description, and an easy toggle to turn it on and off. This is how you automate workflows within Kiro.
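For reference, a generated hook ends up as a small JSON file in that .kiro folder. The exact schema can differ across Kiro versions, so treat the field names below as illustrative assumptions rather than the official format:

```json
{
  "enabled": true,
  "name": "Changelog Tracker",
  "description": "Summarize saved changes and append them to changelog.md",
  "when": {
    "type": "fileEdited",
    "patterns": ["**/*.py", "**/*.html"]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Summarize the change that was just saved and append a dated entry to changelog.md."
  }
}
```

Because the hook is just a file in the repository, it can be reviewed, versioned, and shared with the rest of the team like any other project asset.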
Kiro's Latest Features: Property-Based Testing, CLI, Workspaces, Checkpoints, and Kiro Powers
Kiro is great, but it's still growing. There are some notable new releases I'd like to share with you. First, property-based testing. Traditional unit tests check individual examples: given this input, do I get this output? This tends to be heavily biased by the developer creating the feature, and developers can easily miss edge cases. Property-based testing is different. It identifies which properties of a specification must always hold, then uses generators to produce hundreds or thousands of test cases and runs them. This makes your system more robust, because if even one case fails, Kiro has found a bug. Kiro then asks you whether you want to change the source code or change the specification.
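To make the idea concrete, here is a minimal sketch of property-based testing using only the standard library; real frameworks such as Hypothesis do the same thing with far smarter generators and automatic shrinking of failing inputs:

```python
import random

def dedupe_preserving_order(items):
    """Function under test: remove duplicates, keeping first occurrences."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def check_properties(trials: int = 1000) -> None:
    """Generate random inputs and assert properties that must always hold."""
    rng = random.Random(42)  # fixed seed so the run is reproducible
    for _ in range(trials):
        data = [rng.randint(0, 20) for _ in range(rng.randint(0, 30))]
        result = dedupe_preserving_order(data)
        # Property 1: the output contains no duplicates.
        assert len(result) == len(set(result))
        # Property 2: the output preserves exactly the distinct input values.
        assert set(result) == set(data)
        # Property 3: the function is idempotent.
        assert dedupe_preserving_order(result) == result

check_properties()
print("all properties held")
```

Instead of hand-picking a few examples, the test states invariants ("no duplicates", "same value set", "idempotent") and lets the generator hunt for a counterexample.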
The second is the Kiro CLI. The Kiro CLI extends AI agents from your IDE to your terminal, with the same settings and features. You can debug production issues, scaffold infrastructure, and automate workflows without switching context; AI assistance is available exactly where you need it. This is actually a facelift of the old Amazon Q Developer CLI, now fully integrated as the Kiro CLI.
Third, when building complex systems, code often spans multiple repositories. This is especially true for SaaS. You have a frontend repository, a backend repository, specific libraries, infrastructure code, and so on. Kiro allows you to open all these repositories within a Kiro workspace, supporting multiple roots. This means each of these projects has its own .kiro, and Kiro contextually knows where you are.
And finally, checkpoints. This one has really helped me. As a developer, I often get sidetracked and go too deep down a rabbit hole during a session; checkpoints are autosaves that let you roll back within a session. And this is actually hot off the press. True story: I put these slides together yesterday, and I wanted to share them with you.
Kiro Powers is the latest addition to Kiro's capabilities. We haven't used it in the context of this presentation, but I'll talk a little bit about it. Think of Powers like Neo in the Matrix. What if you could download and use kung fu instantly and then forget it when you're done? That's the purpose of Kiro Powers. Loading a large number of MCP servers into memory consumes a lot of context from your session, but Kiro Powers makes this dynamic. When you say you want to work with a database, that triggers a Power you've installed, for example, Supabase, which helps you interact with that database using the necessary MCP servers. And when you say you want to deploy something or create a new agent with Strands, it downloads the Strands Power, giving you access to all its steering files and tools.
All available Powers can be viewed at kiro.dev/powers. There's already a wide range of Powers listed, allowing for API testing with Postman, design-to-code conversion with Figma, and more. Supabase is also available to power up your Supabase database. And there's a new SaaS Power I was playing with last night, and it looks very promising. We haven't used it in this demo, but definitely check it out.
When you update Kiro, you'll see the Powers option like this: the little ghost icon with a thunderbolt, showing off its power. I love this little ghost. With one click, you can install any Power and start using it immediately. It's awesome. So far, we've shown you some of Kiro's features, and they're all powerful.
Establishing Design Guidelines with the SaaS Well-Architected Lens
But let's be honest. The risk with AI isn't that you can't build something. The risk is that you build the wrong thing really, really fast, and really, really well. And in multi-tenant SaaS, mistakes are costly. A tenant isolation bug isn't just a quick hotfix. It's a security incident. It's a matter of customer trust, and sometimes even your business's reputation. So we needed to give Kiro more than just technical capabilities.
We needed to give Kiro wisdom. Accumulated knowledge for how to make SaaS architecture work in production. And we needed a North Star to guide us. That's why we chose the SaaS Well-Architected Lens.
The SaaS Lens addresses the unique challenges of multi-tenancy and provides a common language for architectural decisions. And let me explain how this shaped our actual requirements. Operational excellence meant tenant-aware monitoring. We needed to understand the health and performance of each tenant, not just the system as a whole. Security was non-negotiable, and this lens forced us to recognize that tenant isolation was an absolute foundational principle. Any breach would be catastrophic, and our design had to be impeccable from day one.
Reliability meant blast radius containment. It was critical that a problem for one tenant didn't impact others. Nobody wants to be woken up at 2 AM because of a noisy neighbor. Performance efficiency guided us to implement tenant-based scaling, which is far more cost-effective than a one-size-fits-all approach. And finally, cost optimization. As we scale, we need to ensure that we protect our business's bottom line. This pillar helps us achieve that.
Crawling the SaaS Lens with Crawl4AI: Giving Kiro Wisdom
Now, the next challenge we faced was bringing all of this into Kiro's context. The SaaS Lens is actually a collection of web pages on the internet, so we realized we needed a crawler. For this, we'll use Crawl4AI, an open-source web crawler with over 55,000 GitHub stars; it's my go-to tool whenever I'm doing web crawling.
Why do I like it so much? First and foremost, it's markdown native. It scrapes all your documents and processes them into markdown files. This is great for LLMs, LLMs love markdown. Secondly, it's fast, efficient, and handles asynchronous processing. It can actually crawl multiple links simultaneously, extracting media tags such as images, audio, and video. It also handles infinite scrolling and other pesky JavaScript wizardry that's been popping up recently. And finally, my favorite reason: it's free and open-source, so you can use it too.
Now, starting in Vibe mode, we ask Kiro to crawl the SaaS Well-Architected Lens using Crawl4AI. We've passed it a URL, and what it's trying to do right now is use agentic capabilities to install Python and its pip dependencies, and Crawl4AI onto my laptop. It then uses its understanding of this library to write a Python script that will help it crawl the entire SaaS Well-Architected Lens. It quickly understands that it needs to grab not only the landing page but also the related links. We iterate several times to ensure the crawl's completeness.
Next, we need to optimize the relevant content and turn it into steering documents. After the crawl is complete, Kiro automatically cleans up the data, removing irrelevant text like cookie settings, to make the content usable. After a few iterations, the final steering document is complete. This gives Kiro the first superpower we needed, the wisdom we sought. We're able to stand on the shoulders of giants and build from there, rather than repeating the same mistakes.
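The cleanup step Kiro performed can be imagined as a simple filter over the crawled markdown. The boilerplate marker phrases below are assumptions about what such a pass might strip, not anything Crawl4AI or the session guarantees:

```python
# Filter boilerplate lines (cookie banners, feedback widgets, and similar
# page chrome) out of crawled markdown before saving it as a steering doc.
# The marker phrases are illustrative guesses, not Crawl4AI output contracts.
BOILERPLATE_MARKERS = (
    "cookie preferences",
    "did this page help you",
    "javascript is disabled",
)

def clean_crawled_markdown(markdown: str) -> str:
    kept = []
    for line in markdown.splitlines():
        lowered = line.lower()
        if any(marker in lowered for marker in BOILERPLATE_MARKERS):
            continue
        kept.append(line)
    # Collapse runs of blank lines left behind by removed boilerplate.
    out, prev_blank = [], False
    for line in kept:
        blank = not line.strip()
        if not (blank and prev_blank):
            out.append(line)
        prev_blank = blank
    return "\n".join(out)

sample = "# SaaS Lens\n\nCookie preferences\n\n\nTenant isolation matters."
print(clean_crawled_markdown(sample))
```

In practice this kind of filtering was iterated on with Kiro itself until the steering document contained only the Lens content.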
So, we can see the crawled and optimized SaaS Well-Architected Lens now complete as a steering document within Kiro. The next thing we want to do is equip our IDE with critical tools, so we're going to edit the mcp.json file, where all our MCP configurations are stored. We're going to attach the Strands MCP server, which contains specific information on how to build agents using Strands. We're also going to add the AWS Documentation MCP server, which gives our IDE up-to-date information from the AWS docs. And finally, we're adding the Amazon Bedrock AgentCore MCP server. AgentCore is a new service, so we add its server to get past the foundation model's knowledge cutoff.
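As a hedged sketch, an mcp.json along these lines wires up such servers. Command names and package identifiers vary, so check each server's README rather than copying this verbatim:

```json
{
  "mcpServers": {
    "aws-documentation": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "disabled": false
    },
    "strands": {
      "command": "uvx",
      "args": ["strands-agents-mcp-server"],
      "disabled": false
    }
  }
}
```

Each entry simply tells Kiro how to launch the server process; once listed, its tools become available to every spec and prompt in the project.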
And to complement this, we're also going to give Kiro a prompt. We ask it to create a steering document that always prioritizes using these MCP servers whenever it's unsure. This prevents Kiro from hallucinating details and directs it to refer to the MCP servers to expand its context when uncertain. As you can see, Kiro is now loading all the steering documents.
It's also loading the mcp.json file, understanding all the MCP servers, and finally creating the actual steering document for the MCP-first approach. This is the steering document Kiro generated from that prompt. It ensures that whenever Kiro attempts to do something, it first refers to the MCP servers so that the code is high quality and follows best practices. Great. So we understand how to give Kiro guidance through steering documents.
Integrating the SaaS Builder Toolkit: Contextualizing Repositories with GitMCP
Next, we wanted something more. Knowing the principles is one thing, but knowing them doesn't mean you have to rebuild the same components over and over again. For this, we needed an accelerator, so we used the SaaS Builder Toolkit. The SaaS Builder Toolkit is an open-source infrastructure library from AWS that solves problems common to all multi-tenant SaaS: tenant onboarding, user management, isolation, metering, all built into the toolkit. So instead of rebuilding them, we build on top of them.
And this is the architecture of the SaaS Builder Toolkit. The control plane handles all SaaS management operations, including tenant onboarding, user management, and billing. It can be operated via CLI or a management web page. The application plane is the actual application code, and here's the key point: SBT is completely independent of what you build. Any application code will work as long as it subscribes to relevant control plane messages and adheres to the required contracts.
These SBT utilities accelerate common tasks like tenant provisioning, de-provisioning, and user management, with more to be added as the toolkit evolves. We add application-specific settings to the core utilities. Tenant provisioning logic goes here, identity settings go here, and authorization rules go here. And tying all of this together is Amazon EventBridge. This is an event bus that sends messages from the control plane to the application plane and from the application plane to the control plane. It includes necessary helpers to make publishing and subscribing to these messages easy.
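On the application-plane side, subscribing amounts to a handler that branches on the event's detail-type. The detail-type values and payload fields below are illustrative assumptions, not the SaaS Builder Toolkit's published message contract:

```python
def handle_control_plane_event(event: dict) -> str:
    """Route an EventBridge event received from the control plane.

    The 'detail-type' values and payload fields are hypothetical examples,
    not the SaaS Builder Toolkit's actual contract.
    """
    detail_type = event.get("detail-type", "")
    tenant_id = event.get("detail", {}).get("tenantId", "unknown")
    if detail_type == "TenantProvisioningRequested":
        return f"provisioning stack for {tenant_id}"
    if detail_type == "TenantOffboardingRequested":
        return f"tearing down stack for {tenant_id}"
    return f"ignoring {detail_type or 'unknown event'}"

print(handle_control_plane_event({
    "detail-type": "TenantProvisioningRequested",
    "detail": {"tenantId": "tenant-42"},
}))  # provisioning stack for tenant-42
```

This is what "adheres to the required contracts" means in practice: the application plane only has to understand the messages the bus delivers, not the control plane's internals.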
Now, this architecture is powerful, but it's also complex. So the next question is, how do we give Kiro a deep enough context to understand SBT? Ultimately, we need to teach Kiro about SBT, and again, these are prefabricated components. You can view them as L3 constructs, essentially a collection of AWS services used to build control plane services.
To achieve this, we'll use another open-source tool, GitMCP, which can convert any Git repository on the internet into an MCP server. We generate a URL from GitMCP, and then in Kiro, we first ask Kiro to understand this URL and then attach it to mcp.json.
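GitMCP's documented pattern is simply to swap the github.com host for gitmcp.io, which a one-liner captures (the awslabs/sbt-aws repository is used here as an example):

```python
def to_gitmcp_url(repo_url: str) -> str:
    """Convert a GitHub repository URL into its GitMCP endpoint.

    Follows GitMCP's documented pattern of replacing the github.com
    host with gitmcp.io; other URLs are returned unchanged.
    """
    prefix = "https://github.com/"
    if repo_url.startswith(prefix):
        return "https://gitmcp.io/" + repo_url[len(prefix):]
    return repo_url

print(to_gitmcp_url("https://github.com/awslabs/sbt-aws"))
# https://gitmcp.io/awslabs/sbt-aws
```

The resulting URL is what gets registered in mcp.json as a remote MCP server.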
Kiro goes and fetches information about this MCP link we just provided, using its fetch capability to understand the link. It then updates the MCP configuration file and the steering document we just created. It first loads them into context and then adds to the context. It also asks for confirmation every time it tries to perform a specific operation. If you don't want Kiro to always ask for permission, you can also trust it.
And as a final result, SBT.AWS is added here as a new MCP server. This is working through the GitMCP connection we just created. And the MCP steering document also has the necessary updates regarding the SaaS Builder Toolkit. Now Kiro knows about Well-Architected principles, and it also has an MCP server that can read SBT documentation.
Secret MCP Source Collection: From AWS Knowledge, CDK, Terraform to Chrome Dev Tools
Excellent. So, now that we've come this far, I feel it wouldn't be fair not to share this secret MCP source that I always use with all of you. You can take a photo at the end. I'm going to go through this really fast. If you're looking for AWS best practices, information from blogs, whitepapers, and you want your IDE to have this information, you can attach the AWS Knowledge MCP server. If your preferred tool is CDK, and you don't want any hallucinations in your code, you can use the AWS Lab CDK server.
And this will provide Kiro with a deep enough context about CDK and AWS so you can write more robust CDK code. If you're tired of Terraform resource name hallucinations, the Terraform MCP server is your friend. These are all servers available in AWS Labs, and we'll also show some links in the resources section at the end.
Here's a little secret: the AWS API MCP server gives Kiro superpowers, allowing it to run CLI commands in the background and query everything deployed in your AWS account. You can use it to search logs, list S3 buckets, or whatever else you want to do. Next up, Context7. If you're a developer like me who likes building with Next.js, NestJS, FastAPI, or FastMCP, but you're tired of your IDE agent using old versions or hallucinated details, attach Context7 to access the latest information.
Chrome DevTools? This gives Kiro another superpower: it can open a headless browser and actually traverse and check what it has built for you. By the way, I always use this to book really cheap airline tickets. And finally, the GitHub MCP server. No developer likes fixing merge conflicts, and that's where the GitHub MCP server comes to the rescue. You tell it what you want to do, and it handles all the complex Git commands in the background, commands you'd otherwise spend hours searching Stack Overflow for.
So, this concludes Chapter 1 of our story, the foundational part. We've established the principles, gathered the tools, and charted our course. But planning an expedition is one thing; actually climbing the mountain is another. And to guide you through that climb, I want to share the practical story of what it was like to build this. I'd like to invite my co-presenter, Anthony. Thank you. Thanks, Aman, and hello everyone. It's great to have you all here today.
Anthony's Practical Report: Daily Development Process with Spec-Driven Development
So, with all that foundational context and those tools in place, the first task when creating a new project (in this case greenfield, not brownfield) is to work directly with Kiro from the ground up, using spec-driven development. What's important to understand here is that you can't just throw in a prompt and get a result, the way you might for a simple To-Do app. For a project of this complexity, you first need to set guidelines for the overall project, and then use conversation with Kiro to run what is essentially a collaborative planning session between me, the developer, and Kiro, the AI assistant.
Through this process, I explained my vision for this complex multi-component SaaS application to Kiro. And Kiro first returned a project plan. This was the spec itself, including requirements, design, and tasks. And those tasks were actually executed, ultimately creating a breakdown of the solution itself, eventually dividing into seven independently deployable components. This included the SBT infrastructure Aman explained to you all, details of the AI help desk we wanted to create on the application plane, and all the services like tenant onboarding.
This meta-level planning was absolutely critical for a project of this scale and complexity. It allowed us to visualize the architecture and identify integration points with Kiro even before starting to code. This upfront AI-assisted planning saved an enormous amount of time as we progressed through the process, and definitely reduced extra rework.
However, planning is more than just a list of components. Also, as part of this planning process, Kiro was able to generate documentation, such as detailed architectural diagrams like this one. This gave me an instant visual blueprint of the entire system I was trying to build. I want to point out a few key architectural decisions solidified at this stage. You can see a clear separation between the tenant management agent for SaaS administrators and the help desk agent for tenants. We also explicitly defined the need for a tenant context layer.
The tenant context layer included a data isolation enforcer. This, by the way, was a direct result of Kiro ingesting and understanding the Well-Architected SaaS Lens. It recognized that for SaaS applications, tenant isolation is a non-negotiable requirement that must be included. You also see at the bottom that we're leveraging a shared AI model across the application through Amazon Bedrock. And ultimately, having this complete visual blueprint allowed everyone looking at this with me to be on the same page with Kiro and have clarity on what we were ultimately building.
Now, with a solid plan, a clear architecture, and the requirements documents ready, the next step in this journey is, of course, the reality of daily development with this coding assistant, the lessons I learned, and how I sometimes had to adapt when things changed. As we transitioned from blueprint to implementation, the focus shifted to ensuring quality and consistency across the scale of the solution being built. This became a story of continuous learning and adaptation for both myself and my AI partner.
I'll explain how agent hooks were ultimately used to enforce versioning, how requirements were strengthened with more specific details over time or when I discovered there might be missing gaps, and most importantly, how I navigated the testing challenges to build a robust automated testing strategy for this project. The key here is where the human-in-the-loop philosophy was truly put to the test and demonstrated its importance.
So, let's dive into the daily development process with Kiro. First, we start by reviewing the specification. By going through it thoroughly with Kiro, we confirm what that particular specification covers: what the design, requirements, and tasks will be before execution. At this stage, if we need to add additional requirements to that specific component, we can continue to refine it with Kiro.
Next, we begin to interact with Kiro, having it execute the tasks in the task portion of the requirements one by one. This iterative approach allows us to focus on the specific component we're working on at that time, ensuring that each part of the solution is built to the highest standards. Of course, Kiro also plays a crucial role here, generating code and providing guidance as we progress through each task.
Next, I need to review the results Kiro has created. After completing individual tasks, I reviewed what Kiro had created to ensure it aligned with both the design and requirements specified earlier. This step involved conducting code reviews, running unit tests and other tests, providing feedback to Kiro based on what I found, and also listening to feedback from Kiro. Kiro's insights here also helped identify potential issues and improve the overall code quality.
Next, as part of this process, we build and run unit tests. Of course, testing should be a critical part of the success of any project, especially for a project of this complexity. Kiro built and ran these tests, verifying that each component functioned as expected. It did a very good job in helping to create unit tests, and ultimately functional tests as well, to create a wide breadth of test coverage for the project, as you'll see here in a moment.
Of course, issues do arise. And when issues arose, especially during the testing process, I had to address them. So I collaborated with Kiro to jointly tackle these issues. Once all of this was completed, I was ready to move on to the next specification and its tasks. This iterative process continued until all requirements and all tasks were completed, and the final SaaS solution was built.
One of the things I implemented as part of this process was, at certain milestones, I worked with Kiro to generate CLI tools and create deployment scripts to my AWS account so I could test the final results in a real environment as I went. So ultimately, what did I get out of this? Of course, I got all the code and the interactions I had with Kiro, which we'll look at here in a moment. And there's also a working version of the application itself. We'll look at that as well.
Astounding Results: The Full Scope of a Production-Ready SaaS Completed in Two and a Half Weeks
So let's start with what Kiro built for us. What you see first here is my Kiro IDE. In the top left, there are the specification documents, then the agent hooks, these are the steering documents I've been working with, and finally the MCP servers. One thing to note here is that the number of specification documents is considerably more than the initial seven created as part of the project plan. This can actually be seen here in the project plan requirements document. This shows that over time, as I learned more and needed to add additional requirements, Kiro had the flexibility to accommodate that.
Scrolling through here, once we review the requirements, we can then go to the task list. One thing I want to note about the tasks is that you can actually add additional information to the task document. This serves as a kind of steering mechanism specific to that requirements document and the task you're executing. Meanwhile, the main steering document itself is global to the entire project, but this gives a bit more flexibility per requirement. So, you'll see here I have another document open. This is the SaaS administration dashboard requirements document, showing another example. And its tasks are similar.
Now, there were two agent hooks I set up. The first one opened here was for documentation generation. I asked Kiro to create documentation as I went along, thoroughly documenting not only what it was doing but also information that would be valuable to other developers in the long run. And the second hook was one I created for Kiro to automatically work with Git. This ensured that every time I completed a task, it would be committed to Git, so I wouldn't accidentally lose anything by not having it under source control.
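To make the second hook concrete: a Kiro agent hook is just a small configuration file under the project's `.kiro/hooks/` directory. The sketch below shows what an auto-commit hook along these lines might look like. The file contents here are an illustrative assumption, not the session's actual hook, and the exact schema may differ between Kiro versions.

```json
{
  "enabled": true,
  "name": "Auto-commit completed work",
  "description": "Commit changes to Git whenever source files are edited, so nothing is lost outside source control",
  "when": {
    "type": "fileEdited",
    "patterns": ["packages/**/*.ts"]
  },
  "then": {
    "type": "askAgent",
    "prompt": "Stage the modified files and create a Git commit with a concise message describing the task that was just completed."
  }
}
```

The key idea is that the trigger (`when`) and the agent instruction (`then`) live in the repository itself, so the behavior travels with the project rather than depending on the developer remembering to commit.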
And the agent steering documents, these also continued to grow over time as I learned more throughout the project. For example, one of the issues that came up during testing was that I needed Kiro to make sure it ran a TypeScript build before actually running tests, and sometimes it wouldn't do that. So I simply asked Kiro to create a steering document to enforce that behavior. This was in Kiro's early days, so of course Kiro's capabilities have improved since then, and much of this TypeScript build-before-test behavior is now handled automatically, but at that point in time, it was definitely needed.
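As a sketch of what such a steering document might contain: steering documents are plain markdown files kept under `.kiro/steering/`. The path, front-matter convention, and wording below are assumptions for illustration, not the session's actual files.

```markdown
---
inclusion: always
---

# Testing workflow

- Always run the TypeScript build (e.g. `npm run build`) before executing any test suite.
- If the build fails, fix the compilation errors first; never run tests against stale build output.
- Use one testing framework consistently across every package in the monorepo.
```

Because steering documents are applied to every session, a rule like this survives across conversations, which is exactly the gap the speaker was closing.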
Next, moving to the file side, you can see the documents created under the Kiro directory. This is similar to what Aman showed earlier, including the specifications, steering documents, and agent hooks created. Below that, you can see the actual source code itself within this packages directory. The entire directory structure for this project was created jointly by Kiro and me, and it's a monorepo where all seven components are incorporated into the same repository. If I were doing this again today with Kiro's newer features, I might have split this into different repositories instead, but in this case, the monorepo worked very well.
Now, at this point, we've seen what Kiro can output. We've seen how we can interact with it. And looking at this final product and what it produced, it's actually quite impressive. Before I show you a demo of how the UI actually works and that this is actually a real product, a real solution we built, not just for show, I want to share some very impressive numbers with you. First, as I mentioned earlier, this is a monorepo. Seven packages were built within it. This allowed us, of course, to maintain consistency across all those different packages and solutions. However, they are all independently deployable, which was one of the requirements I asked Kiro for.
Also, as part of this repository, testing was very important. Kiro created 226 different test files for this project. The project itself was tens of thousands of lines of code, but we wanted to ensure truly effective and thorough testing for the solution. Ultimately, the code we ended up with here, meaning the ratio of test code to production code, was 1 line of test code for every 4 lines of production code. I've done a lot of development in my career, and I have to say that working with Kiro on this project allowed me to conduct more thorough testing than perhaps any other project, in terms of unit tests, functional tests, and component test scenarios. It's very thorough.
As another part, Kiro also created documentation. Again, not just documentation about what it was doing, but it also created API references. It created user guides for both tenant administrators and help desks, and of course, it created technical specifications that will be important for developers who come after me to maintain or work on this project in the future. Ultimately, 327 documents were created as part of that. And of course, it was deployed to AWS, to my AWS account. There was a separate CloudFormation stack for each component, which Kiro also created for me, and it even deployed it to AWS when asked.
Now, all of this is pretty impressive, but the most impressive thing about this entire solution - a full-blown production-ready SaaS, a complete tenant application that is essentially a help desk with ticket creation capabilities, all the associated data interactions, and thoroughly defined security including full tenant isolation - all of this was created by one person in two and a half weeks. And not only that, I didn't write a single line of code to achieve this. Kiro did it all for me. Was it always perfect? No. I had to work with Kiro and make sure to address issues when they arose, but overall I was very impressed with this capability. Basically, when creating features, you can reduce what used to take weeks to days, or what used to take months to weeks, which is a very impressive capability. I must say I was truly impressed as I went along.
So let's take a look at what this actually looks like from a UX perspective. What you're seeing first here is the administration screen on the SaaS side, for SaaS administrators. Specifically, you're looking at how a SaaS administrator interacts with the control plane. The SaaS administrator authenticates, and as part of this project, we're using Cognito as the IDP.
Here you can see the dashboard presented to this user. Since this user is an administrator, they have more capabilities than other users. Full dashboard support, the ability to search and investigate tenants, edit those tenants, create new tenants, and so on. One thing I want to say about this dashboard, which is quite interesting, is honestly, many of these are not actually connected to anything. Some of these dashboard features are a bit of a facade. But Kiro created all of this automatically. I didn't ask it to.
It's proposing suggestions for what a successful SaaS solution and the documentation included within it should look like. Here, you can also see one of the tenants I created and the ability to create new tenants. There's also a user management section, where you can see the tenant administrator along with the administrative users for the tenant I created. If I had done this differently, I wouldn't necessarily have mixed these two user bases together, but this demonstrates the functionality, and again, all of this is connected to Cognito.
However, there was one additional requirement: I wanted to have a full AI assistant and agent that could interact with the same APIs I was creating for managing and onboarding these tenants. So I created an agent here with tools to access these APIs and a communication protocol to interact with it. You can ask questions about tenants.
You can inquire about specific tenants, or you can have the AI agent create new tenants or onboard new tenants. And again, all of this was part of those first two weeks. Now, here we're looking at the other side. This is from a tenant's perspective. So this is actually the help desk interface that was created. You can see the user who was part of the tenant management user pool we saw earlier. And again, when they sign in, authentication is performed against Cognito, and a completely new, different interface is displayed.
Now, Kiro tends to favor a very similar design out of the box when creating UX, so here again we have left navigation and a dashboard. This dashboard is actually quite a bit more functional than the administration screen we saw earlier; more of its features are actually connected. As an individual user of this tenant, you can go in and review the help desk tickets created for you. You can manage those tickets. As you see on the screen, you can also submit and create new tickets, and of course, ultimately, you can also comment on those tickets. You can do everything you'd expect from a help desk.
And we added another agent here to allow interaction with the help desk in a very similar way to what we saw in tenant management. However, in this case, you can inquire about tickets. You can also create new tickets. For example, if you're dealing with a knowledge base, you now have the structure in place to connect that agent to the knowledge base and implement it as part of your ticket solution. It's very powerful.
Here you can see the CloudFormation stacks. Now, I mentioned there were seven as part of this project, but nine are listed here. That's because I already had a few in my account at the time, but they are fully deployed. Now moving to DynamoDB, here you can see many tables created as part of both the help desk application and the SaaS solution itself. And again, all of this was created by Kiro. I didn't write a single line of code to make these happen.
If we look into one of the tables here, opening the help desk tickets, the two tickets we saw in the help desk application are also listed here. And notice that in this case, the tenant ID is the key for this table. This is to ensure proper tenant isolation at the data level, as this is a pooled SaaS solution. Here, you can see the Lambda functions created for this. There are 47 of them, covering all the CRUD interfaces between the various different data types created and updated within the solution. And finally, moving to Amazon Bedrock Agent, to again show that we're not dealing with smoke and mirrors here, if we go to the agent runtime, you can see two agents listed: both a tenant agent and a help desk agent.
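The isolation pattern described here, where the tenant ID is the partition key of a pooled table, means every read must carry the tenant ID in its key condition, so rows belonging to other tenants can never match. The helper below sketches that idea in TypeScript. It is an illustrative sketch, not the session's actual code: the table name, attribute names, and the helper itself are hypothetical, and it only builds the query parameters rather than calling DynamoDB.

```typescript
// Shape of the parameters a DynamoDB Query call would take
// (mirroring the low-level attribute-value format).
interface TenantQuery {
  TableName: string;
  KeyConditionExpression: string;
  ExpressionAttributeValues: Record<string, { S: string }>;
}

// Force every read through this helper so the partition key (tenantId)
// is always part of the key condition. A query with no tenant context
// is refused outright rather than silently returning cross-tenant data.
function buildTenantQuery(tableName: string, tenantId: string): TenantQuery {
  if (!tenantId) {
    throw new Error("tenantId is required: refusing to build an unscoped query");
  }
  return {
    TableName: tableName,
    KeyConditionExpression: "tenantId = :tid",
    ExpressionAttributeValues: { ":tid": { S: tenantId } },
  };
}

const q = buildTenantQuery("HelpDeskTickets", "tenant-123");
console.log(q.KeyConditionExpression); // tenantId = :tid
```

Centralizing the key condition in one place is what turns "tenant ID as the table key" from a schema choice into an enforced isolation boundary at the data-access layer.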
Now, this is quite powerful, and it's truly amazing that one person can accomplish all this in two weeks. Oh, and one more thing, by the way, most of this time, Kiro was just running in the background. Because I also have a day job. So, I was doing most of this in the background while I was doing other things. However, not everything was smooth sailing. In any project, issues arise. There are trials and tribulations that you have to encounter along the way, and that was certainly the case here.
Trials and Lessons Learned: Testing Challenges, Clarifying Requirements, and True Partnership
These challenges pushed me and, in some cases, even shook my resolve to the point where I wondered what I had gotten myself into, but I ultimately adapted, and as a result, became stronger and learned more about how Kiro works. Today, I want to share some of these trials I experienced. That way, you can learn from what I went through and hopefully avoid encountering similar issues. It also reconfirms that even though Kiro can do a lot on its own here, you still need to have a human in the loop to verify and confirm what's happening and to address challenges when they arise.
So, as I progressed through the development process, here are some of the challenges I had to address and that really forced me to adapt. First, testing and debugging issues. When I started the project, I made sure to communicate as part of the requirements that I wanted testing included in the solution. However, Kiro, if not instructed with hyper-specificity, and this applies to AI-assisted development in general and even vibe coding, can sometimes go off in its own direction from one session to another. The result was that, in some cases, I ended up with two completely different testing frameworks implemented concurrently. Now, this could have been fixed from the beginning. Implementing the appropriate steering document to have Kiro use a single, centralized testing framework, or explicitly specifying it in the requirements, could have prevented this situation. But because I didn't do that from the start, these were issues I had to address. My suggestion for when you embark on this process and seriously engage in spec-driven development is to take the time in the initial stages to truly analyze and understand what's included in your requirements, and be very explicit in describing them to help Kiro build in alignment with your vision. Then you won't necessarily encounter this particular issue.
The second thing I encountered was with AgentCore and Strands, specifically because when I started this project, it was very early in Kiro's development, but also very early in AgentCore and Strands' development. There was one conversation with Kiro where it actually told me that AgentCore didn't exist, because Kiro had no knowledge of it at all. So, in this case, I had to take an additional step and use some of the techniques Aman talked about earlier to actually teach Kiro about AgentCore and Strands. Having these techniques really helped, because I could simply show Kiro how to implement these solutions. In my case, there was a very early AgentCore getting-started repository, which still exists by the way, and because it was on GitHub, I used GitMCP to create that MCP link, and Kiro learned everything it needed from there. It was ultimately a pretty powerful workaround, but it was certainly a challenge to overcome.
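For reference, wiring a GitHub repository in through GitMCP typically just means pointing the MCP client at `https://gitmcp.io/<owner>/<repo>`. The sketch below shows what that might look like in a Kiro MCP settings file; the settings path (`.kiro/settings/mcp.json`), the server name, and the use of the `mcp-remote` bridge are assumptions for illustration, and `<owner>/<repo>` is a placeholder rather than the actual repository from the session.

```json
{
  "mcpServers": {
    "agentcore-getting-started": {
      "command": "npx",
      "args": ["mcp-remote", "https://gitmcp.io/<owner>/<repo>"]
    }
  }
}
```

Once registered, the assistant can query the repository's docs and code through the MCP tools, which is how knowledge of a library newer than the model's training data gets into the loop.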
And finally, again, missing requirements. This was another issue that arose when I started working on the help desk portion of the solution. I had focused so much time, effort, and energy on building the SaaS solution, ensuring it was production-ready, that security was all set according to Well-Architected Lens best practices, and aligned with SaaS Builder Toolkit best practices, that I had overlooked some requirements that were genuinely needed within the help desk application. So again, I opened Kiro, started a new session, made sure I was on the spec side, and simply started working with Kiro to add new specifications to fill those gaps. This flexibility exists in Kiro at any stage of the project, even when you're nearing the end as in this case. You can always go back and ultimately address them.
To talk about some key takeaways here, it's important to know that these challenges weren't just obstacles for me to overcome. These were truly learning opportunities for me personally, to understand how to best work with Kiro. For all of you developers, or yourselves, when you work with these solutions, I want you to think about this, and although it can be very frustrating at times, I encourage you to embrace them as learning experiences and for the future as well.
Now, we've reached our destination. We've come full circle, so to speak. At the start of our journey, we had ambitious goals we wanted to achieve. We had a mountain to climb, so to speak: building a complete production-ready SaaS solution on AWS, made possible by a partnership with GenAI, in this case, Kiro. As you've seen, we succeeded. We built this solution with a comprehensive control plane and an agent-based help desk with next-generation capabilities, but the product we built was only half the story. The other half was what we learned.
First and foremost, we learned that quality is fundamental. When building these specifications, you need to focus on quality. Second, and as part of this, clarity is the ultimate accelerator. The more clarity you give Kiro in building your requirements, the better solution Kiro will build for you. And finally, this is a true partnership between you, the developer, and Kiro, the AI assistant. It's a partnership, not a magic button. Kiro cannot magically conjure things into existence.
As a result of this, of course, we can complete builds in days instead of weeks, and in weeks instead of months. This is very impressive in itself. And as we move forward, it's important to note that the future, with agents in the IDE, is here now. Oh, and one last thing: SaaS is not dead. You may have heard that it is, but it just smells like GenAI now. This is the last slide I'll leave you with today. Thank you for coming. There are many resources Aman talked about earlier. Please grab these and use them in your own solutions.
On behalf of Aman and myself, thank you very much for joining us today. Have a great day.
※ This article was automatically generated using Amazon Bedrock, maintaining as much information from the original video as possible.