[RFC] A Fully decoupled and auto-scaled rollout engine using AWS Bedrock AgentCore Runtime #4216
base: main
Conversation
Co-authored-by: Youzhi Luo <yzluo@amazon.com> Co-authored-by: Danylo Vashchilenko <vdanylo@amazon.com>
Code Review
This PR introduces a significant and well-designed feature to decouple the rollout engine using AWS Bedrock AgentCore. The architecture using S3 and SQS is robust, and the implementation is comprehensive, including extensive testing. My feedback focuses on improving robustness and maintainability. I've identified a couple of areas where the code could be made more resilient to external changes and another where a refactoring could simplify the main training loop's logic, especially for future extensions. Overall, this is a high-quality contribution.
```python
# seconds - AgentCore new session cold start time under 25 TPS for container deployment (2025-11)
SESSION_START_TIME = 10
```
The constant SESSION_START_TIME is hardcoded. The comment suggests this value is environment-specific and crucial for performance tuning (as it determines max_inflight_requests in RequestDispatcher). Hardcoding such parameters reduces flexibility. It would be better to make this a configurable parameter within the agentcore section of the rollout configuration. This would involve adding it to AgentCoreConfig and the corresponding YAML files, then reading it from the config where it's used.
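A minimal sketch of the suggested change, assuming hypothetical names (the field `session_start_time` and the derivation of `max_inflight_requests` are illustrative, not the actual verl API):

```python
from dataclasses import dataclass


# Hypothetical sketch: move the hardcoded constant into the agentcore rollout
# config so it can be tuned per environment from YAML or command-line overrides.
@dataclass
class AgentCoreConfig:
    # seconds - AgentCore new session cold start time; environment-specific,
    # so it is exposed as a config knob instead of a module-level constant
    session_start_time: float = 10.0


def max_inflight_requests(config: AgentCoreConfig, tps_limit: float = 25.0) -> int:
    # Illustrative only: the PR notes this value determines the dispatcher's
    # in-flight cap; here we derive it from cold-start time and a TPS limit.
    return int(config.session_start_time * tps_limit)
```

With this shape, `RequestDispatcher` would read the value from the config object it is handed rather than importing a constant.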
```python
# When timed out, the response is an error string instead of the actual endpoint arn
if self.agent_arn not in endpoint_response:
    raise TimeoutError(endpoint_response)
```
The check if self.agent_arn not in endpoint_response: to detect a timeout is brittle because it relies on the specific string content of the error message returned by wait_for_agent_endpoint_ready. If the error message format changes in a future version of the bedrock-agentcore-starter-toolkit, this check will fail. A more robust approach would be to check the type of the response. Based on the comment, a timeout returns a string, while success returns a dictionary-like object.
Suggested change:

```diff
 # When timed out, the response is an error string instead of the actual endpoint arn
-if self.agent_arn not in endpoint_response:
+if isinstance(endpoint_response, str):
     raise TimeoutError(endpoint_response)
```
```python
if self.async_rollout_mode:
    gen_batch_output = self.async_rollout_manager.generate_sequences(gen_batch_output)
elif self.agentcore_rollout_mode:
    gen_batch_output = self.agentcore_rollout_manager.generate_sequences(gen_batch_output)
else:
    gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch_output)
```
This if/elif/else block for handling different rollout modes (async_rollout_mode, agentcore_rollout_mode, etc.) is repeated in several places within fit() and _validate(). This pattern makes the code harder to read and maintain. Adding a new rollout mode would require modifying all these blocks.
Consider refactoring this logic using the Strategy design pattern. You could define a RolloutStrategy interface and create concrete implementations for each mode (AsyncRolloutStrategy, AgentCoreRolloutStrategy, SyncRolloutStrategy). The RayPPOTrainer would then hold a single strategy object and delegate the mode-specific operations to it, cleaning up the control flow in fit() and _validate().
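A rough sketch of the suggested Strategy refactor; the class and method names below are illustrative, not verl's actual API:

```python
from abc import ABC, abstractmethod


class RolloutStrategy(ABC):
    """One implementation per rollout mode; the trainer holds exactly one."""

    @abstractmethod
    def generate_sequences(self, gen_batch):
        ...


class SyncRolloutStrategy(RolloutStrategy):
    def __init__(self, actor_rollout_wg):
        self.actor_rollout_wg = actor_rollout_wg

    def generate_sequences(self, gen_batch):
        return self.actor_rollout_wg.generate_sequences(gen_batch)


class AgentCoreRolloutStrategy(RolloutStrategy):
    def __init__(self, manager):
        self.manager = manager

    def generate_sequences(self, gen_batch):
        return self.manager.generate_sequences(gen_batch)


# In fit() / _validate(), the repeated if/elif/else then collapses to:
#     gen_batch_output = self.rollout_strategy.generate_sequences(gen_batch_output)
```

Adding a new rollout mode then means adding one strategy class instead of touching every branch site.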
```python
# modularity & easier organization.
# Relevant configs can be passed in via command line args too. Using an env file here
# to avoid hardcoded values.
agentcore_envs = dotenv_values("agentcore.env")
```
It looks like we need to provide an example agentcore.env file here, with values usable against a public AWS account.
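For illustration, such an env file might look like the sketch below. The key names are placeholders inferred from the configs this PR mentions (container URI, S3 bucket, SQS), not necessarily the exact keys the code reads:

```shell
# Hypothetical agentcore.env - key names and values are illustrative placeholders.
AWS_REGION=us-west-2
AGENTCORE_CONTAINER_URI=123456789012.dkr.ecr.us-west-2.amazonaws.com/my-agent:latest
AGENTCORE_S3_BUCKET=my-rollout-bucket
AGENTCORE_SQS_QUEUE_URL=https://sqs.us-west-2.amazonaws.com/123456789012/rollout-complete
```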
* implement reward and baseline computation for AgentCore mode in remax
* fix indentation error
What does this PR do?
At a high level, we propose a design where developers run their whole agentic application with whatever customization they desire in a separate container managed by AgentCore on the cloud, instead of in the same environment as veRL on the training cluster. The design is illustrated by the following architectural diagram.
The agent application hosted on AgentCore Runtime communicates with veRL in two ways: it queries the inference servers on the training cluster (via the SGLang router) for model responses, and it returns finished rollouts and their rewards to veRL through S3 and SQS.
Essentially, veRL sends a prompt to the rollout engine powered by AgentCore and gets back a rollout with its corresponding reward. The entire rollout process (tool use, environment interaction, etc.) happens on the cloud. This means developers don't have to migrate whatever agent application they've built into veRL to start training, while veRL doesn't have to anticipate every kind of agentic use case in its design.
In addition to simplifying the developer experience and veRL architecture, AgentCore Runtime itself is also a perfect solution for generating rollouts. It will
AgentCore Runtime was originally designed as a deployment service for agent applications, and is repurposed in our design to generate rollouts scalably for RL training. We were also happy to learn recently that Cursor Composer training adopts a similar design, per the Ray Summit talk from @srush, where they leveraged the Cursor Cloud agent to generate rollouts for their large-scale RL training.
We think the solution in this PR can benefit both research projects and production scenarios. Under this paradigm, researchers and developers can focus on building their agentic applications with arbitrary frameworks, tools, and environments, whether to establish a baseline or to create a deployable solution. Once they have a working agent and are ready for training, all they need to do on the veRL side is provide a couple more configs (container URI, S3 bucket, etc.). Of course, they will still need to return the rollout and define the reward in their agent app, but we will release a sample repo with various agent examples soon to demonstrate how straightforward this process is. And when training is done, the agent can be deployed with the exact harness and setup in the app, so there is no mismatch between the training and inference stages.
Co-authors of this PR: @luyuzhe111, @lyzustc, @hellodanylo.
Test
Unit tests are implemented in `tests/experimental/agentcore_loop/test_basic_agentcore_loop.py`. E2E training was tested for GRPO. vLLM was used as the inference engine.

API and Usage Example
Additional config args to the training script for any agent:
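The original snippet of config args did not survive extraction. As a heavily hedged illustration only, the overrides this PR describes (rollout mode, container URI, S3 bucket) might be passed along these lines; every flag name below is a hypothetical placeholder:

```shell
# Hypothetical override names for illustration - not the actual config keys.
python3 -m verl.trainer.main_ppo \
    actor_rollout_ref.rollout.mode=agentcore \
    actor_rollout_ref.rollout.agentcore.container_uri=<ECR_IMAGE_URI> \
    actor_rollout_ref.rollout.agentcore.s3_bucket=<ROLLOUT_BUCKET>
```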
We will release concrete training examples for various agentic use cases soon!
Design & Code Changes
We implement the proposed rollout engine by adding a separate `AgentCoreLoopManager` in `verl/experimental/agent_loop/agentcore_loop.py`. Almost all code changes reside in this file.

- `AgentCoreLoopManager` initializes the inference servers similar to `AgentLoopManager` and registers them to the SGLang Router.
- `AgentCoreLoopManager` passes the SGLang router address and model name to AgentCore Runtime when the container is first deployed, so that the agent knows where to get model responses.
- `RequestDispatcher` in `AgentCoreLoopManager` submits all requests to the AgentCore Runtime endpoint asynchronously.
- `RolloutBuffer` polls SQS for rollout completion messages and downloads rollouts from S3 once they are done. Saving the rollout to S3 and notifying SQS is done on the agent app side from AgentCore. We will be open sourcing a wrapper for agent apps soon to demonstrate that developers won't have to worry about these services at all.
- `AgentCoreLoopManager` returns the available rollouts and terminates all sessions. The current design follows the synchronous RL paradigm, but we plan to extend to async RL in the near future, as AgentCore Runtime is naturally compatible with it.

Checklist Before Submitting

- Run `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- Request CI via the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
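As an appendix, the `RolloutBuffer` flow described in the design section (poll SQS for completion messages, then fetch the finished rollouts from S3) can be sketched roughly as follows. The message schema, function names, and wiring are assumptions for illustration, not verl's actual implementation:

```python
import json


def parse_completion_message(body: str) -> tuple[str, str]:
    """Extract (bucket, key) from a hypothetical rollout-completion message.

    Assumes the agent app publishes a JSON body like
    {"bucket": "...", "key": "..."} when it uploads a rollout to S3.
    """
    msg = json.loads(body)
    return msg["bucket"], msg["key"]


def poll_rollouts(queue_url: str, max_messages: int = 10) -> list[bytes]:
    """Poll SQS once and download any completed rollouts from S3."""
    import boto3  # assumption: boto3 is the AWS client library used

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    rollouts = []
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=10,  # long polling to reduce empty receives
    )
    for message in resp.get("Messages", []):
        bucket, key = parse_completion_message(message["Body"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        rollouts.append(obj["Body"].read())
        # Delete only after a successful download, so failed downloads
        # become visible again and are retried on a later poll.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
    return rollouts
```

A real implementation would additionally batch deletes, handle malformed messages, and stop once the expected number of rollouts has arrived.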