
strands-env


Standardizing environment infrastructure with Strands Agents — step, observe, reward.

Features

This package treats each env.step() as a full agent loop (prompt → (tool_call, tool_response)* → response) rather than a single model call.

  • Define Environments — Subclass Environment, add @tool functions, plug in RewardFunction
  • RL Training — Token-level observations for on-policy training with strands-sglang
  • Benchmarking — CLI and Evaluator with checkpointing, resume, and custom metrics
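The step-as-agent-loop idea above can be sketched in plain Python. This is illustrative only; none of these names come from strands_env — the model keeps calling tools until it produces a final response, and that whole exchange is one "step":

```python
# Conceptual sketch of one env.step() as a full agent loop (illustrative,
# not the strands_env implementation): the model may issue several
# (tool_call, tool_response) rounds before returning a final answer.
def agent_loop(model, tools, prompt, max_turns=10):
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = model(messages)  # model proposes either a tool call or final text
        if reply.get("tool") is None:
            return reply["content"]  # final response terminates the loop
        result = tools[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    return None  # turn budget exhausted without a final answer
```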

Install

pip install strands-env

For development:

git clone https://github.com/horizon-rl/strands-env.git && cd strands-env
pip install -e ".[dev]"

Quick Start

Define an Environment

Subclass Environment and add tools as @tool-decorated functions:

from strands import tool
from strands_env.core import Environment

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    # NOTE: eval is acceptable for a demo; use a safe expression parser in production.
    return str(eval(expression))

class MathEnv(Environment):
    def get_tools(self):
        return [calculator]
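A reward function scores the agent's final response against the task's ground truth. The actual RewardFunction interface is defined by strands_env; the sketch below is a hypothetical exact-match scorer, shown only to illustrate the idea:

```python
# Hypothetical exact-match reward: the function name and signature here are
# assumptions for illustration, not the strands_env RewardFunction API.
def exact_match_reward(final_response: str, ground_truth: str) -> float:
    """Return 1.0 if the ground truth appears verbatim in the response, else 0.0."""
    return 1.0 if ground_truth in final_response else 0.0
```

Under this scheme, "The answer is 1024" scored against ground truth "1024" yields 1.0, matching the Quick Start output below.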

Run It

# model_factory and reward_fn are constructed as in examples/calculator_demo.py
env = MathEnv(model_factory=factory, reward_fn=reward_fn)
result = await env.step(
    Action(message="What is 2^10?", task_context=TaskContext(ground_truth="1024"))
)

result.observation.final_response   # "The answer is 1024"
result.reward.reward                # 1.0
result.termination_reason           # TerminationReason.TASK_COMPLETE

See examples/calculator_demo.py for a complete example.

Run Evaluations

strands-env eval aime-2024 \
    --env examples.eval.simple_math.calculator_env \
    --backend sglang \
    --base-url http://localhost:30000 \
    --n-samples-per-prompt 8 \
    --max-concurrency 30

Tip: For a non-agentic benchmark (no tool use), simply don't override get_tools() in your environment — the base class returns [] by default.

Built-in Environments

Ready-to-use environments under src/strands_env/environments/. Each ships with its own README, system prompt, and requirements.txt.

Environment        Description
calculator         Simple environment with a calculator tool for math reasoning.
code_sandbox       Sandboxed Python / shell execution via AWS Bedrock AgentCore Code Interpreter.
web_search         Pluggable search (Serper / Google CSE) plus Jina-based page scraping with optional LLM summarization, inspired by OpenSeeker.
terminal_bench     Runs Terminal-Bench tasks against a Harbor-managed Docker/EKS container.
swe_bench          SWE-bench task runner: a thin subclass of terminal_bench with a SWE-bench-tuned system prompt.
mcp_atlas          MCP-Atlas benchmark runner spanning 36 MCP servers and 500 tasks.
agent_world_model  AgentWorldModel tasks: 1000 synthetic FastAPI + SQLite environments exposed as MCP tools.

Documentation

Development

# Lint
ruff check src/ && ruff format --check src/

# Unit tests
pytest tests/unit/ -v

# Integration tests (requires running SGLang server)
pytest tests/integration/ -v --sglang-base-url=http://localhost:30000

Or, if using Claude Code, use the /run-unit-tests and /run-integration-tests slash commands.

License

Apache License 2.0 — see LICENSE.
