Build and scale adoption of AI agents for education with Strands Agents, Amazon Bedrock AgentCore, and LibreChat

Basic AI chat isn’t enough for most business applications. Institutions need AI that can pull from their databases, integrate with their existing tools, handle multi-step processes, and make decisions independently.

This post demonstrates how to quickly build sophisticated AI agents using Strands Agents, scale them reliably with Amazon Bedrock AgentCore, and make them accessible through LibreChat’s familiar interface to drive immediate user adoption across your institution.

Challenges with basic AI chat interfaces

Although basic AI chat interfaces can answer questions and generate content, educational institutions need capabilities that simple chat can’t provide:

  • Contextual decision-making – A student asking “What courses should I take?” needs an agent that can access their transcript, check prerequisites, verify graduation requirements, and consider schedule conflicts—not just generic course descriptions
  • Multi-step workflows – Degree planning requires analyzing current progress, identifying remaining requirements, suggesting course sequences, and updating recommendations as students make decisions
  • Institutional data integration – Effective educational AI must connect to student information systems, learning management systems, academic databases, and institutional repositories to provide relevant, personalized guidance
  • Persistent memory and learning – Agents need to remember previous interactions with students, track their academic journey over semesters, and build understanding of individual learning patterns and needs

Combining open source flexibility with enterprise infrastructure

The integration presented in this post demonstrates how three technologies can work together to address these challenges:

  • Strands Agents – Build sophisticated multi-agent workflows in just a few lines of code
  • Amazon Bedrock AgentCore – Scale agents reliably with serverless, pay-per-use deployment
  • LibreChat – Provide users with a familiar chat interface that drives immediate adoption

Strands Agents overview

Strands Agents is an open source SDK that takes a model-driven approach to building and running AI agents in just a few lines of code. Unlike LibreChat’s simple agent implementation, Strands supports sophisticated patterns including multi-agent orchestration through workflow, graph, and swarm tools; semantic search for managing thousands of tools; and advanced reasoning capabilities with deep analytical thinking cycles. The framework simplifies agent development by embracing the capabilities of state-of-the-art models to plan, chain thoughts, call tools, and reflect, while scaling from local development to production deployment with flexible architectures and comprehensive observability.
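
As an illustration of how little code a working agent takes, the following hedged Python sketch defines a custom tool and wires it into a Strands agent. The tool logic, course codes, and prompt are illustrative assumptions, not part of this solution; running the agent itself requires the strands-agents package and AWS model access.

```python
# Sketch of a minimal Strands agent with a custom tool. The course data and
# tool name are illustrative assumptions.

# Pure tool logic, kept separate so it can be exercised without AWS access.
def check_prerequisites(completed_courses, required_prereqs):
    """Return eligibility and any missing prerequisites for a course."""
    missing = sorted(set(required_prereqs) - set(completed_courses))
    return {"eligible": not missing, "missing": missing}

def build_advisor():
    """Wire the tool into a Strands agent (requires strands-agents and AWS credentials)."""
    from strands import Agent, tool  # imported lazily; pip install strands-agents

    @tool
    def prerequisite_check(completed_courses: list, required_prereqs: list) -> dict:
        """Check whether a student meets a course's prerequisites."""
        return check_prerequisites(completed_courses, required_prereqs)

    return Agent(
        system_prompt="You are an academic advisor. Use tools before answering.",
        tools=[prerequisite_check],
    )
```

Passing a question such as "Can a student who completed MATH101 take MATH301?" to the agent returned by `build_advisor()` would let the model plan, call `prerequisite_check`, and ground its answer in the tool result.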

Amazon Bedrock AgentCore overview

Amazon Bedrock AgentCore is a comprehensive set of enterprise-grade services that help developers quickly and securely deploy and operate AI agents at scale using the framework and model of your choice, hosted on Amazon Bedrock or elsewhere. The services are composable and work with popular open source frameworks and many models, so you don’t have to choose between open source flexibility and enterprise-grade security and reliability.

Amazon Bedrock AgentCore includes modular services that can be used together or independently: Runtime (secure, serverless runtime for deploying and scaling dynamic agents), Gateway (converts APIs and AWS Lambda functions into agent-compatible tools), Memory (manages both short-term and long-term memory), Identity (provides secure access management), and Observability (offers real-time visibility into agent performance).

The key Amazon Bedrock AgentCore service used in this integration is Amazon Bedrock AgentCore Runtime, a secure, serverless runtime purpose-built for deploying and scaling dynamic AI agents and tools using the open source framework (such as LangGraph, CrewAI, or Strands Agents), protocol, and model of your choosing. Amazon Bedrock AgentCore Runtime was built for agentic workloads, with industry-leading extended runtime support, fast cold starts, true session isolation, built-in identity, and support for multimodal payloads. Rather than following the typical serverless model, where functions spin up, execute, and immediately terminate, Amazon Bedrock AgentCore Runtime provisions dedicated microVMs that can persist for up to 8 hours, enabling sophisticated multi-step agentic workflows in which each call builds on the accumulated context and state from previous interactions in the same session.
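
To make the invocation model concrete, the following hedged Python sketch shows how a client might call an agent hosted on AgentCore Runtime with boto3. The agent ARN is a placeholder, and the exact client name and response shape should be verified against the current AgentCore API reference; reusing the same session ID is what lets later calls build on earlier state.

```python
# Hedged sketch of invoking an agent hosted on Amazon Bedrock AgentCore
# Runtime via boto3. The agent ARN is a placeholder; verify API details
# against the current bedrock-agentcore documentation.
import json
import uuid

def make_session_id():
    """AgentCore Runtime session IDs key the persistent microVM; reusing the
    same ID within the session window keeps accumulated agent state."""
    return uuid.uuid4().hex + uuid.uuid4().hex  # comfortably over the minimum length

def invoke_agent(agent_arn, prompt, session_id, region="us-east-1"):
    """Invoke the hosted agent; requires boto3 and AWS credentials."""
    import boto3  # imported lazily so the module loads without AWS dependencies

    client = boto3.client("bedrock-agentcore", region_name=region)
    response = client.invoke_agent_runtime(
        agentRuntimeArn=agent_arn,
        runtimeSessionId=session_id,  # same ID -> same isolated session state
        payload=json.dumps({"prompt": prompt}),
    )
    # Response shape may vary by SDK version; here we assume a streaming body.
    return response["response"].read().decode("utf-8")
```

Calling `invoke_agent` twice with the same `session_id` would route both requests to the same isolated microVM, so the second answer can draw on context from the first.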

LibreChat overview

LibreChat has emerged as a leading open source alternative to commercial AI chat interfaces, offering educational institutions a powerful solution for deploying conversational AI at scale. Built with flexibility and extensibility in mind, LibreChat provides several key advantages for higher education:

  • Multi-model support – LibreChat supports integration with multiple AI providers, so institutions can choose the most appropriate models for different use cases while avoiding vendor lock-in
  • User management – Robust authentication and authorization systems help institutions manage access across student populations, faculty, and staff with appropriate permissions and usage controls
  • Conversation management – Students and faculty can organize their AI interactions into projects and topics, creating a more structured learning environment
  • Customizable interface – The solution can be branded and customized to match institutional identity and specific pedagogical needs

Integration benefits

Integrating Strands Agents with Amazon Bedrock AgentCore and LibreChat creates benefits that extend far beyond what any of these technologies could achieve independently:

  • Seamless agent experience through familiar interface – LibreChat’s intuitive chat interface becomes a gateway to sophisticated agentic workflows. Users can trigger complex multi-step processes, data analysis, and external system integrations through natural conversation, without needing to learn new interfaces or complex APIs.
  • Dynamic agent loading and management – Unlike static AI chat implementations, this integration supports dynamic agent loading with access management. New agentic applications can be deployed separately and made available to users without requiring LibreChat updates or downtime, enabling rapid agent development.
  • Enterprise-grade security and scaling – Amazon Bedrock AgentCore Runtime provides complete session isolation, with each user session running on isolated CPU, memory, and filesystem resources. This separation safeguards stateful agent reasoning processes and helps prevent cross-session data contamination. The service can scale to thousands of agent sessions in seconds while developers pay only for actual usage, making it ideal for educational institutions that need to support large student populations with varying usage patterns.
  • Built-in AWS resource integration – Organizations already running infrastructure on AWS can seamlessly connect their existing resources—databases, data lakes, Lambda functions, and applications—to Strands Agents without complex integrations or data movement. Agents can directly access and surface insights through the LibreChat interface, turning existing AWS investments into intelligent, conversational experiences, such as querying an Amazon Relational Database Service (Amazon RDS) database, analyzing data in Amazon Simple Storage Service (Amazon S3), or integrating with existing microservices.
  • Cost-effective agentic computing – By using LibreChat’s efficient architecture with the Amazon Bedrock AgentCore pay-per-use model, organizations can deploy sophisticated agentic applications without the high fixed costs typically associated with enterprise AI systems. Users only pay for actual agent computation and tool usage.

Agent use cases in higher education settings

The integration of LibreChat with Strands Agents enables numerous educational applications that demonstrate the solution’s versatility and power:

  • A course recommendation agent can analyze a student’s academic history, current enrollment, and career interests to suggest relevant courses. By integrating with the student information system, the agent can make sure recommendations consider prerequisites, schedule conflicts, and graduation requirements.
  • A degree progress tracking agent can interact with students and help them understand their specific degree requirements and provide guidance on remaining coursework, elective options, and timeline optimization.
  • Agents can be configured with access to academic databases and institutional repositories, helping students and faculty discover relevant research papers and resources, providing guidance on academic writing, citation formats, and research methodology specific to different disciplines.
  • Agents can handle routine student inquiries about registration, deadlines, and campus resources, freeing up staff time for more complex student support needs.

Refer to the following GitHub repo for Strands Agent code examples for educational use cases.

Solution overview

The following architecture diagram illustrates the overall system design for deploying LibreChat with Strands Agents integration. Strands Agents is deployed on Amazon Bedrock AgentCore Runtime, a secure, serverless runtime purpose-built for deploying and scaling dynamic AI agents and tools built with open source frameworks such as Strands Agents.

Architecture diagram of AgentCore integration with LibreChat

The solution architecture includes several key components:

  • LibreChat core services – The core chat interface runs in an Amazon Elastic Container Service (Amazon ECS) cluster with AWS Fargate, including LibreChat for the user-facing experience, Meilisearch for enhanced search capabilities, and Retrieval Augmented Generation (RAG) API services for document retrieval.
  • LibreChat supporting infrastructure – This solution uses Amazon Elastic File System (Amazon EFS) for storing Meilisearch’s indexes and user-uploaded files; Amazon Aurora PostgreSQL-Compatible Edition as the vector database used by the RAG API; Amazon S3 for storing LibreChat configurations; Amazon DocumentDB for user, session, and conversation data management; and AWS Secrets Manager for managing access to these resources.
  • Strands Agents integration – This solution integrates Strands Agents (hosted by Amazon Bedrock AgentCore Runtime) with LibreChat through custom endpoints using Lambda and Amazon API Gateway. This integration pattern enables dynamic loading of agents in LibreChat for advanced generative AI capabilities. In particular, the solution showcases a user activity analysis agent that draws insights from LibreChat logs.
  • Authentication and security – The integration between LibreChat and Strands Agents implements a multi-layered authentication approach that maintains security without compromising user experience or administrative simplicity. When a student or faculty member selects a Strands Agent from LibreChat’s interface, the authentication flow operates seamlessly in the background through several coordinated layers:
    • User authentication – LibreChat handles user login through your institution’s existing authentication system, with comprehensive options including OAuth, LDAP/AD, or local accounts as detailed in the LibreChat authentication documentation.
    • API Gateway security – After users are authenticated to LibreChat, the system automatically handles API Gateway security by authenticating each request using preconfigured API keys.
    • Service-to-service authentication – The underlying Lambda function uses AWS Identity and Access Management (IAM) roles to securely invoke Amazon Bedrock AgentCore Runtime where the Strands Agent is deployed.
    • Resource access control – Strands Agents operate within defined permissions to access only authorized resources.
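
To illustrate the service-to-service layer, the Lambda function’s execution role could carry a narrowly scoped policy along the following lines. The action name, Region, account ID, and runtime ARN are illustrative assumptions; confirm them against the AgentCore IAM reference before use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock-agentcore:InvokeAgentRuntime",
      "Resource": "arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/log-analysis-agent*"
    }
  ]
}
```

Scoping the resource to a single agent runtime keeps the proxy Lambda from invoking other agents, in line with the layered authorization approach described above.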

Deployment process

This solution uses the AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation to handle the deployment through several automated phases. We use a log analysis agent as an example to demonstrate the deployment process. The agent enables administrators to analyze LibreChat logs through natural language queries.

LibreChat is deployed as a containerized service with ECS Fargate clusters and is integrated with supporting services, including virtual private cloud (VPC) networking, Application Load Balancer (ALB), and the complete data layer with Aurora PostgreSQL-Compatible, DocumentDB, Amazon EFS, and Amazon S3 storage. Security is built in with appropriate IAM roles, security groups, and secrets management.

The user activity analysis agent provides valuable insights into how students interact with AI tools, identifying peak usage times, popular topics, and potential areas where students might need additional support. The agent is automatically provisioned using the following CloudFormation template, which deploys Strands Agents on Amazon Bedrock AgentCore Runtime, provisions a Lambda function that invokes the agent, creates an API Gateway endpoint that exposes the agent as a URL, and deploys a second Lambda function that accesses LibreChat logs stored in DocumentDB. The agent uses this second Lambda function as a tool.
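
Because LibreChat custom endpoints speak the OpenAI chat completions format, the proxy Lambda has to accept that request shape and return a compatible response. The following hedged Python sketch shows that translation; the function names are illustrative, and the actual agent invocation is stubbed out.

```python
# Hedged sketch of the proxy Lambda's request/response shaping. LibreChat
# custom endpoints use the OpenAI chat completions format; field values here
# are illustrative.
import json
import time
import uuid

def extract_prompt(chat_request):
    """Pull the latest user message out of an OpenAI-style request body."""
    user_turns = [m["content"] for m in chat_request.get("messages", [])
                  if m.get("role") == "user"]
    return user_turns[-1] if user_turns else ""

def to_chat_completion(agent_text, model="Strands Agent"):
    """Wrap agent output as a non-streaming chat completion response."""
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": agent_text},
            "finish_reason": "stop",
        }],
    }

def lambda_handler(event, context):
    """API Gateway proxy handler; a real implementation would invoke the
    AgentCore Runtime here instead of returning a placeholder."""
    body = json.loads(event.get("body") or "{}")
    prompt = extract_prompt(body)
    agent_text = f"(agent reply to: {prompt})"  # placeholder for the real invocation
    return {"statusCode": 200, "body": json.dumps(to_chat_completion(agent_text))}
```

With this shape in place, LibreChat treats the agent like any other chat model behind its custom endpoint.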

The following code shows how to configure LibreChat to make the agent a custom endpoint:

custom:
    - name: 'log-analysis-assistant'
      apiKey: '{AWS_API_GATEWAY_KEY}'
      baseURL: '{AWS_API_GATEWAY_URL}'
      models:
        default: ['Strands Agent']
        fetch: false
      headers:
        x-api-key: '{AWS_API_GATEWAY_KEY}'
      titleConvo: true
      titleModel: 'us.amazon.nova-lite-v1:0'
      modelDisplayLabel: 'log-analysis-assistant'
      forcePrompt: false
      stream: false
      iconURL: 'https://d1.awsstatic.com/onedam/marketing-channels/website/aws/en_US/product-categories/ai-ml/machine-learning/approved/images/256f3da1-3193-441c-b2641f33fdd6.a045b9b4c4f34545e1c79a405140ac0146699835.jpeg'
After the stack is deployed successfully, you can log in to LibreChat, select the agent, and start chatting. The following screenshot shows an example question that the user activity analysis agent can help answer, where it reads the LibreChat user activities from DocumentDB and generates an answer.

Demonstration of user experience of accessing Agent in LibreChat

Deployment considerations and best practices

When deploying this LibreChat and Strands Agents integration, organizations should carefully consider several key factors that can significantly impact both the success of the implementation and its long-term sustainability.

Security and compliance form the foundation of any successful deployment, particularly in educational environments where data protection is paramount. Organizations must implement robust data classification schemes for appropriate handling of sensitive information, along with role-based access controls so that users access only the AI capabilities and data appropriate to their roles.

Beyond traditional perimeter security, a layered authorization approach becomes critical when deploying AI systems that might access multiple data sources with varying sensitivity levels. This involves multiple authorization checks throughout the application stack: service-to-service authorization, trusted identity propagation that carries the end-user’s identity through the system components, and granular access controls that evaluate permissions at each data access point rather than relying solely on broad service-level permissions. Such layered security architectures help mitigate risks like prompt injection vulnerabilities and unauthorized cross-tenant data access, so that even if one security layer is compromised, additional controls remain in place to protect sensitive educational data. Regular compliance monitoring is also essential: automated audits and checks maintain continued adherence to relevant data protection regulations throughout the system’s lifecycle, while validating that layered authorization policies remain effective as the system evolves.
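
As a toy illustration of the granular, per-request checks described above, the following Python sketch evaluates each data access against both the caller’s role and the resource’s sensitivity. All role and resource labels are made-up examples, not part of the solution.

```python
# Illustrative layered authorization check: access is evaluated per request
# and per resource, not granted once at the service boundary. Labels and
# clearance levels are made-up examples.
ROLE_CLEARANCE = {"student": 1, "faculty": 2, "registrar": 3}
RESOURCE_SENSITIVITY = {"course_catalog": 1, "own_transcript": 1,
                        "any_transcript": 3, "usage_logs": 2}

def authorize(role, resource, acting_on_self=True):
    """Allow access only when the role's clearance covers the resource,
    with transcripts restricted to the student's own record."""
    if resource == "any_transcript" and role == "student":
        return False  # students never browse other transcripts
    if resource == "own_transcript" and role == "student":
        return acting_on_self  # identity propagation decides this check
    clearance = ROLE_CLEARANCE.get(role, 0)
    return clearance >= RESOURCE_SENSITIVITY.get(resource, 99)  # deny unknown resources
```

In a real deployment each data-access tool would run a check like this against the propagated end-user identity before touching the underlying store, so a compromised prompt cannot widen the agent’s effective permissions.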

Cost management requires a strategic approach that balances functionality with financial sustainability. Organizations must prioritize their generative AI spending based on business impact and criticality while maintaining cost transparency across customer and user segments. Implementing comprehensive usage monitoring helps organizations track AI service consumption patterns and identify optimization opportunities before costs become problematic.

The human element of deployment often proves more challenging than the technical implementation. Faculty training programs should provide comprehensive guidance on integrating AI tools into teaching practices, focusing not just on how to use the tools but on how to use them effectively for educational outcomes. Student onboarding requires clear guidelines and tutorials that promote both effective AI interaction and academic integrity. Perhaps most importantly, establishing continuous feedback loops ensures the system evolves based on actual user experiences and measured educational outcomes rather than assumptions about what users need.

Successful deployments also require careful attention to the dynamic nature of AI technology. The architecture’s support for dynamic agent loading enables organizations to add specialized agents for new departments or use cases without disrupting existing services. Version control systems should maintain different agent versions for testing and gradual rollout of improvements, and performance monitoring should track both technical metrics and user satisfaction to guide continuous improvement efforts.

Conclusion

The integration of LibreChat with Strands Agents represents a significant step forward in democratizing access to advanced AI capabilities in higher education. By combining the accessibility and customization of open source systems with the sophistication and reliability of enterprise-grade AI services, institutions can provide students and faculty with powerful tools that enhance learning, research, and academic success.

This architecture demonstrates that educational institutions don’t need to choose between powerful AI capabilities and institutional control. Instead, they can combine the innovation and flexibility of open source solutions with the scalability and reliability of cloud-based AI services. The integration example showcased in this post illustrates the solution’s versatility and potential for customization as institutions expand and adapt the solution to meet evolving educational needs.

For future work, the LibreChat system’s Model Context Protocol (MCP) server integration capabilities offer exciting possibilities for enhanced agent architectures. A particularly promising avenue involves wrapping agents as MCP servers, transforming them into standardized tools that can be seamlessly integrated alongside other MCP-enabled agents. This approach would enable educators to compose sophisticated multi-agent workflows, creating highly personalized educational experiences tailored to individual learning styles.

The future of education is about having the right AI tools, properly integrated and ethically deployed, to enhance human learning and achievement through flexible, interoperable, and extensible solutions that can evolve with educational needs.

Acknowledgement

The authors extend their gratitude to Arun Thangavel, Ashish Rawat, and Kosti Vasilakakis for their insightful feedback and review of the post.


About the authors

Dr. Changsha Ma is a Senior AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, and spending time with friends and families.

Sudheer Manubolu is a Solutions Architect at Amazon Web Services (AWS). He specializes in cloud architecture, enterprise solutions, and AI/ML implementations. He provides technical and architectural guidance to customers building transformative solutions on AWS, with particular emphasis on leveraging AWS’s AI/ML and container services to drive innovation and operational excellence.

Abhilash Thallapally is a Solutions Architect at AWS helping public sector customers design and build scalable AI/ML solutions using Amazon SageMaker. His work covers a wide range of ML use cases, with a primary interest in computer vision, Generative AI, IoT, and deploying cost optimized solutions on AWS.

Mary Strain leads strategy for artificial intelligence and machine learning for US education at AWS. Mary began her career as a middle school teacher in the Bronx, NY. Since that time, she has held leadership roles in education and public sector technology organizations. Mary has advised K12, higher education, and state and local government on innovative policies and practices in competency based assessment, curriculum design, micro credentials and workforce development initiatives. As an advisor to The Education Design Studio at The University of Pennsylvania, The Coalition of Schools Educating Boys of Color, and The State of NJ AI Task Force, Mary has been on the leading edge of bringing innovative solutions to education for two decades.