This repository contains the code for a modular Socratic chatbot, written in Python, to be used on the Lambda Feedback platform.
More details about the chatbot's behaviour can be found in the User Documentation.
The chat function consumes the muEd API request schema: the `context`, `user`, and `messages` fields in incoming requests follow the muEd format and are translated into a tutoring prompt by `src/agent/context.py`.
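As a rough illustration of that translation (the real logic lives in `src/agent/context.py`; the function below is a simplified sketch, not the repository's implementation), the field names come from the muEd request schema shown later in this README:

```python
def context_to_prompt(context: dict) -> str:
    """Flatten a muEd context dict into tutoring prompt text (illustrative only)."""
    question = context.get("question", {})
    lines = [
        f"Question: {question.get('title', '')}",
        f"Teacher guidance: {question.get('guidance', '')}",
        f"Conversation summary so far: {context.get('summary', '')}",
    ]
    return "\n".join(lines)
```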
To deploy to production, see the Deploy to Lambda Feedback section below.
This chapter helps you to quickly set up a new Python chat module function using this repository.
> [!NOTE]
> To develop this function further, you will require the following environment variables in your `.env` file:

If you use OpenAI:

```
OPENAI_API_KEY
OPENAI_MODEL
```

If you use GoogleAI:

```
GOOGLE_AI_API_KEY
GOOGLE_AI_MODEL
```

> [!NOTE]
> If you decide to use another endpoint, such as Azure, Ollama, or any other, please update the GitHub workflow files to use the right secrets and variables for testing.
If you use Azure OpenAI:

```
AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
AZURE_OPENAI_EMBEDDING_3072_DEPLOYMENT
AZURE_OPENAI_EMBEDDING_1536_DEPLOYMENT
AZURE_OPENAI_EMBEDDING_3072_MODEL
AZURE_OPENAI_EMBEDDING_1536_MODEL
```
For monitoring of the LLM calls (follow the setup instructions on the LangSmith website):

```
LANGCHAIN_TRACING_V2
LANGCHAIN_ENDPOINT
LANGCHAIN_API_KEY
LANGCHAIN_PROJECT
```
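If you want to sanity-check your `.env` before running anything, a minimal sketch along these lines works. It assumes the `python-dotenv` package and is not part of the repository; swap in the variable names for the provider you actually use.

```python
# check_env.py -- illustrative only, not shipped with this repository.
import os

from dotenv import load_dotenv

load_dotenv()  # read the .env file from the current working directory

# Variable names mirror the lists above; pick the group for your provider.
required = ["OPENAI_API_KEY", "OPENAI_MODEL"]          # OpenAI
# required = ["GOOGLE_AI_API_KEY", "GOOGLE_AI_MODEL"]  # GoogleAI

missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")
```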
In GitHub, choose Use this template > Create a new repository in the repository toolbar.
Choose the owner, and pick a name for the new repository.
> [!IMPORTANT]
> If you want to deploy the chat function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.
Set the visibility to Public or Private.
> [!IMPORTANT]
> If you want to use GitHub deployment protection rules, make sure to set the visibility to Public.
Click on Create repository.
Clone the new repository to your local machine using the following command:
```bash
git clone <repository-url>
```

You're ready to start developing your chat function. Head over to the Development section to learn more.
You will have to add your API key and LLM model name to the GitHub repository settings. Under Settings > Secrets and variables > Actions, add the API key as a secret and the LLM model name as a variable. Use the same names as in your `.env` file, and update the `.github/workflows/{staging-deploy,production-deploy,test-lint}.yml` files so they reference the correct parameter names.

For more information, see the Deploy to Lambda Feedback section below.
In the README.md file, change the title and description so it fits the purpose of your chat function.
Also, don't forget to update or delete the Quickstart chapter from the README.md file after you've completed these steps.
To modify the behaviour of the chatbot, edit the prompts in `src/agent/prompts.py`. If you want to create a custom agent, copy or update `agent.py` in `src/agent/` and edit it to match your LLM agent requirements, then import the new invocation in `module.py`.
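A behaviour change can be as small as editing one system prompt string. The constant name below is illustrative, not necessarily what `prompts.py` actually defines:

```python
# src/agent/prompts.py -- the constant name is an assumption; check the file
# for the names that agent.py actually imports.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor on Lambda Feedback. Do not give away the final "
    "answer; respond with guiding questions that build on the student's last "
    "message and the question context you are given."
)
```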
Your agent can be based on an LLM hosted anywhere. OpenAI, Google AI, Azure OpenAI, and Ollama are available out of the box via src/agent/llm_factory.py, and you can add your own provider there too.
The agent uses two separate LLM instances: `self.llm` for chat responses and `self.summarisation_llm` for conversation summarisation and style analysis. By default both use the same provider, but you can point them at different models (e.g. a cheaper or faster model for summarisation) by changing the class in `agent.py`.
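For example, with the OpenAI provider you could keep a full-size model for replies and a smaller one for summarisation. This is a minimal sketch using `ChatOpenAI` from the `langchain-openai` package (it assumes `OPENAI_API_KEY` is set); the surrounding class is illustrative, and the factory classes in `src/agent/llm_factory.py` may expose this differently:

```python
from langchain_openai import ChatOpenAI


class Agent:
    def __init__(self) -> None:
        # Full-size model for student-facing replies.
        self.llm = ChatOpenAI(model="gpt-4o")
        # Cheaper, faster model for summarisation and style analysis.
        self.summarisation_llm = ChatOpenAI(model="gpt-4o-mini")
```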
```
.
├── .github/workflows/
│   ├── test-lint.yml             # runs pytest on pull requests
│   ├── staging-deploy.yml        # tests + deploys to STAGING on push to main
│   ├── production-deploy.yml     # manual dispatch: version bump, tag, release, deploy to PROD
│   └── test-report.yml           # gathers Pytest Report of function tests
├── docs/                         # docs for devs and users
├── src/
│   ├── agent/
│   │   ├── agent.py              # LangGraph stateful agent logic
│   │   ├── context.py            # converts muEd context dicts to LLM prompt text
│   │   ├── llm_factory.py        # factory classes for each LLM provider
│   │   └── prompts.py            # system prompts defining the behaviour of the chatbot
│   └── module.py
└── tests/                        # contains all tests for the chat function
    ├── example_inputs/           # muEd example payloads for end-to-end tests
    ├── manual_agent_requests.py  # allows testing of the docker container through API requests
    ├── manual_agent_run.py       # allows testing of any LLM agent on a couple of example inputs
    ├── utils.py                  # shared test helpers
    ├── test_example_inputs.py    # pytests for the example input files
    ├── test_index.py             # pytests
    └── test_module.py            # pytests
```

To test your function, you can run the unit tests, call the code directly through a Python script, or build the respective chat function Docker container locally and call it through an API request. Below you can find details on those processes.
You can run the unit tests using pytest.
```bash
pytest
```

You can run the Python function itself. Make sure to have a main function in either `src/module.py` or `index.py`.
```bash
python src/module.py
```

You can also use the `manual_agent_run.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
```bash
python tests/manual_agent_run.py
```

To build the Docker image, run the following command:
```bash
docker build -t llm_chat .
```

To run the Docker image, use the following command:
```bash
docker run -e OPENAI_API_KEY={your key} -e OPENAI_MODEL={your LLM model name} -p 8080:8080 llm_chat
```

or, passing your whole `.env` file:

```bash
docker run --env-file .env -it --name my-lambda-container -p 8080:8080 llm_chat
```

This will start the chat function, expose it on port 8080, and make it reachable with curl:
```bash
curl --location 'http://localhost:8080/2015-03-31/functions/function/invocations' \
--header 'Content-Type: application/json' \
--data '{"body":"{\"messages\": [{\"role\": \"USER\", \"content\": \"hi\"}]}"}'
```

In the `tests/` folder you can find the `manual_agent_requests.py` script, which calls the POST URL of the running Docker container. It reads any input file that follows the expected schema, and you can use it to test your curl calls to the chatbot.
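If you prefer Python over curl, a minimal sketch of the same call looks like this (it assumes the `requests` package is installed; the payload matches the minimal request shown below):

```python
import json

import requests

# The Lambda Runtime Interface Emulator expects the request stringified inside "body".
URL = "http://localhost:8080/2015-03-31/functions/function/invocations"
payload = {"body": json.dumps({"messages": [{"role": "USER", "content": "hi"}]})}

response = requests.post(URL, json=payload, timeout=60)
print(response.json())
```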
POST URL: `http://localhost:8080/2015-03-31/functions/function/invocations`

Per the muEd ChatRequest schema, only `messages` is required; `conversationId`, `user`, `context`, and `configuration` are all optional.
Minimal request, containing only the required components (stringified within `body` for the AWS Lambda Runtime Interface Emulator):

```json
{"body":"{\"messages\": [{\"role\": \"USER\", \"content\": \"hi\"}]}"}
```

Full request as Lambda Feedback sends it, with all optional fields populated:
```json
{
  "conversationId": "<uuid>",
  "messages": [
    { "role": "USER", "content": "<previous user message>" },
    { "role": "ASSISTANT", "content": "<previous assistant reply>" },
    { "role": "USER", "content": "<current message>" }
  ],
  "user": {
    "type": "LEARNER",
    "preference": {
      "conversationalStyle": "<stored style string>"
    },
    "taskProgress": {
      "timeSpentOnQuestion": "30 minutes",
      "accessStatus": "a good amount of time spent on this question today.",
      "markedDone": "This question is still being worked on.",
      "currentPart": {
        "position": 0,
        "timeSpentOnPart": "10 minutes",
        "markedDone": "This part is not marked done.",
        "responseAreas": [
          {
            "responseType": "EXPRESSION",
            "totalSubmissions": 3,
            "wrongSubmissions": 2,
            "latestSubmission": {
              "submission": "<student's last answer>",
              "feedback": "<feedback text from evaluator>",
              "answer": "<reference answer used for evaluation>"
            }
          }
        ]
      }
    }
  },
  "context": {
    "summary": "<compressed conversation history>",
    "set": {
      "title": "Fundamentals",
      "number": 2,
      "description": "<set description>"
    },
    "question": {
      "title": "Understanding Polymorphism",
      "number": 3,
      "guidance": "<teacher guidance>",
      "content": "<master question content>",
      "estimatedTime": "15-25 minutes",
      "parts": [
        {
          "position": 0,
          "content": "<part prompt>",
          "answerContent": "<part answer>",
          "workedSolutionSections": [
            { "position": 0, "title": "Step 1", "content": "..." }
          ],
          "structuredTutorialSections": [
            { "position": 0, "title": "Hint", "content": "..." }
          ],
          "responseAreas": [
            {
              "position": 0,
              "responseType": "EXPRESSION",
              "answer": "<reference answer>",
              "preResponseText": "<label shown before input>"
            }
          ]
        }
      ]
    }
  }
}
```

Response:
```json
{
  "output": {
    "role": "ASSISTANT",
    "content": "<assistant reply text>"
  },
  "metadata": {
    "summary": "<updated conversation summary>",
    "conversationalStyle": "<updated style string>",
    "processingTimeMs": 1234
  }
}
```

Deploying the chat function to Lambda Feedback is simple and straightforward, as long as the repository is within the Lambda Feedback organization.
The pipeline has two environments: staging and production.
Staging — Pushing to the main branch triggers the Staging deploy workflow, which runs the test suite and (on success) deploys the chat function to AWS staging. After deploying, please contact one of the Lambda Feedback admins to allow the function to be accessible on staging.lambdafeedback.com.
> [!WARNING]
> The staging environment of the platform is always in use and may include beta/in-testing features that can cause unexpected issues.
Production — Once you are happy with the staging deployment, run the Production deploy workflow manually from the GitHub Actions tab. Pick a version-bump (patch/minor/major); the workflow will redeploy staging, then pause on a manual approval gate (the production-override GitHub Environment, reviewed by a Lambda Feedback admin), then create a vX.Y.Z git tag + GitHub Release and deploy to the main Lambda Feedback platform.
Pull requests — The Test and Lint workflow runs the test suite on every PR; no deploy.
> [!NOTE]
> Once a deployment has been successful, share your necessary environment variables (e.g. API key and LLM model) with one of the Lambda Feedback team members.
If your chat function works fine when run locally but not when containerized, there is more to consider. Here are some common issues and solution approaches:
Run-time dependencies
Make sure that all run-time dependencies are installed in the Docker image.
- Python packages: Make sure to add the dependency to the `requirements.txt` or `pyproject.toml` file, and run `pip install -r requirements.txt` or `poetry install` in the Dockerfile.
- System packages: If you need to install system packages, add the installation command to the Dockerfile.
- ML models: If your chat function depends on ML models, make sure to include them in the Docker image.
- Data files: If your chat function depends on data files, make sure to include them in the Docker image.
If you want to pull changes from the template repository to your repository, follow these steps:
- Add the template repository as a remote:
  ```bash
  git remote add template https://github.com/lambda-feedback/chat-function-boilerplate.git
  ```

- Fetch changes from all remotes:

  ```bash
  git fetch --all
  ```

- Merge changes from the template repository:

  ```bash
  git merge template/main --allow-unrelated-histories
  ```

> [!WARNING]
> Make sure to resolve any conflicts and keep the changes you want to keep.