How to Use Claude Projects with Custom System Prompts and Knowledge Files for SaaS Customer Support
Build a Reusable Customer Support Assistant with Claude Projects
Claude Projects let you combine custom system prompts with uploaded knowledge files to create persistent, context-aware AI assistants. Instead of repeating instructions every conversation, you define your assistant’s behavior once and reuse it across your entire support team. This guide walks you through building a production-ready customer support assistant for your SaaS product using Claude Projects and the Anthropic API.
Prerequisites
- An Anthropic account with API access (Claude Pro, Team, or Enterprise plan for Projects via claude.ai; API key for programmatic access)
- Your SaaS product documentation in text-based formats (TXT, PDF, MD, CSV)
- Python 3.8+ installed (for API integration examples)
- Basic familiarity with REST APIs and prompt engineering
Step 1: Create Your Claude Project
- Navigate to claude.ai and log in to your account.
- In the left sidebar, click Projects, then click Create Project.
- Name your project descriptively, for example: Acme SaaS — Customer Support Bot.
- Add a project description that clarifies the assistant’s purpose for your team members.
Step 2: Write a Custom System Prompt
The system prompt defines your assistant’s personality, boundaries, and response format. Click Set custom instructions inside your project and paste a prompt like the following:
You are the official customer support assistant for Acme SaaS, a project management platform.
RULES:
- Answer ONLY questions related to Acme SaaS features, billing, integrations, and troubleshooting.
- If a question falls outside your knowledge, say: “I don’t have that information. Let me connect you with our support team at support@acme-saas.com.”
- Never fabricate features or pricing that aren’t in your knowledge base.
- Use a friendly, professional tone. Keep answers concise (under 200 words unless a detailed walkthrough is needed).
- When providing steps, use numbered lists.
- For billing questions, always remind the user to verify details in their account dashboard.
RESPONSE FORMAT:
- Start with a direct answer to the question.
- Provide step-by-step instructions when applicable.
- End with: “Was this helpful? Reply with your follow-up question or contact support@acme-saas.com.”
CONTEXT: You have access to the complete Acme SaaS documentation, FAQ database, and changelog. Use these files as your single source of truth.
Step 3: Upload Knowledge Files
Knowledge files give your assistant grounded, accurate context. Inside the project, click **Add content** and upload your documentation.
Recommended File Structure
| File | Purpose | Format |
|---|---|---|
| product-docs.md | Complete feature documentation | Markdown |
| faq-database.csv | Common questions and verified answers | CSV |
| pricing-plans.txt | Current pricing tiers and limits | Plain text |
| changelog.md | Recent product updates and release notes | Markdown |
| troubleshooting-guide.md | Known issues and resolution steps | Markdown |
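Before uploading, it can help to sanity-check the combined size of these files against the 200K-token project knowledge limit mentioned in the troubleshooting section below. The sketch here uses the rough heuristic of ~4 characters per token for English prose; the function names and the file list are illustrative, mirroring the table above.

```python
import os

# Hypothetical filenames matching the table above; adjust to your layout.
KNOWLEDGE_FILES = [
    "product-docs.md",
    "faq-database.csv",
    "pricing-plans.txt",
    "changelog.md",
    "troubleshooting-guide.md",
]

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English prose."""
    return len(text) // 4

def check_knowledge_base(paths, limit=200_000) -> bool:
    """Print a per-file token estimate and return True if under the limit."""
    total = 0
    for path in paths:
        if not os.path.exists(path):
            print(f"missing: {path}")
            continue
        with open(path, "r", encoding="utf-8") as f:
            tokens = estimate_tokens(f.read())
        print(f"{path}: ~{tokens:,} tokens")
        total += tokens
    print(f"total: ~{total:,} of {limit:,} tokens")
    return total <= limit
```

If the total runs over the limit, trim redundancy before uploading rather than splitting content across many small files.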
Step 4: Integrate via the Anthropic API
For programmatic access, use the Anthropic Python SDK to replicate the project behavior in your own application. Install the SDK first:
```bash
pip install anthropic
```
Then create your support assistant with a system prompt and context documents loaded at runtime:
```python
import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")

# Load your knowledge base
with open("product-docs.md", "r") as f:
    product_docs = f.read()
with open("faq-database.csv", "r") as f:
    faq_data = f.read()

system_prompt = """You are the official customer support assistant for Acme SaaS.
Answer ONLY from the provided documentation. If unsure, escalate to support@acme-saas.com.
Keep answers concise and use numbered steps for instructions.

""" + product_docs + "\n" + faq_data

def get_support_response(user_question: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=system_prompt,
        messages=[
            {"role": "user", "content": user_question}
        ],
    )
    return message.content[0].text

# Example usage
response = get_support_response("How do I upgrade my plan?")
print(response)
```
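In production, transient API failures (rate limits, timeouts) are worth handling with retries. The helper below is a minimal sketch of exponential backoff, not part of the Anthropic SDK; `with_retries` is a hypothetical name, and a real implementation would catch the SDK's specific exception classes rather than bare `Exception`.

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0):
    """Call fn(); on failure, retry with exponential backoff.

    Minimal sketch: in production, catch the SDK's specific exceptions
    (e.g. rate-limit errors) instead of bare Exception.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage, assuming the get_support_response function defined above:
# answer = with_retries(lambda: get_support_response("How do I upgrade my plan?"))
```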
Step 5: Add Multi-Turn Conversation Support
Real support conversations span multiple messages. Maintain conversation history to enable follow-ups:
```python
conversation_history = []

def chat(user_message: str) -> str:
    conversation_history.append({"role": "user", "content": user_message})
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        system=system_prompt,
        messages=conversation_history,
    )
    assistant_reply = message.content[0].text
    conversation_history.append({"role": "assistant", "content": assistant_reply})
    return assistant_reply

# Multi-turn example
print(chat("What integrations do you support?"))
print(chat("How do I set up the Slack integration specifically?"))
```
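Unbounded history will eventually exceed the model's context window in long support sessions. A simple mitigation is to keep only the most recent exchanges; this sketch assumes a cap of 10 user/assistant pairs, which is an arbitrary choice you should tune to your typical conversation length.

```python
def trim_history(history, max_turns=10):
    """Keep only the most recent exchanges.

    max_turns counts user+assistant message pairs; older messages are
    dropped so the request stays within the model's context window.
    """
    max_messages = max_turns * 2
    return history[-max_messages:]
```

Call `trim_history(conversation_history)` before each `client.messages.create` call to bound request size.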
Step 6: Test and Iterate
- Open your Claude Project and start a new conversation.
- Ask 10-15 representative customer questions covering features, billing, troubleshooting, and edge cases.
- Verify that answers reference your uploaded documents accurately.
- Test boundary cases — ask questions outside your product scope to confirm the assistant declines gracefully.
- Refine your system prompt based on any incorrect or off-tone responses.
Pro Tips for Power Users
- Version your system prompts: Keep prompt versions in a Git repository. When you update the prompt in Claude Projects, tag the version so you can roll back if response quality drops.
- Use XML tags in knowledge files: Wrap distinct sections in tags like `<faq>` or `<pricing>` to help Claude retrieve the right context more reliably.
- Set temperature to 0 for support: In API calls, add `temperature=0` to reduce creative variation and get the most deterministic, factual answers.
- Create separate projects per support tier: Use one project for general support and another for technical/escalation support with deeper API documentation.
- Refresh knowledge files monthly: After each product release, update your changelog and docs files to keep responses current.
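The XML-tagging tip can be automated with a small preprocessing step before you assemble a knowledge file. This is a sketch: `wrap_sections` is a hypothetical helper, and the `faq`/`pricing` tag names are illustrative, not required by Claude.

```python
def wrap_sections(sections):
    """Wrap each named section of a knowledge file in XML-style tags.

    sections: dict mapping a tag name (e.g. "faq") to that section's text.
    Tag names here are illustrative; pick names that describe your content.
    """
    parts = []
    for tag, body in sections.items():
        parts.append(f"<{tag}>\n{body}\n</{tag}>")
    return "\n\n".join(parts)
```

Write the result to a single `.md` or `.txt` file and upload it as project knowledge.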
Troubleshooting Common Issues
| Problem | Cause | Solution |
|---|---|---|
| Assistant ignores uploaded documents | System prompt doesn't reference the knowledge base | Add explicit instructions like "Answer using the provided documentation" to your system prompt |
| Responses are too long or rambling | No length constraint in system prompt | Add a word limit rule, e.g., "Keep answers under 150 words unless a walkthrough is needed" |
| Assistant fabricates features | Knowledge files are incomplete or vague | Add a strict grounding rule: "If the answer is not in your knowledge base, say so." Update docs to cover gaps. |
| Token limit exceeded on upload | Knowledge files exceed the 200K token project limit | Consolidate files, remove redundancy, and prioritize high-traffic support topics |
| API returns 401 Unauthorized | Invalid or expired API key | Regenerate your key at console.anthropic.com and update YOUR_API_KEY |
Can I share a Claude Project with my entire support team?
Yes. On Claude Team and Enterprise plans, you can share projects with team members. Each team member gets access to the same system prompt and knowledge files, ensuring consistent responses across your support organization. Individual conversations remain private unless explicitly shared.
How often should I update the knowledge files in my project?
Update knowledge files whenever your product ships significant changes — new features, pricing updates, deprecated functionality, or newly discovered bugs. A good cadence is after every product release or at minimum once per month. Outdated knowledge files are the most common cause of inaccurate support responses.
What is the difference between using Claude Projects on claude.ai and the API approach?
Claude Projects on claude.ai provides a visual interface for uploading files, writing system prompts, and chatting — ideal for non-technical team members. The API approach using the Anthropic SDK gives you programmatic control to embed the same assistant behavior into your own application, chatbot widget, or helpdesk integration. Both methods use the same underlying model and can achieve equivalent results; choose based on whether your team needs a standalone tool or an embedded solution.