Support Center

Get help with Golden Retriever quickly and easily

Common Troubleshooting Steps

Bot not responding in Teams/Webex

  1. Check if the bot is properly installed in your Teams/Webex workspace
  2. Verify the bot has necessary permissions (read messages, send messages)
  3. Ensure you're using the correct command format: /ask [your question]
  4. Check service status: systemctl status golden-retriever
  5. Review logs: tail -f /var/log/golden-retriever/app.log
  6. Restart the service if needed: systemctl restart golden-retriever
Quick Fix: Try mentioning the bot directly with @GoldenRetriever before your question.
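
If those steps don't isolate the problem, a quick end-to-end check can confirm the service itself is healthy before you debug the Teams/Webex side. A minimal sketch; the health endpoint on port 8080 is taken from the Docker installation below, so adjust host and port to your deployment.

# Confirm the service is running and the API answers
systemctl status golden-retriever --no-pager
curl -s http://localhost:8080/health

# Watch the log while you send a test message from Teams/Webex
tail -f /var/log/golden-retriever/app.log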

Slow response times (>10 seconds)

  1. Check system resources: htop or docker stats
  2. Verify Redis cache is running: redis-cli ping
  3. Check if embeddings are precomputed: python check_embeddings.py
  4. Monitor PostgreSQL activity: psql -c 'SELECT * FROM pg_stat_activity;'
  5. Ensure GPU is properly utilized (if using): nvidia-smi
  6. Consider scaling resources or using a smaller model
Performance Tip: Enable query caching in settings to speed up repeated questions.
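
To tell whether the delay comes from retrieval, the cache, or the model, it helps to time a request directly against the API and bypass Teams/Webex entirely. A minimal sketch: port 8080 comes from the Docker installation below, but the /api/ask path is an assumed placeholder rather than a documented endpoint, so substitute your deployment's query route.

# Confirm the cache answers first
redis-cli ping    # expect: PONG

# Time the same query twice; with query caching enabled, the second run should be much faster
# NOTE: /api/ask is an assumed endpoint name, not confirmed by these docs
time curl -s -X POST http://localhost:8080/api/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "test question"}'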

Documents not being indexed

  1. Check supported file formats (PDF, DOCX, TXT, HTML, MD)
  2. Verify file size limits (default: 100MB per file)
  3. Check document processing queue: celery inspect active
  4. Review indexing logs: grep "ERROR" /var/log/golden-retriever/indexer.log
  5. Manually trigger reindexing: python manage.py reindex --force
  6. Check storage permissions: ls -la /data/documents/
Common Issue: Corrupted PDFs may fail silently. Try converting to text first.
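
For PDFs that fail silently, converting them to plain text before indexing usually works. The sketch below uses pdftotext from poppler-utils; report.pdf stands in for your file, and the /data/documents/ path matches step 6 above.

# Install the converter (Debian/Ubuntu)
sudo apt install poppler-utils

# Convert the suspect PDF to plain text, then force a reindex
pdftotext /data/documents/report.pdf /data/documents/report.txt
python manage.py reindex --force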

Incorrect or hallucinated answers

  1. Verify RAG mode is enabled: grep "RAG_MODE" /etc/golden-retriever/config.yml
  2. Check retrieval threshold settings (should be >0.7)
  3. Review which documents were used: Enable debug mode in responses
  4. Ensure document quality: Remove conflicting or outdated documents
  5. Adjust chunk size and overlap parameters
  6. Consider switching to a more capable LLM model
Best Practice: Golden Retriever should ONLY answer from your documents. If it's making things up, RAG mode may be disabled.
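
Steps 1 and 2 can be checked in one pass by grepping the config. A minimal sketch: RAG_MODE appears in step 1, but RETRIEVAL_THRESHOLD is an assumed key name, so match the pattern to whatever your config.yml actually calls it.

# Check RAG mode and the retrieval threshold together
grep -E "RAG_MODE|RETRIEVAL_THRESHOLD" /etc/golden-retriever/config.yml

# Healthy output would look something like this (threshold above 0.7):
#   RAG_MODE: true
#   RETRIEVAL_THRESHOLD: 0.75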

Connection errors to LLM backend

  1. For local models: Check Ollama status: ollama list
  2. Verify model is downloaded: ollama pull llama3.2
  3. For cloud models: Validate API keys in /etc/golden-retriever/.env
  4. Test connectivity: curl -X POST http://localhost:11434/api/generate (see the full example after this list)
  5. Check firewall rules if using external APIs
  6. Review proxy settings if behind corporate network
Fallback Option: Configure multiple LLM backends for automatic failover.
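
The bare curl in step 4 only proves the port is open; Ollama's /api/generate expects a JSON body with a model name and prompt, so a full round trip looks like this (llama3.2 matches the model pulled in step 2):

# Send a complete generation request to the local Ollama backend
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'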

Memory/context issues in conversations

  1. Check conversation history limit (default: 5 exchanges)
  2. Clear Redis cache if corrupted: redis-cli FLUSHDB
  3. Verify session management: python check_sessions.py
  4. Increase context window size in model settings
  5. Enable conversation summarization for long chats
  6. Check user session timeout settings
Memory Tip: Use /clear command to reset conversation context.
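
When a single user's context is stuck, it can be gentler to remove just their session keys than to FLUSHDB the whole cache. A minimal sketch; the session:* key pattern is an assumption about how Golden Retriever names its keys, so confirm it with a scan first.

# List session keys without blocking Redis (safer than KEYS in production)
redis-cli --scan --pattern 'session:*'

# Delete one stuck session instead of flushing everything
redis-cli DEL 'session:<user-id>'   # <user-id> is a placeholder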

Quick Installation Guide

Docker Installation (Recommended)

# Clone repository
git clone https://github.com/nololabs/golden-retriever.git
cd golden-retriever

# Configure environment
cp .env.example .env
nano .env  # Add your API keys and settings

# Start with Docker Compose
docker-compose up -d

# Verify installation
docker ps
curl http://localhost:8080/health
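
For the "Configure environment" step, the values below show the general shape of a .env file. Every variable name here is an assumption for illustration; the authoritative names are in .env.example.

# Illustrative .env values only; real variable names live in .env.example
LLM_BACKEND=ollama                   # assumed key; or a cloud provider
OLLAMA_HOST=http://localhost:11434   # assumed key; default Ollama port
DATABASE_URL=postgres://user:pass@localhost:5432/golden_retriever   # assumed key
REDIS_URL=redis://localhost:6379/0   # assumed key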

Manual Installation

# Install dependencies
sudo apt update
sudo apt install python3.10 postgresql redis-server

# Install Ollama (for local LLM)
curl -fsSL https://ollama.ai/install.sh | sh

# Setup Golden Retriever
pip install -r requirements.txt
python setup.py install

# Initialize database
python manage.py migrate
python manage.py init

# Start services
sudo systemctl start golden-retriever
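
As with the Docker route, verify the service before wiring up Teams/Webex. The health endpoint and port are taken from the Docker section above, so adjust them if your manual install binds elsewhere.

# Verify the service started cleanly
systemctl status golden-retriever --no-pager
curl http://localhost:8080/health
tail -n 20 /var/log/golden-retriever/app.log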

Still Need Help?

Enterprise Support

24/7 support for enterprise customers

📞 1-800-NOLO

Email Support

Response within 24 hours

✉️ support@nololabs.com

Community Forum

Get help from the community

💬 Visit Forum
System Status

All Systems Operational

View Status Page →