This guide provides comprehensive installation instructions for the Trixy Voice Assistant project across different deployment scenarios and platforms.
# Clone the repository
git clone <repository-url>
cd trixy3/source
# Choose your installation based on use case:
# For basic functionality
pip install -r requirements.txt
# For client/satellite deployment
pip install -r requirements-client.txt
# For full server deployment
pip install -r requirements-server.txt
# For ML training and development
pip install -r requirements-ml.txt
# For development environment
pip install -r requirements-dev.txt
# For optional features (as needed)
pip install -r requirements-optional.txt
| File | Purpose | Size | Use Case |
|---|---|---|---|
| `requirements.txt` | Core functionality | ~200MB | Base system, all modes |
| `requirements-client.txt` | Minimal client | ~500MB | Satellite devices, edge deployment |
| `requirements-server.txt` | Full server | ~2-3GB | Central hub, full features + TUI |
| `requirements-ml.txt` | ML training | ~3-4GB | Model training, development |
| `requirements-dev.txt` | Development | ~4-5GB | Full development environment |
| `requirements-optional.txt` | Extended features | Variable | Cloud services, hardware-specific |
# System dependencies
sudo apt-get update
sudo apt-get install -y \
python3 python3-pip python3-venv \
build-essential cmake \
portaudio19-dev libasound2-dev \
ffmpeg libavcodec-extra \
git
# For server deployment
sudo apt-get install -y redis-server
# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate
# Install based on deployment mode
pip install --upgrade pip
pip install -r requirements-server.txt # or other requirements file
# System dependencies
sudo apt-get update
sudo apt-get install -y \
python3 python3-pip python3-venv \
portaudio19-dev python3-pyaudio \
libatlas-base-dev \
ffmpeg
# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate
# Install ARM-compatible PyTorch
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu
# Install client requirements
pip install -r requirements-client.txt
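If you script the Raspberry Pi bootstrap, it can help to detect ARM hosts so the CPU-only wheel index is used only where needed. A minimal sketch — the architecture strings cover the common cases, not an exhaustive list:

```python
import platform

def is_arm_host() -> bool:
    """Return True on ARM machines (e.g. Raspberry Pi), where the
    CPU-only PyTorch wheel index should be used."""
    return platform.machine().lower() in ("aarch64", "arm64", "armv7l", "armv6l")
```

A bootstrap script can branch on this to pick the right `--index-url` for the PyTorch install.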
# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# System dependencies
brew install portaudio ffmpeg redis
# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate
# Install requirements
pip install --upgrade pip
pip install -r requirements-server.txt # or other requirements file
# Install Python 3.8+ from python.org
# Download and install Git
# Download and install Redis (optional, for server)
# Create virtual environment
python -m venv trixy-env
trixy-env\Scripts\activate
# Install requirements
pip install --upgrade pip
pip install -r requirements-server.txt
Minimal installation for satellite devices:
# Lightweight installation
pip install -r requirements-client.txt
# Verify installation
python main.py client --debug
Hardware Requirements:
Full server with TUI and all features:
# Full server installation
pip install -r requirements-server.txt
# Optional: Redis for caching
sudo systemctl start redis-server
sudo systemctl enable redis-server
# Verify installation
python main.py server --debug
Hardware Requirements:
For model training and development:
# Full ML environment
pip install -r requirements-ml.txt
# For GPU support (NVIDIA)
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
# Verify GPU support
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
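Application code can then choose a device defensively. A small sketch that also tolerates a missing torch install, so it is safe to run before the ML requirements are in place:

```python
def pick_device() -> str:
    """Return 'cuda' when PyTorch reports a usable GPU, else 'cpu'.

    Falls back to 'cpu' if torch is not installed at all.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"
```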
Hardware Requirements:
Complete development setup:
# Development environment
pip install -r requirements-dev.txt
# Set up pre-commit hooks
pre-commit install
# Verify development tools
black --version
mypy --version
pytest --version
# Install CUDA-compatible PyTorch
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
# Install additional GPU utilities
pip install cupy-cuda11x # Match your CUDA version
pip install nvidia-ml-py3
# Verify CUDA setup
python -c "import torch; print(f'CUDA devices: {torch.cuda.device_count()}')"
# Intel Math Kernel Library
conda install mkl # or pip install mkl
# Intel Extension for PyTorch
pip install intel-extension-for-pytorch
Install optional features as needed:
# Cloud services
pip install boto3 google-cloud-speech azure-cognitiveservices-speech
# Computer vision
pip install opencv-python mediapipe face-recognition
# Advanced audio processing
pip install aubio essentia pyroomacoustics
# Web interface
pip install streamlit gradio dash
# IoT integration
pip install paho-mqtt gpiozero
# Full optional features
pip install -r requirements-optional.txt
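Because optional features are installed piecemeal, code can probe which ones are present without importing them. A sketch — the feature names are illustrative; only the module names come from the packages listed above:

```python
import importlib.util

# Illustrative feature -> top-level module mapping; the module names
# correspond to the optional packages installed above.
OPTIONAL_FEATURES = {
    "cloud_aws": "boto3",
    "vision": "cv2",
    "mqtt": "paho",
    "web_ui": "streamlit",
}

def available_features() -> list:
    """Return the optional features whose packages are importable."""
    return [
        name
        for name, module in OPTIONAL_FEATURES.items()
        if importlib.util.find_spec(module) is not None
    ]
```

`find_spec` checks importability without actually importing the package, so the probe stays cheap even for heavy dependencies.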
# Run initial setup
python main.py server --debug
# This will create necessary directories:
# - config/
# - models/
# - plugins/
# - assets/
# - trainer/
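If you prefer to create (or verify) this layout yourself, e.g. in a provisioning script, a sketch using only the directory names listed above:

```python
from pathlib import Path

# Directories the initial run creates, per the list above.
REQUIRED_DIRS = ["config", "models", "plugins", "assets", "trainer"]

def ensure_dirs(base: str = ".") -> list:
    """Create any missing directories and return the names created."""
    created = []
    for name in REQUIRED_DIRS:
        path = Path(base) / name
        if not path.exists():
            path.mkdir(parents=True)
            created.append(name)
    return created
```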
The system will create default configuration files:
- `config/server_config.json` - Server configuration
- `config/client_config.json` - Client configuration
- `config/standalone_config.json` - Standalone configuration
# Test core functionality
python -c "from trixy_core import create_application; print('Core import successful')"
# Test audio processing
python -c "import torch, torchaudio, numpy; print('Audio processing ready')"
# Test network components
python -c "from trixy_core.network import *; print('Network components ready')"
# Run integration test
python test_integration.py
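The individual `python -c` checks can also be wrapped in one script. A sketch — the check statements here are examples, not the project's actual test matrix:

```python
import subprocess
import sys

# Example checks only; swap in the imports your deployment actually needs.
CHECKS = {
    "numeric": "import numpy",
    "audio_stdlib": "import wave",
}

def run_checks() -> dict:
    """Run each import in a subprocess and report pass/fail per check."""
    results = {}
    for name, statement in CHECKS.items():
        proc = subprocess.run(
            [sys.executable, "-c", statement], capture_output=True
        )
        results[name] = proc.returncode == 0
    return results
```

Running each check in a subprocess keeps one broken import from aborting the whole report.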
# Audio latency test
python -m trixy_core.ml.performance_monitor
# Memory usage test
python -m memory_profiler main.py server --debug
# Network throughput test
python network_demo.py
# Linux: Audio device permissions
sudo usermod -a -G audio $USER
# Logout and login again
# Test audio devices
python -c "import sounddevice; print(sounddevice.query_devices())"
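A slightly more defensive variant of the device check, which degrades gracefully when `sounddevice` or the underlying PortAudio library is missing:

```python
def list_input_devices() -> list:
    """Return names of audio input devices, or [] if audio is unavailable."""
    try:
        import sounddevice as sd
    except (ImportError, OSError):
        # OSError is raised when the PortAudio shared library is missing.
        return []
    return [
        device["name"]
        for device in sd.query_devices()
        if device["max_input_channels"] > 0
    ]
```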
# Uninstall and reinstall PyTorch
pip uninstall torch torchaudio
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu
# Fix directory permissions
chmod -R 755 trixy_core/
chmod +x main.py
# Test network connectivity
python -c "import socket; s=socket.socket(); s.bind(('localhost', 2101)); print('Port 2101 available')"
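The one-liner above leaves the socket open; a reusable variant that cleans up after itself:

```python
import socket

def port_available(port: int, host: str = "localhost") -> bool:
    """Return True if the port can be bound, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False
```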
# Use memory-efficient PyTorch
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
# Monitor memory usage
pip install memory-profiler
python -m memory_profiler main.py
# Set optimal thread count
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4
# Use optimized BLAS
pip install mkl
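Thread-count variables must be set before NumPy or PyTorch is imported. A sketch that also caps the count at the host's core count (the default of 4 mirrors the exports above):

```python
import os

def tune_threads(max_threads: int = 4) -> int:
    """Cap BLAS/OpenMP thread counts; call before importing numpy/torch.

    setdefault keeps any values already exported in the environment.
    """
    n = min(max_threads, os.cpu_count() or 1)
    for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS"):
        os.environ.setdefault(var, str(n))
    return n
```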
# Create systemd service file
sudo nano /etc/systemd/system/trixy-server.service
[Unit]
Description=Trixy Voice Assistant Server
After=network.target redis.service
[Service]
Type=simple
User=trixy
WorkingDirectory=/opt/trixy
ExecStart=/opt/trixy/trixy-env/bin/python main.py server
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
# Enable and start service
sudo systemctl enable trixy-server
sudo systemctl start trixy-server
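For a quick liveness probe once the service is up (from monitoring, or a systemd `ExecStartPost` hook), a sketch — port 2101 is the port used in the troubleshooting section and may differ in your config:

```python
import socket

def server_responding(host: str = "127.0.0.1", port: int = 2101,
                      timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the server can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```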
For high-availability deployments:
# Install HAProxy or Nginx
sudo apt-get install nginx
# Configure load balancing for multiple server instances
# See nginx configuration examples in docs/deployment/
# Install monitoring stack
pip install prometheus-client grafana-api
# Set up monitoring endpoints
# See docs/monitoring/ for configuration examples
# Install development requirements
pip install -r requirements-dev.txt
# Set up pre-commit
pre-commit install
# Run pre-commit on all files
pre-commit run --all-files
# Run test suite
pytest tests/
# Run with coverage
pytest --cov=trixy_core tests/
# Run performance benchmarks
pytest --benchmark-only tests/
# Build documentation
cd docs/
make html
# Start documentation server
python -m http.server 8000 --directory _build/html/
For installation issues:
For development setup:
| Python Version | Supported | Notes |
|---|---|---|
| 3.8 | ✅ | Minimum required version |
| 3.9 | ✅ | Recommended |
| 3.10 | ✅ | Recommended |
| 3.11 | ✅ | Latest tested |
| 3.12 | ⚠️ | Some packages may have compatibility issues |
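A startup guard matching this table might look like the following sketch; the cutoff comes from the minimum version above:

```python
import sys

def check_python_version(minimum: tuple = (3, 8)) -> None:
    """Fail fast with a clear message on unsupported interpreters."""
    if sys.version_info < minimum:
        raise RuntimeError(
            f"Trixy requires Python {'.'.join(map(str, minimum))}+, "
            f"found {sys.version.split()[0]}"
        )

check_python_version()
```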
| Platform | Supported | Notes |
|---|---|---|
| Ubuntu 20.04+ | ✅ | Primary development platform |
| Debian 11+ | ✅ | Fully supported |
| Raspberry Pi OS | ✅ | ARM64 support for client mode |
| macOS 11+ | ✅ | Intel and Apple Silicon |
| Windows 10+ | ⚠️ | Basic support, some audio limitations |
This installation guide covers the most common deployment scenarios. For advanced configurations or specific use cases, refer to the detailed documentation in the docs/ directory.