
# Trixy Voice Assistant - Installation Guide

This guide provides comprehensive installation instructions for the Trixy Voice Assistant project across different deployment scenarios and platforms.

## Quick Start

### Basic Installation

```bash
# Clone the repository
git clone <repository-url>
cd trixy3/source

# Choose your installation based on use case:

# For basic functionality
pip install -r requirements.txt

# For client/satellite deployment
pip install -r requirements-client.txt

# For full server deployment
pip install -r requirements-server.txt

# For ML training and development
pip install -r requirements-ml.txt

# For development environment
pip install -r requirements-dev.txt

# For optional features (as needed)
pip install -r requirements-optional.txt
```

## Requirements Files Overview

| File | Purpose | Size | Use Case |
|------|---------|------|----------|
| requirements.txt | Core functionality | ~200MB | Base system, all modes |
| requirements-client.txt | Minimal client | ~500MB | Satellite devices, edge deployment |
| requirements-server.txt | Full server | ~2-3GB | Central hub, full features + TUI |
| requirements-ml.txt | ML training | ~3-4GB | Model training, development |
| requirements-dev.txt | Development | ~4-5GB | Full development environment |
| requirements-optional.txt | Extended features | Variable | Cloud services, hardware-specific |

## Platform-Specific Installation

### Ubuntu/Debian Linux

```bash
# System dependencies
sudo apt-get update
sudo apt-get install -y \
    python3 python3-pip python3-venv \
    build-essential cmake \
    portaudio19-dev libasound2-dev \
    ffmpeg libavcodec-extra \
    git

# For server deployment
sudo apt-get install -y redis-server

# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate

# Install based on deployment mode
pip install --upgrade pip
pip install -r requirements-server.txt  # or other requirements file
```

### Raspberry Pi (ARM64)

```bash
# System dependencies
sudo apt-get update
sudo apt-get install -y \
    python3 python3-pip python3-venv \
    portaudio19-dev python3-pyaudio \
    libatlas-base-dev \
    ffmpeg

# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate

# Install ARM-compatible PyTorch
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu

# Install client requirements
pip install -r requirements-client.txt
```

### macOS

```bash
# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# System dependencies
brew install portaudio ffmpeg redis

# Create virtual environment
python3 -m venv trixy-env
source trixy-env/bin/activate

# Install requirements
pip install --upgrade pip
pip install -r requirements-server.txt  # or other requirements file
```

### Windows

```powershell
# Install Python 3.8+ from python.org
# Download and install Git
# Download and install Redis (optional, for server)

# Create virtual environment
python -m venv trixy-env
trixy-env\Scripts\activate

# Install requirements
pip install --upgrade pip
pip install -r requirements-server.txt
```

## Deployment-Specific Installation

### Client/Satellite Deployment

Minimal installation for satellite devices:

```bash
# Lightweight installation
pip install -r requirements-client.txt

# Verify installation
python main.py client --debug
```

**Hardware Requirements:**

- Minimum 512MB RAM
- ARM64 or x86_64 processor
- Microphone access
- Network connectivity
### Server Deployment

Full server with TUI and all features:

```bash
# Full server installation
pip install -r requirements-server.txt

# Optional: Redis for caching
sudo systemctl start redis-server
sudo systemctl enable redis-server

# Verify installation
python main.py server --debug
```

**Hardware Requirements:**

- Minimum 2GB RAM (4GB+ recommended)
- x86_64 processor with AVX support
- GPU support (optional, for acceleration)
- Network interfaces for client connections

### ML Training Environment

For model training and development:

```bash
# Full ML environment
pip install -r requirements-ml.txt

# For GPU support (NVIDIA)
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

# Verify GPU support
python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}')"
```

**Hardware Requirements:**

- Minimum 8GB RAM (16GB+ recommended)
- NVIDIA GPU with 6GB+ VRAM (optional but recommended)
- 50GB+ free disk space for datasets and models
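The disk-space guideline above is easy to verify up front. A quick check using only the standard library (a sketch, not part of the project):

```python
import shutil

def free_disk_gb(path="."):
    """Free disk space at `path`, in GB."""
    return shutil.disk_usage(path).free / 1e9

def enough_for_ml(required_gb=50.0, path="."):
    """Check the 50GB+ dataset/model guideline listed above."""
    return free_disk_gb(path) >= required_gb
```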

### Development Environment

Complete development setup:

```bash
# Development environment
pip install -r requirements-dev.txt

# Set up pre-commit hooks
pre-commit install

# Verify development tools
black --version
mypy --version
pytest --version
```

## GPU Acceleration Setup

### NVIDIA CUDA

```bash
# Install CUDA-compatible PyTorch
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118

# Install additional GPU utilities
pip install cupy-cuda11x  # Match your CUDA version
pip install nvidia-ml-py3

# Verify CUDA setup
python -c "import torch; print(f'CUDA devices: {torch.cuda.device_count()}')"
```
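Application code should not assume the GPU setup succeeded. A defensive device-selection sketch that degrades to CPU when `torch` is missing or CPU-only (illustrative, not the project's own API):

```python
def cuda_device_count():
    """Number of visible CUDA devices; 0 when torch is absent or CPU-only."""
    try:
        import torch
    except ImportError:
        return 0
    return torch.cuda.device_count() if torch.cuda.is_available() else 0

def pick_device():
    """Choose 'cuda' when available, otherwise fall back to 'cpu'."""
    return "cuda" if cuda_device_count() > 0 else "cpu"
```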

### Intel Optimization

```bash
# Intel Math Kernel Library
conda install mkl  # or pip install mkl

# Intel Extension for PyTorch
pip install intel-extension-for-pytorch
```

## Optional Features Installation

Install optional features as needed:

```bash
# Cloud services
pip install boto3 google-cloud-speech azure-cognitiveservices-speech

# Computer vision
pip install opencv-python mediapipe face-recognition

# Advanced audio processing
pip install aubio essentia pyroomacoustics

# Web interface
pip install streamlit gradio dash

# IoT integration
pip install paho-mqtt gpiozero

# Full optional features
pip install -r requirements-optional.txt
```

## Configuration

### Initial Setup

```bash
# Run initial setup
python main.py server --debug

# This will create necessary directories:
# - config/
# - models/
# - plugins/
# - assets/
# - trainer/
```

### Configuration Files

The system will create default configuration files:

- `config/server_config.json` - Server configuration
- `config/client_config.json` - Client configuration
- `config/standalone_config.json` - Standalone configuration
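A common pattern for consuming such files is to merge the on-disk JSON over hard-coded defaults, so a missing or partial file still yields a working configuration. A minimal sketch — the keys shown here are hypothetical; the real ones live in the generated JSON files:

```python
import json
from pathlib import Path

# Hypothetical defaults for illustration only; port 2101 matches the
# network check shown later in this guide.
DEFAULTS = {"host": "0.0.0.0", "port": 2101, "debug": False}

def load_config(path="config/server_config.json"):
    """Merge an on-disk JSON config over the defaults; missing file -> defaults."""
    p = Path(path)
    if p.exists():
        return {**DEFAULTS, **json.loads(p.read_text())}
    return dict(DEFAULTS)
```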

## Verification

### Test Installation

```bash
# Test core functionality
python -c "from trixy_core import create_application; print('Core import successful')"

# Test audio processing
python -c "import torch, torchaudio, numpy; print('Audio processing ready')"

# Test network components
python -c "from trixy_core.network import *; print('Network components ready')"

# Run integration test
python test_integration.py
```

### Performance Testing

```bash
# Audio latency test
python -m trixy_core.ml.performance_monitor

# Memory usage test
python -m memory_profiler main.py server --debug

# Network throughput test
python network_demo.py
```

## Troubleshooting

### Common Issues

#### Audio Issues

```bash
# Linux: Audio device permissions
sudo usermod -a -G audio $USER
# Log out and log back in

# Test audio devices
python -c "import sounddevice; print(sounddevice.query_devices())"
```

#### PyTorch Installation

```bash
# Uninstall and reinstall PyTorch
pip uninstall torch torchaudio
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cpu
```

#### Permission Issues

```bash
# Fix directory permissions
chmod -R 755 trixy_core/
chmod +x main.py
```

#### Network Issues

```bash
# Test network connectivity
python -c "import socket; s=socket.socket(); s.bind(('localhost', 2101)); print('Port 2101 available')"
```
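The one-liner above raises a traceback when the port is taken. A friendlier preflight check that returns a boolean instead (a sketch, not part of the project):

```python
import socket

def port_available(port, host="localhost"):
    """Return True if `port` can be bound on `host`, i.e. nothing is listening."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind((host, port))
        return True
    except OSError:
        return False
```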

## Performance Optimization

### Memory Optimization

```bash
# Use memory-efficient PyTorch allocator settings
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# Monitor memory usage
pip install memory-profiler
python -m memory_profiler main.py
```

### CPU Optimization

```bash
# Set optimal thread count
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=4

# Use Intel's optimized BLAS (MKL)
pip install mkl
```
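The same thread caps can be set from within Python, as long as it happens before numpy or torch are imported. A minimal sketch (the 4-thread cap mirrors the exports above; `setdefault` keeps any values already exported in the shell):

```python
import os

def cap_math_threads(n=None):
    """Cap OMP/MKL thread counts; must run before importing numpy/torch."""
    threads = n or min(4, os.cpu_count() or 1)
    for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS"):
        os.environ.setdefault(var, str(threads))
    return threads
```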

## Production Deployment

### Service Configuration

Create a systemd unit at `/etc/systemd/system/trixy-server.service`:

```ini
[Unit]
Description=Trixy Voice Assistant Server
After=network.target redis.service

[Service]
Type=simple
User=trixy
WorkingDirectory=/opt/trixy
ExecStart=/opt/trixy/trixy-env/bin/python main.py server
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Then enable and start the service:

```bash
sudo systemctl enable trixy-server
sudo systemctl start trixy-server
```

### Load Balancing

For high-availability deployments:

```bash
# Install HAProxy or Nginx
sudo apt-get install nginx

# Configure load balancing for multiple server instances
# See nginx configuration examples in docs/deployment/
```

### Monitoring

```bash
# Install monitoring stack
pip install prometheus-client grafana-api

# Set up monitoring endpoints
# See docs/monitoring/ for configuration examples
```

## Development Setup

### Pre-commit Hooks

```bash
# Install development requirements
pip install -r requirements-dev.txt

# Set up pre-commit
pre-commit install

# Run pre-commit on all files
pre-commit run --all-files
```

### Testing

```bash
# Run test suite
pytest tests/

# Run with coverage
pytest --cov=trixy_core tests/

# Run performance benchmarks
pytest --benchmark-only tests/
```

### Documentation

```bash
# Build documentation
cd docs/
make html

# Start documentation server
python -m http.server 8000 --directory _build/html/
```

## Support

For installation issues:

1. Check the troubleshooting section above
2. Review system requirements
3. Verify Python version compatibility (3.8+)
4. Check platform-specific installation notes
5. Review log files for error details

For development setup:

1. Ensure all development requirements are installed
2. Run the test suite to verify setup
3. Check code formatting with pre-commit hooks
4. Review the development documentation

## Version Compatibility

| Python Version | Supported | Notes |
|----------------|-----------|-------|
| 3.8 | ✅ | Minimum required version |
| 3.9 | ✅ | Recommended |
| 3.10 | ✅ | Recommended |
| 3.11 | ✅ | Latest tested |
| 3.12 | ⚠️ | Some packages may have compatibility issues |

| Platform | Supported | Notes |
|----------|-----------|-------|
| Ubuntu 20.04+ | ✅ | Primary development platform |
| Debian 11+ | ✅ | Fully supported |
| Raspberry Pi OS | ✅ | ARM64 support for client mode |
| macOS 11+ | ✅ | Intel and Apple Silicon |
| Windows 10+ | ⚠️ | Basic support, some audio limitations |

This installation guide covers the most common deployment scenarios. For advanced configurations or specific use cases, refer to the detailed documentation in the docs/ directory.