# Photo Sorter Overview / Integration Guide

Workflow:

  1. Guest uploads old Friendster photos
  2. Scraper downloads/processes locally
  3. Saves to /imported/
  4. Llama captions old photos
  5. Merges with cruise photos
  6. All stays on boat (private)
---

## COMPLETE ARCHITECTURE

```
┌──────────────────────────────────────────────┐
│ ONLINE (Internet Required)                   │
├──────────────────────────────────────────────┤
│                                              │
│ dragonladysf.com/sarahs-party-2025           │
│ ├─ Event info page (static)                  │
│ ├─ Waiver (posts to Google Sheets)           │
│ ├─ Old photo upload                          │
│ └─ Pre-charter checklist                     │
│                                              │
│ Backend:                                     │
│ • Google Apps Script (waiver)                │
│ • Google Sheets (data storage)               │
│ • Static file host (Hostinger VPS)           │
│                                              │
└──────────────────────────────────────────────┘
```

[Guest uploads old photos]   [Signs waiver online]
                     ↓
```
┌──────────────────────────────────────────────┐
│ ON BOAT (Razer Stealth – Offline)            │
├──────────────────────────────────────────────┤
│                                              │
│ Running:                                     │
│ 1. Llama 3.1 70B (AMD Ryzen 8600 GPU)        │
│ 2. Photo Watcher Script                      │
│ 3. Facial Recognition                        │
│ 4. Gallery Builder (from ACCID)              │
│ 5. HTML Builder (standalone)                 │
│ 6. Site Scraper (for old photos)             │
│ 7. USB Burner                                │
│                                              │
│ Process:                                     │
│ • Guests connect to WiFi                     │
│ • Photos auto-sync                           │
│ • Llama processes each photo                 │
│ • Gallery builds in real-time                │
│ • Final site generates at end                │
│ • Burns to USB drives                        │
│                                              │
│ Storage:                                     │
│ /incoming/  ← New photos                     │
│ /imported/  ← Old photos from upload         │
│ /processed/ ← Llama results                  │
│ /galleries/ ← Built websites                 │
│ /usb-queue/ ← Ready to burn                  │
│                                              │
└──────────────────────────────────────────────┘
```

[USB Drives] → complete offline website
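The USB step can be a plain copy of the built site onto each mounted drive. A minimal sketch, assuming the drives are already formatted and mounted (the function name and paths here are illustrative, not from the existing scripts):

```shell
# burn_site SITE_DIR USB_MOUNT
# Copy a built offline website onto a mounted USB drive, then sanity-check it.
burn_site() {
  local site="$1" usb="$2"
  cp -r "$site"/. "$usb"/            # copy contents, preserving the site layout
  [ -f "$usb/index.html" ]           # verify the entry page made it across
}
```

Run once per drive, e.g. `burn_site /usb-queue/sarahs-party /media/usb0` (mount point is an assumption — match your actual setup).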

---

## PHOTO AI SYSTEM ANALYSIS

What You Have Built:

```
# 3 Python scripts for the Google Vision API, plus a rules file:

1. vision_test.py
   - Simple connection test
   - Verifies API credentials work

2. ai_reporter.py
   - Basic batch processor
   - Outputs Excel with 2 sheets:
     * PerImage (all photos with labels)
     * LabelCounts (label frequencies)

3. photo_ai_batch.py (FULL VERSION)
   - Complete processing pipeline
   - Maps AI labels → yacht categories
   - SafeSearch flagging ("After Dark")
   - Outputs 3 reports:
     * photo_ai_report.xlsx (per image)
     * photo_ai_label_counts.xlsx (label frequencies)
     * photo_ai_unmapped.xlsx (labels that still need rules)

4. rules.csv
   - Maps Google Vision labels → your categories
   - Example: "Fun" → "Party"
   - Example: "Sunset" → "Sunsets"
```

---

## KEY FEATURES YOU ALREADY HAVE

### 1. Label Mapping System

```
# From rules.csv
AI Label          → Yacht Category
"Fun"             → "Party"
"Boat"            → "Yacht"
"Sunset"          → "Sunsets"
"Food"            → "Dining"
"Party"           → "After Dark"
"Happiness"       → "Crew"
"Ocean"           → "At Sea"
"Marina"          → "At Anchor"
"Vacation"        → "Charter Life"
"Summer"          → "Lifestyle"
```
### 2. SafeSearch Flagging

```
# "After Dark" flag triggers if ANY of:
adult    >= "POSSIBLE"
racy     >= "POSSIBLE"
violence >= "POSSIBLE"

# Perfect for separating party photos
```
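The `>=` comparison works because Vision's SafeSearch verdicts form an ordered likelihood scale (UNKNOWN < VERY_UNLIKELY < UNLIKELY < POSSIBLE < LIKELY < VERY_LIKELY). A sketch using the scale's string names — the real client returns enum values with the same ordering, so adapt the lookup accordingly:

```python
# Google Vision's likelihood scale, lowest to highest
LIKELIHOOD = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
              "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def at_least(verdict, threshold="POSSIBLE"):
    """True if a SafeSearch verdict sits at or above the threshold."""
    return LIKELIHOOD.index(verdict) >= LIKELIHOOD.index(threshold)

def check_after_dark_flag(safe):
    """Flag a photo 'After Dark' if ANY SafeSearch signal is POSSIBLE or higher."""
    return any(at_least(safe[k]) for k in ("adult", "racy", "violence"))
```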

### 3. Batch Processing

```
# Can process entire folders
# Generates multiple report formats
# Tracks unmapped terms for rule updates
```

---

## INTEGRATION PLAN: Google Vision → Llama

### Current Flow (Google Vision):
```
Photo → Vision API → Labels → Map to Categories → Excel Report
```

### New Flow (Llama on Razer):
```
Photo → Llama Vision → Rich Captions + Labels → Auto-organize → Live Gallery
```

---

## WHY SWITCH TO LLAMA?

Google Vision Gives You:

```json
{
  "labels": ["Fun", "Boat", "Party", "Sunset", "Happiness"],
  "safeSearch": {
    "adult": "UNLIKELY",
    "racy": "POSSIBLE",
    "violence": "VERY_UNLIKELY"
  }
}
```

Llama Gives You:

```json
{
  "caption": "Guests celebrating with champagne toast on upper deck during golden hour as Dragon Lady passes under Golden Gate Bridge",
  "labels": ["celebration", "champagne", "sunset", "golden_gate", "upper_deck"],
  "categories": ["Highlights", "Golden Gate", "Celebrations", "Sunset"],
  "mood": "joyful celebration",
  "location": "Upper Deck",
  "landmark": "Golden Gate Bridge",
  "time_of_day": "golden_hour",
  "story_element": "The moment the sun hit the bridge, Sarah raised her glass and everyone cheered",
  "quality_score": 9.2,
  "is_highlight": true,
  "people_count": 8,
  "activity": "toasting"
}
```

Llama understands CONTEXT, not just labels.


---

## HYBRID APPROACH (Best of Both)

Use BOTH Systems:

```python
# NOTE: helpers (google_vision_api, llama_analyze, map_to_yacht_categories,
# extract_gps, check_safesearch) come from the scripts described above.

def process_photo(photo_path):
    """Process with both Google Vision AND Llama"""

    # 1. Google Vision (fast, cheap, reliable)
    vision_result = google_vision_api(photo_path)

    # 2. Llama (rich context, captions, stories)
    llama_result = llama_analyze(
        image=photo_path,
        context={
            "event": "Sarah's Birthday Party",
            "location": "Dragon Lady yacht, SF Bay",
            "vision_labels": vision_result['labels']  # Give Llama hints
        }
    )

    # 3. Combine results
    return {
        "filename": photo_path.name,
        "caption": llama_result['caption'],        # Rich human-readable
        "labels_google": vision_result['labels'],  # Reliable categorization
        "labels_llama": llama_result['labels'],    # Contextual
        "categories": map_to_yacht_categories(vision_result['labels']),
        "story": llama_result['story_element'],
        "location": llama_result['location'],
        "gps": extract_gps(photo_path),
        "after_dark": check_safesearch(vision_result['safeSearch'])
    }
```

---

## UPDATED RAZER AUTOMATION SYSTEM

New Photo Processing Pipeline:

```python
#!/usr/bin/env python3
# razer_photo_processor.py

import json
import time
from pathlib import Path
from google.cloud import vision
import ollama

WATCH_FOLDER = "/incoming-photos"
EVENT_CONFIG = "/data/events/sarahs-party/config.json"

def process_new_photo(photo_path, event_data):
    """Process photo with both Vision API and Llama"""
    
    print(f"📸 Processing: {photo_path.name}")
    
    # 1. Google Vision (quick categorization)
    vision_client = vision.ImageAnnotatorClient()
    with open(photo_path, 'rb') as f:
        image = vision.Image(content=f.read())
    
    # Get labels
    label_response = vision_client.label_detection(image=image)
    google_labels = [l.description for l in label_response.label_annotations]
    
    # Get SafeSearch
    safe_response = vision_client.safe_search_detection(image=image)
    safe = safe_response.safe_search_annotation
    
    # Map to yacht categories (your existing rules.csv logic)
    yacht_categories = map_labels_to_categories(google_labels)
    
    # Check After Dark flag
    after_dark = check_after_dark_flag(safe)
    
    print(f"  ✓ Google Vision: {len(google_labels)} labels")
    print(f"  ✓ Categories: {yacht_categories}")
    if after_dark:
        print(f"  🌙 After Dark flag: YES")
    
    # 2. Llama (rich caption and story)
    exif = extract_exif(photo_path)
    
    llama_prompt = f"""
    Analyze this photo from a yacht cruise in San Francisco Bay.
    
    Event: {event_data['eventName']}
    Date: {event_data['date']}
    Time taken: {exif.get('timestamp', 'unknown')}
    GPS: {exif.get('gps', 'unknown')}
    
    Google Vision detected: {', '.join(google_labels)}
    
    Provide (as JSON):
    1. caption: Natural 1-2 sentence description
    2. location: Where on the boat (Upper Deck, Main Salon, etc.)
    3. landmark: If near SF landmark (Golden Gate, Alcatraz, etc.)
    4. mood: The emotional tone
    5. story_element: A brief narrative snippet
    6. is_highlight: true/false if this is a special moment
    7. people_count: estimated number of people visible
    8. activity: what people are doing
    """
    
    llama_response = ollama.chat(
        model='llama3.1:70b-vision',
        format='json',  # ask Ollama for valid JSON so json.loads() below is safe
        messages=[{
            'role': 'user',
            'content': llama_prompt,
            'images': [str(photo_path)]
        }]
    )

    llama_data = json.loads(llama_response['message']['content'])
    
    print(f"  ✓ Llama caption: {llama_data['caption'][:60]}...")
    
    # 3. Combine into metadata
    metadata = {
        "filename": photo_path.name,
        "timestamp": exif.get('timestamp'),
        "gps": exif.get('gps'),
        
        # Google Vision data
        "labels_google": google_labels,
        "categories": yacht_categories,
        "after_dark": after_dark,
        "safe_search": {
            "adult": safe.adult,
            "racy": safe.racy,
            "violence": safe.violence
        },
        
        # Llama enrichment
        "caption": llama_data['caption'],
        "location": llama_data['location'],
        "landmark": llama_data.get('landmark'),
        "mood": llama_data['mood'],
        "story": llama_data['story_element'],
        "is_highlight": llama_data['is_highlight'],
        "people_count": llama_data.get('people_count', 0),
        "activity": llama_data.get('activity')
    }
    
    # 4. Save to gallery JSON
    update_gallery_json(metadata)
    
    # 5. Generate thumbnails
    create_thumbnails(photo_path)
    
    return metadata

def watch_and_process():
    """Monitor folder and process new photos"""
    processed = set()
    
    while True:
        # NOTE: pathlib's glob() does not expand {jpg,jpeg,...} braces, so filter by suffix
        photos = [p for p in Path(WATCH_FOLDER).iterdir()
                  if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".heic"}]
        
        for photo in photos:
            if photo not in processed:
                event_data = load_event_config()
                process_new_photo(photo, event_data)
                processed.add(photo)
        
        time.sleep(5)  # Check every 5 seconds

if __name__ == "__main__":
    watch_and_process()
```
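`update_gallery_json` is referenced above but not shown. A stdlib sketch of one reasonable shape (the `{"photos": [...]}` layout is an assumption — match it to what gallery_builder.py expects), which inserts or replaces an entry keyed by filename so re-processing a photo is idempotent:

```python
import json
from pathlib import Path

def update_gallery_json(metadata, gallery_path):
    """Insert or replace this photo's entry in gallery.json, keyed by filename."""
    path = Path(gallery_path)
    if path.exists():
        gallery = json.loads(path.read_text())
    else:
        gallery = {"photos": []}
    # Drop any previous entry for the same file before appending the new one
    gallery["photos"] = [p for p in gallery["photos"]
                         if p["filename"] != metadata["filename"]]
    gallery["photos"].append(metadata)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(gallery, indent=2))
```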

---

## DIRECTORY STRUCTURE ON RAZER
```
/yacht-automation/
  ├── /incoming-photos/         ← WiFi uploads arrive here
  ├── /processed/
  │   └── sarahs-party/
  │       ├── gallery.json      ← Metadata from both APIs
  │       └── /images/
  │           ├── /originals/
  │           ├── /web/
  │           └── /thumbs/
  ├── /scripts/
  │   ├── photo_watcher.py
  │   ├── google_vision.py      ← Your existing scripts
  │   ├── llama_enrichment.py   ← New Llama integration
  │   └── gallery_builder.py
  ├── /data/
  │   ├── events/
  │   │   └── sarahs-party/
  │   │       └── config.json
  │   └── rules.csv             ← Your existing mapping
  └── /output/
      └── sarahs-party/
          └── website/          ← Built HTML site
```
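The tree can be stood up in one pass. A sketch — the root defaults to a temp path here so it is safe to dry-run (point it at the real /yacht-automation on the Razer), and the config.json values are placeholders drawn from the processor script above:

```shell
ROOT="${1:-/tmp/yacht-automation}"   # pass the real root, e.g. /yacht-automation

mkdir -p "$ROOT/incoming-photos" \
         "$ROOT/processed/sarahs-party/images/originals" \
         "$ROOT/processed/sarahs-party/images/web" \
         "$ROOT/processed/sarahs-party/images/thumbs" \
         "$ROOT/scripts" \
         "$ROOT/data/events/sarahs-party" \
         "$ROOT/output/sarahs-party/website"

# Seed the event config that razer_photo_processor.py reads at startup
cat > "$ROOT/data/events/sarahs-party/config.json" <<'EOF'
{
  "eventName": "Sarah's Birthday Party",
  "date": "2025-01-01"
}
EOF
```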


Ready for those files whenever you can send them!

Still needed: the ACCID gallery builder, the HTML builder, and the site-scraper code.