feat: Initial commit - Train tracking system

Complete real-time train tracking system for Spanish railways (Renfe/Cercanías):

- Backend API (Node.js/Express) with GTFS-RT polling workers
- Frontend dashboard (React/Vite) with Leaflet maps
- Real-time updates via Socket.io WebSocket
- PostgreSQL/PostGIS database with Flyway migrations
- Redis caching layer
- Docker Compose configuration for development and production
- Gitea CI/CD workflows (lint, auto-tag, release)
- Production deployment with nginx + Let's Encrypt SSL

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Millaguie
Date: 2025-11-28 00:21:15 +01:00
Commit: 43d2ca5dcb
64 changed files with 15577 additions and 0 deletions

.claudeproject Normal file

@@ -0,0 +1,353 @@
# Real-Time Train Tracking System - Spain
## Project Description
This is a full-stack web system for visualizing, in real time, the position of every train operated by Renfe in Spain, with historical playback via a timeline slider.
## Tech Stack
### Backend
- **Runtime**: Node.js 20+ (ES Modules)
- **API**: Express.js with WebSocket (Socket.io)
- **Database**: PostgreSQL 15 + PostGIS (geospatial data)
- **Cache**: Redis 7
- **GTFS-RT Parser**: gtfs-realtime-bindings
- **Logging**: Pino
### Frontend
- **Framework**: React 18 + Vite
- **Map**: Leaflet.js + React-Leaflet
- **WebSocket**: Socket.io-client
- **Styling**: Vanilla CSS (no CSS frameworks)
### Infrastructure
- **Containers**: Docker + Docker Compose
- **Reverse Proxy**: Nginx
- **Migrations**: Flyway
- **Tooling**: Makefile for common commands
## Architecture
```
┌─────────────┐
│  Frontend   │ ← React + Leaflet + Socket.io
│   (Vite)    │
└──────┬──────┘
       │
   ┌───▼────┐
   │ Nginx  │ ← Reverse proxy
   └───┬────┘
       │
┌──────▼──────┐
│   Backend   │ ← Express + Socket.io
│     API     │
└─┬─────────┬─┘
  │         │
┌─▼─┐    ┌──▼───┐
│DB │    │Redis │
└───┘    └──────┘
  │
┌─┴──────┐
│ Worker │ ← Polls GTFS-RT every 30 s
└────────┘
```
## Primary Data Source
- **URL**: https://gtfsrt.renfe.com/vehicle_positions.pb
- **Format**: Protocol Buffer (GTFS Realtime)
- **Update frequency**: Every 30 seconds
- **Content**: Real-time GPS positions of trains
## Project Structure
```
trenes/
├── backend/                 # Node.js backend
│   ├── src/
│   │   ├── api/             # REST API + WebSocket
│   │   │   ├── routes/      # Endpoints (trains, routes, stations, stats)
│   │   │   └── server.js    # Main server
│   │   ├── worker/          # Background workers
│   │   │   └── gtfs-poller.js  # GTFS-RT polling
│   │   ├── lib/             # Utilities (db, redis, logger)
│   │   └── config/          # Configuration
│   ├── package.json
│   └── Dockerfile
├── frontend/                # React frontend
│   ├── src/
│   │   ├── components/      # React components
│   │   │   ├── TrainMap.jsx   # Leaflet map
│   │   │   ├── TrainInfo.jsx  # Info panel
│   │   │   └── Timeline.jsx   # Timeline
│   │   ├── hooks/           # Custom hooks
│   │   │   └── useTrains.js   # WebSocket hook
│   │   ├── styles/          # CSS
│   │   ├── App.jsx          # Root component
│   │   └── main.jsx         # Entry point
│   ├── package.json
│   ├── vite.config.js
│   └── Dockerfile
├── database/
│   ├── init/                # Initialization scripts
│   └── migrations/          # Flyway migrations (V1, V2, V3, V4)
├── nginx/                   # Nginx configuration
│   ├── nginx.conf
│   └── conf.d/
├── docker-compose.yml       # Service orchestration
├── Makefile                 # Convenience commands
├── .env.example             # Environment variables
└── README.md                # Main documentation
```
## Database
### Main Tables
1. **train_positions** (partitioned by month)
   - Full history of GPS positions
   - Includes: lat/lon, speed, bearing, status, timestamp
   - Partitions: Nov 2025 - Mar 2027
2. **trains**
   - Train catalog
   - Active/inactive status
3. **routes**
   - Routes/lines (AVE, Cercanías, etc.)
4. **stations**
   - Stations with GPS coordinates
   - Accessibility and service data
5. **alerts**
   - Alerts and incidents
### Views
- **current_train_positions**: Latest position of each train
- **active_trains**: Trains active in the last 24 h
### Useful Functions
- `get_train_path(train_id, from, to)`: A train's trajectory
- `get_trains_in_area(minLat, minLon, maxLat, maxLon, time)`: Trains within an area
- `calculate_train_statistics(train_id, from, to)`: Trip statistics
- `cleanup_old_positions(days)`: Purge old data
- `create_next_partition()`: Create the next partition
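A minimal sketch of calling these helpers from psql; the argument types are assumed from the signatures above:
```sql
-- Trajectory of one train over an hour (timestamp arguments assumed)
SELECT * FROM get_train_path('12345', '2025-11-27T10:00:00Z', '2025-11-27T11:00:00Z');
-- Trains inside a bounding box around Madrid right now
SELECT * FROM get_trains_in_area(40.0, -4.0, 41.0, -3.0, NOW());
-- Trip statistics for the same train and window
SELECT * FROM calculate_train_statistics('12345', '2025-11-27T10:00:00Z', '2025-11-27T11:00:00Z');
```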
## Redis Cache
Key structure:
```
trains:current:{train_id} → JSON with the latest position (TTL 5 min)
trains:active             → SET of active train IDs
stats:last_update         → Timestamp of the last update
```
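A minimal sketch of reading and writing this layout, assuming the node-redis v4 client; the helper names and position fields are illustrative:
```javascript
import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

// Worker side: store the latest position and mark the train active (field names assumed)
async function cachePosition(pos) {
  await redis.set(`trains:current:${pos.trainId}`, JSON.stringify(pos), { EX: 300 }); // TTL 5 min
  await redis.sAdd('trains:active', pos.trainId);
  await redis.set('stats:last_update', new Date().toISOString());
}

// API side: read the latest position of every active train
async function getCurrentPositions() {
  const ids = await redis.sMembers('trains:active');
  const raw = await Promise.all(ids.map((id) => redis.get(`trains:current:${id}`)));
  return raw.filter(Boolean).map((s) => JSON.parse(s));
}
```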
## API Endpoints
### Trains
- `GET /trains/current` - All active trains
- `GET /trains/:id` - Info for a specific train
- `GET /trains/:id/history` - Position history
- `GET /trains/:id/path` - Trajectory between two dates
- `GET /trains/area` - Trains within a geographic area
### Routes
- `GET /routes` - All routes
- `GET /routes/:id` - Specific route
### Stations
- `GET /stations` - All stations
- `GET /stations/:id` - Specific station
### Statistics
- `GET /stats` - System statistics
- `GET /stats/train/:id` - Per-train statistics
## WebSocket Events
### Client → Server
- `subscribe:train` - Subscribe to updates for one train
- `unsubscribe:train` - Unsubscribe
### Server → Client
- `trains:update` - Bulk update (all trains)
- `train:update` - Individual update (subscribed train)
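A minimal server-side sketch of wiring these events with Socket.io rooms; the HTTP server setup is standard, and `getCurrentPositions` is an illustrative stub (e.g. the Redis read above):
```javascript
import { createServer } from 'http';
import { Server } from 'socket.io';

const httpServer = createServer();
const io = new Server(httpServer, {
  cors: { origin: (process.env.CORS_ORIGIN ?? '').split(',') },
});

const getCurrentPositions = async () => []; // stub: replace with the Redis read sketched earlier

io.on('connection', (socket) => {
  // Per-train subscriptions map onto Socket.io rooms
  socket.on('subscribe:train', (trainId) => socket.join(`train:${trainId}`));
  socket.on('unsubscribe:train', (trainId) => socket.leave(`train:${trainId}`));
});

// Broadcast loop: bulk update for everyone, targeted updates for subscribers
setInterval(async () => {
  const positions = await getCurrentPositions();
  io.emit('trains:update', positions);
  for (const pos of positions) {
    io.to(`train:${pos.trainId}`).emit('train:update', pos);
  }
}, 2000);

httpServer.listen(process.env.PORT ?? 3000);
```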
## Common Make Commands
```bash
make help               # List all available commands
make start              # Start services in production mode
make stop               # Stop services
make logs               # Tail logs for all services
make logs-api           # Tail API logs
make logs-worker        # Tail worker logs
make migrate            # Run migrations
make psql               # Open a PostgreSQL shell
make redis-cli          # Open a Redis shell
make test-start         # Start the testing environment
make backup-db          # Back up the database
make cleanup-old-data   # Purge old data (>90 days)
```
## Data Flow
1. The **worker** polls the GTFS-RT feed every 30 seconds
2. The **worker** parses the Protocol Buffer and extracts positions
3. The **worker** stores data in PostgreSQL (history) and Redis (cache)
4. The **API** serves HTTP requests and WebSocket traffic from Redis
5. The **WebSocket** server broadcasts to clients every 2 seconds
6. The **frontend** receives updates and refreshes the map in real time (steps 1-3 are sketched below)
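A minimal polling sketch for steps 1-3, assuming gtfs-realtime-bindings and Node 18+ global fetch; the storage helpers are illustrative stand-ins:
```javascript
import GtfsRealtimeBindings from 'gtfs-realtime-bindings';

const FEED_URL = process.env.GTFS_RT_URL ?? 'https://gtfsrt.renfe.com/vehicle_positions.pb';

async function pollOnce() {
  const res = await fetch(FEED_URL);
  if (!res.ok) throw new Error(`Feed request failed: ${res.status}`);
  const buffer = new Uint8Array(await res.arrayBuffer());
  const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(buffer);

  // Each FeedEntity carries a VehiclePosition with GPS coordinates
  for (const entity of feed.entity) {
    if (!entity.vehicle?.position) continue;
    const pos = {
      trainId: entity.vehicle.vehicle?.id ?? entity.id,
      lat: entity.vehicle.position.latitude,
      lon: entity.vehicle.position.longitude,
      speed: entity.vehicle.position.speed,
    };
    // await savePosition(pos);  // illustrative: PostgreSQL history insert
    // await cachePosition(pos); // illustrative: Redis cache write (see above)
  }
}

setInterval(() => pollOnce().catch(console.error), Number(process.env.POLLING_INTERVAL ?? 30000));
```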
## Key Environment Variables
```bash
# API
PORT=3000
NODE_ENV=development
# Database
DATABASE_URL=postgresql://user:pass@host:5432/db
# Redis
REDIS_URL=redis://:pass@host:6379
# GTFS-RT
GTFS_RT_URL=https://gtfsrt.renfe.com/vehicle_positions.pb
POLLING_INTERVAL=30000
# CORS
CORS_ORIGIN=http://localhost:3000,http://localhost:5173
# Frontend
VITE_API_URL=http://localhost/api
VITE_WS_URL=ws://localhost/ws
```
## Current Project Status
### ✅ Phase 1: MVP (COMPLETE)
- [x] Full Docker architecture
- [x] GTFS-RT Vehicle Positions worker
- [x] Core REST API
- [x] WebSocket server
- [x] React frontend with Leaflet map
- [x] Train info panel
- [x] Basic timeline (UI only; functionality pending)
### ⬜ Phase 2: Enrichment (NEXT)
- [ ] GTFS Static integration (routes, schedules)
- [ ] Trip Updates (delays, cancellations)
- [ ] Service Alerts (incidents)
- [ ] Punctuality monitor
- [ ] Working timeline with history
## Relevant Documentation
- **Full architecture**: [arquitectura-sistema-tracking-trenes.md](arquitectura-sistema-tracking-trenes.md)
- **Data sources**: [FUENTES_DATOS.md](FUENTES_DATOS.md)
- **Phase 1 MVP**: [FASE1-MVP.md](FASE1-MVP.md)
- **Main README**: [README.md](README.md)
## Common Tasks
### Add a new endpoint
1. Create a route in `backend/src/api/routes/`
2. Import and mount it in `backend/src/api/server.js`
3. Document it in the README
### Add a new database view
1. Create a migration at `database/migrations/V{N}__description.sql`
2. Run `make migrate`
3. Document it in the architecture doc
### Add a React component
1. Create it in `frontend/src/components/`
2. Import it in `App.jsx` or its parent component
3. Add styles in `styles/index.css`
### Add a new data source
1. Create a worker in `backend/src/worker/`
2. Parse and store the data
3. Update `docker-compose.yml` if necessary
4. Document it in FUENTES_DATOS.md
## Debugging
### Tail logs in real time
```bash
# All services
make logs
# A specific service
docker-compose logs -f worker
docker-compose logs -f api
docker-compose logs -f postgres
```
### Inspect the database
```bash
# Open a PostgreSQL shell
make psql
# Latest positions
SELECT * FROM train_positions ORDER BY recorded_at DESC LIMIT 10;
# Count active trains
SELECT COUNT(*) FROM trains WHERE is_active = true;
```
### Inspect Redis
```bash
# Open a Redis shell
make redis-cli
# Active trains
SMEMBERS trains:active
# Position of a single train
GET trains:current:TRAIN_ID
```
## Development Notes
### Code Conventions
- **Backend**: ES Modules, camelCase for variables, PascalCase for classes
- **Frontend**: Functional React (hooks), PascalCase component names
- **SQL**: snake_case for tables and columns, UPPERCASE for SQL keywords
- **Git**: Descriptive commit messages in Spanish
### Testing
- Backend: Jest tests (pending)
- Frontend: React Testing Library tests (pending)
- E2E: Cypress (pending)
### Performance
- **Partitions**: Create new partitions monthly
- **Cleanup**: Run `cleanup_old_positions()` monthly (sketched below)
- **Cache**: Redis TTL set to 5 minutes
- **Pooling**: PostgreSQL pool size 2-10
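A minimal monthly-maintenance sketch using the helper functions listed earlier; the 90-day retention mirrors `make cleanup-old-data`:
```sql
-- Purge positions older than 90 days
SELECT cleanup_old_positions(90);
-- Pre-create the next monthly partition of train_positions
SELECT create_next_partition();
```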
## Contacts and Resources
- **GTFS Spec**: https://gtfs.org/documentation/realtime/
- **Renfe Data**: https://data.renfe.com/
- **PostGIS Docs**: https://postgis.net/
- **Leaflet Docs**: https://leafletjs.com/
- **Socket.io Docs**: https://socket.io/
---
**Last updated**: 27 November 2025
**Version**: 1.0.0 (Phase 1 MVP)

.env.example Normal file

@@ -0,0 +1,56 @@
# ============================================
# Train Tracking System Configuration
# ============================================
# Copy to .env and adjust per environment
# ===========================================
# LOCAL DEVELOPMENT
# ===========================================
# --- PostgreSQL Database ---
POSTGRES_USER=trenes
POSTGRES_PASSWORD=trenes_password_change_me
POSTGRES_DB=trenes
# --- Redis ---
REDIS_PASSWORD=redis_password_change_me
# --- Backend API ---
JWT_SECRET=jwt_secret_change_me_min_32_chars
CORS_ORIGINS=http://localhost,http://localhost:5173,http://localhost:3000
LOG_LEVEL=info
# --- Frontend ---
# IMPORTANT: values for local development
VITE_API_URL=http://localhost/api
VITE_WS_URL=http://localhost
# --- Worker ---
GTFS_RT_URL=https://gtfsrt.renfe.com/vehicle_positions.pb
POLLING_INTERVAL=30000
# --- Environment ---
NODE_ENV=development
# ===========================================
# PRODUCTION (example for trenes.millaguie.net)
# ===========================================
# Uncomment and adjust for production:
#
# POSTGRES_USER=trenes
# POSTGRES_PASSWORD=<generated_secure_password>
# POSTGRES_DB=trenes
#
# JWT_SECRET=<long_random_jwt_secret>
#
# # IMPORTANT: CORS_ORIGINS must include your domain
# CORS_ORIGINS=https://yourdomain.com
#
# # IMPORTANT about VITE_WS_URL:
# # - Socket.io appends /socket.io/ automatically
# # - Do NOT include /ws or /socket.io in the URL
# # - Use https:// (not wss://); Socket.io handles the protocol
# VITE_API_URL=https://yourdomain.com/api
# VITE_WS_URL=https://yourdomain.com
#
# NODE_ENV=production

.env.testing Normal file

@@ -0,0 +1,37 @@
# ============================================
# Train Tracking System Testing Configuration
# ============================================
# This file is used for the testing environment
# It contains no sensitive data since it is testing-only
# --- PostgreSQL Database ---
POSTGRES_PASSWORD=test_password_not_secure
# --- Redis ---
REDIS_PASSWORD=test_redis_password
# --- Backend API ---
JWT_SECRET=test_jwt_secret_for_testing_only_min_32_chars_12345
CORS_ORIGIN=http://localhost:80,http://localhost:3000,http://localhost:5173
LOG_LEVEL=debug
# --- Frontend ---
VITE_API_URL=http://localhost/api
VITE_WS_URL=ws://localhost/ws
# --- Worker ---
# Renfe GTFS-RT feed URL
GTFS_RT_URL=https://gtfsrt.renfe.com/vehicle_positions.pb
# Faster polling interval for testing (15 seconds)
POLLING_INTERVAL=15000
# --- Environment ---
NODE_ENV=development
# --- Testing Flags ---
# Auto-generate test data
GENERATE_TEST_DATA=true
# Enable debug endpoints
ENABLE_DEBUG_ENDPOINTS=true
# Disable rate limiting for tests
DISABLE_RATE_LIMIT=true


@@ -0,0 +1,98 @@
name: Auto Tag on Merge to Main
on:
  push:
    branches:
      - main
    paths-ignore:
      - '*.md'
      - 'docs/**'
      - '.gitignore'
jobs:
  auto-tag:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0 # Fetch all history for proper versioning
      - name: Get latest tag
        id: get_tag
        run: |
          # Get the latest tag, default to v0.0.0 if none exists
          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
          echo "latest_tag=$LATEST_TAG" >> $GITHUB_OUTPUT
          echo "Latest tag: $LATEST_TAG"
      - name: Determine version bump
        id: bump
        run: |
          # Get commit messages since last tag
          LATEST_TAG="${{ steps.get_tag.outputs.latest_tag }}"
          # If no real tag exists (v0.0.0), get all commits
          if [ "$LATEST_TAG" == "v0.0.0" ]; then
            COMMITS=$(git log --pretty=format:"%s")
          else
            COMMITS=$(git log $LATEST_TAG..HEAD --pretty=format:"%s")
          fi
          # Determine version bump type based on conventional commits
          BUMP_TYPE="patch"
          if echo "$COMMITS" | grep -qiE "^(feat|feature)(\(.+\))?!:|^BREAKING CHANGE:"; then
            BUMP_TYPE="major"
          elif echo "$COMMITS" | grep -qiE "^(feat|feature)(\(.+\))?:"; then
            BUMP_TYPE="minor"
          elif echo "$COMMITS" | grep -qiE "^(fix|bugfix|perf|refactor)(\(.+\))?:"; then
            BUMP_TYPE="patch"
          fi
          echo "bump_type=$BUMP_TYPE" >> $GITHUB_OUTPUT
          echo "Version bump type: $BUMP_TYPE"
      - name: Calculate new version
        id: new_version
        run: |
          LATEST_TAG="${{ steps.get_tag.outputs.latest_tag }}"
          BUMP_TYPE="${{ steps.bump.outputs.bump_type }}"
          # Remove 'v' prefix and split version
          VERSION=${LATEST_TAG#v}
          IFS='.' read -r MAJOR MINOR PATCH <<< "$VERSION"
          # Bump version based on type
          if [ "$BUMP_TYPE" == "major" ]; then
            MAJOR=$((MAJOR + 1))
            MINOR=0
            PATCH=0
          elif [ "$BUMP_TYPE" == "minor" ]; then
            MINOR=$((MINOR + 1))
            PATCH=0
          else
            PATCH=$((PATCH + 1))
          fi
          NEW_VERSION="v$MAJOR.$MINOR.$PATCH"
          echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
          echo "New version: $NEW_VERSION"
      - name: Create and push tag
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          NEW_VERSION="${{ steps.new_version.outputs.new_version }}"
          # Configure git
          git config user.name "Gitea Actions"
          git config user.email "actions@gitea.local"
          # Create annotated tag
          git tag -a "$NEW_VERSION" -m "Release $NEW_VERSION"
          # Push tag
          git push origin "$NEW_VERSION"
          echo "Created and pushed tag: $NEW_VERSION"

.gitea/workflows/ci.yml Normal file

@@ -0,0 +1,116 @@
name: CI - Lint and Build
on:
  push:
    branches:
      - main
      - develop
  pull_request:
    branches:
      - main
jobs:
  lint-backend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: backend/package-lock.json
      - name: Install backend dependencies
        run: |
          cd backend
          npm ci
      - name: Run ESLint (backend)
        run: |
          cd backend
          npm run lint || true
      - name: Check formatting with Prettier
        run: |
          cd backend
          npx prettier --check "src/**/*.js" || true
  lint-frontend:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: frontend/package-lock.json
      - name: Install frontend dependencies
        run: |
          cd frontend
          npm ci
      - name: Run ESLint (frontend)
        run: |
          cd frontend
          npm run lint || true
  build-frontend:
    runs-on: ubuntu-latest
    needs: lint-frontend
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: frontend/package-lock.json
      - name: Install frontend dependencies
        run: |
          cd frontend
          npm ci
      - name: Build frontend
        run: |
          cd frontend
          npm run build
        env:
          VITE_API_URL: http://localhost/api
          VITE_WS_URL: http://localhost
  docker-build-test:
    runs-on: ubuntu-latest
    needs: [lint-backend, lint-frontend]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build backend image (test)
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: false
          tags: trenes-backend:test
      - name: Build frontend image (test)
        uses: docker/build-push-action@v5
        with:
          context: ./frontend
          push: false
          build-args: |
            VITE_API_URL=http://localhost/api
            VITE_WS_URL=http://localhost
          tags: trenes-frontend:test


@@ -0,0 +1,76 @@
name: Release - Build and Publish Docker Images
on:
  push:
    tags:
      - 'v*.*.*'
jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Extract version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "Building version: $VERSION"
      - name: Get current date
        id: date
        run: echo "date=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Gitea Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.REGISTRY_URL }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push backend image
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: true
          build-args: |
            APP_VERSION=${{ steps.version.outputs.version }}
            BUILD_DATE=${{ steps.date.outputs.date }}
            GIT_COMMIT=${{ github.sha }}
          tags: |
            ${{ secrets.REGISTRY_URL }}/trenes/backend:${{ steps.version.outputs.version }}
            ${{ secrets.REGISTRY_URL }}/trenes/backend:latest
          provenance: false
          sbom: false
      - name: Build and push frontend image
        uses: docker/build-push-action@v5
        with:
          context: ./frontend
          push: true
          build-args: |
            VITE_API_URL=${{ secrets.PROD_API_URL }}
            VITE_WS_URL=${{ secrets.PROD_WS_URL }}
            APP_VERSION=${{ steps.version.outputs.version }}
            BUILD_DATE=${{ steps.date.outputs.date }}
            GIT_COMMIT=${{ github.sha }}
          tags: |
            ${{ secrets.REGISTRY_URL }}/trenes/frontend:${{ steps.version.outputs.version }}
            ${{ secrets.REGISTRY_URL }}/trenes/frontend:latest
          provenance: false
          sbom: false
      - name: Summary
        run: |
          echo "### Docker Images Published 🐳" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Version:** ${{ steps.version.outputs.version }}" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Images:**" >> $GITHUB_STEP_SUMMARY
          echo "- \`${{ secrets.REGISTRY_URL }}/trenes/backend:${{ steps.version.outputs.version }}\`" >> $GITHUB_STEP_SUMMARY
          echo "- \`${{ secrets.REGISTRY_URL }}/trenes/frontend:${{ steps.version.outputs.version }}\`" >> $GITHUB_STEP_SUMMARY

.gitignore vendored Normal file

@@ -0,0 +1,59 @@
# Node.js
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Build outputs
dist/
build/
.vite/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# Logs
*.log
logs/
# OS
.DS_Store
Thumbs.db
# Environment files (may contain secrets)
.env
.env.local
.env.production
.env.*.local
# Testing
coverage/
.nyc_output/
# Docker volumes (local data)
postgres_data/
redis_data/
# GTFS data downloads
gtfs_data/
*.zip
*.pb
# SSL certificates (local dev only)
nginx/ssl/
# Temporary files
tmp/
temp/
*.tmp
# Database dumps
*.sql
*.dump
# Claude
.claude/

FASE1-MVP.md Normal file

@@ -0,0 +1,485 @@
# Phase 1: MVP - Real-Time Train Tracking System
## Status: ✅ COMPLETE
Phase 1 of the roadmap has been implemented successfully. This document describes what was built and how to test it.
---
## ✨ Implemented Features
### Backend
- ✅ GTFS-RT worker collecting positions every 30 seconds
- ✅ REST API with endpoints for trains, routes, stations, and statistics
- ✅ WebSocket server for real-time updates
- ✅ PostgreSQL + PostGIS integration
- ✅ Redis cache for current positions
- ✅ Pino-based logging
- ✅ Error handling and automatic reconnection
### Frontend
- ✅ Interactive map built on Leaflet.js and OpenStreetMap
- ✅ Real-time train visualization
- ✅ Detailed per-train info panel
- ✅ WebSocket connection with automatic reconnection
- ✅ Basic timeline (UI ready; functionality comes in Phase 2)
- ✅ Header statistics (active trains, last update)
- ✅ Responsive design
---
## 📁 Project Structure
```
trenes/
├── backend/
│   ├── src/
│   │   ├── api/
│   │   │   ├── routes/
│   │   │   │   ├── trains.js      # Train endpoints
│   │   │   │   ├── routes.js      # Route endpoints
│   │   │   │   ├── stations.js    # Station endpoints
│   │   │   │   └── stats.js       # Statistics endpoints
│   │   │   └── server.js          # API + WebSocket server
│   │   ├── worker/
│   │   │   └── gtfs-poller.js     # GTFS-RT worker
│   │   ├── lib/
│   │   │   ├── db.js              # PostgreSQL client
│   │   │   ├── redis.js           # Redis client
│   │   │   └── logger.js          # Pino logger
│   │   └── config/
│   │       └── index.js           # Configuration
│   ├── package.json
│   ├── Dockerfile
│   └── .env.example
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── TrainMap.jsx       # Leaflet map
│   │   │   ├── TrainInfo.jsx      # Info panel
│   │   │   └── Timeline.jsx       # Timeline (UI)
│   │   ├── hooks/
│   │   │   └── useTrains.js       # WebSocket hook
│   │   ├── styles/
│   │   │   └── index.css          # Global styles
│   │   ├── App.jsx                # Root component
│   │   └── main.jsx               # Entry point
│   ├── package.json
│   ├── Dockerfile
│   └── vite.config.js
├── database/
│   ├── init/                      # Initialization scripts
│   └── migrations/                # Flyway migrations
├── docker-compose.yml
├── Makefile
└── README.md
```
---
## 🚀 Running the MVP
### Prerequisites
- Docker and Docker Compose installed
- Ports 80, 3000, 5432, and 6379 available
- (Optional) Make for convenience commands
### Option 1: Using Make (Recommended)
```bash
# 1. Configure environment variables
cp .env.example .env
# Edit .env if necessary
# 2. Run migrations
make migrate
# 3. Start all services
make start
# 4. Tail logs
make logs
```
### Option 2: Docker Compose by Hand
```bash
# 1. Configure environment variables
cp .env.example .env
# 2. Run migrations
docker-compose --profile migration up flyway
# 3. Start services
docker-compose up -d
# 4. Tail logs
docker-compose logs -f
```
### Option 3: Local Development (no Docker)
#### Backend
```bash
cd backend
# Install dependencies
npm install
# Configure .env
cp .env.example .env
# Point DATABASE_URL and REDIS_URL at localhost
# Run the worker in one terminal
npm run dev:worker
# Run the API in another terminal
npm run dev
```
#### Frontend
```bash
cd frontend
# Install dependencies
npm install
# Run in development mode
npm run dev
```
---
## 🌐 Accessing the Application
Once the services are up:
- **Web App**: http://localhost
- **REST API**: http://localhost/api or http://localhost:3000
- **Health Check**: http://localhost/health or http://localhost:3000/health
---
## 📡 API Endpoints
### Trains
```bash
# Get all active trains
GET /trains/current
# Get information about a specific train
GET /trains/:id
# Get a train's position history
GET /trains/:id/history?from=2025-11-27T00:00:00Z&to=2025-11-27T23:59:59Z&limit=100
# Get a train's trajectory
GET /trains/:id/path?from=2025-11-27T10:00:00Z&to=2025-11-27T11:00:00Z
# Get trains within a geographic area
GET /trains/area?minLat=40.0&minLon=-4.0&maxLat=41.0&maxLon=-3.0
```
### Routes
```bash
# Get all routes
GET /routes
# Get a specific route
GET /routes/:id
```
### Stations
```bash
# Get all stations
GET /stations
# Get stations by type
GET /stations?type=MAJOR
# Get a specific station
GET /stations/:id
```
### Statistics
```bash
# Get system statistics
GET /stats
# Get statistics for a train
GET /stats/train/:id?from=2025-11-27T00:00:00Z&to=2025-11-27T23:59:59Z
```
---
## 🔌 WebSocket Events
### Client → Server
```javascript
// Subscribe to a specific train
socket.emit('subscribe:train', trainId);
// Unsubscribe from a train
socket.emit('unsubscribe:train', trainId);
```
### Server → Client
```javascript
// Update for all trains (every 2 seconds)
socket.on('trains:update', (positions) => {
  console.log('Updated positions:', positions);
});
// Update for a specific train (when subscribed)
socket.on('train:update', (position) => {
  console.log('Train updated:', position);
});
```
---
## 🧪 Testing the System
### 1. Verify the worker is running
```bash
# Tail the worker logs
make logs-worker
# Or with docker-compose
docker-compose logs -f worker
# You should see messages like:
# "Polling GTFS-RT feed..."
# "Processed vehicle positions: {trains: 50, duration: 1234}"
```
### 2. Verify the API
```bash
# Health check
curl http://localhost:3000/health
# Get current trains
curl http://localhost:3000/trains/current | jq
# Get statistics
curl http://localhost:3000/stats | jq
```
### 3. Verify the Database
```bash
# Open a PostgreSQL shell
make psql
# Count stored trains
SELECT COUNT(*) FROM trains;
# Count positions from the last 24 hours
SELECT COUNT(*) FROM train_positions WHERE recorded_at > NOW() - INTERVAL '24 hours';
# Inspect stations
SELECT * FROM stations LIMIT 10;
```
### 4. Verify Redis
```bash
# Open a Redis shell
make redis-cli
# List active trains
SMEMBERS trains:active
# Get the current position of a train
GET trains:current:TRAIN_ID
```
---
## 🐛 Troubleshooting
### No trains appear on the map
**Cause**: The GTFS-RT feed may have no data, or the worker is not running.
**Fix**:
```bash
# Check the worker logs
make logs-worker
# Check whether Redis has any trains
make redis-cli
> SMEMBERS trains:active
# If Redis is empty, check PostgreSQL
make psql
> SELECT COUNT(*) FROM train_positions WHERE recorded_at > NOW() - INTERVAL '1 hour';
```
### WebSocket connection errors
**Cause**: CORS or an incorrect URL.
**Fix**:
```bash
# Check that VITE_WS_URL is configured correctly
# (in .env.testing or the frontend's environment variables)
# It should be http://localhost:3000 (development) or ws://localhost/ws (production)
```
### The database has no data
**Cause**: Migrations were not run, or the GTFS-RT feed has no data.
**Fix**:
```bash
# Run migrations
make migrate
# Check migration status
make migrate-info
# Inspect the seed data
make psql
> SELECT * FROM stations LIMIT 5;
```
### "PostgreSQL not connected" error
**Cause**: PostgreSQL is not running, or the configuration is wrong.
**Fix**:
```bash
# Check that PostgreSQL is running
docker-compose ps postgres
# Restart PostgreSQL
docker-compose restart postgres
# Check the logs
docker-compose logs postgres
```
---
## 📊 Metrics and Monitoring
### System Logs
```bash
# All logs
make logs
# Specific logs
make logs-api     # API
make logs-worker  # Worker
make logs-db      # PostgreSQL
```
### Worker Statistics
The worker logs statistics every 60 seconds:
```json
{
  "totalPolls": 120,
  "successfulPolls": 118,
  "failedPolls": 2,
  "totalTrains": 45,
  "lastPollTime": "2025-11-27T10:30:00.000Z",
  "successRate": "98.33%"
}
```
### Admin Panel
To access the administration tools:
```bash
# Start in debug mode
make debug-start
# Then open:
# - Adminer (PostgreSQL): http://localhost:8080
# - Redis Commander: http://localhost:8081
```
---
## 🎯 Next Steps (Phase 2)
Phase 2 will include:
- [ ] GTFS Static integration (routes, schedules)
- [ ] Trip Updates (delays, cancellations)
- [ ] Service Alerts (incidents)
- [ ] Working timeline with historical playback
- [ ] Punctuality monitor
- [ ] Incident panel
See the [full roadmap](arquitectura-sistema-tracking-trenes.md#roadmap-de-features) for more detail.
---
## 📝 Technical Notes
### Data Source
The system consumes Renfe's GTFS-RT feed:
- **URL**: https://gtfsrt.renfe.com/vehicle_positions.pb
- **Format**: Protocol Buffer (GTFS Realtime)
- **Frequency**: 30 seconds
- **Coverage**: Mainly Cercanías
### Storage
- **PostgreSQL**: Full position history (partitioned by month)
- **Redis**: Cache of latest positions (5-minute TTL)
- **WebSocket**: Real-time broadcast (every 2 seconds)
### Performance
- **Polling**: 30 seconds (configurable via `POLLING_INTERVAL`)
- **WS broadcast**: 2 seconds
- **DB partitions**: Monthly (Nov 2025 - Mar 2027)
- **Retention**: 90 days (configurable; use `cleanup_old_positions()`)
---
## 📚 Additional Documentation
- [Full Architecture](arquitectura-sistema-tracking-trenes.md)
- [Data Sources](FUENTES_DATOS.md)
- [Main README](README.md)
- [Makefile Commands](Makefile) - see `make help`
---
## 🤝 Contributing
If you find bugs or want to propose improvements:
1. Open an issue describing the problem or improvement
2. Fork the project
3. Create a feature branch
4. Submit a pull request
---
**Status**: Phase 1 MVP Complete ✅
**Date**: 27 November 2025
**Next Phase**: Phase 2 - Enrichment

FASE2-ENRIQUECIMIENTO.md Normal file

@@ -0,0 +1,703 @@
# Phase 2: Enrichment - GTFS Static, Trip Updates, and Service Alerts
## Status: 🚧 IN DEVELOPMENT
Phase 2 enriches the data through GTFS Static, real-time Trip Updates (delays, cancellations), and Service Alerts (incidents).
---
## ✨ Implemented Features
### Backend
#### Database
- ✅ Migration V5: GTFS Static tables (trips, stop_times, calendar, shapes)
- ✅ Tables for Trip Updates and Stop Time Updates
- ✅ Views: active_trips_today, delayed_trips
- ✅ Functions: get_trip_schedule, get_next_departures
#### Workers
- ✅ GTFS Static Syncer: daily synchronization of static data
- ✅ Trip Updates Poller: polls delays and trip updates
- ✅ Service Alerts Poller: polls alerts and incidents
#### REST API
- ✅ Alert endpoints (`/alerts`)
- ✅ Trip and delay endpoints (`/trips`)
### Frontend
- ⏳ Alerts component (pending)
- ⏳ Punctuality monitor (pending)
- ⏳ Working timeline (pending)
---
## 📁 New Files in Phase 2
### Database
```
database/migrations/
└── V5__gtfs_static_tables.sql   # GTFS Static tables + Trip Updates
```
### Backend Workers
```
backend/src/worker/
├── gtfs-static-syncer.js    # GTFS Static synchronization
├── trip-updates-poller.js   # Trip Updates polling
└── alerts-poller.js         # Service Alerts polling
```
### Backend API
```
backend/src/api/routes/
├── alerts.js   # Alert endpoints
└── trips.js    # Trip and delay endpoints
```
---
## 🚀 Running with Phase 2
### Using Docker Compose
```bash
# 1. Run migrations (includes V5)
make migrate
# 2. Start all services (includes the new workers)
make start
# The new workers start automatically:
# - gtfs-static-syncer (daily sync at 3 AM)
# - trip-updates-poller (polls every 30 s)
# - alerts-poller (polls every 30 s)
```
### Local Development
```bash
cd backend
# Terminal 1: GTFS Static Syncer
npm run dev:gtfs-static
# Terminal 2: Trip Updates Poller
npm run dev:trip-updates
# Terminal 3: Service Alerts Poller
npm run dev:alerts
# Terminal 4: API Server
npm run dev
```
---
## 📡 New API Endpoints
### Alerts
#### GET /alerts
Get all active alerts, with optional filters.
**Query Parameters:**
- `route_id` (optional): Filter by route
- `severity` (optional): Filter by severity (LOW, MEDIUM, HIGH, CRITICAL)
- `type` (optional): Filter by type (DELAY, CANCELLATION, INCIDENT, etc.)
**Example:**
```bash
# All active alerts
curl http://localhost:3000/alerts
# Alerts for a specific route
curl http://localhost:3000/alerts?route_id=AVE-MAD-BCN
# Critical alerts
curl http://localhost:3000/alerts?severity=CRITICAL
# Cancellation alerts
curl http://localhost:3000/alerts?type=CANCELLATION
```
**Response:**
```json
[
  {
    "alert_id": 1,
    "alert_type": "DELAY",
    "severity": "MEDIUM",
    "cause": "TECHNICAL_PROBLEM",
    "effect": "SIGNIFICANT_DELAYS",
    "header_text": "Retraso en AVE 03055",
    "description_text": "Retraso de 15 minutos debido a problemas técnicos",
    "url": null,
    "route_id": "AVE-MAD-BCN",
    "trip_id": "trip_12345",
    "train_id": null,
    "start_time": "2025-11-27T10:00:00Z",
    "end_time": null,
    "created_at": "2025-11-27T10:05:00Z",
    "updated_at": "2025-11-27T10:05:00Z"
  }
]
```
#### GET /alerts/:id
Get a specific alert.
**Example:**
```bash
curl http://localhost:3000/alerts/1
```
#### GET /alerts/route/:routeId
Get all active alerts for a route.
**Example:**
```bash
curl http://localhost:3000/alerts/route/AVE-MAD-BCN
```
#### GET /alerts/train/:trainId
Get all active alerts for a train.
**Example:**
```bash
curl http://localhost:3000/alerts/train/12345
```
---
### Trips and Delays
#### GET /trips
Get all of today's active trips.
**Query Parameters:**
- `route_id` (optional): Filter by route
- `service_id` (optional): Filter by service
**Example:**
```bash
# All trips active today
curl http://localhost:3000/trips
# Trips for a specific route
curl http://localhost:3000/trips?route_id=AVE-MAD-BCN
```
**Response:**
```json
[
  {
    "trip_id": "trip_12345",
    "route_id": "AVE-MAD-BCN",
    "service_id": "weekday",
    "trip_headsign": "Barcelona Sants",
    "direction_id": 0,
    "block_id": null,
    "shape_id": "shape_001"
  }
]
```
#### GET /trips/:id
Get full details for a trip, including its schedule.
**Example:**
```bash
curl http://localhost:3000/trips/trip_12345
```
**Response:**
```json
{
  "trip_id": "trip_12345",
  "route_id": "AVE-MAD-BCN",
  "service_id": "weekday",
  "trip_headsign": "Barcelona Sants",
  "direction_id": 0,
  "schedule": [
    {
      "stop_id": "MADRID-PUERTA-DE-ATOCHA",
      "stop_sequence": 1,
      "arrival_time": "08:00:00",
      "departure_time": "08:00:00",
      "stop_headsign": null
    },
    {
      "stop_id": "ZARAGOZA-DELICIAS",
      "stop_sequence": 2,
      "arrival_time": "09:25:00",
      "departure_time": "09:27:00",
      "stop_headsign": null
    },
    {
      "stop_id": "BARCELONA-SANTS",
      "stop_sequence": 3,
      "arrival_time": "10:45:00",
      "departure_time": "10:45:00",
      "stop_headsign": null
    }
  ]
}
```
#### GET /trips/:id/updates
Get real-time updates for a trip (delays, cancellations).
**Example:**
```bash
curl http://localhost:3000/trips/trip_12345/updates
```
**Response:**
```json
{
  "trip_id": "trip_12345",
  "has_updates": true,
  "update_id": 1,
  "delay_seconds": 900,
  "schedule_relationship": "SCHEDULED",
  "start_date": "20251127",
  "received_at": "2025-11-27T10:15:00Z",
  "stop_time_updates": [
    {
      "stop_sequence": 2,
      "stop_id": "ZARAGOZA-DELICIAS",
      "arrival_delay": 900,
      "departure_delay": 900,
      "schedule_relationship": "SCHEDULED"
    },
    {
      "stop_sequence": 3,
      "stop_id": "BARCELONA-SANTS",
      "arrival_delay": 900,
      "departure_delay": null,
      "schedule_relationship": "SCHEDULED"
    }
  ]
}
```
#### GET /trips/:id/delays
Get a summarized view of a trip's delays.
**Example:**
```bash
curl http://localhost:3000/trips/trip_12345/delays
```
**Response:**
```json
{
  "trip_id": "trip_12345",
  "delay_status": "MODERATE_DELAY",
  "delay_seconds": 900,
  "delay_formatted": "15 min 0 s",
  "schedule_relationship": "SCHEDULED",
  "received_at": "2025-11-27T10:15:00Z"
}
```
**Delay statuses** (a classification sketch follows this list):
- `NO_DATA`: No delay information
- `ON_TIME`: On time (0 seconds)
- `MINOR_DELAY`: Minor delay (1-5 minutes)
- `MODERATE_DELAY`: Moderate delay (5-15 minutes)
- `MAJOR_DELAY`: Major delay (>15 minutes)
- `EARLY`: Running ahead of schedule
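A minimal sketch of the mapping, assuming the thresholds listed above (the exact boundary handling is illustrative):
```javascript
// Map a delay in seconds onto the status labels above (boundaries assumed)
function classifyDelay(delaySeconds) {
  if (delaySeconds == null) return 'NO_DATA';
  if (delaySeconds < 0) return 'EARLY';
  if (delaySeconds === 0) return 'ON_TIME';
  const minutes = delaySeconds / 60;
  if (minutes <= 5) return 'MINOR_DELAY';
  if (minutes <= 15) return 'MODERATE_DELAY';
  return 'MAJOR_DELAY';
}

console.log(classifyDelay(900)); // "MODERATE_DELAY" (15 min 0 s)
```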
#### GET /trips/route/:routeId
Get all trips for a route.
**Example:**
```bash
curl http://localhost:3000/trips/route/AVE-MAD-BCN
```
#### GET /trips/delayed/all
Get all currently delayed trips.
**Query Parameters:**
- `min_delay` (optional): Minimum delay in seconds (default: 0)
**Example:**
```bash
# All delayed trips
curl http://localhost:3000/trips/delayed/all
# Only delays longer than 5 minutes
curl http://localhost:3000/trips/delayed/all?min_delay=300
```
**Response:**
```json
[
  {
    "trip_id": "trip_12345",
    "route_id": "AVE-MAD-BCN",
    "trip_headsign": "Barcelona Sants",
    "delay_seconds": 900,
    "schedule_relationship": "SCHEDULED",
    "received_at": "2025-11-27T10:15:00Z"
  }
]
```
---
## 🔌 New WebSocket Events (Planned)
### Server → Client
```javascript
// New alert created
socket.on('alert:new', (alert) => {
  console.log('New alert:', alert);
});
// Alert updated
socket.on('alert:update', (alert) => {
  console.log('Alert updated:', alert);
});
// Delay detected
socket.on('trip:delay', (delayInfo) => {
  console.log('Trip delayed:', delayInfo);
});
// Trip cancelled
socket.on('trip:cancelled', (tripInfo) => {
  console.log('Trip cancelled:', tripInfo);
});
```
---
## 🗄️ Data Structures
### GTFS Static Tables
#### trips
Planned trip information.
```sql
CREATE TABLE trips (
  trip_id VARCHAR(100) PRIMARY KEY,
  route_id VARCHAR(50),
  service_id VARCHAR(50),
  trip_headsign VARCHAR(200),
  trip_short_name VARCHAR(50),
  direction_id INTEGER,
  block_id VARCHAR(50),
  shape_id VARCHAR(100),
  wheelchair_accessible INTEGER,
  bikes_allowed INTEGER
);
```
#### stop_times
Stop times for each trip.
```sql
CREATE TABLE stop_times (
  trip_id VARCHAR(100),
  arrival_time TIME,
  departure_time TIME,
  stop_id VARCHAR(100),
  stop_sequence INTEGER,
  stop_headsign VARCHAR(200),
  pickup_type INTEGER,
  drop_off_type INTEGER,
  shape_dist_traveled FLOAT,
  PRIMARY KEY (trip_id, stop_sequence)
);
```
#### calendar
Service calendar (days of operation).
```sql
CREATE TABLE calendar (
  service_id VARCHAR(50) PRIMARY KEY,
  monday BOOLEAN,
  tuesday BOOLEAN,
  wednesday BOOLEAN,
  thursday BOOLEAN,
  friday BOOLEAN,
  saturday BOOLEAN,
  sunday BOOLEAN,
  start_date DATE,
  end_date DATE
);
```
#### shapes
Route geometry (trajectories).
```sql
CREATE TABLE shapes (
  shape_id VARCHAR(100),
  shape_pt_lat DOUBLE PRECISION,
  shape_pt_lon DOUBLE PRECISION,
  shape_pt_sequence INTEGER,
  shape_dist_traveled FLOAT,
  PRIMARY KEY (shape_id, shape_pt_sequence)
);
```
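Since shapes are stored as ordered points, a hedged PostGIS sketch for assembling each shape into a LineString (SRID 4326 assumed):
```sql
-- Build one LineString per shape from its ordered points
SELECT shape_id,
       ST_MakeLine(
         ST_SetSRID(ST_MakePoint(shape_pt_lon, shape_pt_lat), 4326)
         ORDER BY shape_pt_sequence
       ) AS geom
FROM shapes
GROUP BY shape_id;
```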
### Real-Time Update Tables
#### trip_updates
Trip updates (delays, cancellations).
```sql
CREATE TABLE trip_updates (
  update_id SERIAL PRIMARY KEY,
  trip_id VARCHAR(100),
  route_id VARCHAR(50),
  start_date VARCHAR(10),
  schedule_relationship VARCHAR(20),
  delay_seconds INTEGER,
  received_at TIMESTAMP DEFAULT NOW()
);
```
#### stop_time_updates
Updates for individual stops.
```sql
CREATE TABLE stop_time_updates (
  update_id INTEGER REFERENCES trip_updates(update_id),
  stop_sequence INTEGER,
  stop_id VARCHAR(100),
  arrival_delay INTEGER,
  departure_delay INTEGER,
  schedule_relationship VARCHAR(20),
  PRIMARY KEY (update_id, stop_sequence)
);
```
---
## 🧪 Testing Phase 2
### 1. Verify the GTFS Static Sync
```bash
# Tail the syncer logs
docker-compose logs -f gtfs-static-syncer
# You should see:
# "Starting GTFS Static synchronization..."
# "GTFS data downloaded successfully"
# "Imported X routes, Y trips, Z stops"
# Check the data in PostgreSQL
make psql
> SELECT COUNT(*) FROM trips;
> SELECT COUNT(*) FROM stop_times;
> SELECT * FROM active_trips_today LIMIT 5;
```
### 2. Verify Trip Updates
```bash
# Tail the poller logs
docker-compose logs -f trip-updates-poller
# You should see:
# "Polling Trip Updates..."
# "Processed X trip updates"
# Check in PostgreSQL
make psql
> SELECT * FROM delayed_trips;
> SELECT * FROM trip_updates ORDER BY received_at DESC LIMIT 5;
```
### 3. Verify Service Alerts
```bash
# Tail the poller logs
docker-compose logs -f alerts-poller
# You should see:
# "Polling Service Alerts..."
# "Processed X alerts"
# Exercise the API
curl http://localhost:3000/alerts | jq
curl http://localhost:3000/alerts?severity=HIGH | jq
```
### 4. Verify the Trip Delays API
```bash
# Get delayed trips
curl http://localhost:3000/trips/delayed/all | jq
# Get the delay of a specific trip
curl http://localhost:3000/trips/trip_12345/delays | jq
# Get the full schedule
curl http://localhost:3000/trips/trip_12345 | jq
```
---
## 🐛 Troubleshooting Phase 2
### No GTFS Static data
**Cause**: The syncer has not run, or the ZIP is unavailable.
**Fix**:
```bash
# Run a manual sync
docker-compose exec gtfs-static-syncer node src/worker/gtfs-static-syncer.js
# Check the logs
docker-compose logs gtfs-static-syncer
# Check connectivity
curl -I https://data.renfe.com/dataset/horarios-trenes-largo-recorrido-ave/resource/horarios-trenes-largo-recorrido-ave-gtfs.zip
```
### No Trip Updates arriving
**Cause**: The GTFS-RT feed is unavailable, or the URL is wrong.
**Fix**:
```bash
# Check the logs
docker-compose logs trip-updates-poller
# Test the feed manually
curl https://gtfsrt.renfe.com/trip_updates.pb > /tmp/test.pb
file /tmp/test.pb # Should report "data" (Protocol Buffer)
# Check the environment variables
docker-compose exec trip-updates-poller env | grep GTFS
```
### Alerts not showing up
**Cause**: There are no active alerts, or the poller is not running.
**Fix**:
```bash
# Check the worker
docker-compose ps alerts-poller
# Check the logs
docker-compose logs alerts-poller
# Check the database
make psql
> SELECT COUNT(*) FROM alerts WHERE end_time IS NULL OR end_time > NOW();
```
---
## 📊 Monitoring Phase 2
### Worker Logs
```bash
# All Phase 2 workers
docker-compose logs -f gtfs-static-syncer trip-updates-poller alerts-poller
# A specific worker
docker-compose logs -f trip-updates-poller
```
### Redis Statistics
```bash
make redis-cli
# Delayed trips
> SMEMBERS trips:delayed
# Active alerts per route
> SMEMBERS alerts:route:AVE-MAD-BCN
# Last GTFS sync
> GET gtfs:last_sync
```
### PostgreSQL Metrics
```sql
-- Total trips for the day
SELECT COUNT(*) FROM active_trips_today;
-- Delayed trips
SELECT COUNT(*) FROM delayed_trips;
-- Active alerts
SELECT alert_type, COUNT(*)
FROM alerts
WHERE end_time IS NULL OR end_time > NOW()
GROUP BY alert_type;
-- Average delay
SELECT AVG(delay_seconds) / 60 as avg_delay_minutes
FROM trip_updates
WHERE received_at > NOW() - INTERVAL '1 hour';
```
---
## 🎯 Next Steps
Remaining Phase 2 work:
- [ ] WebSocket events for real-time alerts and delays
- [ ] Frontend: alerts component
- [ ] Frontend: punctuality monitor
- [ ] Frontend: working timeline with historical playback
- [ ] Frontend: incident panel
- [ ] Push notifications (optional)
- [ ] Punctuality report exports (optional)
---
## 📝 Technical Notes
### Phase 2 Data Sources
- **GTFS Static**: https://data.renfe.com/dataset/horarios-trenes-largo-recorrido-ave/resource/horarios-trenes-largo-recorrido-ave-gtfs.zip
- **Trip Updates**: https://gtfsrt.renfe.com/trip_updates.pb
- **Service Alerts**: https://gtfsrt.renfe.com/service_alerts.pb
### Frequencies
- **GTFS Static sync**: Daily at 3 AM (configurable via `SYNC_SCHEDULE`)
- **Trip Updates**: Every 30 seconds
- **Service Alerts**: Every 30 seconds
### Data Retention
- **Trip Updates**: 7 days (use the `cleanup_old_trip_updates()` function)
- **Alerts**: Kept until `end_time` + 7 days
- **GTFS Static**: Overwritten on every sync
---
## 📚 Related Documentation
- [Full Architecture](arquitectura-sistema-tracking-trenes.md)
- [Phase 1 - MVP](FASE1-MVP.md)
- [Data Sources](FUENTES_DATOS.md)
- [Main README](README.md)
---
**Status**: Phase 2 - Backend Complete, Frontend Pending
**Date**: 27 November 2025
**Next Phase**: Phase 2 Frontend + Phase 3 (Analytics)

FASE3-ANALYTICS.md Normal file

@@ -0,0 +1,958 @@
# Phase 3: Analytics and Advanced Exploration
## Status: ✅ IMPLEMENTED (Backend)
Phase 3 adds advanced analytics, route exploration, trip planning, and data export capabilities.
---
## ✨ Implemented Features
### Backend
#### Database
- ✅ Migration V6: materialized views for analytics
- ✅ Functions for heatmaps, statistics, and analysis
- ✅ Tables for the export cache
- ✅ System views: traffic_by_hour, traffic_by_route, daily_statistics, route_performance
#### Workers
- ✅ Analytics Refresher: automatic refresh of materialized views every 15 minutes
#### REST API
- ✅ Analytics API (`/analytics`) - Heatmaps, statistics, performance
- ✅ Explorer API (`/explorer`) - Route explorer, trip planner, search
### Frontend
- ⏳ Analytics components (pending)
- ⏳ Traffic heatmap (pending)
- ⏳ Statistics dashboard (pending)
- ⏳ Trip planner UI (pending)
---
## 📁 New Files in Phase 3
### Database
```
database/migrations/
└── V6__analytics_and_statistics.sql   # Analytics views and functions
```
### Backend Workers
```
backend/src/worker/
└── analytics-refresher.js   # Materialized view refresh
```
### Backend API
```
backend/src/api/routes/
├── analytics.js   # Analytics and export endpoints
└── explorer.js    # Route explorer and trip planner
```
---
---
## 🚀 Running with Phase 3
### Using Docker Compose
```bash
# 1. Run migrations (includes V6)
make migrate
# 2. Start all services (includes analytics-refresher)
make start
# The analytics-refresher worker starts automatically
# and refreshes the views every 15 minutes
```
### Local Development
```bash
cd backend
# Terminal 1: Analytics Refresher
npm run dev:analytics
# Terminal 2: API Server
npm run dev
```
---
## 📡 New API Endpoints
### Analytics - Traffic
#### GET /analytics/traffic/heatmap
Get traffic heatmap data.
**Query Parameters:**
- `start_date` (optional): Start date (ISO 8601)
- `end_date` (optional): End date (ISO 8601)
- `grid_size` (optional): Cell size in degrees (default: 0.1 ≈ 11 km)
**Example:**
```bash
# Heatmap for the last 7 days
curl http://localhost:3000/analytics/traffic/heatmap
# Heatmap with a finer grid
curl http://localhost:3000/analytics/traffic/heatmap?grid_size=0.05
# Custom range
curl "http://localhost:3000/analytics/traffic/heatmap?start_date=2025-11-20T00:00:00Z&end_date=2025-11-27T23:59:59Z"
```
**Response:**
```json
[
  {
    "lat": 40.4,
    "lon": -3.7,
    "intensity": 125,
    "avgSpeed": 78.5
  },
  {
    "lat": 41.3,
    "lon": 2.1,
    "intensity": 98,
    "avgSpeed": 65.3
  }
]
```
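A hedged sketch of the kind of aggregation that could back this endpoint, assuming `train_positions` exposes `latitude`/`longitude`/`speed` columns (names assumed) and the default 0.1° grid:
```sql
-- Snap positions to a 0.1-degree grid and aggregate (column names assumed)
SELECT round(latitude / 0.1) * 0.1  AS lat,
       round(longitude / 0.1) * 0.1 AS lon,
       COUNT(*)                     AS intensity,
       AVG(speed)                   AS avg_speed
FROM train_positions
WHERE recorded_at > NOW() - INTERVAL '7 days'
GROUP BY 1, 2
ORDER BY intensity DESC;
```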
#### GET /analytics/traffic/hourly
Get the traffic pattern by hour of day.
**Query Parameters:**
- `days` (optional): Number of days to analyze (default: 7)
**Example:**
```bash
curl http://localhost:3000/analytics/traffic/hourly?days=30
```
**Response:**
```json
[
  {
    "hour_of_day": 0,
    "avg_trains": 15.3,
    "avg_speed": 45.2,
    "total_observations": 45230
  },
  {
    "hour_of_day": 7,
    "avg_trains": 87.5,
    "avg_speed": 68.7,
    "total_observations": 125890
  }
]
```
#### GET /analytics/traffic/by-hour
Get traffic statistics aggregated by hour.
**Query Parameters:**
- `limit` (optional): Number of hours (default: 24)
**Example:**
```bash
curl http://localhost:3000/analytics/traffic/by-hour?limit=48
```
#### GET /analytics/traffic/by-route
Get traffic statistics per route.
**Example:**
```bash
curl http://localhost:3000/analytics/traffic/by-route
```
**Response:**
```json
[
  {
    "route_id": "AVE-MAD-BCN",
    "route_name": "Madrid - Barcelona",
    "route_type": "HIGH_SPEED",
    "total_trains": 234,
    "active_days": 30,
    "avg_speed": 185.4,
    "total_positions": 125890
  }
]
```
---
### Analytics - Statistics
#### GET /analytics/statistics/daily
Get daily system statistics.
**Query Parameters:**
- `days` (optional): Number of days (default: 30)
**Example:**
```bash
curl http://localhost:3000/analytics/statistics/daily?days=90
```
**Response:**
```json
[
  {
    "date": "2025-11-27",
    "unique_trains": 456,
    "total_positions": 65432,
    "avg_speed": 72.3,
    "stopped_count": 12890,
    "moving_count": 52542
  }
]
```
#### GET /analytics/statistics/system
Get the current system status.
**Example:**
```bash
curl http://localhost:3000/analytics/statistics/system
```
**Response:**
```json
{
  "active_trains": 234,
  "active_alerts": 5,
  "delayed_trips": 12,
  "avg_delay_seconds": 450,
  "active_routes": 45,
  "last_update": "2025-11-27T15:30:00Z"
}
```
---
### Analytics - Performance
#### GET /analytics/performance/routes
Get route performance metrics (punctuality, delays).
**Query Parameters:**
- `limit` (optional): Number of routes (default: 20)
**Example:**
```bash
curl http://localhost:3000/analytics/performance/routes?limit=10
```
**Response:**
```json
[
  {
    "route_id": "AVE-MAD-BCN",
    "route_name": "Madrid - Barcelona",
    "total_trips": 345,
    "delayed_trips": 23,
    "on_time_trips": 322,
    "avg_delay_seconds": 180,
    "median_delay_seconds": 120,
    "max_delay_seconds": 1800,
    "punctuality_percentage": 93.33
  }
]
```
#### GET /analytics/performance/route/:routeId
Get detailed statistics for a specific route.
**Query Parameters:**
- `days` (optional): Number of days to analyze (default: 7)
**Example:**
```bash
curl http://localhost:3000/analytics/performance/route/AVE-MAD-BCN?days=30
```
**Response:**
```json
{
  "total_trips": 234,
  "unique_trains": 45,
  "avg_speed": 185.4,
  "max_speed": 298.7,
  "total_distance_km": 98765.4,
  "avg_delay_seconds": 120,
  "on_time_percentage": 92.5
}
```
---
### Analytics - Delays
#### GET /analytics/delays/top-routes
Get the routes with the most delays.
**Example:**
```bash
curl http://localhost:3000/analytics/delays/top-routes
```
**Response:**
```json
[
  {
    "route_id": "MD-MAD-VAL",
    "route_name": "Madrid - Valencia",
    "delayed_count": 45,
    "avg_delay": 600,
    "max_delay": 2400
  }
]
```
---
### Analytics - Stations
#### GET /analytics/stations/busiest
Get the busiest stations.
**Query Parameters:**
- `limit` (optional): Number of stations (default: 20)
**Example:**
```bash
curl http://localhost:3000/analytics/stations/busiest?limit=10
```
**Response:**
```json
[
  {
    "stop_id": "MADRID-PUERTA-DE-ATOCHA",
    "stop_name": "Madrid Puerta de Atocha",
    "daily_trips": 456,
    "routes_count": 34
  }
]
```
#### GET /analytics/stations/:stationId/statistics
Get statistics for a specific station.
**Query Parameters:**
- `days` (optional): Number of days (default: 7)
**Example:**
```bash
curl http://localhost:3000/analytics/stations/MADRID-PUERTA-DE-ATOCHA/statistics?days=30
```
**Response:**
```json
{
  "total_departures": 3450,
  "total_arrivals": 3420,
  "unique_routes": 45,
  "avg_delay_minutes": 2.5,
  "busiest_hour": 8
}
```
---
### Analytics - Trains
#### GET /analytics/trains/:trainId/distance
Compute the distance traveled by a train.
**Query Parameters:**
- `start_time` (optional): Start timestamp
- `end_time` (optional): End timestamp
**Example:**
```bash
curl "http://localhost:3000/analytics/trains/12345/distance?start_time=2025-11-27T00:00:00Z&end_time=2025-11-27T23:59:59Z"
```
**Response:**
```json
{
  "train_id": "12345",
  "start_time": "2025-11-27T00:00:00Z",
  "end_time": "2025-11-27T23:59:59Z",
  "distance_km": 1234.56
}
```
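A hedged PostGIS sketch of the distance computation, summing geodesic segments between consecutive fixes; the `train_id`/`latitude`/`longitude` column names are assumed:
```sql
-- Sum great-circle distances between consecutive positions of one train (schema assumed)
WITH ordered AS (
  SELECT ST_SetSRID(ST_MakePoint(longitude, latitude), 4326)::geography AS pt,
         LAG(ST_SetSRID(ST_MakePoint(longitude, latitude), 4326)::geography)
           OVER (ORDER BY recorded_at) AS prev_pt
  FROM train_positions
  WHERE train_id = '12345'
    AND recorded_at BETWEEN '2025-11-27T00:00:00Z' AND '2025-11-27T23:59:59Z'
)
SELECT SUM(ST_Distance(pt, prev_pt)) / 1000.0 AS distance_km
FROM ordered
WHERE prev_pt IS NOT NULL;
```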
---
### Analytics - Export
#### GET /analytics/export
Export data in several formats.
**Query Parameters:**
- `table` (required): Table to export
- `format` (optional): json, csv, geojson (default: json)
- `start_date` (optional): Start date
- `end_date` (optional): End date
- `limit` (optional): Record limit (default: 1000)
**Allowed tables:**
- train_positions
- trains
- routes
- stations
- alerts
- trip_updates
- traffic_by_hour
- daily_statistics
**JSON example:**
```bash
curl "http://localhost:3000/analytics/export?table=trains&format=json"
```
**CSV example:**
```bash
curl "http://localhost:3000/analytics/export?table=train_positions&format=csv&limit=5000" > positions.csv
```
**GeoJSON example:**
```bash
curl "http://localhost:3000/analytics/export?table=stations&format=geojson" > stations.geojson
```
---
### Analytics - Refresh
#### POST /analytics/refresh
Manually refresh the materialized views.
**Example:**
```bash
curl -X POST http://localhost:3000/analytics/refresh
```
**Response:**
```json
{
  "success": true,
  "message": "Analytics views refreshed successfully",
  "timestamp": "2025-11-27T15:30:00Z"
}
```
---
## 📍 Explorer - Route Explorer
### GET /explorer/routes/:routeId
Get complete information for a route (trips, stops, shape).
**Example:**
```bash
curl http://localhost:3000/explorer/routes/AVE-MAD-BCN
```
**Response:**
```json
{
  "route": {
    "route_id": "AVE-MAD-BCN",
    "route_name": "Madrid - Barcelona",
    "route_type": "HIGH_SPEED"
  },
  "trips": [
    {
      "trip_id": "trip_001",
      "trip_headsign": "Barcelona Sants",
      "direction_id": 0
    }
  ],
  "stops": [
    {
      "stop_id": "MADRID-PUERTA-DE-ATOCHA",
      "stop_name": "Madrid Puerta de Atocha",
      "stop_lat": 40.4067,
      "stop_lon": -3.6906
    }
  ],
  "shape": {
    "shape_id": "shape_001",
    "points": [
      {
        "lat": 40.4067,
        "lon": -3.6906,
        "sequence": 1,
        "distance": 0
      }
    ]
  },
  "total_trips": 45,
  "total_stops": 8
}
```
### GET /explorer/trips/:tripId/schedule
Get the full schedule of a trip.
**Example:**
```bash
curl http://localhost:3000/explorer/trips/trip_12345/schedule
```
**Response:**
```json
[
  {
    "stop_id": "MADRID-PUERTA-DE-ATOCHA",
    "stop_name": "Madrid Puerta de Atocha",
    "arrival_time": "08:00:00",
    "departure_time": "08:00:00",
    "stop_sequence": 1
  },
  {
    "stop_id": "BARCELONA-SANTS",
    "stop_name": "Barcelona Sants",
    "arrival_time": "10:45:00",
    "departure_time": "10:45:00",
    "stop_sequence": 3
  }
]
```
---
## 🚉 Explorer - Stations
### GET /explorer/stations/:stationId
Get complete information for a station.
**Example:**
```bash
curl http://localhost:3000/explorer/stations/MADRID-PUERTA-DE-ATOCHA
```
**Response:**
```json
{
  "station": {
    "stop_id": "MADRID-PUERTA-DE-ATOCHA",
    "stop_name": "Madrid Puerta de Atocha",
    "stop_lat": 40.4067,
    "stop_lon": -3.6906
  },
  "next_departures": [
    {
      "trip_id": "trip_001",
      "route_id": "AVE-MAD-BCN",
      "route_name": "Madrid - Barcelona",
      "headsign": "Barcelona Sants",
      "scheduled_departure": "15:00:00",
      "estimated_delay": 180,
      "status": "DELAYED"
    }
  ],
  "routes": [
    {
      "route_id": "AVE-MAD-BCN",
      "route_name": "Madrid - Barcelona"
    }
  ],
  "statistics": {
    "total_departures": 456,
    "total_arrivals": 450,
    "unique_routes": 34,
    "avg_delay_minutes": 2.5,
    "busiest_hour": 8
  }
}
```
### GET /explorer/stations/:stationId/nearby
Get nearby stations.
**Query Parameters:**
- `radius` (optional): Radius in km (default: 5)
**Example:**
```bash
curl "http://localhost:3000/explorer/stations/MADRID-PUERTA-DE-ATOCHA/nearby?radius=10"
```
**Response:**
```json
[
  {
    "stop_id": "MADRID-CHAMARTIN",
    "stop_name": "Madrid Chamartín",
    "stop_lat": 40.4728,
    "stop_lon": -3.6797,
    "distance_km": 7.8
  }
]
```
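A hedged PostGIS sketch of the nearby lookup, reusing the `stop_lat`/`stop_lon` columns shown above and a geography cast for metric distances; the `stations` table layout is assumed:
```sql
-- Stations within 10 km of a reference station (schema details assumed)
WITH ref AS (
  SELECT ST_SetSRID(ST_MakePoint(stop_lon, stop_lat), 4326)::geography AS g
  FROM stations
  WHERE stop_id = 'MADRID-PUERTA-DE-ATOCHA'
)
SELECT s.stop_id,
       s.stop_name,
       ST_Distance(ST_SetSRID(ST_MakePoint(s.stop_lon, s.stop_lat), 4326)::geography, ref.g) / 1000.0 AS distance_km
FROM stations s, ref
WHERE s.stop_id <> 'MADRID-PUERTA-DE-ATOCHA'
  AND ST_DWithin(ST_SetSRID(ST_MakePoint(s.stop_lon, s.stop_lat), 4326)::geography, ref.g, 10000)
ORDER BY distance_km;
```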
---
## 🗺️ Explorer - Trip Planner
### GET /explorer/planner
Plan trips between two stations.
**Query Parameters:**
- `origin` (required): Origin station ID
- `destination` (required): Destination station ID
- `time` (optional): Departure time (HH:MM:SS)
- `date` (optional): Date (YYYY-MM-DD)
**Example:**
```bash
curl "http://localhost:3000/explorer/planner?origin=MADRID-PUERTA-DE-ATOCHA&destination=BARCELONA-SANTS&time=08:00:00"
```
**Response:**
```json
{
  "origin": "MADRID-PUERTA-DE-ATOCHA",
  "destination": "BARCELONA-SANTS",
  "requested_time": "08:00:00",
  "requested_date": "today",
  "direct_trips": [
    {
      "trip_id": "trip_001",
      "route_id": "AVE-MAD-BCN",
      "route_name": "Madrid - Barcelona",
      "trip_headsign": "Barcelona Sants",
      "origin_departure": "08:00:00",
      "destination_arrival": "10:45:00",
      "duration_minutes": 165,
      "delay": {
        "delay_seconds": 180,
        "schedule_relationship": "SCHEDULED"
      }
    }
  ],
  "trips_with_transfer": [
    {
      "trip1_id": "trip_002",
      "route1_name": "Madrid - Zaragoza",
      "trip2_id": "trip_003",
      "route2_name": "Zaragoza - Barcelona",
      "origin_departure": "08:30:00",
      "transfer_arrival": "09:50:00",
      "transfer_departure": "10:10:00",
      "destination_arrival": "11:30:00",
      "transfer_station": "ZARAGOZA-DELICIAS",
      "transfer_station_name": "Zaragoza Delicias",
      "total_duration_minutes": 180
    }
  ],
  "total_options": 3
}
```
### GET /explorer/routes/between
Find routes connecting two stations.
**Query Parameters:**
- `origin` (required): Origin station ID
- `destination` (required): Destination station ID
**Example:**
```bash
curl "http://localhost:3000/explorer/routes/between?origin=MADRID-PUERTA-DE-ATOCHA&destination=BARCELONA-SANTS"
```
**Response:**
```json
[
  {
    "route_id": "AVE-MAD-BCN",
    "route_name": "Madrid - Barcelona",
    "route_type": "HIGH_SPEED",
    "route_color": "FF6600",
    "daily_trips": 25
  }
]
```
---
## 🔍 Explorer - Search
### GET /explorer/search
Buscar estaciones por nombre.
**Query Parameters:**
- `query` (requerido): Término de búsqueda (mínimo 2 caracteres)
- `limit` (opcional): Número de resultados (default: 10)
**Ejemplo:**
```bash
curl "http://localhost:3000/explorer/search?query=madrid&limit=5"
```
**Respuesta:**
```json
[
{
"stop_id": "MADRID-PUERTA-DE-ATOCHA",
"stop_name": "Madrid Puerta de Atocha",
"stop_lat": 40.4067,
"stop_lon": -3.6906,
"location_type": 1,
"parent_station": null
},
{
"stop_id": "MADRID-CHAMARTIN",
"stop_name": "Madrid Chamartín",
"stop_lat": 40.4728,
"stop_lon": -3.6797,
"location_type": 1,
"parent_station": null
}
]
```
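La búsqueda admite una implementación sencilla con `ILIKE` parametrizado (insensible a mayúsculas). Esbozo hipotético; los nombres de columnas siguen la convención GTFS del proyecto:
```javascript
// Esbozo: búsqueda de estaciones por nombre. El comodín '%' se concatena
// dentro del SQL para que la consulta siga siendo parametrizada.
import db from '../../lib/db.js';

async function searchStations(query, limit = 10) {
  if (!query || query.trim().length < 2) return []; // mínimo 2 caracteres
  const { rows } = await db.query(
    `SELECT stop_id, stop_name, stop_lat, stop_lon, location_type, parent_station
       FROM stations
      WHERE stop_name ILIKE '%' || $1 || '%'
      ORDER BY stop_name
      LIMIT $2`,
    [query.trim(), parseInt(limit, 10)]
  );
  return rows;
}
```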
---
## 🗄️ Vistas Materializadas
### traffic_by_hour
Tráfico agregado por hora (últimos 30 días).
**Columnas:**
- hour: Hora (timestamp)
- active_trains: Número de trenes activos
- total_positions: Total de posiciones registradas
- avg_speed: Velocidad promedio
- median_speed: Velocidad mediana
- max_speed: Velocidad máxima
### traffic_by_route
Tráfico por ruta (últimos 30 días).
**Columnas:**
- route_id, route_name, route_type
- total_trains: Total de trenes diferentes
- active_days: Días con actividad
- avg_speed: Velocidad promedio
- total_positions: Posiciones registradas
### daily_statistics
Estadísticas diarias del sistema (últimos 90 días).
**Columnas:**
- date: Fecha
- unique_trains: Trenes únicos
- total_positions: Posiciones totales
- avg_speed: Velocidad promedio
- stopped_count: Posiciones paradas
- moving_count: Posiciones en movimiento
### route_performance
Rendimiento y puntualidad por ruta (últimos 30 días).
**Columnas:**
- route_id, route_name
- total_trips: Total de viajes
- delayed_trips: Viajes retrasados
- on_time_trips: Viajes puntuales
- avg_delay_seconds: Retraso promedio
- median_delay_seconds: Retraso mediano
- max_delay_seconds: Retraso máximo
- punctuality_percentage: % de puntualidad
**Refresco:** cada 15 minutos, de forma automática, mediante el worker `analytics-refresher`.
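A modo ilustrativo, ese refresco puede implementarse con `node-cron` (ya incluido en las dependencias del backend). Esbozo mínimo, no necesariamente idéntico al worker real:
```javascript
// Esbozo del analytics-refresher: refresca las vistas materializadas cada 15 minutos.
import cron from 'node-cron';
import db from '../lib/db.js';
import logger from '../lib/logger.js';

// Lista fija: al interpolar nombres de vistas en el SQL no entra input del usuario.
const VIEWS = ['traffic_by_hour', 'traffic_by_route', 'daily_statistics', 'route_performance'];

cron.schedule('*/15 * * * *', async () => {
  try {
    logger.info('Starting analytics views refresh...');
    for (const view of VIEWS) {
      // CONCURRENTLY evita bloquear lecturas (requiere un índice UNIQUE en la vista).
      await db.query(`REFRESH MATERIALIZED VIEW CONCURRENTLY ${view}`);
    }
    logger.info('Analytics views refreshed successfully');
  } catch (err) {
    logger.error({ err }, 'Analytics views refresh failed');
  }
});
```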
---
## 🧪 Probar la Fase 3
### 1. Verificar Analytics Refresher
```bash
# Ver logs del worker
docker-compose logs -f analytics-refresher
# Deberías ver cada 15 minutos:
# "Starting analytics views refresh..."
# "Analytics views refreshed successfully"
```
### 2. Probar Analytics API
```bash
# Sistema general
curl http://localhost:3000/analytics/statistics/system | jq
# Heatmap
curl http://localhost:3000/analytics/traffic/heatmap | jq
# Performance de rutas
curl http://localhost:3000/analytics/performance/routes | jq
# Rutas más retrasadas
curl http://localhost:3000/analytics/delays/top-routes | jq
# Estaciones más transitadas
curl http://localhost:3000/analytics/stations/busiest | jq
```
### 3. Probar Explorer API
```bash
# Buscar estación
curl "http://localhost:3000/explorer/search?query=madrid" | jq
# Info de estación
curl http://localhost:3000/explorer/stations/MADRID-PUERTA-DE-ATOCHA | jq
# Info de ruta
curl http://localhost:3000/explorer/routes/AVE-MAD-BCN | jq
# Planificar viaje
curl "http://localhost:3000/explorer/planner?origin=MADRID-PUERTA-DE-ATOCHA&destination=BARCELONA-SANTS" | jq
```
### 4. Probar Exportación
```bash
# Exportar a CSV
curl "http://localhost:3000/analytics/export?table=routes&format=csv" > routes.csv
# Exportar a GeoJSON
curl "http://localhost:3000/analytics/export?table=stations&format=geojson" > stations.geojson
# Exportar posiciones filtradas
curl "http://localhost:3000/analytics/export?table=train_positions&start_date=2025-11-27T00:00:00Z&limit=1000" > positions.json
```
---
## 📊 Use Cases
### Dashboard de Control
```bash
# Obtener datos para dashboard
curl http://localhost:3000/analytics/statistics/system
curl http://localhost:3000/analytics/traffic/by-route
curl http://localhost:3000/analytics/delays/top-routes
curl http://localhost:3000/analytics/stations/busiest?limit=5
```
### Análisis de Rendimiento
```bash
# Análisis de una ruta específica
curl http://localhost:3000/analytics/performance/route/AVE-MAD-BCN?days=30
# Estadísticas diarias últimos 30 días
curl http://localhost:3000/analytics/statistics/daily?days=30
# Distancia recorrida por tren
curl "http://localhost:3000/analytics/trains/12345/distance?start_time=2025-11-01T00:00:00Z"
```
### Planificación de Viajes
```bash
# Buscar estación
curl "http://localhost:3000/explorer/search?query=barcelona"
# Ver info completa de estación
curl http://localhost:3000/explorer/stations/BARCELONA-SANTS
# Planificar viaje
curl "http://localhost:3000/explorer/planner?origin=MADRID-PUERTA-DE-ATOCHA&destination=BARCELONA-SANTS&time=08:00:00"
# Ver rutas que conectan dos puntos
curl "http://localhost:3000/explorer/routes/between?origin=MADRID-PUERTA-DE-ATOCHA&destination=BARCELONA-SANTS"
```
---
## 🐛 Troubleshooting de la Fase 3
### Las vistas materializadas están vacías
**Causa**: No hay suficientes datos históricos o no se han refrescado.
**Solución**:
```bash
# Refrescar manualmente
curl -X POST http://localhost:3000/analytics/refresh
# Verificar en PostgreSQL
make psql
> SELECT COUNT(*) FROM traffic_by_hour;
> SELECT COUNT(*) FROM traffic_by_route;
```
### El planificador no encuentra viajes
**Causa**: Datos GTFS Static no cargados o formato de parámetros incorrecto.
**Solución**:
```bash
# Verificar que hay trips y stop_times
make psql
> SELECT COUNT(*) FROM trips;
> SELECT COUNT(*) FROM stop_times;
# Verificar formato de parámetros
curl "http://localhost:3000/explorer/planner?origin=STOP_ID_ORIGIN&destination=STOP_ID_DEST"
```
### Export devuelve error
**Causa**: Tabla no permitida o parámetros inválidos.
**Solución**:
```bash
# Ver tablas permitidas
curl "http://localhost:3000/analytics/export" | jq '.allowed_tables'
# Usar tabla válida
curl "http://localhost:3000/analytics/export?table=routes&format=json"
```
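La validación del endpoint se apoya en una lista blanca de tablas; un esbozo de esa comprobación (hipotético, los nombres exactos pueden variar):
```javascript
// Esbozo: solo las tablas de la lista blanca pueden exportarse.
const ALLOWED_TABLES = ['routes', 'stations', 'trains', 'train_positions', 'alerts'];

function validateExportTable(table) {
  if (!ALLOWED_TABLES.includes(table)) {
    const error = new Error(`Table not allowed: ${table}`);
    error.status = 400;
    error.allowed_tables = ALLOWED_TABLES; // se devuelve al cliente como ayuda
    throw error;
  }
  return table; // a partir de aquí es seguro interpolarla en la consulta
}
```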
---
## 📝 Próximos Pasos
Funcionalidades pendientes de la Fase 3:
- [ ] Frontend: Heatmap de tráfico en mapa
- [ ] Frontend: Dashboard de estadísticas con gráficos
- [ ] Frontend: UI del planificador de viajes
- [ ] Frontend: Explorador de rutas interactivo
- [ ] WebSocket para updates de analytics en tiempo real
- [ ] Overlay de infraestructura ADIF (WMS)
- [ ] Predicciones ML (Fase 4)
---
## 📚 Documentación Relacionada
- [Arquitectura Completa](arquitectura-sistema-tracking-trenes.md)
- [Fase 1 - MVP](FASE1-MVP.md)
- [Fase 2 - Enriquecimiento](FASE2-ENRIQUECIMIENTO.md)
- [Fuentes de Datos](FUENTES_DATOS.md)
- [README Principal](README.md)
---
**Estado**: Fase 3 - Backend Completo, Frontend Pendiente
**Fecha**: 27 noviembre 2025
**Próxima Fase**: Frontend Phase 3 + Fase 4 (ML y Predicciones)

395
FUENTES_DATOS.md Normal file
View File

@@ -0,0 +1,395 @@
# Fuentes de Datos Abiertas para el Sistema de Tracking de Trenes
## Resumen
Este documento recopila todas las fuentes de datos abiertas identificadas para alimentar el sistema de tracking de trenes en España, incluyendo estaciones, rutas, horarios e información en tiempo real.
---
## 1. Renfe Data (Oficial)
### Portal Principal
- **URL**: https://data.renfe.com/
- **Tipo**: Portal de datos abiertos oficial de Renfe
- **Actualización**: Continua
- **Formatos**: GTFS, JSON, CSV, XML
### Datasets Disponibles
#### Estaciones
- **Endpoint**: https://data.renfe.com/dataset
- **Descripción**: Información de todas las estaciones donde opera Renfe
- **Incluye**:
- Estaciones con servicio Atendo (movilidad reducida)
- Coordenadas GPS
- Servicios disponibles
- Accesibilidad
#### Horarios (GTFS Static)
- **Cobertura**: 366 rutas, 769 paradas
- **Vigencia**: 18 marzo 2025 - 15 diciembre 2025
- **Servicios**:
- Alta Velocidad (AVE)
- Larga Distancia
- Media Distancia
- Cercanías y Rodalies
**URLs de descarga**:
- Renfe general: `https://data.renfe.com/dataset?res_format=GTFS/`
- Via datos.gob.es: https://datos.gob.es/es/catalogo
#### Tiempo Real (GTFS-RT)
**1. Posiciones de vehículos (Vehicle Positions)**
- **URL**: https://gtfsrt.renfe.com/vehicle_positions.pb
- **Formato**: Protocol Buffer (GTFS-RT)
- **Servicios**: Cercanías
- **Frecuencia**: Cada 30 segundos
- **Información**:
- Posición GPS (lat/lon)
- Estado (parado, en movimiento)
- Identificadores de tren y viaje
- Velocidad y dirección
**2. Actualizaciones de viaje (Trip Updates)**
- **Cercanías**: https://gtfsrt.renfe.com/trip_updates_cercanias.pb
- Cancelaciones
- Cambios de horario
- Retrasos
- Frecuencia: 30 segundos
- **Alta Velocidad / Larga Distancia / Media Distancia**:
- URL similar (verificar en portal)
- Frecuencia: 30 segundos
**3. Alertas (Service Alerts)**
- **URL**: https://gtfsrt.renfe.com/alerts.pb
- **Información**:
- Incidencias
- Servicios de autobús sustitutorio
- Problemas de vías
- Problemas de accesibilidad
### Integración con otros portales
- **datos.gob.es**: Federado con el portal nacional de datos abiertos
- **European Data Portal**: Accesible desde el portal europeo
- **Total datasets**: 63 conjuntos de datos en 6 formatos diferentes
---
## 2. ADIF (Infraestructura Ferroviaria)
### Portal de Datos Espaciales
- **URL**: https://ideadif.adif.es/
- **Nombre**: IDEADIF (Infraestructura de Datos Espaciales de ADIF)
- **Descripción**: Infraestructura de datos geoespaciales de ADIF
### Servicios WMS (Web Map Service)
- **Especificación**: OGC WMS 1.1.1 y 1.3.0
- **Versión actual**: Julio 2024
- **URL**: https://inspire-geoportal.ec.europa.eu/srv/api/records/191574be-5eca-4315-b4b4-756dc50ac553
- **Normativa**: INSPIRE (Transport Networks Annex I)
### PISERVI - Sistema de Información de Servicios
- **URL**: https://www.adif.es/en/sobre-adif/declaracion-red
- **Descripción**: Acceso a características técnicas de instalaciones
- **Información disponible**:
- Terminales de mercancías
- Estaciones de pasajeros
- Instalaciones de mantenimiento
- Apartaderos particulares
- Cambiadores de ancho de vía
### Datos Disponibles
- **Kilómetros de vía**: 11,689 km (ADIF) + 3,926 km (ADIF-Alta Velocidad)
- **Estaciones**: 1,451 (ADIF) + 46 (ADIF-Alta Velocidad)
- **Mapa interactivo**: Red Ferroviaria de Interés General (RFIG)
- **Búsquedas por**:
- Ubicación geográfica
- Tipo de instalación
- Tipo de servicio
### Declaración de Red
- **URL**: https://www.adif.es/en/sobre-adif/declaracion-red
- **Contenido**:
- Características de la infraestructura
- Condiciones de acceso
- Servicios ofrecidos
- Cánones aplicables
---
## 3. Datos.gob.es (Portal Nacional)
### Portal General
- **URL**: https://datos.gob.es/
- **Búsqueda transporte**: https://datos.gob.es/en/nti-reference/transporte
### Datasets Relevantes
#### Estadística sobre Transporte Ferroviario
- **URL**: https://datos.gob.es/en/catalogo/ea0010587-estadistica-sobre-transporte-ferroviario
- **Organismo**: Ministerio de Transportes
- **Periodicidad**: Mensual/Anual
#### Archivos GTFS
- **URL**: https://datos.gob.es/en/catalogo/l03380010-archivos-gtfs
- **Múltiples operadores**: Renfe, FGC, Metro, etc.
#### FGC - GTFS Realtime
- **URL**: https://datos.gob.es/en/catalogo/a09002970-fgc-actualitzaciones-de-viaje-gtfs_realtime
- **Organismo**: Ferrocarrils de la Generalitat de Catalunya
---
## 4. Portales Regionales
### CRTM (Consorcio Regional de Transportes de Madrid)
- **Plataforma**: ArcGIS Open Data
- **URL**: https://learning.esri.es/caso-de-exito/plataforma-datos-abiertos-del-crtm/
- **Formato base**: GTFS
- **Servicios**:
- Metro de Madrid
- Tren Ligero
- Cercanías Madrid
- Autobuses EMT
### CTB (Consorcio de Transportes de Bizkaia)
- **URL**: https://data.ctb.eus/en/dataset
- **Filtros**: GTFS + Tren + Renfe
- **Servicios**: Cercanías Bilbao, EuskoTren, etc.
---
## 5. Agregadores Internacionales
### Mobility Database (Recomendado)
- **URL**: https://mobilitydatabase.org/
- **Descripción**: Base de datos global de feeds GTFS y GTFS-RT
- **Cobertura España**: Sí
- **Búsqueda por**:
- Ubicación
- Estado (activo/inactivo)
- Características
### OpenMobilityData / TransitFeeds (Deprecado)
- **URL**: https://transitfeeds.com/p/renfe
- **Estado**: Deprecado en diciembre 2025
- **Migración**: Mobility Database
- **Útil para**: Datos históricos hasta 2025
### GTFS.pro
- **URL**: https://gtfs.pro/en/spain
- **Descripción**: Agregador comercial de datos GTFS
- **Servicios España**:
- Madrid (metro, cercanías, autobuses)
- Barcelona (metro, ferrocarril)
- Otras ciudades
### Transport Data France (para AVE Francia-España)
- **URL**: https://transport.data.gouv.fr/datasets/horaires-ave-espagne-france
- **Descripción**: Horarios AVE red europea (España-Francia)
- **Formato**: GTFS
---
## 6. Repositorios GitHub
### API Renfe (No oficial)
- **URL**: https://github.com/ferranpm/renfe
- **Autor**: ferranpm
- **Descripción**: API no oficial para consultar información de Renfe
- **Estado**: Verificar actualización
### GTFS Data Pipeline
- **URL**: https://github.com/CxAalto/gtfs_data_pipeline/blob/master/gtfs-sources.yaml
- **Descripción**: Pipeline de datos con fuentes GTFS documentadas
- **Útil para**: Referencias a múltiples fuentes europeas
---
## 7. Fuentes Adicionales de Interés
### European Data Portal
- **URL**: https://data.europa.eu/
- **Búsqueda**: "transporte ferroviario España"
- **Contenido**: Datos agregados de toda Europa
### INSPIRE Geoportal
- **URL**: https://inspire-geoportal.ec.europa.eu/
- **Normativa**: Directiva INSPIRE
- **Servicios**: WMS/WFS de redes de transporte
### Observatorio del Ferrocarril en España
- **URL**: https://cdn.transportes.gob.es/portal-web-transportes/ferroviario/observatorio/
- **Formato**: PDF (Informes anuales)
- **Contenido**: Estadísticas, análisis de red, evolución
---
## Recomendaciones de Implementación
### Prioridad 1: Datos Esenciales (MVP)
1. **GTFS-RT Posiciones** (ya implementado)
- URL: https://gtfsrt.renfe.com/vehicle_positions.pb
- Polling: 30 segundos
- Almacenar en PostgreSQL + Redis
2. **GTFS Static de Renfe**
- Descargar de: https://data.renfe.com/dataset
- Frecuencia: Semanal (verificar actualizaciones)
- Tablas: routes, stops, trips, stop_times
3. **Estaciones ADIF/Renfe**
- Fuente: data.renfe.com o IDEADIF
- Frecuencia: Mensual
- Enriquecer tabla stations
### Prioridad 2: Datos en Tiempo Real Adicionales
4. **Trip Updates**
- URL: https://gtfsrt.renfe.com/trip_updates_cercanias.pb
- Frecuencia: 30 segundos
- Tabla: alerts (retrasos, cancelaciones)
5. **Service Alerts**
- URL: https://gtfsrt.renfe.com/alerts.pb
- Frecuencia: 30 segundos
- Tabla: alerts (incidencias)
### Prioridad 3: Enriquecimiento de Datos
6. **Infraestructura ADIF**
- Fuente: IDEADIF WMS
- Uso: Visualización de vías, topología
- Formato: GeoJSON/WMS overlay (ver esbozo más abajo)
7. **Datos Regionales**
- CRTM (Madrid)
- CTB (Bilbao)
- Otros consorcios
- Integrar según demanda
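Como referencia para el overlay del punto 6, una capa WMS se añade en Leaflet con `L.tileLayer.wms`. Esbozo (la URL del servicio y el nombre de capa son hipotéticos y deben verificarse en el geoportal de IDEADIF):
```javascript
// Esbozo: overlay WMS de ADIF sobre el mapa Leaflet existente.
import L from 'leaflet';

const adifWms = L.tileLayer.wms('https://ideadif.adif.es/gservices/wms', { // URL hipotética
  layers: 'TN.RailTransportNetwork.RailwayLink', // nombre de capa hipotético (INSPIRE)
  format: 'image/png',
  transparent: true, // deja ver el mapa base OpenStreetMap debajo
  attribution: 'Infraestructura: ADIF (IDEADIF)',
});

// 'map' sería la instancia L.Map del componente TrainMap.
adifWms.addTo(map);
```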
### Prioridad 4: Análisis y Estadísticas
8. **Estadísticas Ministerio**
- Fuente: datos.gob.es
- Frecuencia: Mensual/Anual
- Uso: Dashboards, métricas
9. **Observatorio Ferroviario**
- Fuente: Informes PDF
- Uso: Contexto, KPIs benchmark
---
## Plan de Actualización Automática
### Script de Sincronización (Sugerido)
```javascript
// Pseudocódigo para worker de sincronización
const syncJobs = [
{
name: 'GTFS Static Renfe',
url: 'https://data.renfe.com/api/gtfs/latest',
frequency: '0 0 * * 0', // Domingo a medianoche
parser: parseGTFSZip,
tables: ['routes', 'stops', 'trips', 'stop_times']
},
{
name: 'Estaciones Renfe',
url: 'https://data.renfe.com/api/stations',
frequency: '0 0 1 * *', // Primero de mes
parser: parseStations,
tables: ['stations']
},
{
name: 'GTFS-RT Vehicle Positions',
url: 'https://gtfsrt.renfe.com/vehicle_positions.pb',
frequency: '*/30 * * * * *', // Cada 30 segundos
parser: parseGTFSRT,
tables: ['train_positions']
}
];
```
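El pseudocódigo anterior puede materializarse con `node-cron`, ya presente en `backend/package.json` (acepta el campo opcional de segundos que usa el job GTFS-RT). Esbozo orientativo; se asume que cada `parser` recibe la URL y las tablas destino:
```javascript
// Esbozo: registrar los syncJobs anteriores con node-cron.
import cron from 'node-cron';
import logger from '../lib/logger.js';

function scheduleSyncJobs(syncJobs) {
  for (const job of syncJobs) {
    cron.schedule(job.frequency, async () => {
      try {
        logger.info({ job: job.name }, 'Sync started');
        await job.parser(job.url, job.tables); // firma asumida
        logger.info({ job: job.name }, 'Sync finished');
      } catch (err) {
        // Un job fallido no debe tumbar el worker: se registra y se reintenta en el siguiente ciclo.
        logger.error({ err, job: job.name }, 'Sync failed');
      }
    });
  }
}
```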
### Monitorización de Cambios
- **Checksums**: Calcular hash de archivos descargados
- **Versiones**: Detectar cambios en GTFS feeds
- **Logs**: Registrar actualizaciones exitosas/fallidas
- **Alertas**: Notificar si falla alguna sincronización
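Para la detección de cambios por checksum basta la librería estándar de Node. Esbozo:
```javascript
// Esbozo: SHA-256 del feed descargado; si coincide con el hash de la
// ejecución anterior, el archivo no cambió y puede omitirse el re-procesado.
import crypto from 'node:crypto';

async function feedChecksum(url) {
  const response = await fetch(url); // fetch global de Node 20+
  if (!response.ok) throw new Error(`HTTP ${response.status} al descargar ${url}`);
  const buffer = Buffer.from(await response.arrayBuffer());
  return crypto.createHash('sha256').update(buffer).digest('hex');
}
```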
---
## APIs Recomendadas para Integración
### A Implementar en el Backend
```javascript
// Endpoints sugeridos
// 1. Actualizar catálogo GTFS Static
POST /api/admin/sync/gtfs-static
// 2. Actualizar estaciones
POST /api/admin/sync/stations
// 3. Estado de sincronización
GET /api/admin/sync/status
// 4. Información de fuentes
GET /api/data-sources
```
---
## Notas Legales y de Uso
### Licencias
- **Renfe Data**: Datos abiertos, verificar condiciones de uso en portal
- **ADIF**: Datos públicos bajo normativa INSPIRE
- **Datos.gob.es**: Datos abiertos del sector público español
### Atribución
Incluir en la aplicación:
```
Datos de trenes en tiempo real proporcionados por Renfe
Información de infraestructura proporcionada por ADIF
Fuente: data.renfe.com | ideadif.adif.es
```
### Rate Limiting
- GTFS-RT: Respetar frecuencia de 30 segundos mínimo
- GTFS Static: No hacer polling agresivo, descargar una vez al día
- WMS: Cachear tiles, no hacer requests excesivos
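Para respetar el intervalo mínimo conviene encadenar `setTimeout` al terminar cada petición, en lugar de usar `setInterval`, que puede solapar peticiones si una respuesta tarda. Esbozo:
```javascript
// Esbozo: bucle de polling que garantiza como mínimo 30 s entre peticiones.
const POLL_INTERVAL_MS = 30_000;

async function pollLoop(url, handle) {
  try {
    const response = await fetch(url);
    if (response.ok) {
      await handle(Buffer.from(await response.arrayBuffer()));
    }
  } catch (err) {
    console.error('Polling falló; se reintenta en el siguiente ciclo:', err.message);
  } finally {
    // El siguiente ciclo se programa solo al acabar el actual:
    // nunca hay dos peticiones en vuelo contra el feed.
    setTimeout(() => pollLoop(url, handle), POLL_INTERVAL_MS);
  }
}
```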
---
## Próximos Pasos
1. ✅ Implementar polling GTFS-RT Vehicle Positions (YA HECHO)
2. ⬜ Descargar e importar GTFS Static inicial
3. ⬜ Crear worker de sincronización semanal GTFS Static
4. ⬜ Implementar Trip Updates y Alerts
5. ⬜ Integrar datos de estaciones desde Renfe Data
6. ⬜ (Opcional) Overlay WMS de ADIF para visualización de vías
7. ⬜ (Futuro) Integración con datos regionales según demanda
---
## Referencias
- **GTFS Specification**: https://gtfs.org/
- **GTFS Realtime**: https://gtfs.org/documentation/realtime/
- **Renfe Data**: https://data.renfe.com/
- **ADIF IDEADIF**: https://ideadif.adif.es/
- **Datos.gob.es**: https://datos.gob.es/
- **Mobility Database**: https://mobilitydatabase.org/
---
*Documento actualizado: 27 noviembre 2025*

263
Makefile Normal file
View File

@@ -0,0 +1,263 @@
# ============================================
# Makefile para Sistema de Tracking de Trenes
# ============================================
.PHONY: help start stop restart logs logs-api logs-worker logs-db status clean reset build \
	migrate migrate-info migrate-validate psql redis-cli debug-start dev-start \
	test-start test-stop test-restart test-logs test-logs-api test-logs-worker test-status \
	test-clean test-reset test-migrate test-psql test-redis-cli \
	check-env check-test-env backup-db restore-db cleanup-old-data create-partition
# Variables
DOCKER_COMPOSE := docker-compose
ENV_FILE := .env
ENV_TEST_FILE := .env.testing
# Colores para output
COLOR_RESET := \033[0m
COLOR_INFO := \033[0;36m
COLOR_SUCCESS := \033[0;32m
COLOR_WARNING := \033[0;33m
COLOR_ERROR := \033[0;31m
# ============================================
# Ayuda por defecto
# ============================================
help: ## Mostrar esta ayuda
@echo "$(COLOR_INFO)Sistema de Tracking de Trenes - Comandos disponibles:$(COLOR_RESET)"
@echo ""
@echo "$(COLOR_SUCCESS)Entorno de Producción:$(COLOR_RESET)"
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
grep -v "test-" | \
awk 'BEGIN {FS = ":.*?## "}; {printf " $(COLOR_INFO)%-20s$(COLOR_RESET) %s\n", $$1, $$2}'
@echo ""
@echo "$(COLOR_WARNING)Entorno de Testing:$(COLOR_RESET)"
@grep -E '^test-[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \
awk 'BEGIN {FS = ":.*?## "}; {printf " $(COLOR_INFO)%-20s$(COLOR_RESET) %s\n", $$1, $$2}'
@echo ""
# ============================================
# Comandos de Producción
# ============================================
start: ## Iniciar servicios en producción
@echo "$(COLOR_INFO)Iniciando servicios en modo producción...$(COLOR_RESET)"
@if [ ! -f $(ENV_FILE) ]; then \
echo "$(COLOR_WARNING)Archivo $(ENV_FILE) no encontrado, copiando desde .env.example$(COLOR_RESET)"; \
cp .env.example $(ENV_FILE); \
echo "$(COLOR_WARNING)Por favor, edita $(ENV_FILE) con tus credenciales$(COLOR_RESET)"; \
exit 1; \
fi
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) up -d
@echo "$(COLOR_SUCCESS)✓ Servicios iniciados$(COLOR_RESET)"
@echo ""
@echo "$(COLOR_INFO)Acceso a servicios:$(COLOR_RESET)"
@echo " - Aplicación Web: http://localhost"
@echo " - API: http://localhost/api"
@echo ""
@echo "$(COLOR_INFO)Para ver logs: make logs$(COLOR_RESET)"
stop: ## Detener servicios
@echo "$(COLOR_INFO)Deteniendo servicios...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) down
@echo "$(COLOR_SUCCESS)✓ Servicios detenidos$(COLOR_RESET)"
restart: ## Reiniciar servicios
@echo "$(COLOR_INFO)Reiniciando servicios...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) restart
@echo "$(COLOR_SUCCESS)✓ Servicios reiniciados$(COLOR_RESET)"
logs: ## Ver logs de todos los servicios
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) logs -f
logs-api: ## Ver logs del API
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) logs -f api
logs-worker: ## Ver logs del worker
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) logs -f worker
logs-db: ## Ver logs de PostgreSQL
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) logs -f postgres
status: ## Ver estado de los servicios
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) ps
clean: ## Detener y eliminar volúmenes (¡CUIDADO! Elimina datos)
@echo "$(COLOR_WARNING)¡ADVERTENCIA! Esto eliminará todos los datos.$(COLOR_RESET)"
@printf "¿Continuar? [y/N]: "; read confirm; \
if [ "$$confirm" = "y" ] || [ "$$confirm" = "Y" ]; then \
echo "$(COLOR_INFO)Limpiando servicios y volúmenes...$(COLOR_RESET)"; \
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) down -v; \
echo "$(COLOR_SUCCESS)✓ Limpieza completada$(COLOR_RESET)"; \
else \
echo "$(COLOR_INFO)Operación cancelada$(COLOR_RESET)"; \
fi
reset: clean start ## Reset completo (clean + start)
build: ## Construir imágenes Docker
@echo "$(COLOR_INFO)Construyendo imágenes Docker...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) build
@echo "$(COLOR_SUCCESS)✓ Imágenes construidas$(COLOR_RESET)"
migrate: ## Ejecutar migraciones de base de datos
@echo "$(COLOR_INFO)Ejecutando migraciones...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) --profile migration up flyway
@echo "$(COLOR_SUCCESS)✓ Migraciones completadas$(COLOR_RESET)"
migrate-info: ## Ver información de migraciones
@echo "$(COLOR_INFO)Información de migraciones:$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) --profile migration run --rm flyway info
migrate-validate: ## Validar migraciones
@echo "$(COLOR_INFO)Validando migraciones...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) --profile migration run --rm flyway validate
@echo "$(COLOR_SUCCESS)✓ Migraciones válidas$(COLOR_RESET)"
psql: ## Conectar a PostgreSQL
@echo "$(COLOR_INFO)Conectando a PostgreSQL...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec postgres psql -U trenes_user -d trenes_db
redis-cli: ## Conectar a Redis
@echo "$(COLOR_INFO)Conectando a Redis...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec redis redis-cli
debug-start: ## Iniciar con herramientas de debug (Adminer, Redis Commander)
@echo "$(COLOR_INFO)Iniciando con herramientas de debug...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) --profile debug up -d
@echo "$(COLOR_SUCCESS)✓ Servicios iniciados con debug$(COLOR_RESET)"
@echo ""
@echo "$(COLOR_INFO)Herramientas de debug:$(COLOR_RESET)"
@echo " - Adminer (PostgreSQL): http://localhost:8080"
@echo " - Redis Commander: http://localhost:8081"
# ============================================
# Comandos de Testing
# ============================================
test-start: ## Iniciar servicios en modo testing
@echo "$(COLOR_WARNING)Iniciando servicios en modo TESTING...$(COLOR_RESET)"
@if [ ! -f $(ENV_TEST_FILE) ]; then \
echo "$(COLOR_ERROR)Archivo $(ENV_TEST_FILE) no encontrado$(COLOR_RESET)"; \
exit 1; \
fi
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) --profile debug up -d
@echo "$(COLOR_SUCCESS)✓ Servicios de testing iniciados$(COLOR_RESET)"
@echo ""
@echo "$(COLOR_INFO)Acceso a servicios de testing:$(COLOR_RESET)"
@echo " - Aplicación Web: http://localhost"
@echo " - API: http://localhost/api"
@echo " - Adminer (PostgreSQL):http://localhost:8080"
@echo " - Redis Commander: http://localhost:8081"
@echo ""
@echo "$(COLOR_INFO)Para ver logs: make test-logs$(COLOR_RESET)"
test-stop: ## Detener servicios de testing
@echo "$(COLOR_INFO)Deteniendo servicios de testing...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) down
@echo "$(COLOR_SUCCESS)✓ Servicios de testing detenidos$(COLOR_RESET)"
test-restart: ## Reiniciar servicios de testing
@echo "$(COLOR_INFO)Reiniciando servicios de testing...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) restart
@echo "$(COLOR_SUCCESS)✓ Servicios de testing reiniciados$(COLOR_RESET)"
test-logs: ## Ver logs de testing
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) logs -f
test-logs-api: ## Ver logs del API (testing)
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) logs -f api
test-logs-worker: ## Ver logs del worker (testing)
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) logs -f worker
test-status: ## Ver estado de servicios de testing
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) ps
test-clean: ## Limpiar entorno de testing
@echo "$(COLOR_WARNING)Limpiando entorno de testing...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) down -v
@echo "$(COLOR_SUCCESS)✓ Entorno de testing limpiado$(COLOR_RESET)"
test-reset: test-clean test-start ## Reset completo de testing
test-migrate: ## Ejecutar migraciones en testing
@echo "$(COLOR_INFO)Ejecutando migraciones en testing...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) --profile migration up flyway
@echo "$(COLOR_SUCCESS)✓ Migraciones de testing completadas$(COLOR_RESET)"
test-psql: ## Conectar a PostgreSQL (testing)
@echo "$(COLOR_INFO)Conectando a PostgreSQL (testing)...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) exec postgres psql -U trenes_user -d trenes_db
test-redis-cli: ## Conectar a Redis (testing)
@echo "$(COLOR_INFO)Conectando a Redis (testing)...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_TEST_FILE) exec redis redis-cli
# ============================================
# Comandos de Desarrollo
# ============================================
dev-start: ## Iniciar en modo desarrollo (con hot-reload)
@echo "$(COLOR_INFO)Iniciando en modo desarrollo...$(COLOR_RESET)"
@echo "$(COLOR_WARNING)Asegúrate de tener node_modules instalados localmente$(COLOR_RESET)"
@echo ""
@echo "$(COLOR_INFO)Backend:$(COLOR_RESET) cd backend && npm install && npm run dev"
@echo "$(COLOR_INFO)Frontend:$(COLOR_RESET) cd frontend && npm install && npm run dev"
# ============================================
# Comandos de Utilidad
# ============================================
check-env: ## Verificar configuración de .env
@echo "$(COLOR_INFO)Verificando archivo .env...$(COLOR_RESET)"
@if [ -f $(ENV_FILE) ]; then \
echo "$(COLOR_SUCCESS)✓ Archivo $(ENV_FILE) existe$(COLOR_RESET)"; \
echo ""; \
echo "$(COLOR_INFO)Variables configuradas:$(COLOR_RESET)"; \
grep -v '^#' $(ENV_FILE) | grep -v '^$$' | sed 's/=.*/=***/' ; \
else \
echo "$(COLOR_ERROR)✗ Archivo $(ENV_FILE) no encontrado$(COLOR_RESET)"; \
echo "$(COLOR_WARNING)Ejecuta: cp .env.example .env$(COLOR_RESET)"; \
fi
check-test-env: ## Verificar configuración de .env.testing
@echo "$(COLOR_INFO)Verificando archivo .env.testing...$(COLOR_RESET)"
@if [ -f $(ENV_TEST_FILE) ]; then \
echo "$(COLOR_SUCCESS)✓ Archivo $(ENV_TEST_FILE) existe$(COLOR_RESET)"; \
else \
echo "$(COLOR_ERROR)✗ Archivo $(ENV_TEST_FILE) no encontrado$(COLOR_RESET)"; \
fi
backup-db: ## Crear backup de la base de datos
@echo "$(COLOR_INFO)Creando backup de la base de datos...$(COLOR_RESET)"
@mkdir -p backups
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec -T postgres \
pg_dump -U trenes_user trenes_db > backups/backup_$$(date +%Y%m%d_%H%M%S).sql
@echo "$(COLOR_SUCCESS)✓ Backup creado en backups/$(COLOR_RESET)"
restore-db: ## Restaurar base de datos desde backup (usar: make restore-db FILE=backup.sql)
@if [ -z "$(FILE)" ]; then \
echo "$(COLOR_ERROR)Error: Debes especificar el archivo$(COLOR_RESET)"; \
echo "$(COLOR_INFO)Uso: make restore-db FILE=backups/backup_20250127.sql$(COLOR_RESET)"; \
exit 1; \
fi
@echo "$(COLOR_WARNING)Restaurando base de datos desde $(FILE)...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec -T postgres \
psql -U trenes_user trenes_db < $(FILE)
@echo "$(COLOR_SUCCESS)✓ Base de datos restaurada$(COLOR_RESET)"
cleanup-old-data: ## Limpiar datos antiguos (>90 días)
@echo "$(COLOR_INFO)Limpiando datos antiguos...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec postgres \
psql -U trenes_user -d trenes_db -c "SELECT cleanup_old_positions(90);"
@echo "$(COLOR_SUCCESS)✓ Datos antiguos eliminados$(COLOR_RESET)"
create-partition: ## Crear siguiente partición mensual
@echo "$(COLOR_INFO)Creando siguiente partición...$(COLOR_RESET)"
$(DOCKER_COMPOSE) --env-file $(ENV_FILE) exec postgres \
psql -U trenes_user -d trenes_db -c "SELECT create_next_partition();"
@echo "$(COLOR_SUCCESS)✓ Partición creada$(COLOR_RESET)"
# ============================================
# Target por defecto
# ============================================
.DEFAULT_GOAL := help

680
README.md Normal file
View File

@@ -0,0 +1,680 @@
# Sistema de Tracking de Trenes en Tiempo Real
Sistema web para visualizar en tiempo real la posición de todos los trenes operados por Renfe en España, con capacidad de consultar histórico mediante un timeline slider.
## Características
- 🚄 **Visualización en tiempo real** de posiciones de trenes en mapa OpenStreetMap
- 📊 **Información detallada** de cada tren (velocidad, dirección, estado, ruta)
- ⏱️ **Timeline slider** para navegar por el histórico de posiciones
- 🗺️ **Datos geoespaciales** usando PostGIS
- 🔄 **Actualización automática** cada 30 segundos desde feed GTFS-RT
- 📡 **WebSocket** para actualizaciones en tiempo real sin polling
- 🎨 **Interfaz moderna** con React + Leaflet.js
## Stack Tecnológico
### Frontend
- React + JavaScript (JSX)
- Leaflet.js para mapas (OpenStreetMap)
- Socket.io client para WebSocket
- Vite como bundler
### Backend
- Node.js + Express/Fastify
- Socket.io para WebSocket server
- Parser GTFS-RT (gtfs-realtime-bindings)
### Infraestructura
- PostgreSQL 15 + PostGIS (datos geoespaciales)
- Redis (caché de posiciones actuales)
- Nginx (reverse proxy)
- Flyway (migraciones de base de datos)
- Docker + Docker Compose
## Requisitos Previos
- Docker >= 20.10
- Docker Compose >= 2.0
- Make (opcional, pero recomendado)
- Git
## Instalación Rápida
### Usando Make (Recomendado)
```bash
# 1. Clonar el repositorio
git clone <repository-url>
cd trenes
# 2. Ver comandos disponibles
make help
# 3. Configurar variables de entorno
cp .env.example .env
# Editar .env con tus credenciales
# 4. Ejecutar migraciones
make migrate
# 5. Iniciar servicios
make start
```
### Instalación Manual (sin Make)
#### 1. Clonar el repositorio
```bash
git clone <repository-url>
cd trenes
```
#### 2. Configurar variables de entorno
```bash
cp .env.example .env
```
Editar `.env` y configurar las contraseñas y secretos:
```env
POSTGRES_PASSWORD=tu_password_seguro
REDIS_PASSWORD=tu_redis_password
JWT_SECRET=tu_jwt_secret_minimo_32_caracteres
```
#### 3. Ejecutar migraciones de base de datos
```bash
# Primera vez: ejecutar migraciones
docker-compose --profile migration up flyway
# Verificar que las migraciones se aplicaron correctamente
docker-compose logs flyway
```
#### 4. Iniciar todos los servicios
```bash
docker-compose up -d
```
#### 5. Verificar que todo está funcionando
```bash
# Ver logs de todos los servicios
docker-compose logs -f
# Verificar estado de los servicios
docker-compose ps
```
## Acceso a los Servicios
Una vez iniciado, el sistema estará disponible en:
- **Aplicación Web**: http://localhost
- **API REST**: http://localhost/api
- **WebSocket**: ws://localhost (Socket.io; el cliente usa el path `/socket.io/`)
- **Adminer** (DB admin): http://localhost:8080 (solo en modo debug)
- **Redis Commander**: http://localhost:8081 (solo en modo debug)
## Uso
### Comandos Make Disponibles
```bash
# Ver todos los comandos disponibles
make help
# Comandos de Producción
make start # Iniciar servicios
make stop # Detener servicios
make restart # Reiniciar servicios
make logs # Ver logs
make status # Ver estado de servicios
make migrate # Ejecutar migraciones
make debug-start # Iniciar con herramientas de debug
# Comandos de Testing
make test-start # Iniciar entorno de testing
make test-stop # Detener entorno de testing
make test-logs # Ver logs de testing
make test-clean # Limpiar datos de testing
make test-reset # Reset completo de testing
# Utilidades
make psql # Conectar a PostgreSQL
make redis-cli # Conectar a Redis
make backup-db # Crear backup de BD
make cleanup-old-data # Limpiar datos antiguos
```
### Modo Normal (Producción)
```bash
# Usando Make
make start # Iniciar todos los servicios
make logs # Ver logs
make stop # Detener servicios
# O manualmente con docker-compose
docker-compose up -d
docker-compose logs -f
docker-compose down
```
### Modo Testing
El entorno de testing usa el archivo `.env.testing` que incluye configuraciones específicas para pruebas:
```bash
# Iniciar entorno de testing
make test-start
# Ver logs
make test-logs
# Detener y limpiar
make test-clean
```
El entorno de testing incluye:
- Configuración de desarrollo con logs detallados
- Polling más frecuente (15 segundos)
- Debug endpoints habilitados
- Rate limiting deshabilitado
- Herramientas de administración (Adminer, Redis Commander) activadas por defecto
### Modo Debug (con herramientas de administración)
```bash
# Usando Make
make debug-start
# O manualmente
docker-compose --profile debug up -d
# Acceder a Adminer: http://localhost:8080
# - Sistema: PostgreSQL
# - Servidor: postgres
# - Usuario: trenes_user
# - Contraseña: [tu POSTGRES_PASSWORD]
# - Base de datos: trenes_db
```
### Ejecutar Migraciones
```bash
# Usando Make
make migrate # Aplicar migraciones
make migrate-info # Ver información
make migrate-validate # Validar migraciones
# O manualmente
docker-compose --profile migration up flyway
docker-compose --profile migration run --rm flyway info
docker-compose --profile migration run --rm flyway validate
```
## Estructura del Proyecto
```
trenes/
├── backend/ # Backend Node.js
│ ├── src/
│ │ ├── api/ # API REST
│ │ │ └── server.js
│ │ └── worker/ # Worker GTFS-RT polling
│ │ └── gtfs-poller.js
│ ├── Dockerfile
│ └── package.json
├── frontend/ # Frontend React
│ ├── src/
│ │ ├── components/
│ │ ├── hooks/
│ │ └── App.jsx
│ ├── Dockerfile
│ ├── nginx.conf # Configuración nginx del contenedor
│ └── package.json
├── database/
│ ├── init/ # Scripts de inicialización (solo primera vez)
│ │ ├── 01-init-extensions.sql
│ │ ├── 02-create-schema.sql
│ │ └── 03-seed-data.sql
│ └── migrations/ # Migraciones versionadas (Flyway)
│ ├── V1__initial_schema.sql
│ ├── V2__create_partitions.sql
│ ├── V3__create_views_and_functions.sql
│ └── V4__seed_initial_data.sql
├── nginx/ # Configuración Nginx (reverse proxy)
│ ├── nginx.conf
│ └── conf.d/
│ └── default.conf
├── docker-compose.yml
├── .env.example
└── README.md
```
## API Endpoints
### Posiciones de Trenes
```bash
# Obtener posiciones actuales de todos los trenes
GET /api/trains/current
# Obtener histórico de un tren
GET /api/trains/history?train_id=XXX&from=TIMESTAMP&to=TIMESTAMP
# Obtener información de un tren específico
GET /api/trains/:id
# Obtener rutas disponibles
GET /api/routes
# Obtener estaciones
GET /api/stations
```
### WebSocket Events
```javascript
// Conectar al WebSocket (URL base sin path: Socket.io añade /socket.io/ por sí mismo)
const socket = io('http://localhost');
// Escuchar actualizaciones de posiciones
socket.on('train:update', (position) => {
console.log('Nueva posición:', position);
});
// Suscribirse a un tren específico
socket.emit('subscribe', { train_id: 'AVE-03041' });
```
## Base de Datos
### Tablas Principales
- **trains**: Catálogo de trenes
- **train_positions**: Histórico de posiciones (particionada por mes)
- **routes**: Rutas y líneas
- **stations**: Estaciones
- **alerts**: Alertas e incidencias
### Vistas
- **current_train_positions**: Última posición de cada tren
- **active_trains**: Trenes activos en las últimas 24 horas
### Funciones Útiles
```sql
-- Limpiar posiciones antiguas (mayores a 90 días)
SELECT cleanup_old_positions(90);
-- Crear siguiente partición mensual
SELECT create_next_partition();
-- Obtener trayectoria de un tren
SELECT * FROM get_train_path('AVE-03041', '2025-01-01', '2025-01-02');
-- Obtener trenes en un área
SELECT * FROM get_trains_in_area(40.0, -4.0, 41.0, -3.0);
-- Calcular estadísticas de un tren
SELECT * FROM calculate_train_statistics('AVE-03041', '2025-01-01', '2025-01-02');
```
## Gestión de Migraciones
Este proyecto usa **Flyway** para gestionar las migraciones de base de datos de forma versionada y reproducible.
### Convención de Nombres
Las migraciones siguen el formato: `V{version}__{description}.sql`
- `V1__initial_schema.sql` - Versión 1: Schema inicial
- `V2__create_partitions.sql` - Versión 2: Crear particiones
- etc.
### Crear Nueva Migración
1. Crear archivo en `database/migrations/` siguiendo la convención:
```
V5__add_train_operators.sql
```
2. Aplicar migración:
```bash
docker-compose --profile migration up flyway
```
### Comandos de Flyway
```bash
# Ver estado de migraciones
docker-compose --profile migration run --rm flyway info
# Validar migraciones
docker-compose --profile migration run --rm flyway validate
# Limpiar base de datos (¡CUIDADO! Borra todo)
docker-compose --profile migration run --rm flyway clean
# Reparar tabla de migraciones
docker-compose --profile migration run --rm flyway repair
```
## Mantenimiento
### Backup de Base de Datos
```bash
# Crear backup
docker-compose exec postgres pg_dump -U trenes_user trenes_db > backup.sql
# Restaurar backup
docker-compose exec -T postgres psql -U trenes_user trenes_db < backup.sql
```
### Limpiar Datos Antiguos
```bash
# Conectar a PostgreSQL
docker-compose exec postgres psql -U trenes_user -d trenes_db
# Ejecutar función de limpieza (elimina datos > 90 días)
SELECT cleanup_old_positions(90);
```
### Crear Nuevas Particiones
```bash
# Conectar a PostgreSQL
docker-compose exec postgres psql -U trenes_user -d trenes_db
# Crear próxima partición
SELECT create_next_partition();
```
## Desarrollo
### Modo Desarrollo
Para desarrollo local con hot-reload:
```bash
# Backend
cd backend
npm install
npm run dev
# Frontend
cd frontend
npm install
npm run dev
```
### Estructura de Logs
```bash
# Ver logs de un servicio específico
docker-compose logs -f api
docker-compose logs -f worker
docker-compose logs -f frontend
# Ver logs de todos los servicios
docker-compose logs -f
```
## Troubleshooting
### El worker no puede conectar con GTFS-RT
```bash
# Verificar que el worker está corriendo
docker-compose logs worker
# Probar manualmente el endpoint
curl -I https://gtfsrt.renfe.com/vehicle_positions.pb
```
### La base de datos no inicia
```bash
# Ver logs de PostgreSQL
docker-compose logs postgres
# Verificar permisos de volúmenes
ls -la postgres_data/
# Reiniciar contenedor
docker-compose restart postgres
```
### No se ven los trenes en el mapa
1. Verificar que el worker está recolectando datos:
```bash
docker-compose logs worker
```
2. Verificar que hay datos en la base de datos:
```bash
docker-compose exec postgres psql -U trenes_user -d trenes_db -c "SELECT COUNT(*) FROM train_positions;"
```
3. Verificar que el API responde:
```bash
curl http://localhost/api/trains/current
```
## Despliegue en Producción
### Requisitos del Servidor
- Linux (Debian/Ubuntu recomendado)
- Docker >= 20.10
- Docker Compose >= 2.0
- Mínimo 2GB RAM, 20GB disco
- Puerto 80 y 443 abiertos
- Dominio apuntando al servidor
### Despliegue Rápido
```bash
# 1. Copiar proyecto al servidor
rsync -avz --exclude 'node_modules' --exclude '.git' --exclude 'postgres_data' \
./ root@tu-servidor:/opt/trenes/
# 2. En el servidor, crear archivo .env de producción
cat > /opt/trenes/.env << 'EOF'
POSTGRES_USER=trenes
POSTGRES_PASSWORD=TU_PASSWORD_SEGURO_GENERADO
POSTGRES_DB=trenes
DATABASE_URL=postgres://trenes:${POSTGRES_PASSWORD}@postgres:5432/trenes
REDIS_URL=redis://redis:6379
NODE_ENV=production
PORT=3000
CORS_ORIGINS=https://tu-dominio.com
EOF
# 3. Iniciar servicios
cd /opt/trenes
docker compose -f docker-compose.prod.yml up -d
```
### Configuración SSL con Let's Encrypt
```bash
# 1. Iniciar nginx temporal para obtener certificado
docker run -d --name temp-nginx -p 80:80 \
-v /opt/trenes/nginx/prod.conf:/etc/nginx/conf.d/default.conf:ro \
-v certbot_webroot:/var/www/certbot nginx:alpine
# 2. Obtener certificado
docker run --rm \
-v trenes_certbot_certs:/etc/letsencrypt \
-v certbot_webroot:/var/www/certbot \
certbot/certbot certonly --webroot \
--webroot-path=/var/www/certbot \
--email tu-email@dominio.com \
--agree-tos --no-eff-email \
-d tu-dominio.com
# 3. Detener nginx temporal
docker stop temp-nginx && docker rm temp-nginx
# 4. Iniciar servicios completos
docker compose -f docker-compose.prod.yml up -d
```
### Lecciones Aprendidas del Despliegue
#### 1. PostgreSQL/PostGIS - Compatibilidad de Versiones
**Problema**: Error `FATAL: database files are incompatible with server` al usar datos existentes.
**Solución**: Asegurar que la versión de PostgreSQL en producción coincida con la de desarrollo. Si los datos fueron creados con PostgreSQL 16, usar `postgis/postgis:16-3.4-alpine` en producción.
#### 2. Socket.io - Configuración de URL
**Problema**: Error "Invalid namespace" al conectar WebSocket.
**Causa**: Socket.io añade automáticamente `/socket.io/` al path. Si `VITE_WS_URL=wss://dominio.com/ws`, intentará conectar a `/ws/socket.io/`.
**Solución**: Configurar `VITE_WS_URL=https://dominio.com` (sin path adicional) para que Socket.io conecte correctamente a `/socket.io/`.
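Ejemplo mínimo de ambas variantes en el cliente (orientativo):
```javascript
import { io } from 'socket.io-client';

// Correcto: URL base sin path; el cliente añade /socket.io/ por sí mismo.
const socket = io('https://tu-dominio.com');

// Incorrecto: el path /ws se interpreta como *namespace*, no como ruta,
// y el servidor responde "Invalid namespace".
// const roto = io('wss://tu-dominio.com/ws');
```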
#### 3. Nginx - Variables y Proxy Pass
**Problema**: Cuando se usan variables en `proxy_pass`, nginx NO hace sustitución automática de URI.
**Solución**: Usar `rewrite` explícito:
```nginx
location /api/ {
set $backend api:3000;
rewrite ^/api/(.*)$ /$1 break;
proxy_pass http://$backend;
}
```
#### 4. Nginx - Resolución DNS de Contenedores
**Problema**: Error `host not found in upstream` al iniciar nginx antes que otros servicios.
**Solución**: Usar el resolver DNS de Docker:
```nginx
resolver 127.0.0.11 valid=30s;
```
#### 5. Nginx - Directiva HTTP/2
**Problema**: Warning `the "listen ... http2" directive is deprecated`.
**Solución**: Cambiar de `listen 443 ssl http2;` a:
```nginx
listen 443 ssl;
http2 on;
```
#### 6. Variables VITE en Docker
**Problema**: Las variables `VITE_*` del `.env` no se aplican en el build de Docker.
**Solución**: Pasar las variables como build args en docker-compose:
```yaml
frontend:
build:
args:
VITE_API_URL: https://tu-dominio.com/api
VITE_WS_URL: https://tu-dominio.com
```
Y en el Dockerfile del frontend:
```dockerfile
ARG VITE_API_URL
ARG VITE_WS_URL
ENV VITE_API_URL=${VITE_API_URL}
ENV VITE_WS_URL=${VITE_WS_URL}
```
#### 7. Migración de Datos
**Comando para exportar e importar datos históricos**:
```bash
# Exportar desde desarrollo
docker compose exec -T postgres pg_dump -U trenes_user -d trenes_db \
--data-only --exclude-table='flyway_schema_history' | gzip > backup.sql.gz
# Copiar al servidor
scp backup.sql.gz root@servidor:/tmp/
# Importar en producción
gunzip -c /tmp/backup.sql.gz | docker exec -i trenes-postgres \
psql -U trenes -d trenes
```
### Verificación del Despliegue
```bash
# Verificar servicios
docker ps --format "table {{.Names}}\t{{.Status}}"
# Test API
curl https://tu-dominio.com/api/health
# Test Dashboard
curl https://tu-dominio.com/api/dashboard/current
# Test WebSocket (debe devolver sid)
curl "https://tu-dominio.com/socket.io/?EIO=4&transport=polling"
```
## Roadmap
- [x] **Fase 1: MVP** ✅ COMPLETADA
- [x] Arquitectura y Docker Compose
- [x] Backend API y Worker GTFS-RT
- [x] Frontend con mapa Leaflet
- [x] WebSocket en tiempo real
- [x] Timeline slider (UI básica)
- [x] **Fase 2: Enriquecimiento** ✅ BACKEND COMPLETADO
- [x] GTFS Static Syncer (sync diario)
- [x] Trip Updates Poller (retrasos)
- [x] Service Alerts Poller (alertas)
- [x] API de Alertas y Trips
- [ ] Frontend: componentes de alertas y puntualidad
- [x] **Fase 3: Analytics** ✅ BACKEND COMPLETADO
- [x] Analytics API (heatmaps, estadísticas)
- [x] Explorer API (planificador de viajes)
- [x] Exportación de datos (CSV, JSON, GeoJSON)
- [ ] Frontend: dashboard de analytics
- [ ] **Fase 4: ML y Predicciones** (Futuro)
- [ ] Predicción de retrasos
- [ ] Detección de anomalías
- [ ] Correlación con meteorología
## Licencia
MIT
## Contribuir
Las contribuciones son bienvenidas. Por favor:
1. Fork el proyecto
2. Crea una rama para tu feature (`git checkout -b feature/AmazingFeature`)
3. Commit tus cambios (`git commit -m 'Add some AmazingFeature'`)
4. Push a la rama (`git push origin feature/AmazingFeature`)
5. Abre un Pull Request
## Recursos
- [GTFS Realtime Specification](https://gtfs.org/documentation/realtime/)
- [Renfe Data Portal](https://data.renfe.com/)
- [PostGIS Documentation](https://postgis.net/)
- [Leaflet.js](https://leafletjs.com/)
- [Flyway Documentation](https://flywaydb.org/documentation/)

240
RESUMEN-IMPLEMENTACION.md Normal file
View File

@@ -0,0 +1,240 @@
# Sistema de Tracking de Trenes - Referencia Operativa
## Estado del Proyecto (27 Nov 2025)
| Fase | Backend | Frontend |
|------|---------|----------|
| 1. MVP | ✅ 100% | ✅ 100% |
| 2. Enriquecimiento | ✅ 100% | ⏳ 0% |
| 3. Analytics | ✅ 100% | ⏳ 0% |
---
## Quick Start
```bash
# 1. Configurar
cp .env.example .env
# Editar .env con contraseñas seguras
# 2. Ejecutar migraciones
make migrate
# 3. Iniciar
make start
# 4. Acceder
# App: http://localhost
# API: http://localhost/api
```
---
## Servicios Docker
| Servicio | Puerto | Descripción |
|----------|--------|-------------|
| nginx | 80 | Reverse proxy |
| api | 3000 | API REST + WebSocket |
| frontend | 5173 | React app |
| postgres | 5432 | PostgreSQL + PostGIS |
| redis | 6379 | Cache |
| worker | - | GTFS-RT Vehicle Positions |
| gtfs-static-syncer | - | Sync GTFS Static (3 AM) |
| trip-updates-poller | - | Trip Updates (30s) |
| alerts-poller | - | Service Alerts (30s) |
| analytics-refresher | - | Refresh vistas (15 min) |
---
## API Endpoints
### Fase 1 - Core
```
GET /health - Health check
GET /trains/current - Posiciones actuales
GET /trains/:id - Info de un tren
GET /trains/:id/history - Histórico de posiciones
GET /trains/:id/path - Trayectoria
GET /trains/area - Trenes en área geográfica
GET /routes - Todas las rutas
GET /routes/:id - Ruta específica
GET /stations - Todas las estaciones
GET /stations/:id - Estación específica
GET /stats - Estadísticas sistema
WS /ws - WebSocket (Socket.io)
```
### Fase 2 - Alertas y Delays
```
GET /alerts - Alertas activas
GET /alerts/:id - Alerta específica
GET /alerts/route/:routeId - Alertas por ruta
GET /trips - Viajes activos hoy
GET /trips/:id - Detalles de viaje
GET /trips/:id/delays - Retrasos de viaje
GET /trips/delayed/all - Todos los retrasados
```
### Fase 3 - Analytics
```
GET /analytics/traffic/heatmap - Heatmap de tráfico
GET /analytics/traffic/hourly - Tráfico por hora
GET /analytics/statistics/daily - Estadísticas diarias
GET /analytics/delays/top-routes - Rutas más retrasadas
GET /analytics/export - Exportar datos (CSV/JSON/GeoJSON)
GET /explorer/routes/:routeId - Explorar ruta completa
GET /explorer/planner - Planificador de viajes
GET /explorer/search - Búsqueda de estaciones
```
---
## WebSocket Events
```javascript
// Cliente
socket.emit('subscribe:train', trainId);
socket.emit('unsubscribe:train', trainId);
// Servidor
socket.on('trains:update', (positions) => {});
socket.on('train:update', (position) => {});
```
---
## Comandos Make
```bash
make start # Iniciar servicios
make stop # Detener servicios
make logs # Ver todos los logs
make migrate # Ejecutar migraciones
make psql # Conectar a PostgreSQL
make redis-cli # Conectar a Redis
make debug-start # Iniciar con Adminer + Redis Commander
make test-start # Entorno de testing
make backup-db # Backup de BD
```
---
## Base de Datos
### Tablas Principales
- `trains` - Catálogo de trenes
- `train_positions` - Histórico (particionada por mes)
- `routes` - Rutas y líneas
- `stations` - Estaciones
- `alerts` - Alertas e incidencias
- `trips` - Viajes programados (GTFS Static)
- `trip_updates` - Actualizaciones tiempo real
### Funciones Útiles
```sql
-- Limpiar posiciones antiguas (>90 días)
SELECT cleanup_old_positions(90);
-- Crear siguiente partición mensual
SELECT create_next_partition();
-- Trayectoria de un tren
SELECT * FROM get_train_path('TRAIN_ID', '2025-01-01', '2025-01-02');
-- Trenes en área geográfica
SELECT * FROM get_trains_in_area(40.0, -4.0, 41.0, -3.0);
-- Próximas salidas desde estación
SELECT * FROM get_next_departures('STATION_ID', 10);
```
---
## Fuentes de Datos GTFS-RT
| Feed | URL | Frecuencia |
|------|-----|------------|
| Vehicle Positions | https://gtfsrt.renfe.com/vehicle_positions.pb | 30s |
| Trip Updates | https://gtfsrt.renfe.com/trip_updates_cercanias.pb | 30s |
| Service Alerts | https://gtfsrt.renfe.com/alerts.pb | 30s |
| GTFS Static | https://data.renfe.com/dataset | Diario |
---
## Estructura del Proyecto
```
trenes/
├── backend/src/
│ ├── api/
│ │ ├── server.js # API + WebSocket
│ │ └── routes/ # Endpoints (8 archivos)
│ ├── worker/ # 5 workers
│ ├── lib/ # db, redis, logger
│ └── config/
├── frontend/src/
│ ├── components/ # TrainMap, TrainInfo, Timeline
│ ├── hooks/useTrains.js # WebSocket hook
│ └── App.jsx
├── database/migrations/ # V1-V6
├── nginx/ # Reverse proxy config
├── docker-compose.yml
├── Makefile
└── .env.example
```
---
## Troubleshooting
### No se ven trenes
```bash
docker compose logs worker # Verificar worker
make psql
> SELECT COUNT(*) FROM train_positions WHERE recorded_at > NOW() - INTERVAL '1 hour';
```
### Error conexión WebSocket
- Verificar `CORS_ORIGIN` en `.env`
- Verificar `VITE_WS_URL` en frontend
### BD sin datos
```bash
make migrate # Ejecutar migraciones
make migrate-info # Verificar estado
```
---
## Siguiente Fase: Frontend Fase 2/3
### Pendiente implementar:
1. **Componente AlertsPanel** - Mostrar alertas activas
2. **Componente PunctualityMonitor** - Dashboard de puntualidad
3. **Timeline funcional** - Reproducción de histórico
4. **Heatmap de tráfico** - Visualización en mapa
5. **Dashboard Analytics** - Gráficos y estadísticas
6. **Planificador UI** - Interfaz de búsqueda de viajes
### APIs disponibles para frontend:
- `GET /alerts` - Lista de alertas
- `GET /trips/delayed/all` - Viajes retrasados
- `GET /analytics/traffic/heatmap` - Datos para heatmap
- `GET /analytics/statistics/daily` - Estadísticas
- `GET /explorer/planner?origin=X&destination=Y&time=Z` - Planificador
---
## Documentación Adicional
- `README.md` - Introducción y setup
- `arquitectura-sistema-tracking-trenes.md` - Arquitectura detallada
- `FUENTES_DATOS.md` - Fuentes de datos disponibles
- `FASE1-MVP.md` - Detalles Fase 1
- `FASE2-ENRIQUECIMIENTO.md` - Detalles Fase 2
- `FASE3-ANALYTICS.md` - Detalles Fase 3
---
**Última actualización**: 27 noviembre 2025

File diff suppressed because it is too large Load Diff

20
backend/.env.example Normal file
View File

@@ -0,0 +1,20 @@
# API Configuration
PORT=3000
NODE_ENV=development
LOG_LEVEL=info
# Database
DATABASE_URL=postgresql://trenes_user:trenes_password@localhost:5432/trenes_db
# Redis
REDIS_URL=redis://:redis_password@localhost:6379
# GTFS-RT
GTFS_RT_URL=https://gtfsrt.renfe.com/vehicle_positions.pb
POLLING_INTERVAL=30000
# CORS
CORS_ORIGIN=http://localhost:3000,http://localhost:5173
# JWT
JWT_SECRET=your_jwt_secret_here

86
backend/Dockerfile Normal file
View File

@@ -0,0 +1,86 @@
# Multi-stage Dockerfile para Backend (API + Worker)
FROM node:20-alpine AS base
# Instalar dependencias del sistema
RUN apk add --no-cache \
python3 \
make \
g++ \
curl \
wget
WORKDIR /app
# Copiar archivos de dependencias
COPY package*.json ./
# Instalar dependencias
RUN npm install --omit=dev && \
npm cache clean --force
# Copiar código fuente
COPY . .
# ================================
# Stage para API
# ================================
FROM base AS api
ENV NODE_ENV=production
ENV PORT=3000
# Crear usuario no-root
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
USER nodejs
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "src/api/server.js"]
# ================================
# Stage para Worker
# ================================
FROM base AS worker
ENV NODE_ENV=production
# Crear usuario no-root
RUN addgroup -g 1001 -S nodejs && \
adduser -S nodejs -u 1001
USER nodejs
# Health check para worker (verifica que el proceso esté corriendo)
HEALTHCHECK --interval=60s --timeout=10s --start-period=40s --retries=3 \
CMD pgrep -f "node.*worker" || exit 1
CMD ["node", "src/worker/gtfs-poller.js"]
# ================================
# Stage de desarrollo (opcional)
# ================================
FROM node:20-alpine AS development
RUN apk add --no-cache \
python3 \
make \
g++ \
curl \
wget
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_ENV=development
CMD ["npm", "run", "dev"]

64
backend/package.json Normal file
View File

@@ -0,0 +1,64 @@
{
"name": "trenes-backend",
"version": "1.0.0",
"description": "Backend API y Worker para sistema de tracking de trenes en tiempo real",
"main": "src/api/server.js",
"type": "module",
"scripts": {
"dev": "nodemon src/api/server.js",
"dev:worker": "nodemon src/worker/gtfs-poller.js",
"dev:gtfs-static": "nodemon src/worker/gtfs-static-syncer.js",
"dev:trip-updates": "nodemon src/worker/trip-updates-poller.js",
"dev:alerts": "nodemon src/worker/alerts-poller.js",
"dev:analytics": "nodemon src/worker/analytics-refresher.js",
"start": "node src/api/server.js",
"start:worker": "node src/worker/gtfs-poller.js",
"start:gtfs-static": "node src/worker/gtfs-static-syncer.js",
"start:trip-updates": "node src/worker/trip-updates-poller.js",
"start:alerts": "node src/worker/alerts-poller.js",
"start:analytics": "node src/worker/analytics-refresher.js",
"test": "NODE_ENV=test jest",
"lint": "eslint src/**/*.js",
"format": "prettier --write src/**/*.js"
},
"keywords": [
"renfe",
"gtfs",
"gtfs-rt",
"trains",
"real-time"
],
"author": "",
"license": "MIT",
"dependencies": {
"express": "^4.18.2",
"fastify": "^4.25.2",
"@fastify/cors": "^8.5.0",
"@fastify/websocket": "^8.3.1",
"socket.io": "^4.6.1",
"pg": "^8.11.3",
"redis": "^4.6.12",
"gtfs-realtime-bindings": "^1.1.0",
"node-fetch": "^3.3.2",
"dotenv": "^16.3.1",
"pino": "^8.17.2",
"pino-pretty": "^10.3.1",
"csv-parse": "^5.5.3",
"adm-zip": "^0.5.10",
"node-cron": "^3.0.3",
"express-rate-limit": "^7.1.5",
"helmet": "^7.1.0",
"express-validator": "^7.0.1",
"hpp": "^0.2.3",
"unzipper": "^0.12.3"
},
"devDependencies": {
"nodemon": "^3.0.2",
"eslint": "^8.56.0",
"prettier": "^3.1.1",
"jest": "^29.7.0"
},
"engines": {
"node": ">=20.0.0"
}
}

View File

@@ -0,0 +1,112 @@
import express from 'express';
import db from '../../lib/db.js';
const router = express.Router();
// GET /alerts - Get all active alerts
router.get('/', async (req, res, next) => {
try {
const { route_id, severity, type } = req.query;
let query = `
SELECT *
FROM alerts
WHERE (end_time IS NULL OR end_time > NOW())
AND (start_time IS NULL OR start_time <= NOW())
`;
const params = [];
let paramIndex = 1;
if (route_id) {
query += ` AND route_id = $${paramIndex}`;
params.push(route_id);
paramIndex++;
}
if (severity) {
query += ` AND severity = $${paramIndex}`;
params.push(severity);
paramIndex++;
}
if (type) {
query += ` AND alert_type = $${paramIndex}`;
params.push(type);
paramIndex++;
}
query += ' ORDER BY severity DESC, created_at DESC';
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /alerts/:id - Get specific alert
router.get('/:id', async (req, res, next) => {
try {
const { id } = req.params;
const result = await db.query(
'SELECT * FROM alerts WHERE alert_id = $1',
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'Alert not found',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
// GET /alerts/route/:routeId - Get alerts for specific route
router.get('/route/:routeId', async (req, res, next) => {
try {
const { routeId } = req.params;
const result = await db.query(`
SELECT *
FROM alerts
WHERE route_id = $1
AND (end_time IS NULL OR end_time > NOW())
AND (start_time IS NULL OR start_time <= NOW())
ORDER BY severity DESC, created_at DESC
`, [routeId]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /alerts/train/:trainId - Get alerts for specific train
router.get('/train/:trainId', async (req, res, next) => {
try {
const { trainId } = req.params;
const result = await db.query(`
SELECT *
FROM alerts
WHERE train_id = $1
AND (end_time IS NULL OR end_time > NOW())
AND (start_time IS NULL OR start_time <= NOW())
ORDER BY severity DESC, created_at DESC
`, [trainId]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
export default router;

View File

@@ -0,0 +1,360 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
const router = express.Router();
// GET /analytics/traffic/heatmap - Get traffic heatmap data
router.get('/traffic/heatmap', async (req, res, next) => {
try {
const {
start_date = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString(),
end_date = new Date().toISOString(),
grid_size = 0.1,
} = req.query;
// Check cache
const cacheKey = `analytics:heatmap:${start_date}:${end_date}:${grid_size}`;
const cached = await redis.get(cacheKey);
if (cached) {
return res.json(JSON.parse(cached));
}
const result = await db.query(
'SELECT * FROM get_traffic_heatmap($1, $2, $3)',
[start_date, end_date, parseFloat(grid_size)]
);
const heatmapData = result.rows.map(row => ({
lat: row.lat_bucket,
lon: row.lon_bucket,
intensity: parseInt(row.train_count, 10),
avgSpeed: parseFloat(row.avg_speed || 0),
}));
// Cache for 15 minutes
await redis.set(cacheKey, JSON.stringify(heatmapData), 900);
res.json(heatmapData);
} catch (error) {
next(error);
}
});
// GET /analytics/traffic/hourly - Get hourly traffic pattern
router.get('/traffic/hourly', async (req, res, next) => {
try {
const { days = 7 } = req.query;
const cacheKey = `analytics:hourly:${days}`;
const cached = await redis.get(cacheKey);
if (cached) {
return res.json(JSON.parse(cached));
}
const result = await db.query(
'SELECT * FROM get_hourly_traffic_pattern($1)',
[parseInt(days, 10)]
);
// Cache for 1 hour
await redis.set(cacheKey, JSON.stringify(result.rows), 3600);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/traffic/by-hour - Get materialized view data
router.get('/traffic/by-hour', async (req, res, next) => {
try {
const { limit = 24 } = req.query;
const result = await db.query(
'SELECT * FROM traffic_by_hour ORDER BY hour DESC LIMIT $1',
[parseInt(limit, 10)]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/traffic/by-route - Get traffic by route
router.get('/traffic/by-route', async (req, res, next) => {
try {
const result = await db.query(
'SELECT * FROM traffic_by_route ORDER BY total_trains DESC'
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/statistics/daily - Get daily statistics
router.get('/statistics/daily', async (req, res, next) => {
try {
const { days = 30 } = req.query;
const result = await db.query(
'SELECT * FROM daily_statistics ORDER BY date DESC LIMIT $1',
[parseInt(days, 10)]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/statistics/system - Get current system status
router.get('/statistics/system', async (req, res, next) => {
try {
const result = await db.query('SELECT * FROM system_status');
res.json(result.rows[0] || {});
} catch (error) {
next(error);
}
});
// GET /analytics/performance/routes - Get route performance metrics
router.get('/performance/routes', async (req, res, next) => {
try {
const { limit = 20 } = req.query;
const result = await db.query(
'SELECT * FROM route_performance ORDER BY punctuality_percentage DESC LIMIT $1',
[parseInt(limit, 10)]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/performance/route/:routeId - Get specific route statistics
router.get('/performance/route/:routeId', async (req, res, next) => {
try {
const { routeId } = req.params;
const { days = 7 } = req.query;
const result = await db.query(
'SELECT * FROM get_route_statistics($1, $2)',
[routeId, parseInt(days, 10)]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'Route not found or no data available',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
// GET /analytics/delays/top-routes - Get most delayed routes
router.get('/delays/top-routes', async (req, res, next) => {
try {
const result = await db.query('SELECT * FROM top_delayed_routes');
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/stations/busiest - Get busiest stations
router.get('/stations/busiest', async (req, res, next) => {
try {
const { limit = 20 } = req.query;
const result = await db.query(
'SELECT * FROM busiest_stations LIMIT $1',
[parseInt(limit, 10)]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /analytics/stations/:stationId/statistics - Get station statistics
router.get('/stations/:stationId/statistics', async (req, res, next) => {
try {
const { stationId } = req.params;
const { days = 7 } = req.query;
const result = await db.query(
'SELECT * FROM get_station_statistics($1, $2)',
[stationId, parseInt(days, 10)]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'Station not found or no data available',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
// GET /analytics/trains/:trainId/distance - Calculate distance traveled
router.get('/trains/:trainId/distance', async (req, res, next) => {
try {
const { trainId } = req.params;
const {
start_time = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(),
end_time = new Date().toISOString(),
} = req.query;
const result = await db.query(
'SELECT calculate_distance_traveled($1, $2, $3) as distance_km',
[trainId, start_time, end_time]
);
res.json({
train_id: trainId,
start_time,
end_time,
distance_km: parseFloat(result.rows[0].distance_km || 0),
});
} catch (error) {
next(error);
}
});
// POST /analytics/refresh - Refresh materialized views (admin endpoint)
router.post('/refresh', async (req, res, next) => {
try {
await db.query('SELECT refresh_all_analytics_views()');
res.json({
success: true,
message: 'Analytics views refreshed successfully',
timestamp: new Date().toISOString(),
});
} catch (error) {
next(error);
}
});
// GET /analytics/export - Export data (basic implementation)
router.get('/export', async (req, res, next) => {
try {
const {
table,
format = 'json',
start_date,
end_date,
limit = 1000,
} = req.query;
// Whitelist allowed tables
const allowedTables = [
'train_positions',
'trains',
'routes',
'stations',
'alerts',
'trip_updates',
'traffic_by_hour',
'daily_statistics',
];
if (!table || !allowedTables.includes(table)) {
return res.status(400).json({
error: 'Invalid or missing table parameter',
allowed_tables: allowedTables,
});
}
// Build query with filters
let query = `SELECT * FROM ${table}`;
const params = [];
const conditions = [];
if (start_date && (table === 'train_positions' || table === 'trip_updates' || table === 'alerts')) {
params.push(start_date);
conditions.push(`recorded_at >= $${params.length}`);
}
if (end_date && (table === 'train_positions' || table === 'trip_updates' || table === 'alerts')) {
params.push(end_date);
conditions.push(`recorded_at <= $${params.length}`);
}
if (conditions.length > 0) {
query += ' WHERE ' + conditions.join(' AND ');
}
    const orderColumn = table === 'train_positions' || table === 'trip_updates'
      ? 'recorded_at'
      : table === 'routes' ? 'route_id' : 'created_at';
    query += ` ORDER BY ${orderColumn} DESC LIMIT $${params.length + 1}`;
params.push(parseInt(limit, 10));
const result = await db.query(query, params);
if (format === 'csv') {
// Simple CSV export
if (result.rows.length === 0) {
return res.status(404).json({ error: 'No data found' });
}
const headers = Object.keys(result.rows[0]);
const csvRows = [
headers.join(','),
...result.rows.map(row =>
headers.map(header => {
          const value = row[header];
          if (value === null || value === undefined) return '';
          // Quote fields containing commas, quotes, or newlines; escape embedded quotes by doubling (RFC 4180)
          const str = value instanceof Date ? value.toISOString() : String(value);
          return /[",\n]/.test(str) ? `"${str.replace(/"/g, '""')}"` : str;
}).join(',')
),
];
res.setHeader('Content-Type', 'text/csv');
res.setHeader('Content-Disposition', `attachment; filename="${table}_${Date.now()}.csv"`);
res.send(csvRows.join('\n'));
} else if (format === 'geojson' && (table === 'train_positions' || table === 'stations')) {
// GeoJSON export for spatial data
const features = result.rows
.filter(row => row.latitude && row.longitude)
.map(row => ({
type: 'Feature',
geometry: {
type: 'Point',
coordinates: [parseFloat(row.longitude), parseFloat(row.latitude)],
},
properties: { ...row, latitude: undefined, longitude: undefined },
}));
const geojson = {
type: 'FeatureCollection',
features,
};
res.setHeader('Content-Type', 'application/geo+json');
res.setHeader('Content-Disposition', `attachment; filename="${table}_${Date.now()}.geojson"`);
res.json(geojson);
} else {
// JSON export (default)
res.json(result.rows);
}
} catch (error) {
next(error);
}
});
export default router;
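
The heatmap endpoint is the heaviest of these queries, which is why it is cached in Redis for 15 minutes; a usage sketch under the same localhost assumption as above:

```
// Request a heatmap over the default 7-day window at a 0.1° grid.
const base = 'http://localhost:3000';
const cells = await fetch(`${base}/analytics/traffic/heatmap?grid_size=0.1`).then(r => r.json());
if (cells.length > 0) {
  // Cells arrive as { lat, lon, intensity, avgSpeed }
  const busiest = cells.reduce((a, b) => (b.intensity > a.intensity ? b : a));
  console.log('Busiest cell:', busiest);
}
```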

backend/src/api/routes/dashboard.js Normal file

@@ -0,0 +1,347 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
const router = express.Router();
// Map nucleo codes to region names
const NUCLEO_NAMES = {
'10': 'Madrid',
'20': 'Barcelona',
'30': 'Sevilla',
'31': 'Cadiz',
'40': 'Valencia',
'41': 'Murcia/Alicante',
'50': 'Bilbao',
'60': 'Asturias',
'61': 'Santander',
'62': 'San Sebastian',
'70': 'Malaga',
};
// GET /dashboard/snapshot - Get stats for a specific point in time
router.get('/snapshot', async (req, res, next) => {
try {
const { timestamp } = req.query;
const targetTime = timestamp ? new Date(timestamp) : new Date();
// Get a 1-minute window around the target time
const startTime = new Date(targetTime.getTime() - 30000);
const endTime = new Date(targetTime.getTime() + 30000);
// Get train positions at that time
const positionsResult = await db.query(`
SELECT DISTINCT ON (train_id)
train_id,
status,
speed,
latitude,
longitude,
bearing,
recorded_at
FROM train_positions
WHERE recorded_at BETWEEN $1 AND $2
ORDER BY train_id, recorded_at DESC
`, [startTime, endTime]);
// Get punctuality data at that time
const punctualityResult = await db.query(`
SELECT DISTINCT ON (train_id)
train_id,
line_code,
nucleo,
delay_minutes,
origin_station_code,
destination_station_code,
recorded_at
FROM train_punctuality
WHERE recorded_at BETWEEN $1 AND $2
ORDER BY train_id, recorded_at DESC
`, [startTime, endTime]);
// Calculate statistics
const trains = positionsResult.rows;
const punctuality = punctualityResult.rows;
// Train status breakdown
const statusCounts = {
IN_TRANSIT_TO: 0,
STOPPED_AT: 0,
INCOMING_AT: 0,
UNKNOWN: 0,
};
    for (const train of trains) {
      const status = train.status || 'UNKNOWN';
      statusCounts[status] = (statusCounts[status] || 0) + 1;
    }
// Punctuality breakdown
const punctualityCounts = {
on_time: 0, // delay = 0
minor_delay: 0, // 1-5 min
moderate_delay: 0, // 6-15 min
severe_delay: 0, // >15 min
early: 0, // < 0
};
let totalDelay = 0;
for (const p of punctuality) {
const delay = p.delay_minutes;
totalDelay += delay;
if (delay < 0) punctualityCounts.early++;
else if (delay === 0) punctualityCounts.on_time++;
else if (delay <= 5) punctualityCounts.minor_delay++;
else if (delay <= 15) punctualityCounts.moderate_delay++;
else punctualityCounts.severe_delay++;
}
// Lines breakdown
const linesCounts = {};
for (const p of punctuality) {
if (p.line_code) {
linesCounts[p.line_code] = (linesCounts[p.line_code] || 0) + 1;
}
}
// Nucleos breakdown
const nucleosCounts = {};
for (const p of punctuality) {
if (p.nucleo) {
nucleosCounts[p.nucleo] = (nucleosCounts[p.nucleo] || 0) + 1;
}
}
res.json({
timestamp: targetTime.toISOString(),
total_trains: trains.length,
status_breakdown: statusCounts,
punctuality_breakdown: punctualityCounts,
average_delay: punctuality.length > 0 ? (totalDelay / punctuality.length).toFixed(2) : 0,
lines_breakdown: linesCounts,
nucleos_breakdown: nucleosCounts,
punctuality_percentage: punctuality.length > 0
? ((punctualityCounts.on_time + punctualityCounts.minor_delay) / punctuality.length * 100).toFixed(1)
: 0,
});
} catch (error) {
next(error);
}
});
// GET /dashboard/current - Get current live stats
router.get('/current', async (req, res, next) => {
try {
// Get active trains from Redis
const trainIds = await redis.sMembers('trains:active');
// Get all current positions
const trains = await Promise.all(
trainIds.map(async (trainId) => {
const data = await redis.get(`trains:current:${trainId}`);
const fleetData = await redis.get(`fleet:${trainId}`);
return {
position: data ? JSON.parse(data) : null,
fleet: fleetData ? JSON.parse(fleetData) : null,
};
})
);
const validTrains = trains.filter(t => t.position !== null);
// Calculate status breakdown
const statusCounts = {
IN_TRANSIT_TO: 0,
STOPPED_AT: 0,
INCOMING_AT: 0,
UNKNOWN: 0,
};
for (const t of validTrains) {
const status = t.position.status || 'UNKNOWN';
statusCounts[status] = (statusCounts[status] || 0) + 1;
}
// Calculate punctuality from fleet data
const punctualityCounts = {
on_time: 0,
minor_delay: 0,
moderate_delay: 0,
severe_delay: 0,
early: 0,
};
let totalDelay = 0;
let delayCount = 0;
// Lines breakdown by nucleo (key: "nucleo:line")
const linesWithNucleo = {};
const nucleosCounts = {};
for (const t of validTrains) {
if (t.fleet) {
const delay = parseInt(t.fleet.retrasoMin, 10) || 0;
totalDelay += delay;
delayCount++;
if (delay < 0) punctualityCounts.early++;
else if (delay === 0) punctualityCounts.on_time++;
else if (delay <= 5) punctualityCounts.minor_delay++;
else if (delay <= 15) punctualityCounts.moderate_delay++;
else punctualityCounts.severe_delay++;
if (t.fleet.codLinea && t.fleet.nucleo) {
const key = `${t.fleet.nucleo}:${t.fleet.codLinea}`;
if (!linesWithNucleo[key]) {
linesWithNucleo[key] = {
line_code: t.fleet.codLinea,
nucleo: t.fleet.nucleo,
nucleo_name: NUCLEO_NAMES[t.fleet.nucleo] || t.fleet.nucleo,
count: 0,
};
}
linesWithNucleo[key].count++;
}
if (t.fleet.nucleo) {
if (!nucleosCounts[t.fleet.nucleo]) {
nucleosCounts[t.fleet.nucleo] = {
nucleo: t.fleet.nucleo,
nucleo_name: NUCLEO_NAMES[t.fleet.nucleo] || t.fleet.nucleo,
count: 0,
};
}
nucleosCounts[t.fleet.nucleo].count++;
}
}
}
// Convert to arrays sorted by count
const linesArray = Object.values(linesWithNucleo).sort((a, b) => b.count - a.count);
const nucleosArray = Object.values(nucleosCounts).sort((a, b) => b.count - a.count);
res.json({
timestamp: new Date().toISOString(),
total_trains: validTrains.length,
status_breakdown: statusCounts,
punctuality_breakdown: punctualityCounts,
average_delay: delayCount > 0 ? (totalDelay / delayCount).toFixed(2) : 0,
lines_breakdown: linesArray,
nucleos_breakdown: nucleosArray,
punctuality_percentage: delayCount > 0
? ((punctualityCounts.on_time + punctualityCounts.minor_delay) / delayCount * 100).toFixed(1)
: 0,
});
} catch (error) {
next(error);
}
});
// GET /dashboard/timeline - Get stats over a time range
router.get('/timeline', async (req, res, next) => {
try {
const { start, end, interval = '5' } = req.query;
const startTime = start ? new Date(start) : new Date(Date.now() - 3600000); // Last hour
const endTime = end ? new Date(end) : new Date();
const intervalMinutes = parseInt(interval, 10);
const result = await db.query(`
WITH time_buckets AS (
SELECT
date_trunc('minute', recorded_at) -
(EXTRACT(MINUTE FROM recorded_at)::INTEGER % $3) * INTERVAL '1 minute' as time_bucket,
train_id,
delay_minutes,
line_code
FROM train_punctuality
WHERE recorded_at BETWEEN $1 AND $2
)
SELECT
time_bucket,
COUNT(DISTINCT train_id) as train_count,
AVG(delay_minutes)::FLOAT as avg_delay,
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::FLOAT /
NULLIF(COUNT(*), 0) * 100 as punctuality_pct,
COUNT(CASE WHEN delay_minutes = 0 THEN 1 END) as on_time,
COUNT(CASE WHEN delay_minutes > 0 AND delay_minutes <= 5 THEN 1 END) as minor_delay,
COUNT(CASE WHEN delay_minutes > 5 AND delay_minutes <= 15 THEN 1 END) as moderate_delay,
COUNT(CASE WHEN delay_minutes > 15 THEN 1 END) as severe_delay
FROM time_buckets
GROUP BY time_bucket
ORDER BY time_bucket
`, [startTime, endTime, intervalMinutes]);
res.json({
start: startTime.toISOString(),
end: endTime.toISOString(),
interval_minutes: intervalMinutes,
data: result.rows.map(row => ({
timestamp: row.time_bucket,
train_count: parseInt(row.train_count, 10),
avg_delay: parseFloat(row.avg_delay) || 0,
punctuality_pct: parseFloat(row.punctuality_pct) || 0,
on_time: parseInt(row.on_time, 10),
minor_delay: parseInt(row.minor_delay, 10),
moderate_delay: parseInt(row.moderate_delay, 10),
severe_delay: parseInt(row.severe_delay, 10),
})),
});
} catch (error) {
next(error);
}
});
// GET /dashboard/lines-ranking - Get lines ranked by punctuality
router.get('/lines-ranking', async (req, res, next) => {
try {
const { timestamp, hours = 24 } = req.query;
const targetTime = timestamp ? new Date(timestamp) : new Date();
const startTime = new Date(targetTime.getTime() - hours * 3600000);
const result = await db.query(`
SELECT
line_code,
nucleo,
COUNT(*) as observations,
COUNT(DISTINCT train_id) as unique_trains,
AVG(delay_minutes)::FLOAT as avg_delay,
MAX(delay_minutes) as max_delay,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100, 1
) as punctuality_pct
FROM train_punctuality
WHERE recorded_at BETWEEN $1 AND $2
AND line_code IS NOT NULL
GROUP BY line_code, nucleo
HAVING COUNT(*) >= 10
ORDER BY punctuality_pct ASC
`, [startTime, targetTime]);
// Add nucleo_name to each row
const rowsWithNucleoName = result.rows.map(row => ({
...row,
nucleo_name: NUCLEO_NAMES[row.nucleo] || row.nucleo,
}));
res.json(rowsWithNucleoName);
} catch (error) {
next(error);
}
});
// GET /dashboard/available-range - Get available data time range
router.get('/available-range', async (req, res, next) => {
try {
const result = await db.query(`
SELECT
MIN(recorded_at) as earliest,
MAX(recorded_at) as latest
FROM train_punctuality
`);
res.json({
earliest: result.rows[0]?.earliest,
latest: result.rows[0]?.latest,
});
} catch (error) {
next(error);
}
});
export default router;
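
Since /snapshot and /current return comparable statistics, a client can diff the live picture against any historical moment; a sketch:

```
const base = 'http://localhost:3000';
const oneHourAgo = new Date(Date.now() - 3600000).toISOString();
const [current, snapshot] = await Promise.all([
  fetch(`${base}/dashboard/current`).then(r => r.json()),
  fetch(`${base}/dashboard/snapshot?timestamp=${encodeURIComponent(oneHourAgo)}`).then(r => r.json()),
]);
console.log(`Punctuality now: ${current.punctuality_percentage}%, one hour ago: ${snapshot.punctuality_percentage}%`);
```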

backend/src/api/routes/explorer.js Normal file

@@ -0,0 +1,371 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
const router = express.Router();
// GET /explorer/routes/:routeId - Get complete route information
router.get('/routes/:routeId', async (req, res, next) => {
  try {
    const { routeId } = req.params;
    // '/routes/between' is registered further down in this file; without this
    // guard the parameterized route would shadow it and treat 'between' as a route id
    if (routeId === 'between') return next();
// Get route info
const routeResult = await db.query(
'SELECT * FROM routes WHERE route_id = $1',
[routeId]
);
if (routeResult.rows.length === 0) {
return res.status(404).json({ error: 'Route not found' });
}
const route = routeResult.rows[0];
// Get all trips for this route
const tripsResult = await db.query(
'SELECT * FROM trips WHERE route_id = $1 ORDER BY trip_id',
[routeId]
);
// Get all stops for this route (from any trip)
const stopsResult = await db.query(`
SELECT DISTINCT ON (s.stop_id)
s.stop_id,
s.stop_name,
s.stop_lat,
s.stop_lon,
s.location_type,
s.parent_station
FROM stops s
JOIN stop_times st ON s.stop_id = st.stop_id
JOIN trips t ON st.trip_id = t.trip_id
WHERE t.route_id = $1
ORDER BY s.stop_id
`, [routeId]);
// Get shape if available
const shapeResult = await db.query(`
SELECT DISTINCT shape_id FROM trips WHERE route_id = $1 AND shape_id IS NOT NULL LIMIT 1
`, [routeId]);
let shape = null;
if (shapeResult.rows.length > 0 && shapeResult.rows[0].shape_id) {
const shapePointsResult = await db.query(`
SELECT
shape_pt_lat,
shape_pt_lon,
shape_pt_sequence,
shape_dist_traveled
FROM shapes
WHERE shape_id = $1
ORDER BY shape_pt_sequence
`, [shapeResult.rows[0].shape_id]);
shape = {
shape_id: shapeResult.rows[0].shape_id,
points: shapePointsResult.rows.map(p => ({
lat: parseFloat(p.shape_pt_lat),
lon: parseFloat(p.shape_pt_lon),
sequence: p.shape_pt_sequence,
distance: p.shape_dist_traveled,
})),
};
}
res.json({
route,
trips: tripsResult.rows,
stops: stopsResult.rows,
shape,
total_trips: tripsResult.rows.length,
total_stops: stopsResult.rows.length,
});
} catch (error) {
next(error);
}
});
// GET /explorer/trips/:tripId/schedule - Get complete trip schedule
router.get('/trips/:tripId/schedule', async (req, res, next) => {
try {
const { tripId } = req.params;
const result = await db.query(
'SELECT * FROM get_trip_schedule($1)',
[tripId]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Trip not found' });
}
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /explorer/stations/:stationId - Get complete station information
router.get('/stations/:stationId', async (req, res, next) => {
try {
const { stationId } = req.params;
// Get station info
const stationResult = await db.query(
'SELECT * FROM stops WHERE stop_id = $1',
[stationId]
);
if (stationResult.rows.length === 0) {
return res.status(404).json({ error: 'Station not found' });
}
const station = stationResult.rows[0];
// Get next departures
const departuresResult = await db.query(
'SELECT * FROM get_next_departures($1, 10)',
[stationId]
);
// Get routes that serve this station
const routesResult = await db.query(`
SELECT DISTINCT r.*
FROM routes r
JOIN trips t ON r.route_id = t.route_id
JOIN stop_times st ON t.trip_id = st.trip_id
WHERE st.stop_id = $1
`, [stationId]);
// Get statistics
const statsResult = await db.query(
'SELECT * FROM get_station_statistics($1, 7)',
[stationId]
);
res.json({
station,
next_departures: departuresResult.rows,
routes: routesResult.rows,
statistics: statsResult.rows[0] || null,
});
} catch (error) {
next(error);
}
});
// GET /explorer/planner - Trip planner (basic implementation)
router.get('/planner', async (req, res, next) => {
try {
const { origin, destination, time, date } = req.query;
if (!origin || !destination) {
return res.status(400).json({
error: 'Origin and destination are required',
});
}
// Get direct trips between two stations
const directTripsResult = await db.query(`
SELECT
t.trip_id,
t.route_id,
r.route_name,
t.trip_headsign,
origin_st.departure_time as origin_departure,
dest_st.arrival_time as destination_arrival,
EXTRACT(EPOCH FROM (dest_st.arrival_time - origin_st.departure_time)) / 60 as duration_minutes,
origin_st.stop_sequence as origin_sequence,
dest_st.stop_sequence as destination_sequence
FROM trips t
JOIN routes r ON t.route_id = r.route_id
JOIN stop_times origin_st ON t.trip_id = origin_st.trip_id AND origin_st.stop_id = $1
JOIN stop_times dest_st ON t.trip_id = dest_st.trip_id AND dest_st.stop_id = $2
WHERE origin_st.stop_sequence < dest_st.stop_sequence
AND ($3::TIME IS NULL OR origin_st.departure_time >= $3::TIME)
ORDER BY origin_st.departure_time
LIMIT 10
`, [origin, destination, time || null]);
// Get trip updates for delay info
const tripsWithDelays = await Promise.all(
directTripsResult.rows.map(async (trip) => {
const delayResult = await db.query(`
SELECT delay_seconds, schedule_relationship
FROM trip_updates
WHERE trip_id = $1
AND received_at > NOW() - INTERVAL '10 minutes'
ORDER BY received_at DESC
LIMIT 1
`, [trip.trip_id]);
return {
...trip,
delay: delayResult.rows[0] || null,
};
})
);
// Find trips with one transfer
const oneTransferResult = await db.query(`
WITH possible_transfers AS (
SELECT
t1.trip_id as trip1_id,
t1.route_id as route1_id,
r1.route_name as route1_name,
t2.trip_id as trip2_id,
t2.route_id as route2_id,
r2.route_name as route2_name,
origin_st.departure_time as origin_departure,
transfer_st1.arrival_time as transfer_arrival,
transfer_st2.departure_time as transfer_departure,
dest_st.arrival_time as destination_arrival,
transfer_st1.stop_id as transfer_station,
s.stop_name as transfer_station_name
FROM trips t1
JOIN routes r1 ON t1.route_id = r1.route_id
JOIN stop_times origin_st ON t1.trip_id = origin_st.trip_id AND origin_st.stop_id = $1
JOIN stop_times transfer_st1 ON t1.trip_id = transfer_st1.trip_id
JOIN stops s ON transfer_st1.stop_id = s.stop_id
JOIN stop_times transfer_st2 ON transfer_st1.stop_id = transfer_st2.stop_id
JOIN trips t2 ON transfer_st2.trip_id = t2.trip_id
JOIN routes r2 ON t2.route_id = r2.route_id
JOIN stop_times dest_st ON t2.trip_id = dest_st.trip_id AND dest_st.stop_id = $2
WHERE origin_st.stop_sequence < transfer_st1.stop_sequence
AND transfer_st2.stop_sequence < dest_st.stop_sequence
AND transfer_st1.arrival_time < transfer_st2.departure_time
AND (transfer_st2.departure_time - transfer_st1.arrival_time) >= INTERVAL '5 minutes'
AND (transfer_st2.departure_time - transfer_st1.arrival_time) <= INTERVAL '60 minutes'
AND ($3::TIME IS NULL OR origin_st.departure_time >= $3::TIME)
)
SELECT
*,
EXTRACT(EPOCH FROM (destination_arrival - origin_departure)) / 60 as total_duration_minutes
FROM possible_transfers
ORDER BY origin_departure, total_duration_minutes
LIMIT 5
`, [origin, destination, time || null]);
res.json({
origin,
destination,
requested_time: time || 'any',
requested_date: date || 'today',
direct_trips: tripsWithDelays,
trips_with_transfer: oneTransferResult.rows,
total_options: directTripsResult.rows.length + oneTransferResult.rows.length,
});
} catch (error) {
next(error);
}
});
// GET /explorer/stations/:stationId/nearby - Get nearby stations
router.get('/stations/:stationId/nearby', async (req, res, next) => {
try {
const { stationId } = req.params;
const { radius = 5 } = req.query; // radius in km
const result = await db.query(`
WITH target_station AS (
SELECT stop_lat, stop_lon
FROM stops
WHERE stop_id = $1
)
SELECT
s.stop_id,
s.stop_name,
s.stop_lat,
s.stop_lon,
ST_Distance(
ST_SetSRID(ST_MakePoint(s.stop_lon, s.stop_lat), 4326)::geography,
ST_SetSRID(ST_MakePoint(ts.stop_lon, ts.stop_lat), 4326)::geography
) / 1000 as distance_km
FROM stops s, target_station ts
WHERE s.stop_id != $1
AND ST_DWithin(
ST_SetSRID(ST_MakePoint(s.stop_lon, s.stop_lat), 4326)::geography,
ST_SetSRID(ST_MakePoint(ts.stop_lon, ts.stop_lat), 4326)::geography,
$2 * 1000
)
ORDER BY distance_km
LIMIT 20
`, [stationId, parseFloat(radius)]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /explorer/routes/between - Find all routes between two stations
router.get('/routes/between', async (req, res, next) => {
try {
const { origin, destination } = req.query;
if (!origin || !destination) {
return res.status(400).json({
error: 'Origin and destination are required',
});
}
const result = await db.query(`
SELECT DISTINCT
r.route_id,
r.route_name,
r.route_type,
r.route_color,
COUNT(DISTINCT t.trip_id) as daily_trips
FROM routes r
JOIN trips t ON r.route_id = t.route_id
JOIN stop_times origin_st ON t.trip_id = origin_st.trip_id AND origin_st.stop_id = $1
JOIN stop_times dest_st ON t.trip_id = dest_st.trip_id AND dest_st.stop_id = $2
WHERE origin_st.stop_sequence < dest_st.stop_sequence
GROUP BY r.route_id, r.route_name, r.route_type, r.route_color
ORDER BY daily_trips DESC
`, [origin, destination]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /explorer/search - Search for stations by name
router.get('/search', async (req, res, next) => {
try {
const { query, limit = 10 } = req.query;
if (!query || query.length < 2) {
return res.status(400).json({
error: 'Search query must be at least 2 characters',
});
}
const result = await db.query(`
SELECT
stop_id,
stop_name,
stop_lat,
stop_lon,
location_type,
parent_station
FROM stops
WHERE stop_name ILIKE $1
ORDER BY
CASE
WHEN stop_name ILIKE $2 THEN 1
WHEN stop_name ILIKE $3 THEN 2
ELSE 3
END,
stop_name
LIMIT $4
`, [`%${query}%`, `${query}%`, `% ${query}%`, parseInt(limit, 10)]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
export default router;
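
A planner call sketch; the station codes are placeholders that would normally come from /explorer/search:

```
const base = 'http://localhost:3000';
const qs = new URLSearchParams({ origin: '17000', destination: '79400', time: '08:00:00' }); // placeholder codes
const plan = await fetch(`${base}/explorer/planner?${qs}`).then(r => r.json());
console.log(`${plan.direct_trips.length} direct options, ${plan.trips_with_transfer.length} with one transfer`);
```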

backend/src/api/routes/lines.js Normal file

@@ -0,0 +1,160 @@
import express from 'express';
import db from '../../lib/db.js';
import logger from '../../lib/logger.js';
const router = express.Router();
// GET /lines - Get all train lines
router.get('/', async (req, res, next) => {
try {
const { nucleo } = req.query;
let query = `
SELECT
line_id,
line_code,
line_name,
nucleo_id,
nucleo_name,
color,
metadata
FROM train_lines
`;
const params = [];
if (nucleo) {
query += ' WHERE nucleo_id = $1';
params.push(nucleo);
}
query += ' ORDER BY nucleo_name, line_code';
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /lines/:id - Get specific line with geometry
router.get('/:id', async (req, res, next) => {
try {
const { id } = req.params;
const result = await db.query(
`SELECT
line_id,
line_code,
line_name,
nucleo_id,
nucleo_name,
color,
ST_AsGeoJSON(geometry) as geometry,
metadata
FROM train_lines
WHERE line_id = $1 OR line_code = $1`,
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Line not found' });
}
const line = result.rows[0];
if (line.geometry) {
line.geometry = JSON.parse(line.geometry);
}
res.json(line);
} catch (error) {
next(error);
}
});
// GET /lines/:id/stations - Get stations on a line
router.get('/:id/stations', async (req, res, next) => {
try {
const { id } = req.params;
// Get line first
const lineResult = await db.query(
`SELECT line_code FROM train_lines WHERE line_id = $1 OR line_code = $1`,
[id]
);
if (lineResult.rows.length === 0) {
return res.status(404).json({ error: 'Line not found' });
}
const lineCode = lineResult.rows[0].line_code;
// Get stations that have this line in their LINEAS metadata
const result = await db.query(
`SELECT
station_id,
station_code,
station_name,
latitude,
longitude,
station_type,
metadata
FROM stations
WHERE metadata->>'lineas' LIKE $1
ORDER BY station_name`,
[`%${lineCode}%`]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /lines/format/geojson - Get all lines as a GeoJSON FeatureCollection
router.get('/format/geojson', async (req, res, next) => {
try {
const { nucleo } = req.query;
let query = `
SELECT
line_id,
line_code,
line_name,
nucleo_id,
nucleo_name,
color,
ST_AsGeoJSON(geometry) as geometry
FROM train_lines
WHERE geometry IS NOT NULL
`;
const params = [];
if (nucleo) {
query += ' AND nucleo_id = $1';
params.push(nucleo);
}
const result = await db.query(query, params);
const features = result.rows.map(line => ({
type: 'Feature',
properties: {
id: line.line_id,
codigo: line.line_code,
nombre: line.line_name,
nucleo: line.nucleo_name,
color: line.color,
},
geometry: line.geometry ? JSON.parse(line.geometry) : null,
})).filter(f => f.geometry);
res.json({
type: 'FeatureCollection',
features,
});
} catch (error) {
next(error);
}
});
export default router;
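
The FeatureCollection from /lines/format/geojson is ready for the frontend's Leaflet layer; a sketch using the Madrid nucleo code from the dashboard router:

```
const base = 'http://localhost:3000';
const fc = await fetch(`${base}/lines/format/geojson?nucleo=10`).then(r => r.json()); // '10' = Madrid
console.log(`${fc.features.length} line geometries`);
// In the frontend, each feature's stored color can drive the line style:
// L.geoJSON(fc, { style: f => ({ color: f.properties.color }) });
```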

backend/src/api/routes/punctuality.js Normal file

@@ -0,0 +1,267 @@
import express from 'express';
import db from '../../lib/db.js';
const router = express.Router();
// GET /punctuality/summary - Get overall punctuality summary
router.get('/summary', async (req, res, next) => {
try {
const { days = 7 } = req.query;
const result = await db.query(`
SELECT
COUNT(*) as total_observations,
COUNT(DISTINCT train_id) as unique_trains,
COUNT(DISTINCT line_code) as unique_lines,
AVG(delay_minutes)::FLOAT as avg_delay_minutes,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY delay_minutes)::FLOAT as median_delay_minutes,
MAX(delay_minutes) as max_delay_minutes,
COUNT(CASE WHEN delay_minutes = 0 THEN 1 END) as on_time_count,
COUNT(CASE WHEN delay_minutes > 0 AND delay_minutes <= 5 THEN 1 END) as minor_delay_count,
COUNT(CASE WHEN delay_minutes > 5 AND delay_minutes <= 15 THEN 1 END) as moderate_delay_count,
COUNT(CASE WHEN delay_minutes > 15 THEN 1 END) as severe_delay_count,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100,
2
) as punctuality_percentage
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '1 day' * $1
`, [parseInt(days, 10)]);
res.json(result.rows[0] || {});
} catch (error) {
next(error);
}
});
// GET /punctuality/by-line - Get punctuality statistics by line
router.get('/by-line', async (req, res, next) => {
try {
const { days = 7, limit = 50 } = req.query;
const result = await db.query(`
SELECT
line_code,
nucleo,
COUNT(*) as total_observations,
COUNT(DISTINCT train_id) as unique_trains,
AVG(delay_minutes)::FLOAT as avg_delay_minutes,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY delay_minutes)::FLOAT as median_delay_minutes,
MAX(delay_minutes) as max_delay_minutes,
COUNT(CASE WHEN delay_minutes = 0 THEN 1 END) as on_time_count,
COUNT(CASE WHEN delay_minutes > 0 AND delay_minutes <= 5 THEN 1 END) as minor_delay_count,
COUNT(CASE WHEN delay_minutes > 5 AND delay_minutes <= 15 THEN 1 END) as moderate_delay_count,
COUNT(CASE WHEN delay_minutes > 15 THEN 1 END) as severe_delay_count,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100,
2
) as punctuality_percentage
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '1 day' * $1
AND line_code IS NOT NULL
GROUP BY line_code, nucleo
HAVING COUNT(*) >= 10
ORDER BY punctuality_percentage ASC
LIMIT $2
`, [parseInt(days, 10), parseInt(limit, 10)]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /punctuality/by-hour - Get punctuality pattern by hour of day
router.get('/by-hour', async (req, res, next) => {
try {
const { days = 7, line_code } = req.query;
let query = `
SELECT
EXTRACT(HOUR FROM recorded_at)::INTEGER as hour_of_day,
COUNT(*) as total_observations,
AVG(delay_minutes)::FLOAT as avg_delay_minutes,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100,
2
) as punctuality_percentage
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '1 day' * $1
`;
const params = [parseInt(days, 10)];
if (line_code) {
query += ` AND line_code = $2`;
params.push(line_code);
}
query += `
GROUP BY EXTRACT(HOUR FROM recorded_at)
ORDER BY hour_of_day
`;
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /punctuality/daily - Get daily punctuality trend
router.get('/daily', async (req, res, next) => {
try {
const { days = 30, line_code } = req.query;
let query = `
SELECT
DATE(recorded_at) as date,
COUNT(*) as total_observations,
COUNT(DISTINCT train_id) as unique_trains,
AVG(delay_minutes)::FLOAT as avg_delay_minutes,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100,
2
) as punctuality_percentage
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '1 day' * $1
`;
const params = [parseInt(days, 10)];
if (line_code) {
query += ` AND line_code = $2`;
params.push(line_code);
}
query += `
GROUP BY DATE(recorded_at)
ORDER BY date DESC
`;
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /punctuality/worst-lines - Get worst performing lines
router.get('/worst-lines', async (req, res, next) => {
try {
const { days = 7, limit = 10 } = req.query;
const result = await db.query(`
SELECT * FROM get_worst_punctuality_lines($1, $2)
`, [parseInt(days, 10), parseInt(limit, 10)]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /punctuality/line/:lineCode - Get detailed stats for a specific line
router.get('/line/:lineCode', async (req, res, next) => {
try {
const { lineCode } = req.params;
const { days = 7 } = req.query;
const result = await db.query(`
SELECT * FROM get_line_punctuality_summary($1, $2)
`, [lineCode, parseInt(days, 10)]);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Line not found or no data available' });
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
// GET /punctuality/routes - Get punctuality by origin-destination pairs
router.get('/routes', async (req, res, next) => {
try {
const { days = 7, limit = 50, line_code } = req.query;
let query = `
SELECT
origin_station_code,
destination_station_code,
line_code,
COUNT(*) as total_trips,
AVG(delay_minutes)::FLOAT as avg_delay_minutes,
MAX(delay_minutes) as max_delay_minutes,
ROUND(
COUNT(CASE WHEN delay_minutes <= 5 THEN 1 END)::NUMERIC /
NULLIF(COUNT(*), 0) * 100,
2
) as punctuality_percentage
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '1 day' * $1
AND origin_station_code IS NOT NULL
AND destination_station_code IS NOT NULL
`;
const params = [parseInt(days, 10)];
if (line_code) {
query += ` AND line_code = $2`;
params.push(line_code);
}
query += `
GROUP BY origin_station_code, destination_station_code, line_code
HAVING COUNT(*) >= 5
ORDER BY punctuality_percentage ASC
LIMIT $${params.length + 1}
`;
params.push(parseInt(limit, 10));
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /punctuality/current-delays - Get trains currently delayed
router.get('/current-delays', async (req, res, next) => {
try {
const { min_delay = 5 } = req.query;
const result = await db.query(`
SELECT DISTINCT ON (train_id)
train_id,
trip_id,
line_code,
nucleo,
origin_station_code,
destination_station_code,
current_station_code,
next_station_code,
delay_minutes,
platform,
recorded_at
FROM train_punctuality
WHERE recorded_at > NOW() - INTERVAL '10 minutes'
AND delay_minutes >= $1
ORDER BY train_id, recorded_at DESC
`, [parseInt(min_delay, 10)]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
export default router;
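
A sketch that pulls the weekly summary together with the worst-performing lines:

```
const base = 'http://localhost:3000';
const summary = await fetch(`${base}/punctuality/summary?days=7`).then(r => r.json());
const worst = await fetch(`${base}/punctuality/worst-lines?days=7&limit=10`).then(r => r.json());
console.log(`System punctuality: ${summary.punctuality_percentage}% over ${summary.total_observations} observations`);
console.table(worst);
```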

backend/src/api/routes/routes.js Normal file

@@ -0,0 +1,42 @@
import express from 'express';
import db from '../../lib/db.js';
const router = express.Router();
// GET /routes - Get all routes
router.get('/', async (req, res, next) => {
try {
const result = await db.query(`
SELECT * FROM routes
ORDER BY route_name
`);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /routes/:id - Get specific route
router.get('/:id', async (req, res, next) => {
try {
const { id } = req.params;
const result = await db.query(
'SELECT * FROM routes WHERE route_id = $1',
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'Route not found',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
export default router;

backend/src/api/routes/stations.js Normal file

@@ -0,0 +1,51 @@
import express from 'express';
import db from '../../lib/db.js';
const router = express.Router();
// GET /stations - Get all stations
router.get('/', async (req, res, next) => {
try {
const { type } = req.query;
let query = 'SELECT * FROM stations';
const params = [];
if (type) {
query += ' WHERE station_type = $1';
params.push(type);
}
query += ' ORDER BY station_name';
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /stations/:id - Get specific station
router.get('/:id', async (req, res, next) => {
try {
const { id } = req.params;
const result = await db.query(
'SELECT * FROM stations WHERE station_id = $1',
[id]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'Station not found',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
export default router;

backend/src/api/routes/stats.js Normal file

@@ -0,0 +1,70 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
const router = express.Router();
// GET /stats - Get system statistics
router.get('/', async (req, res, next) => {
try {
// Get active trains count
const activeTrains = await redis.sMembers('trains:active');
const activeCount = activeTrains.length;
// Get last update time
const lastUpdate = await redis.get('stats:last_update');
// Get total trains in database
const totalResult = await db.query(
'SELECT COUNT(*) as total FROM trains'
);
const totalTrains = parseInt(totalResult.rows[0].total, 10);
// Get total positions stored
const positionsResult = await db.query(
'SELECT COUNT(*) as total FROM train_positions WHERE recorded_at > NOW() - INTERVAL \'24 hours\''
);
const positions24h = parseInt(positionsResult.rows[0].total, 10);
res.json({
active_trains: activeCount,
total_trains: totalTrains,
positions_24h: positions24h,
last_update: lastUpdate,
timestamp: new Date().toISOString(),
});
} catch (error) {
next(error);
}
});
// GET /stats/train/:id - Get statistics for specific train
router.get('/train/:id', async (req, res, next) => {
try {
const { id } = req.params;
const { from, to } = req.query;
if (!from || !to) {
return res.status(400).json({
error: 'Missing required parameters: from, to',
});
}
const result = await db.query(
`SELECT * FROM calculate_train_statistics($1, $2, $3)`,
[id, new Date(from), new Date(to)]
);
if (result.rows.length === 0) {
return res.status(404).json({
error: 'No data found for this train in the specified period',
});
}
res.json(result.rows[0]);
} catch (error) {
next(error);
}
});
export default router;
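
/stats/train/:id rejects requests without an explicit window, so a client always supplies from/to; a sketch with a placeholder train id:

```
const base = 'http://localhost:3000';
const to = new Date();
const from = new Date(to.getTime() - 6 * 3600000); // last six hours
const qs = new URLSearchParams({ from: from.toISOString(), to: to.toISOString() });
const stats = await fetch(`${base}/stats/train/TRAIN_ID?${qs}`).then(r => r.json()); // TRAIN_ID is a placeholder
console.log(stats);
```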

backend/src/api/routes/trains.js Normal file

@@ -0,0 +1,321 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
import logger from '../../lib/logger.js';
const router = express.Router();
// Normalize station code - Renfe uses codes with leading zeros (04040) but geojson has them without (4040)
function normalizeStationCode(code) {
if (!code) return null;
// Remove leading zeros for lookup
return code.replace(/^0+/, '');
}
// Helper to get station names map from codes
async function getStationNamesMap(stationCodes) {
if (!stationCodes || stationCodes.length === 0) return new Map();
const uniqueCodes = [...new Set(stationCodes.filter(Boolean))];
if (uniqueCodes.length === 0) return new Map();
// Create both original and normalized versions for lookup
const normalizedCodes = uniqueCodes.map(c => normalizeStationCode(c));
const allCodes = [...new Set([...uniqueCodes, ...normalizedCodes])];
const result = await db.query(
`SELECT station_code, station_name FROM stations WHERE station_code = ANY($1)`,
[allCodes]
);
// Build map that works with both original and normalized codes
const dbMap = new Map(result.rows.map(s => [s.station_code, s.station_name]));
const resultMap = new Map();
for (const code of uniqueCodes) {
// Try original code first, then normalized
const name = dbMap.get(code) || dbMap.get(normalizeStationCode(code));
if (name) {
resultMap.set(code, name);
}
}
return resultMap;
}
// GET /trains/current - Get all current train positions with fleet data
router.get('/current', async (req, res, next) => {
try {
const trainIds = await redis.sMembers('trains:active');
if (trainIds.length === 0) {
return res.json([]);
}
// First pass: collect all positions and fleet data
const trainsData = await Promise.all(
trainIds.map(async (trainId) => {
const data = await redis.get(`trains:current:${trainId}`);
if (!data) return null;
const position = JSON.parse(data);
const fleetData = await redis.get(`fleet:${trainId}`);
const fleet = fleetData ? JSON.parse(fleetData) : null;
return { position, fleet };
})
);
const validTrains = trainsData.filter(t => t !== null);
// Collect all station codes to resolve names in one query
const allStationCodes = [];
for (const { fleet } of validTrains) {
if (fleet) {
allStationCodes.push(fleet.codEstAct, fleet.codEstSig, fleet.codEstDest, fleet.codEstOrig);
}
}
// Get station names map
const stationNames = await getStationNamesMap(allStationCodes);
// Build final response with station names
const positions = validTrains.map(({ position, fleet }) => {
if (fleet) {
return {
...position,
codLinea: fleet.codLinea,
retrasoMin: fleet.retrasoMin,
codEstAct: fleet.codEstAct,
estacionActual: stationNames.get(fleet.codEstAct) || null,
codEstSig: fleet.codEstSig,
estacionSiguiente: stationNames.get(fleet.codEstSig) || null,
horaLlegadaSigEst: fleet.horaLlegadaSigEst,
codEstDest: fleet.codEstDest,
estacionDestino: stationNames.get(fleet.codEstDest) || null,
codEstOrig: fleet.codEstOrig,
estacionOrigen: stationNames.get(fleet.codEstOrig) || null,
nucleo: fleet.nucleo,
accesible: fleet.accesible,
via: fleet.via,
};
}
return position;
});
res.json(positions);
} catch (error) {
next(error);
}
});
// GET /trains/:id - Get specific train information with fleet data
router.get('/:id', async (req, res, next) => {
  try {
    const { id } = req.params;
    // '/area' is registered further down; pass it through so this parameterized
    // route does not shadow GET /trains/area
    if (id === 'area') return next();
// Get current position from Redis
const currentData = await redis.get(`trains:current:${id}`);
const current = currentData ? JSON.parse(currentData) : null;
// Get fleet data from Renfe
const fleetData = await redis.get(`fleet:${id}`);
const fleet = fleetData ? JSON.parse(fleetData) : null;
// Get train info from database
const trainResult = await db.query(
'SELECT * FROM trains WHERE train_id = $1',
[id]
);
const train = trainResult.rows[0] || null;
if (!train && !current) {
return res.status(404).json({
error: 'Train not found',
});
}
// Resolve station names if we have fleet data
let currentStation = null;
let nextStation = null;
let destStation = null;
let origStation = null;
if (fleet) {
const stationCodes = [fleet.codEstAct, fleet.codEstSig, fleet.codEstDest, fleet.codEstOrig].filter(Boolean);
if (stationCodes.length > 0) {
const stationMap = await getStationNamesMap(stationCodes);
currentStation = stationMap.get(fleet.codEstAct);
nextStation = stationMap.get(fleet.codEstSig);
destStation = stationMap.get(fleet.codEstDest);
origStation = stationMap.get(fleet.codEstOrig);
}
}
res.json({
...train,
current_position: current,
fleet_data: fleet ? {
codLinea: fleet.codLinea,
retrasoMin: fleet.retrasoMin,
codEstAct: fleet.codEstAct,
estacionActual: currentStation,
codEstSig: fleet.codEstSig,
estacionSiguiente: nextStation,
horaLlegadaSigEst: fleet.horaLlegadaSigEst,
codEstDest: fleet.codEstDest,
estacionDestino: destStation,
codEstOrig: fleet.codEstOrig,
estacionOrigen: origStation,
nucleo: fleet.nucleo,
accesible: fleet.accesible,
via: fleet.via,
} : null,
});
} catch (error) {
next(error);
}
});
// GET /trains/history/all - Get all train positions in a time range
router.get('/history/all', async (req, res, next) => {
try {
const { from, to, limit = 5000 } = req.query;
// Default to last hour if no time range specified
const endTime = to ? new Date(to) : new Date();
const startTime = from ? new Date(from) : new Date(endTime.getTime() - 3600000);
const result = await db.query(
`SELECT
train_id,
latitude,
longitude,
bearing,
speed,
status,
occupancy_status,
trip_id,
timestamp,
recorded_at
FROM train_positions
WHERE timestamp >= $1 AND timestamp <= $2
ORDER BY timestamp ASC
LIMIT $3`,
[startTime, endTime, parseInt(limit, 10)]
);
// Convert latitude/longitude to numbers
const positions = result.rows.map(row => ({
...row,
latitude: parseFloat(row.latitude),
longitude: parseFloat(row.longitude),
}));
res.json(positions);
} catch (error) {
next(error);
}
});
// GET /trains/:id/history - Get train position history
router.get('/:id/history', async (req, res, next) => {
try {
const { id } = req.params;
const { from, to, limit = 100 } = req.query;
let query = `
SELECT
train_id,
latitude,
longitude,
bearing,
speed,
status,
timestamp,
recorded_at
FROM train_positions
WHERE train_id = $1
`;
const params = [id];
let paramIndex = 2;
if (from) {
query += ` AND timestamp >= $${paramIndex}`;
params.push(new Date(from));
paramIndex++;
}
if (to) {
query += ` AND timestamp <= $${paramIndex}`;
params.push(new Date(to));
paramIndex++;
}
query += ` ORDER BY timestamp DESC LIMIT $${paramIndex}`;
params.push(parseInt(limit, 10));
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /trains/:id/path - Get train path for visualization
router.get('/:id/path', async (req, res, next) => {
try {
const { id } = req.params;
const { from, to } = req.query;
if (!from || !to) {
return res.status(400).json({
error: 'Missing required parameters: from, to',
});
}
const result = await db.query(
`SELECT * FROM get_train_path($1, $2, $3)`,
[id, new Date(from), new Date(to)]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /trains/area - Get trains in a geographic area
router.get('/area', async (req, res, next) => {
try {
const { minLat, minLon, maxLat, maxLon, time } = req.query;
if (!minLat || !minLon || !maxLat || !maxLon) {
return res.status(400).json({
error: 'Missing required parameters: minLat, minLon, maxLat, maxLon',
});
}
const timestamp = time ? new Date(time) : new Date();
const result = await db.query(
`SELECT * FROM get_trains_in_area($1, $2, $3, $4, $5)`,
[
parseFloat(minLat),
parseFloat(minLon),
parseFloat(maxLat),
parseFloat(maxLon),
timestamp,
]
);
res.json(result.rows);
} catch (error) {
next(error);
}
});
export default router;
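
Because /trains/current merges Redis positions with fleet data and resolved station names, clients get display-ready objects in one call; a sketch that tallies trains per nucleo:

```
const base = 'http://localhost:3000';
const trains = await fetch(`${base}/trains/current`).then(r => r.json());
const byNucleo = {};
for (const t of trains) {
  const key = t.nucleo || 'unknown'; // trains without fleet data carry no nucleo
  byNucleo[key] = (byNucleo[key] || 0) + 1;
}
console.log(byNucleo);
```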

backend/src/api/routes/trips.js Normal file

@@ -0,0 +1,213 @@
import express from 'express';
import db from '../../lib/db.js';
import redis from '../../lib/redis.js';
const router = express.Router();
// GET /trips - Get all active trips for today
router.get('/', async (req, res, next) => {
try {
const { route_id, service_id } = req.query;
let query = 'SELECT * FROM active_trips_today WHERE 1=1';
const params = [];
let paramIndex = 1;
if (route_id) {
query += ` AND route_id = $${paramIndex}`;
params.push(route_id);
paramIndex++;
}
if (service_id) {
query += ` AND service_id = $${paramIndex}`;
params.push(service_id);
paramIndex++;
}
query += ' ORDER BY trip_id';
const result = await db.query(query, params);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /trips/:id - Get specific trip details
router.get('/:id', async (req, res, next) => {
try {
const { id } = req.params;
const tripResult = await db.query(
'SELECT * FROM trips WHERE trip_id = $1',
[id]
);
if (tripResult.rows.length === 0) {
return res.status(404).json({
error: 'Trip not found',
});
}
const trip = tripResult.rows[0];
// Get schedule using the function
const scheduleResult = await db.query(
'SELECT * FROM get_trip_schedule($1)',
[id]
);
res.json({
...trip,
schedule: scheduleResult.rows,
});
} catch (error) {
next(error);
}
});
// GET /trips/:id/updates - Get real-time updates for a trip
router.get('/:id/updates', async (req, res, next) => {
try {
const { id } = req.params;
// Get latest trip update
const updateResult = await db.query(`
SELECT
tu.*,
json_agg(
json_build_object(
'stop_sequence', stu.stop_sequence,
'stop_id', stu.stop_id,
'arrival_delay', stu.arrival_delay,
'departure_delay', stu.departure_delay,
'schedule_relationship', stu.schedule_relationship
) ORDER BY stu.stop_sequence
) as stop_time_updates
FROM trip_updates tu
LEFT JOIN stop_time_updates stu ON tu.update_id = stu.update_id
WHERE tu.trip_id = $1
AND tu.received_at > NOW() - INTERVAL '10 minutes'
GROUP BY tu.update_id
ORDER BY tu.received_at DESC
LIMIT 1
`, [id]);
if (updateResult.rows.length === 0) {
return res.json({
trip_id: id,
has_updates: false,
message: 'No recent updates available',
});
}
res.json({
trip_id: id,
has_updates: true,
...updateResult.rows[0],
});
} catch (error) {
next(error);
}
});
// GET /trips/:id/delays - Get delay information for a trip
router.get('/:id/delays', async (req, res, next) => {
try {
const { id } = req.params;
// Check Redis cache first
const cachedDelay = await redis.get(`trip:delay:${id}`);
if (cachedDelay) {
return res.json(JSON.parse(cachedDelay));
}
// Get from database
const result = await db.query(`
SELECT
trip_id,
delay_seconds,
schedule_relationship,
received_at,
CASE
WHEN delay_seconds IS NULL THEN 'NO_DATA'
WHEN delay_seconds = 0 THEN 'ON_TIME'
WHEN delay_seconds > 0 AND delay_seconds <= 300 THEN 'MINOR_DELAY'
WHEN delay_seconds > 300 AND delay_seconds <= 900 THEN 'MODERATE_DELAY'
WHEN delay_seconds > 900 THEN 'MAJOR_DELAY'
WHEN delay_seconds < 0 THEN 'EARLY'
END as delay_status,
CASE
WHEN delay_seconds >= 60 THEN
FLOOR(delay_seconds / 60) || ' min ' || MOD(delay_seconds::int, 60) || ' s'
WHEN delay_seconds IS NOT NULL THEN
delay_seconds || ' s'
ELSE 'N/A'
END as delay_formatted
FROM trip_updates
WHERE trip_id = $1
AND received_at > NOW() - INTERVAL '10 minutes'
ORDER BY received_at DESC
LIMIT 1
`, [id]);
if (result.rows.length === 0) {
return res.json({
trip_id: id,
delay_status: 'NO_DATA',
delay_seconds: null,
delay_formatted: 'N/A',
message: 'No recent delay information available',
});
}
const delayInfo = result.rows[0];
// Cache for 30 seconds
await redis.set(`trip:delay:${id}`, JSON.stringify(delayInfo), 30);
res.json(delayInfo);
} catch (error) {
next(error);
}
});
// GET /trips/route/:routeId - Get all trips for a specific route
router.get('/route/:routeId', async (req, res, next) => {
try {
const { routeId } = req.params;
const result = await db.query(`
SELECT t.*
FROM trips t
WHERE t.route_id = $1
ORDER BY t.trip_id
`, [routeId]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
// GET /trips/delayed/all - Get all currently delayed trips
router.get('/delayed/all', async (req, res, next) => {
try {
const { min_delay } = req.query;
const minDelaySeconds = min_delay ? parseInt(min_delay, 10) : 0;
const result = await db.query(`
SELECT * FROM delayed_trips
WHERE delay_seconds >= $1
ORDER BY delay_seconds DESC
`, [minDelaySeconds]);
res.json(result.rows);
} catch (error) {
next(error);
}
});
export default router;
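
Delay lookups are cheap to poll because the router caches each result in Redis for 30 seconds; a sketch with a placeholder trip id:

```
const base = 'http://localhost:3000';
const delay = await fetch(`${base}/trips/TRIP_ID/delays`).then(r => r.json()); // TRIP_ID is a placeholder
console.log(delay.delay_status, delay.delay_formatted); // e.g. 'MINOR_DELAY' '2 min 30 s'
```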

backend/src/api/server.js Normal file

@@ -0,0 +1,278 @@
import express from 'express';
import { createServer } from 'http';
import { Server } from 'socket.io';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
import {
rateLimiters,
helmetConfig,
hppProtection,
securityHeaders,
sanitizeRequest,
securityErrorHandler
} from '../lib/security.js';
import trainsRouter from './routes/trains.js';
import routesRouter from './routes/routes.js';
import stationsRouter from './routes/stations.js';
import statsRouter from './routes/stats.js';
import alertsRouter from './routes/alerts.js';
import tripsRouter from './routes/trips.js';
import analyticsRouter from './routes/analytics.js';
import explorerRouter from './routes/explorer.js';
import linesRouter from './routes/lines.js';
import punctualityRouter from './routes/punctuality.js';
import dashboardRouter from './routes/dashboard.js';
class APIServer {
constructor() {
this.app = express();
this.httpServer = createServer(this.app);
this.io = new Server(this.httpServer, {
cors: {
origin: config.cors.origin,
methods: ['GET', 'POST'],
},
});
this.watchInterval = null;
}
setupMiddleware() {
// Trust proxy (for rate limiting behind nginx)
this.app.set('trust proxy', 1);
// Security headers (Helmet)
this.app.use(helmetConfig);
// Additional security headers
this.app.use(securityHeaders);
// HPP - HTTP Parameter Pollution protection
this.app.use(hppProtection);
// CORS
this.app.use((req, res, next) => {
const origin = req.headers.origin;
if (config.cors.origin.includes(origin) || config.cors.origin.includes('*')) {
res.header('Access-Control-Allow-Origin', origin);
}
res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept, Authorization');
res.header('Access-Control-Max-Age', '86400'); // 24 hours
// Handle preflight
if (req.method === 'OPTIONS') {
return res.sendStatus(204);
}
next();
});
// JSON parser with size limit
this.app.use(express.json({ limit: '1mb' }));
this.app.use(express.urlencoded({ extended: true, limit: '1mb' }));
// Request sanitization
this.app.use(sanitizeRequest);
// General rate limiting (skip in development if needed)
if (config.env === 'production' || config.security?.enableRateLimiting) {
this.app.use(rateLimiters.general);
}
// Request logging
this.app.use((req, res, next) => {
const start = Date.now();
res.on('finish', () => {
const duration = Date.now() - start;
logger.info({
method: req.method,
url: req.url,
status: res.statusCode,
duration: `${duration}ms`,
ip: req.ip,
}, 'HTTP Request');
});
next();
});
}
setupRoutes() {
// Health check (no rate limiting)
this.app.get('/health', (req, res) => {
res.json({
status: 'ok',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
security: {
rateLimiting: config.env === 'production' || config.security?.enableRateLimiting,
helmet: true,
hpp: true
}
});
});
// API routes with rate limiting
this.app.use('/trains', trainsRouter);
this.app.use('/routes', routesRouter);
this.app.use('/stations', stationsRouter);
this.app.use('/stats', statsRouter);
this.app.use('/alerts', alertsRouter);
this.app.use('/trips', tripsRouter);
this.app.use('/lines', linesRouter);
this.app.use('/punctuality', punctualityRouter);
this.app.use('/dashboard', dashboardRouter);
// Analytics routes with stricter rate limiting
this.app.use('/analytics', rateLimiters.strict, analyticsRouter);
// Explorer routes with strict rate limiting
this.app.use('/explorer', rateLimiters.strict, explorerRouter);
// Security error handler
this.app.use(securityErrorHandler);
// 404 handler
this.app.use((req, res) => {
res.status(404).json({
error: 'Not Found',
path: req.url,
});
});
// Error handler
this.app.use((err, req, res, next) => {
// Don't expose stack trace in production
const errorResponse = {
error: err.message || 'Internal Server Error',
};
if (config.env !== 'production') {
errorResponse.stack = err.stack;
}
logger.error({ error: err.message, stack: err.stack }, 'API Error');
res.status(err.status || 500).json(errorResponse);
});
}
setupWebSocket() {
this.io.on('connection', (socket) => {
logger.info({ socketId: socket.id }, 'WebSocket client connected');
// Join default room
socket.join('trains');
// Handle subscribe to specific train
socket.on('subscribe:train', (trainId) => {
socket.join(`train:${trainId}`);
logger.debug({ socketId: socket.id, trainId }, 'Client subscribed to train');
});
// Handle unsubscribe from specific train
socket.on('unsubscribe:train', (trainId) => {
socket.leave(`train:${trainId}`);
logger.debug({ socketId: socket.id, trainId }, 'Client unsubscribed from train');
});
socket.on('disconnect', () => {
logger.info({ socketId: socket.id }, 'WebSocket client disconnected');
});
});
// Watch Redis for updates and broadcast via WebSocket
this.startRedisWatch();
}
async startRedisWatch() {
// Poll Redis every 2 seconds for changes
this.watchInterval = setInterval(async () => {
try {
const lastUpdate = await redis.get('stats:last_update');
if (!lastUpdate) return;
const trainIds = await redis.sMembers('trains:active');
if (trainIds.length === 0) return;
// Get all current positions
const positions = await Promise.all(
trainIds.map(async (trainId) => {
const data = await redis.get(`trains:current:${trainId}`);
return data ? JSON.parse(data) : null;
})
);
const validPositions = positions.filter(p => p !== null);
if (validPositions.length > 0) {
// Broadcast to all clients
this.io.to('trains').emit('trains:update', validPositions);
// Broadcast individual train updates
for (const position of validPositions) {
this.io.to(`train:${position.train_id}`).emit('train:update', position);
}
}
} catch (error) {
logger.error({ error: error.message }, 'Error in Redis watch');
}
}, 2000);
}
async start() {
// Connect to databases
await db.connect();
await redis.connect();
// Setup middleware and routes
this.setupMiddleware();
this.setupRoutes();
this.setupWebSocket();
// Start HTTP server
this.httpServer.listen(config.port, () => {
logger.info({
port: config.port,
env: config.env,
}, 'API Server started');
});
}
async stop() {
logger.info('Stopping API Server...');
if (this.watchInterval) {
clearInterval(this.watchInterval);
this.watchInterval = null;
}
this.io.close();
this.httpServer.close();
await db.disconnect();
await redis.disconnect();
logger.info('API Server stopped');
}
}
// Main execution
const server = new APIServer();
// Graceful shutdown
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await server.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Start server
server.start().catch((error) => {
logger.fatal({ error }, 'Failed to start API server');
process.exit(1);
});
export default APIServer;
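
For reference, a minimal client-side sketch (not part of this commit) of the Socket.io contract the server exposes; the URL and train ID are placeholders:

```
import { io } from 'socket.io-client';

const socket = io('http://localhost:3000');

// Every client is joined to the "trains" room, so bulk updates arrive automatically
socket.on('trains:update', (positions) => {
  console.log(`${positions.length} active trains`);
});

// Follow one train: the server adds this socket to the `train:<id>` room
socket.emit('subscribe:train', 'RENFE-12345');
socket.on('train:update', (p) => {
  console.log(p.train_id, p.latitude, p.longitude, p.speed);
});
```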

backend/src/config/index.js Normal file
@@ -0,0 +1,62 @@
import dotenv from 'dotenv';
dotenv.config();
export const config = {
// Server
port: parseInt(process.env.PORT || '3000', 10),
env: process.env.NODE_ENV || 'development',
logLevel: process.env.LOG_LEVEL || 'info',
// Database
database: {
url: process.env.DATABASE_URL,
poolMin: 2,
poolMax: 10,
},
// Redis
redis: {
url: process.env.REDIS_URL,
},
// GTFS-RT
gtfsRT: {
vehiclePositionsUrl: process.env.GTFS_RT_URL || 'https://gtfsrt.renfe.com/vehicle_positions.pb',
tripUpdatesUrl: process.env.GTFS_TRIP_UPDATES_URL || 'https://gtfsrt.renfe.com/trip_updates_cercanias.pb',
alertsUrl: process.env.GTFS_ALERTS_URL || 'https://gtfsrt.renfe.com/alerts.pb',
pollingInterval: parseInt(process.env.POLLING_INTERVAL || '30000', 10),
},
// CORS
cors: {
origin: process.env.CORS_ORIGIN?.split(',') || ['http://localhost:3000'],
},
// JWT
jwt: {
secret: process.env.JWT_SECRET || 'default_secret_change_me',
expiresIn: '7d',
},
// Security
security: {
enableRateLimiting: process.env.ENABLE_RATE_LIMITING !== 'false',
rateLimits: {
general: {
windowMs: parseInt(process.env.RATE_LIMIT_WINDOW_MS || '900000', 10), // 15 minutes
max: parseInt(process.env.RATE_LIMIT_MAX || '1000', 10),
},
strict: {
windowMs: parseInt(process.env.RATE_LIMIT_STRICT_WINDOW_MS || '900000', 10),
max: parseInt(process.env.RATE_LIMIT_STRICT_MAX || '100', 10),
},
export: {
windowMs: parseInt(process.env.RATE_LIMIT_EXPORT_WINDOW_MS || '3600000', 10), // 1 hour
max: parseInt(process.env.RATE_LIMIT_EXPORT_MAX || '10', 10),
},
},
},
};
export default config;
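
A sketch of a matching `.env`, assuming local defaults; every value is a placeholder and the variable names are the ones read above:

```
PORT=3000
NODE_ENV=development
LOG_LEVEL=info
DATABASE_URL=postgres://trenes:trenes@localhost:5432/trenes
REDIS_URL=redis://localhost:6379
POLLING_INTERVAL=30000
CORS_ORIGIN=http://localhost:5173
JWT_SECRET=change_me
ENABLE_RATE_LIMITING=true
```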

backend/src/lib/db.js Normal file

@@ -0,0 +1,62 @@
import pg from 'pg';
import config from '../config/index.js';
import logger from './logger.js';
const { Pool } = pg;
class Database {
constructor() {
this.pool = null;
}
async connect() {
if (this.pool) {
return this.pool;
}
try {
this.pool = new Pool({
connectionString: config.database.url,
min: config.database.poolMin,
max: config.database.poolMax,
});
// Test connection
const client = await this.pool.connect();
logger.info('PostgreSQL connected successfully');
client.release();
return this.pool;
} catch (error) {
logger.error({ error }, 'Failed to connect to PostgreSQL');
throw error;
}
}
async query(text, params) {
if (!this.pool) {
await this.connect();
}
try {
const result = await this.pool.query(text, params);
return result;
} catch (error) {
logger.error({ error, query: text }, 'Database query error');
throw error;
}
}
async disconnect() {
if (this.pool) {
await this.pool.end();
logger.info('PostgreSQL disconnected');
this.pool = null;
}
}
}
// Singleton instance
const db = new Database();
export default db;
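
A minimal usage sketch for the singleton; the query and train ID are illustrative:

```
import db from './db.js';

// Parameterized query against the partitioned positions table
const { rows } = await db.query(
  'SELECT train_id, latitude, longitude FROM train_positions WHERE train_id = $1 ORDER BY recorded_at DESC LIMIT 1',
  ['RENFE-12345']
);
console.log(rows[0] ?? 'no positions yet');
await db.disconnect();
```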

backend/src/lib/logger.js Normal file

@@ -0,0 +1,16 @@
import pino from 'pino';
import config from '../config/index.js';
const logger = pino({
level: config.logLevel,
transport: config.env === 'development' ? {
target: 'pino-pretty',
options: {
colorize: true,
translateTime: 'SYS:standard',
ignore: 'pid,hostname',
},
} : undefined,
});
export default logger;

backend/src/lib/redis.js Normal file

@@ -0,0 +1,131 @@
import { createClient } from 'redis';
import config from '../config/index.js';
import logger from './logger.js';
class RedisClient {
constructor() {
this.client = null;
}
async connect() {
if (this.client?.isOpen) {
return this.client;
}
try {
this.client = createClient({
url: config.redis.url,
});
this.client.on('error', (err) => {
logger.error({ error: err }, 'Redis error');
});
this.client.on('connect', () => {
logger.info('Redis connecting...');
});
this.client.on('ready', () => {
logger.info('Redis connected successfully');
});
await this.client.connect();
return this.client;
} catch (error) {
logger.error({ error }, 'Failed to connect to Redis');
throw error;
}
}
async get(key) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.get(key);
} catch (error) {
logger.error({ error, key }, 'Redis GET error');
throw error;
}
}
async set(key, value, options = {}) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.set(key, value, options);
} catch (error) {
logger.error({ error, key }, 'Redis SET error');
throw error;
}
}
async del(key) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.del(key);
} catch (error) {
logger.error({ error, key }, 'Redis DEL error');
throw error;
}
}
async keys(pattern) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.keys(pattern);
} catch (error) {
logger.error({ error, pattern }, 'Redis KEYS error');
throw error;
}
}
async sAdd(key, ...members) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.sAdd(key, members);
} catch (error) {
logger.error({ error, key }, 'Redis SADD error');
throw error;
}
}
async sMembers(key) {
if (!this.client?.isOpen) {
await this.connect();
}
try {
return await this.client.sMembers(key);
} catch (error) {
logger.error({ error, key }, 'Redis SMEMBERS error');
throw error;
}
}
async disconnect() {
if (this.client?.isOpen) {
await this.client.quit();
logger.info('Redis disconnected');
this.client = null;
}
}
}
// Singleton instance
const redis = new RedisClient();
export default redis;
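
A usage sketch following the key conventions the pollers in this commit write; the key names and values are illustrative:

```
import redis from './redis.js';

// Cache one position for 5 minutes, mirroring trains:current:<id>
await redis.set('trains:current:RENFE-12345', JSON.stringify({ lat: 40.4, lon: -3.7 }), { EX: 300 });

// Read the active-train set, then fetch one cached position
const ids = await redis.sMembers('trains:active');
const raw = ids.length > 0 ? await redis.get(`trains:current:${ids[0]}`) : null;
console.log(raw ? JSON.parse(raw) : 'no active trains');
```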

backend/src/lib/security.js Normal file

@@ -0,0 +1,319 @@
import rateLimit from 'express-rate-limit';
import helmet from 'helmet';
import hpp from 'hpp';
import { validationResult, query, param } from 'express-validator';
import logger from './logger.js';
// Rate limiting configurations
export const rateLimiters = {
// General API rate limiter
general: rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 1000, // 1000 requests per 15 minutes
message: {
error: 'Too many requests',
message: 'You have exceeded the rate limit. Please try again later.',
retryAfter: '15 minutes'
},
standardHeaders: true,
legacyHeaders: false,
handler: (req, res, next, options) => {
logger.warn({
ip: req.ip,
path: req.path,
method: req.method
}, 'Rate limit exceeded');
res.status(429).json(options.message);
}
}),
// Strict rate limiter for heavy endpoints (analytics, export)
strict: rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // 100 requests per 15 minutes
message: {
error: 'Too many requests',
message: 'This endpoint has stricter limits. Please try again later.',
retryAfter: '15 minutes'
},
standardHeaders: true,
legacyHeaders: false,
handler: (req, res, next, options) => {
logger.warn({
ip: req.ip,
path: req.path,
method: req.method
}, 'Strict rate limit exceeded');
res.status(429).json(options.message);
}
}),
// Very strict for export/heavy operations
export: rateLimit({
windowMs: 60 * 60 * 1000, // 1 hour
max: 10, // 10 exports per hour
message: {
error: 'Export limit reached',
message: 'You can only export 10 times per hour. Please try again later.',
retryAfter: '1 hour'
},
standardHeaders: true,
legacyHeaders: false
}),
// WebSocket connection limiter
websocket: rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 10, // 10 connection attempts per minute
message: {
error: 'Too many connection attempts',
message: 'Please wait before trying to connect again.'
},
standardHeaders: true,
legacyHeaders: false
})
};
// Helmet security headers configuration
export const helmetConfig = helmet({
contentSecurityPolicy: {
directives: {
defaultSrc: ["'self'"],
styleSrc: ["'self'", "'unsafe-inline'", "https://unpkg.com"],
scriptSrc: ["'self'", "https://unpkg.com"],
imgSrc: ["'self'", "data:", "https:", "blob:"],
connectSrc: ["'self'", "wss:", "ws:", "https://gtfsrt.renfe.com", "https://data.renfe.com"],
fontSrc: ["'self'", "https://fonts.gstatic.com"],
objectSrc: ["'none'"],
mediaSrc: ["'self'"],
frameSrc: ["'none'"]
}
},
crossOriginEmbedderPolicy: false, // Allow loading map tiles
crossOriginResourcePolicy: { policy: "cross-origin" }
});
// HPP - HTTP Parameter Pollution protection
export const hppProtection = hpp({
whitelist: [
'route_id',
'train_id',
'station_id',
'type',
'severity',
'format'
]
});
// Input validation rules
export const validators = {
// Train ID validation
trainId: param('id')
.trim()
.notEmpty()
.withMessage('Train ID is required')
.isLength({ max: 100 })
.withMessage('Train ID too long')
.matches(/^[\w\-\.]+$/)
.withMessage('Invalid train ID format'),
// Route ID validation
routeId: param('routeId')
.trim()
.notEmpty()
.withMessage('Route ID is required')
.isLength({ max: 100 })
.withMessage('Route ID too long'),
// Station ID validation
stationId: param('stationId')
.trim()
.notEmpty()
.withMessage('Station ID is required')
.isLength({ max: 100 })
.withMessage('Station ID too long'),
// Pagination validation
pagination: [
query('limit')
.optional()
.isInt({ min: 1, max: 1000 })
.withMessage('Limit must be between 1 and 1000')
.toInt(),
query('offset')
.optional()
.isInt({ min: 0 })
.withMessage('Offset must be a positive integer')
.toInt()
],
// Date range validation
dateRange: [
query('from')
.optional()
.isISO8601()
.withMessage('Invalid from date format (use ISO8601)'),
query('to')
.optional()
.isISO8601()
.withMessage('Invalid to date format (use ISO8601)')
],
// Geographic bounds validation
geoBounds: [
query('minLat')
.optional()
.isFloat({ min: -90, max: 90 })
.withMessage('minLat must be between -90 and 90'),
query('maxLat')
.optional()
.isFloat({ min: -90, max: 90 })
.withMessage('maxLat must be between -90 and 90'),
query('minLon')
.optional()
.isFloat({ min: -180, max: 180 })
.withMessage('minLon must be between -180 and 180'),
query('maxLon')
.optional()
.isFloat({ min: -180, max: 180 })
.withMessage('maxLon must be between -180 and 180')
],
// Heatmap parameters
heatmap: [
query('gridSize')
.optional()
.isFloat({ min: 0.001, max: 1 })
.withMessage('gridSize must be between 0.001 and 1'),
query('hours')
.optional()
.isInt({ min: 1, max: 168 })
.withMessage('hours must be between 1 and 168 (1 week)')
],
// Alert filters
alertFilters: [
query('severity')
.optional()
.isIn(['LOW', 'MEDIUM', 'HIGH', 'CRITICAL'])
.withMessage('Invalid severity level'),
query('type')
.optional()
.isIn(['DELAY', 'CANCELLATION', 'INCIDENT', 'MAINTENANCE', 'OTHER'])
.withMessage('Invalid alert type')
],
// Export format
exportFormat: [
query('format')
.optional()
.isIn(['json', 'csv', 'geojson'])
.withMessage('Format must be json, csv, or geojson'),
query('type')
.optional()
.isIn(['positions', 'routes', 'stations', 'alerts', 'statistics'])
.withMessage('Invalid export type')
]
};
// Validation middleware
export const validate = (validations) => {
return async (req, res, next) => {
// Run all validations
for (const validation of validations) {
const result = await validation.run(req);
if (!result.isEmpty()) break;
}
const errors = validationResult(req);
if (errors.isEmpty()) {
return next();
}
logger.warn({
path: req.path,
errors: errors.array()
}, 'Validation failed');
return res.status(400).json({
error: 'Validation Error',
details: errors.array().map(err => ({
field: err.path,
message: err.msg
}))
});
};
};
// Security headers middleware
export const securityHeaders = (req, res, next) => {
// Remove X-Powered-By header
res.removeHeader('X-Powered-By');
// Add additional security headers
res.setHeader('X-Content-Type-Options', 'nosniff');
res.setHeader('X-Frame-Options', 'DENY');
res.setHeader('X-XSS-Protection', '1; mode=block');
res.setHeader('Referrer-Policy', 'strict-origin-when-cross-origin');
res.setHeader('Permissions-Policy', 'geolocation=(), microphone=(), camera=()');
next();
};
// Request sanitization middleware
export const sanitizeRequest = (req, res, next) => {
// Sanitize query parameters
if (req.query) {
for (const key of Object.keys(req.query)) {
if (typeof req.query[key] === 'string') {
// Remove potential SQL injection patterns
req.query[key] = req.query[key]
.replace(/[;'"\\]/g, '')
.substring(0, 500); // Limit length
}
}
}
// Sanitize path parameters
if (req.params) {
for (const key of Object.keys(req.params)) {
if (typeof req.params[key] === 'string') {
req.params[key] = req.params[key]
.replace(/[;'"\\]/g, '')
.substring(0, 200);
}
}
}
next();
};
// Error handler for security issues
export const securityErrorHandler = (err, req, res, next) => {
if (err.type === 'entity.too.large') {
return res.status(413).json({
error: 'Payload Too Large',
message: 'Request body exceeds size limit'
});
}
if (err.type === 'charset.unsupported') {
return res.status(415).json({
error: 'Unsupported Media Type',
message: 'Unsupported character encoding'
});
}
next(err);
};
export default {
rateLimiters,
helmetConfig,
hppProtection,
validators,
validate,
securityHeaders,
sanitizeRequest,
securityErrorHandler
};
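
A hypothetical wiring sketch showing how a route module could compose a limiter, the validator chains, and `validate()`; the path, import path, and handler are illustrative:

```
import express from 'express';
import { rateLimiters, validators, validate } from '../../lib/security.js';

const router = express.Router();

// GET /:id/history?from=...&to=...&limit=... — validated and rate-limited
router.get(
  '/:id/history',
  rateLimiters.strict,
  validate([validators.trainId, ...validators.dateRange, ...validators.pagination]),
  (req, res) => {
    // limit/offset arrive as integers thanks to .toInt() in the validators
    res.json({ trainId: req.params.id, from: req.query.from, to: req.query.to });
  }
);

export default router;
```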

@@ -0,0 +1,381 @@
import GtfsRealtimeBindings from 'gtfs-realtime-bindings';
import fetch from 'node-fetch';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
class AlertsPoller {
constructor() {
this.isRunning = false;
this.pollInterval = null;
this.stats = {
totalPolls: 0,
successfulPolls: 0,
failedPolls: 0,
totalAlerts: 0,
lastPollTime: null,
};
}
async start() {
logger.info('Starting Service Alerts Poller...');
await db.connect();
await redis.connect();
this.isRunning = true;
// Initial poll
await this.poll();
// Setup polling interval
this.pollInterval = setInterval(
() => this.poll(),
config.gtfsRT.pollingInterval
);
logger.info({
interval: config.gtfsRT.pollingInterval,
url: config.gtfsRT.alertsUrl,
}, 'Service Alerts Poller started');
}
async poll() {
if (!this.isRunning) return;
this.stats.totalPolls++;
const startTime = Date.now();
try {
logger.debug('Polling Service Alerts feed...');
const response = await fetch(config.gtfsRT.alertsUrl, {
timeout: 10000,
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const buffer = await response.arrayBuffer();
const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(
new Uint8Array(buffer)
);
logger.debug({
entities: feed.entity?.length || 0,
}, 'Service Alerts feed decoded');
const alerts = [];
for (const entity of feed.entity || []) {
if (entity.alert) {
const alert = this.parseAlert(entity);
if (alert) {
alerts.push(alert);
}
}
}
logger.info({
alerts: alerts.length,
duration: Date.now() - startTime,
}, 'Processed service alerts');
if (alerts.length > 0) {
await this.storeAlerts(alerts);
await this.updateRedisCache(alerts);
}
this.stats.successfulPolls++;
this.stats.totalAlerts = alerts.length;
this.stats.lastPollTime = new Date();
} catch (error) {
this.stats.failedPolls++;
logger.error({
error: error.message,
stack: error.stack,
}, 'Error polling Service Alerts feed');
}
}
parseAlert(entity) {
try {
const alert = entity.alert;
// Extract header and description
const headerText = this.extractText(alert.headerText);
const descriptionText = this.extractText(alert.descriptionText);
const url = this.extractText(alert.url);
// Extract informed entities
const informedEntities = [];
for (const ie of alert.informedEntity || []) {
informedEntities.push({
agency_id: ie.agencyId,
route_id: ie.routeId,
route_type: ie.routeType,
trip_id: ie.trip?.tripId,
stop_id: ie.stopId,
});
}
// Extract route and train IDs
const routeIds = informedEntities
.map(ie => ie.route_id)
.filter(Boolean);
const tripIds = informedEntities
.map(ie => ie.trip_id)
.filter(Boolean);
// Determine type
const alertType = this.determineAlertType(alert);
return {
alert_id: entity.id,
route_id: routeIds[0] || null,
train_id: tripIds[0] || null,
alert_type: alertType,
severity: this.mapSeverity(alert.severityLevel),
title: headerText,
description: descriptionText,
url: url,
header_text: headerText,
description_text: descriptionText,
cause: this.mapCause(alert.cause),
effect: this.mapEffect(alert.effect),
start_time: alert.activePeriod?.[0]?.start
? new Date(alert.activePeriod[0].start * 1000)
: null,
end_time: alert.activePeriod?.[0]?.end
? new Date(alert.activePeriod[0].end * 1000)
: null,
informed_entity: informedEntities,
};
} catch (error) {
logger.error({
error: error.message,
entity: entity.id,
}, 'Error parsing alert');
return null;
}
}
extractText(translation) {
if (!translation) return null;
if (translation.translation && translation.translation.length > 0) {
return translation.translation[0].text;
}
return null;
}
determineAlertType(alert) {
const effect = alert.effect;
const cause = alert.cause;
// Numeric codes follow the GTFS-RT enums used by mapCause/mapEffect below
if (effect === 1) return 'CANCELLATION'; // NO_SERVICE
if (effect === 2 || effect === 3) return 'DELAY'; // REDUCED_SERVICE / SIGNIFICANT_DELAYS
if (effect === 6) return 'MODIFIED_SERVICE'; // MODIFIED_SERVICE
if (cause === 6) return 'INCIDENT'; // ACCIDENT
if (cause === 12) return 'INCIDENT'; // MEDICAL_EMERGENCY
return 'INFO';
}
mapSeverity(level) {
const map = {
1: 'LOW',
2: 'MEDIUM',
3: 'HIGH',
4: 'CRITICAL',
};
return map[level] || 'MEDIUM';
}
mapCause(cause) {
const map = {
1: 'UNKNOWN_CAUSE',
2: 'OTHER_CAUSE',
3: 'TECHNICAL_PROBLEM',
4: 'STRIKE',
5: 'DEMONSTRATION',
6: 'ACCIDENT',
7: 'HOLIDAY',
8: 'WEATHER',
9: 'MAINTENANCE',
10: 'CONSTRUCTION',
11: 'POLICE_ACTIVITY',
12: 'MEDICAL_EMERGENCY',
};
return map[cause] || 'UNKNOWN_CAUSE';
}
mapEffect(effect) {
const map = {
1: 'NO_SERVICE',
2: 'REDUCED_SERVICE',
3: 'SIGNIFICANT_DELAYS',
4: 'DETOUR',
5: 'ADDITIONAL_SERVICE',
6: 'MODIFIED_SERVICE',
7: 'OTHER_EFFECT',
8: 'UNKNOWN_EFFECT',
9: 'STOP_MOVED',
};
return map[effect] || 'UNKNOWN_EFFECT';
}
async storeAlerts(alerts) {
const client = await db.pool.connect();
try {
await client.query('BEGIN');
for (const alert of alerts) {
// Check if alert already exists
const existingResult = await client.query(
'SELECT alert_id FROM alerts WHERE alert_id = $1',
[alert.alert_id]
);
if (existingResult.rows.length === 0) {
// Insert new alert (alert_id included so the existence check above matches on later polls)
await client.query(`
INSERT INTO alerts (
alert_id, route_id, train_id, alert_type, severity, title,
description, url, header_text, description_text,
cause, effect, start_time, end_time, informed_entity
)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
`, [
alert.alert_id,
alert.route_id,
alert.train_id,
alert.alert_type,
alert.severity,
alert.title,
alert.description,
alert.url,
alert.header_text,
alert.description_text,
alert.cause,
alert.effect,
alert.start_time,
alert.end_time,
JSON.stringify(alert.informed_entity),
]);
} else {
// Update existing alert
await client.query(`
UPDATE alerts
SET
alert_type = $2,
severity = $3,
title = $4,
description = $5,
url = $6,
header_text = $7,
description_text = $8,
cause = $9,
effect = $10,
end_time = $11,
informed_entity = $12,
updated_at = NOW()
WHERE alert_id = $1
`, [
alert.alert_id,
alert.alert_type,
alert.severity,
alert.title,
alert.description,
alert.url,
alert.header_text,
alert.description_text,
alert.cause,
alert.effect,
alert.end_time,
JSON.stringify(alert.informed_entity),
]);
}
}
await client.query('COMMIT');
logger.debug({ count: alerts.length }, 'Alerts stored');
} catch (error) {
await client.query('ROLLBACK');
logger.error({ error: error.message }, 'Error storing alerts');
throw error;
} finally {
client.release();
}
}
async updateRedisCache(alerts) {
try {
// Clear previous active alerts
await redis.del('alerts:active');
// Store each alert
for (const alert of alerts) {
const key = `alert:${alert.alert_id}`;
await redis.set(key, JSON.stringify(alert), { EX: 600 }); // 10 min TTL
// Add to active alerts set
await redis.sAdd('alerts:active', alert.alert_id);
// Index by route
if (alert.route_id) {
await redis.sAdd(`alerts:route:${alert.route_id}`, alert.alert_id);
}
// Index by train
if (alert.train_id) {
await redis.sAdd(`alerts:train:${alert.train_id}`, alert.alert_id);
}
}
logger.debug({ count: alerts.length }, 'Redis cache updated');
} catch (error) {
logger.error({ error: error.message }, 'Error updating Redis cache');
}
}
async stop() {
logger.info('Stopping Service Alerts Poller...');
this.isRunning = false;
if (this.pollInterval) {
clearInterval(this.pollInterval);
this.pollInterval = null;
}
await db.disconnect();
await redis.disconnect();
logger.info('Service Alerts Poller stopped');
}
}
// Main execution
const poller = new AlertsPoller();
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await poller.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
poller.start().catch((error) => {
logger.fatal({ error }, 'Failed to start Service Alerts Poller');
process.exit(1);
});
export default AlertsPoller;

@@ -0,0 +1,125 @@
import cron from 'node-cron';
import db from '../lib/db.js';
import logger from '../lib/logger.js';
class AnalyticsRefresher {
constructor() {
this.schedule = process.env.ANALYTICS_REFRESH_SCHEDULE || '*/15 * * * *'; // Every 15 minutes
this.job = null;
this.cleanupJob = null;
this.isRefreshing = false;
}
async refreshViews() {
if (this.isRefreshing) {
logger.warn('Analytics refresh already in progress, skipping...');
return;
}
this.isRefreshing = true;
const startTime = Date.now();
try {
logger.info('Starting analytics views refresh...');
await db.query('SELECT refresh_all_analytics_views()');
const duration = Date.now() - startTime;
logger.info({
duration,
durationType: 'ms',
}, 'Analytics views refreshed successfully');
} catch (error) {
logger.error({
error: error.message,
stack: error.stack,
}, 'Error refreshing analytics views');
} finally {
this.isRefreshing = false;
}
}
async cleanupExports() {
try {
logger.info('Cleaning up old export requests...');
const result = await db.query('SELECT cleanup_old_export_requests() as deleted_count');
const deletedCount = result.rows[0].deleted_count;
logger.info({
deletedCount,
}, 'Export cleanup completed');
} catch (error) {
logger.error({
error: error.message,
}, 'Error cleaning up exports');
}
}
start() {
logger.info({
schedule: this.schedule,
}, 'Starting Analytics Refresher Worker');
// Refresh materialized views periodically
this.job = cron.schedule(this.schedule, async () => {
await this.refreshViews();
});
// Cleanup exports daily at 3 AM (handle kept so stop() can cancel it)
this.cleanupJob = cron.schedule('0 3 * * *', async () => {
await this.cleanupExports();
});
// Initial refresh
setTimeout(() => {
this.refreshViews();
}, 5000); // Wait 5 seconds after startup
logger.info('Analytics Refresher Worker started');
}
async stop() {
logger.info('Stopping Analytics Refresher Worker...');
if (this.job) {
this.job.stop();
this.job = null;
}
if (this.cleanupJob) {
this.cleanupJob.stop();
this.cleanupJob = null;
}
// Wait for any ongoing refresh to complete
while (this.isRefreshing) {
await new Promise(resolve => setTimeout(resolve, 1000));
}
await db.disconnect();
logger.info('Analytics Refresher Worker stopped');
}
}
// Main execution
const worker = new AnalyticsRefresher();
// Graceful shutdown
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await worker.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Start worker
(async () => {
try {
await db.connect();
worker.start();
} catch (error) {
logger.fatal({ error }, 'Failed to start Analytics Refresher Worker');
process.exit(1);
}
})();
export default AnalyticsRefresher;

backend/src/worker/gtfs-poller.js Normal file
@@ -0,0 +1,340 @@
import GtfsRealtimeBindings from 'gtfs-realtime-bindings';
import fetch from 'node-fetch';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
class GTFSRealtimePoller {
constructor() {
this.isRunning = false;
this.pollInterval = null;
this.statsInterval = null;
this.stats = {
totalPolls: 0,
successfulPolls: 0,
failedPolls: 0,
totalTrains: 0,
lastPollTime: null,
errors: [],
};
}
async start() {
logger.info('Starting GTFS-RT Poller...');
// Connect to databases
await db.connect();
await redis.connect();
this.isRunning = true;
// Initial poll
await this.poll();
// Setup polling interval
this.pollInterval = setInterval(
() => this.poll(),
config.gtfsRT.pollingInterval
);
// Setup stats interval (every minute)
this.statsInterval = setInterval(
() => this.logStats(),
60000
);
logger.info({
interval: config.gtfsRT.pollingInterval,
url: config.gtfsRT.vehiclePositionsUrl,
}, 'GTFS-RT Poller started');
}
async poll() {
if (!this.isRunning) {
return;
}
this.stats.totalPolls++;
const startTime = Date.now();
try {
logger.debug('Polling GTFS-RT feed...');
// Fetch GTFS-RT feed
const response = await fetch(config.gtfsRT.vehiclePositionsUrl, {
timeout: 10000,
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const buffer = await response.arrayBuffer();
const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(
new Uint8Array(buffer)
);
logger.debug({
entities: feed.entity?.length || 0,
header: feed.header,
}, 'GTFS-RT feed decoded');
// Process entities
const positions = [];
const trainIds = [];
for (const entity of feed.entity || []) {
if (entity.vehicle) {
const position = this.parseVehiclePosition(entity);
if (position) {
positions.push(position);
trainIds.push(position.train_id);
}
}
}
logger.info({
trains: positions.length,
duration: Date.now() - startTime,
}, 'Processed vehicle positions');
// Store positions
if (positions.length > 0) {
await this.storePositions(positions);
await this.updateRedisCache(positions, trainIds);
}
this.stats.successfulPolls++;
this.stats.totalTrains = positions.length;
this.stats.lastPollTime = new Date();
} catch (error) {
this.stats.failedPolls++;
this.stats.errors.push({
timestamp: new Date(),
message: error.message,
});
// Keep only last 10 errors
if (this.stats.errors.length > 10) {
this.stats.errors = this.stats.errors.slice(-10);
}
logger.error({
error: error.message,
stack: error.stack,
duration: Date.now() - startTime,
}, 'Error polling GTFS-RT feed');
}
}
parseVehiclePosition(entity) {
try {
const vehicle = entity.vehicle;
const position = vehicle.position;
const timestamp = vehicle.timestamp
? new Date(vehicle.timestamp * 1000)
: new Date();
// Validate required fields
if (!position || position.latitude == null || position.longitude == null) {
logger.warn({ entity: entity.id }, 'Vehicle position missing coordinates');
return null;
}
// Validate coordinate ranges
if (
position.latitude < -90 || position.latitude > 90 ||
position.longitude < -180 || position.longitude > 180
) {
logger.warn({
lat: position.latitude,
lon: position.longitude,
}, 'Invalid coordinates');
return null;
}
return {
train_id: vehicle.vehicle?.id || entity.id,
trip_id: vehicle.trip?.tripId || null,
route_id: vehicle.trip?.routeId || null,
latitude: position.latitude,
longitude: position.longitude,
bearing: position.bearing || null,
speed: position.speed ? position.speed * 3.6 : null, // Convert m/s to km/h
status: this.mapVehicleStatus(vehicle.currentStatus),
occupancy_status: this.mapOccupancyStatus(vehicle.occupancyStatus),
timestamp: timestamp,
recorded_at: new Date(),
};
} catch (error) {
logger.error({
error: error.message,
entity: entity.id,
}, 'Error parsing vehicle position');
return null;
}
}
mapVehicleStatus(status) {
const statusMap = {
0: 'INCOMING_AT',
1: 'STOPPED_AT',
2: 'IN_TRANSIT_TO',
};
return statusMap[status] || 'UNKNOWN';
}
mapOccupancyStatus(status) {
const occupancyMap = {
0: 'EMPTY',
1: 'MANY_SEATS_AVAILABLE',
2: 'FEW_SEATS_AVAILABLE',
3: 'STANDING_ROOM_ONLY',
4: 'CRUSHED_STANDING_ROOM_ONLY',
5: 'FULL',
6: 'NOT_ACCEPTING_PASSENGERS',
};
return occupancyMap[status] || null;
}
async storePositions(positions) {
const client = await db.pool.connect();
try {
await client.query('BEGIN');
// Batch insert positions
const values = positions.map((p, idx) => {
const offset = idx * 12;
return `($${offset + 1}, $${offset + 2}, $${offset + 3}, $${offset + 4}, $${offset + 5}, $${offset + 6}, $${offset + 7}, $${offset + 8}, $${offset + 9}, $${offset + 10}, $${offset + 11}, $${offset + 12})`;
}).join(',');
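// With two rows this yields "($1,...,$12),($13,...,$24)"; the flat params
// array below lines up because flatMap emits exactly 12 values per position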
const params = positions.flatMap(p => [
p.train_id,
p.trip_id,
p.route_id,
p.latitude,
p.longitude,
p.bearing,
p.speed,
p.status,
p.occupancy_status,
p.timestamp,
p.recorded_at,
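// EWKT text literal; PostGIS parses it when assigning to the position column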
`SRID=4326;POINT(${p.longitude} ${p.latitude})`,
]);
await client.query(`
INSERT INTO train_positions (
train_id, trip_id, route_id, latitude, longitude, bearing, speed,
status, occupancy_status, timestamp, recorded_at, position
)
VALUES ${values}
`, params);
// Update trains table (upsert)
for (const p of positions) {
await client.query(`
INSERT INTO trains (train_id, route_id, train_type, last_seen, is_active)
VALUES ($1, $2, 'UNKNOWN', $3, true)
ON CONFLICT (train_id) DO UPDATE
SET last_seen = $3, is_active = true, route_id = COALESCE(trains.route_id, $2)
`, [p.train_id, p.route_id, p.recorded_at]);
}
await client.query('COMMIT');
logger.debug({ count: positions.length }, 'Positions stored in PostgreSQL');
} catch (error) {
await client.query('ROLLBACK');
logger.error({ error: error.message }, 'Error storing positions');
throw error;
} finally {
client.release();
}
}
async updateRedisCache(positions, trainIds) {
try {
// Store each position in Redis with 5-minute expiration
const promises = positions.map(async (p) => {
const key = `trains:current:${p.train_id}`;
await redis.set(key, JSON.stringify(p), { EX: 300 });
});
await Promise.all(promises);
// Update active trains set
if (trainIds.length > 0) {
await redis.del('trains:active');
await redis.sAdd('trains:active', ...trainIds);
}
// Store last update timestamp
await redis.set('stats:last_update', new Date().toISOString());
logger.debug({ count: positions.length }, 'Redis cache updated');
} catch (error) {
logger.error({ error: error.message }, 'Error updating Redis cache');
// Don't throw - Redis cache is not critical
}
}
logStats() {
const successRate = this.stats.totalPolls > 0
? ((this.stats.successfulPolls / this.stats.totalPolls) * 100).toFixed(2)
: 0;
logger.info({
...this.stats,
successRate: `${successRate}%`,
recentErrors: this.stats.errors.slice(-3),
}, 'Poller statistics');
}
async stop() {
logger.info('Stopping GTFS-RT Poller...');
this.isRunning = false;
if (this.pollInterval) {
clearInterval(this.pollInterval);
this.pollInterval = null;
}
if (this.statsInterval) {
clearInterval(this.statsInterval);
this.statsInterval = null;
}
await db.disconnect();
await redis.disconnect();
logger.info('GTFS-RT Poller stopped');
}
}
// Main execution
const poller = new GTFSRealtimePoller();
// Graceful shutdown
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await poller.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Start poller
poller.start().catch((error) => {
logger.fatal({ error }, 'Failed to start poller');
process.exit(1);
});
export default GTFSRealtimePoller;

@@ -0,0 +1,522 @@
import fetch from 'node-fetch';
import { createWriteStream, createReadStream } from 'fs';
import { pipeline } from 'stream/promises';
import { createHash } from 'crypto';
import { parse } from 'csv-parse';
import { unlink, mkdir } from 'fs/promises';
import { Extract } from 'unzipper';
import { join } from 'path';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
class GTFSStaticSyncer {
constructor() {
this.tmpDir = '/tmp/gtfs';
this.gtfsUrl = process.env.GTFS_STATIC_URL || 'https://data.renfe.com/api/gtfs/latest.zip';
this.syncInterval = null;
this.initialTimeout = null;
}
async start() {
logger.info('Starting GTFS Static Syncer...');
await db.connect();
await redis.connect();
// Initial sync (don't crash if it fails, just log and continue)
try {
await this.sync();
} catch (error) {
logger.warn({ error: error.message }, 'Initial GTFS sync failed, will retry at scheduled time');
}
// Schedule daily sync at 3 AM
const now = new Date();
const next3AM = new Date();
next3AM.setHours(3, 0, 0, 0);
if (next3AM <= now) {
next3AM.setDate(next3AM.getDate() + 1);
}
const msUntil3AM = next3AM - now;
this.initialTimeout = setTimeout(() => {
this.sync();
// Then sync every 24 hours
this.syncInterval = setInterval(() => this.sync(), 24 * 60 * 60 * 1000);
}, msUntil3AM);
logger.info({
nextSync: next3AM.toISOString(),
msUntil: msUntil3AM,
}, 'GTFS Static Syncer scheduled');
}
async sync() {
const startTime = Date.now();
logger.info('Starting GTFS Static synchronization...');
try {
// Ensure tmp directory exists
await mkdir(this.tmpDir, { recursive: true });
// Download GTFS zip
const zipPath = await this.downloadGTFS();
// Calculate checksum
const checksum = await this.calculateChecksum(zipPath);
// Check if already imported
const existingResult = await db.query(
'SELECT feed_id FROM gtfs_feeds WHERE feed_checksum = $1',
[checksum]
);
if (existingResult.rows.length > 0) {
logger.info({ checksum }, 'GTFS feed already imported, skipping');
await unlink(zipPath);
return;
}
// Extract zip
const extractPath = await this.extractZip(zipPath);
// Parse GTFS files
const data = await this.parseGTFSFiles(extractPath);
// Import to database
await this.importToDatabase(data, checksum);
// Cleanup
await unlink(zipPath);
const duration = Date.now() - startTime;
logger.info({
duration,
checksum,
stats: {
routes: data.routes?.length || 0,
trips: data.trips?.length || 0,
stops: data.stops?.length || 0,
stopTimes: data.stopTimes?.length || 0,
},
}, 'GTFS Static synchronization completed');
// Invalidate cache
await this.invalidateCaches();
} catch (error) {
logger.error({
error: error.message,
stack: error.stack,
}, 'Error synchronizing GTFS Static');
throw error;
}
}
async downloadGTFS() {
const zipPath = join(this.tmpDir, 'gtfs.zip');
logger.info({ url: this.gtfsUrl }, 'Downloading GTFS feed...');
const response = await fetch(this.gtfsUrl);
if (!response.ok) {
throw new Error(`Failed to download GTFS: ${response.status} ${response.statusText}`);
}
await pipeline(
response.body,
createWriteStream(zipPath)
);
logger.info({ path: zipPath }, 'GTFS feed downloaded');
return zipPath;
}
async calculateChecksum(filePath) {
const hash = createHash('sha256');
await pipeline(
createReadStream(filePath),
hash
);
return hash.digest('hex');
}
async extractZip(zipPath) {
const extractPath = join(this.tmpDir, 'extracted');
logger.info('Extracting GTFS zip...');
await pipeline(
createReadStream(zipPath),
Extract({ path: extractPath })
);
logger.info({ path: extractPath }, 'GTFS zip extracted');
return extractPath;
}
async parseGTFSFiles(extractPath) {
logger.info('Parsing GTFS files...');
const data = {
routes: await this.parseCSV(join(extractPath, 'routes.txt')),
trips: await this.parseCSV(join(extractPath, 'trips.txt')),
stops: await this.parseCSV(join(extractPath, 'stops.txt')),
stopTimes: await this.parseCSV(join(extractPath, 'stop_times.txt')),
calendar: await this.parseCSV(join(extractPath, 'calendar.txt')),
calendarDates: await this.parseCSV(join(extractPath, 'calendar_dates.txt')),
shapes: await this.parseCSV(join(extractPath, 'shapes.txt')),
};
logger.info('GTFS files parsed');
return data;
}
async parseCSV(filePath) {
const records = [];
try {
await pipeline(
createReadStream(filePath),
parse({
columns: true,
skip_empty_lines: true,
trim: true,
}),
async function* (source) {
for await (const record of source) {
records.push(record);
}
}
);
} catch (error) {
logger.warn({ file: filePath, error: error.message }, 'GTFS file missing or unreadable (treated as optional)');
}
return records;
}
async importToDatabase(data, checksum) {
const client = await db.pool.connect();
try {
await client.query('BEGIN');
logger.info('Importing GTFS data to database...');
// Helper: `parseInt(x) || null` would coerce a valid 0 to null, so parse explicitly
const toIntOrNull = (v) => {
const n = parseInt(v, 10);
return Number.isNaN(n) ? null : n;
};
// Create feed record
const feedResult = await client.query(`
INSERT INTO gtfs_feeds (feed_checksum, feed_url, imported_at)
VALUES ($1, $2, NOW())
RETURNING feed_id
`, [checksum, this.gtfsUrl]);
const feedId = feedResult.rows[0].feed_id;
// Import routes (update existing)
if (data.routes && data.routes.length > 0) {
logger.info({ count: data.routes.length }, 'Importing routes...');
for (const route of data.routes) {
await client.query(`
INSERT INTO routes (
route_id, route_name, route_short_name, route_type,
color, description, metadata
)
VALUES ($1, $2, $3, $4, $5, $6, $7)
ON CONFLICT (route_id) DO UPDATE
SET
route_name = EXCLUDED.route_name,
route_short_name = EXCLUDED.route_short_name,
route_type = EXCLUDED.route_type,
color = EXCLUDED.color,
description = EXCLUDED.description,
updated_at = NOW()
`, [
route.route_id,
route.route_long_name || route.route_short_name,
route.route_short_name,
this.mapRouteType(route.route_type),
route.route_color ? '#' + route.route_color : null,
route.route_desc,
JSON.stringify({
gtfs_route_type: route.route_type,
text_color: route.route_text_color,
}),
]);
}
}
// Import stops (update existing stations)
if (data.stops && data.stops.length > 0) {
logger.info({ count: data.stops.length }, 'Importing stops...');
for (const stop of data.stops) {
await client.query(`
INSERT INTO stations (
station_id, station_name, station_code,
latitude, longitude, position, station_type, metadata
)
VALUES ($1, $2, $3, $4, $5, ST_SetSRID(ST_MakePoint($5, $4), 4326)::geography, $6, $7)
ON CONFLICT (station_id) DO UPDATE
SET
station_name = EXCLUDED.station_name,
latitude = EXCLUDED.latitude,
longitude = EXCLUDED.longitude,
position = EXCLUDED.position,
metadata = EXCLUDED.metadata,
updated_at = NOW()
`, [
stop.stop_id,
stop.stop_name,
stop.stop_code,
parseFloat(stop.stop_lat),
parseFloat(stop.stop_lon),
this.mapStationType(stop.location_type),
JSON.stringify({
location_type: stop.location_type,
parent_station: stop.parent_station,
wheelchair_boarding: stop.wheelchair_boarding,
platform_code: stop.platform_code,
}),
]);
}
}
// Import trips
if (data.trips && data.trips.length > 0) {
logger.info({ count: data.trips.length }, 'Importing trips...');
// Batch insert trips
const tripValues = data.trips.map((trip, idx) => {
const offset = idx * 10;
return `($${offset + 1}, $${offset + 2}, $${offset + 3}, $${offset + 4}, $${offset + 5}, $${offset + 6}, $${offset + 7}, $${offset + 8}, $${offset + 9}, $${offset + 10})`;
}).join(',');
const tripParams = data.trips.flatMap(trip => [
trip.trip_id,
trip.route_id,
trip.service_id,
trip.trip_headsign,
trip.trip_short_name,
toIntOrNull(trip.direction_id), // direction_id 0 is valid and must not collapse to null
trip.block_id,
trip.shape_id,
toIntOrNull(trip.wheelchair_accessible),
toIntOrNull(trip.bikes_allowed),
]);
await client.query(`
INSERT INTO trips (
trip_id, route_id, service_id, trip_headsign, trip_short_name,
direction_id, block_id, shape_id, wheelchair_accessible, bikes_allowed
)
VALUES ${tripValues}
ON CONFLICT (trip_id) DO UPDATE
SET
route_id = EXCLUDED.route_id,
service_id = EXCLUDED.service_id,
trip_headsign = EXCLUDED.trip_headsign,
updated_at = NOW()
`, tripParams);
}
// Import stop_times (batch insert)
if (data.stopTimes && data.stopTimes.length > 0) {
logger.info({ count: data.stopTimes.length }, 'Importing stop times...');
// Delete old stop_times
await client.query('DELETE FROM stop_times');
// Batch insert in chunks of 1000
const chunkSize = 1000;
for (let i = 0; i < data.stopTimes.length; i += chunkSize) {
const chunk = data.stopTimes.slice(i, i + chunkSize);
const values = chunk.map((_, idx) => {
const offset = idx * 6;
return `($${offset + 1}, $${offset + 2}, $${offset + 3}, $${offset + 4}, $${offset + 5}, $${offset + 6})`;
}).join(',');
const params = chunk.flatMap(st => [
st.trip_id,
st.arrival_time,
st.departure_time,
st.stop_id,
parseInt(st.stop_sequence, 10),
toIntOrNull(st.pickup_type), // pickup_type 0 (regular pickup) must survive
]);
await client.query(`
INSERT INTO stop_times (
trip_id, arrival_time, departure_time, stop_id, stop_sequence, pickup_type
)
VALUES ${values}
`, params);
}
}
// Import calendar
if (data.calendar && data.calendar.length > 0) {
logger.info({ count: data.calendar.length }, 'Importing calendar...');
for (const cal of data.calendar) {
await client.query(`
INSERT INTO calendar (
service_id, monday, tuesday, wednesday, thursday,
friday, saturday, sunday, start_date, end_date
)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
ON CONFLICT (service_id) DO UPDATE
SET
monday = EXCLUDED.monday,
tuesday = EXCLUDED.tuesday,
wednesday = EXCLUDED.wednesday,
thursday = EXCLUDED.thursday,
friday = EXCLUDED.friday,
saturday = EXCLUDED.saturday,
sunday = EXCLUDED.sunday,
start_date = EXCLUDED.start_date,
end_date = EXCLUDED.end_date
`, [
cal.service_id,
cal.monday === '1',
cal.tuesday === '1',
cal.wednesday === '1',
cal.thursday === '1',
cal.friday === '1',
cal.saturday === '1',
cal.sunday === '1',
cal.start_date,
cal.end_date,
]);
}
}
// Import calendar_dates
if (data.calendarDates && data.calendarDates.length > 0) {
logger.info({ count: data.calendarDates.length }, 'Importing calendar dates...');
await client.query('DELETE FROM calendar_dates');
for (const cd of data.calendarDates) {
await client.query(`
INSERT INTO calendar_dates (service_id, date, exception_type)
VALUES ($1, $2, $3)
`, [cd.service_id, cd.date, parseInt(cd.exception_type)]);
}
}
// Import shapes
if (data.shapes && data.shapes.length > 0) {
logger.info({ count: data.shapes.length }, 'Importing shapes...');
await client.query('DELETE FROM shapes');
const chunkSize = 1000;
for (let i = 0; i < data.shapes.length; i += chunkSize) {
const chunk = data.shapes.slice(i, i + chunkSize);
const values = chunk.map((_, idx) => {
const offset = idx * 5;
return `($${offset + 1}, $${offset + 2}, $${offset + 3}, $${offset + 4}, ST_SetSRID(ST_MakePoint($${offset + 3}, $${offset + 2}), 4326)::geography)`;
}).join(',');
const params = chunk.flatMap(shape => [
shape.shape_id,
parseFloat(shape.shape_pt_lat),
parseFloat(shape.shape_pt_lon),
parseInt(shape.shape_pt_sequence),
parseFloat(shape.shape_dist_traveled) || null,
]);
await client.query(`
INSERT INTO shapes (shape_id, shape_pt_lat, shape_pt_lon, shape_pt_sequence, shape_dist_traveled, geom)
VALUES ${values}
`, params);
}
}
await client.query('COMMIT');
logger.info('GTFS data imported successfully');
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
mapRouteType(gtfsType) {
const typeMap = {
'0': 'TRAM',
'1': 'SUBWAY',
'2': 'RAIL',
'3': 'BUS',
'4': 'FERRY',
'100': 'RAIL',
'101': 'HIGH_SPEED',
'102': 'LONG_DISTANCE',
'103': 'REGIONAL',
'109': 'COMMUTER',
};
return typeMap[gtfsType] || 'UNKNOWN';
}
mapStationType(locationType) {
const typeMap = {
'0': 'STOP',
'1': 'STATION',
'2': 'ENTRANCE',
'3': 'GENERIC_NODE',
'4': 'BOARDING_AREA',
};
return typeMap[locationType] || 'STOP';
}
async invalidateCaches() {
try {
// Redis DEL takes literal keys, not glob patterns, so resolve matches first
for (const pattern of ['routes:*', 'stations:*', 'trips:*']) {
const keys = await redis.keys(pattern);
if (keys.length > 0) {
await redis.del(keys);
}
}
logger.info('Caches invalidated');
} catch (error) {
logger.error({ error: error.message }, 'Error invalidating caches');
}
}
async stop() {
logger.info('Stopping GTFS Static Syncer...');
if (this.initialTimeout) {
clearTimeout(this.initialTimeout);
this.initialTimeout = null;
}
if (this.syncInterval) {
clearInterval(this.syncInterval);
this.syncInterval = null;
}
await db.disconnect();
await redis.disconnect();
logger.info('GTFS Static Syncer stopped');
}
}
// Main execution
const syncer = new GTFSStaticSyncer();
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await syncer.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
syncer.start().catch((error) => {
logger.fatal({ error }, 'Failed to start GTFS Static Syncer');
process.exit(1);
});
export default GTFSStaticSyncer;

@@ -0,0 +1,474 @@
import fetch from 'node-fetch';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
/**
* Renfe Fleet Data Poller
* Fetches additional train data from Renfe's real-time visualization endpoint
* This provides extra info like delays, line codes, next stations, etc.
*/
class RenfeFleetPoller {
constructor() {
this.isRunning = false;
this.pollInterval = null;
this.stationsInterval = null;
this.linesInterval = null;
this.stats = {
totalPolls: 0,
successfulPolls: 0,
failedPolls: 0,
totalTrains: 0,
lastPollTime: null,
errors: [],
};
// Renfe real-time endpoints
this.FLEET_URL = 'https://tiempo-real.renfe.com/renfe-visor/flota.json';
this.STATIONS_URL = 'https://tiempo-real.renfe.com/data/estaciones.geojson';
this.LINES_URL = 'https://tiempo-real.renfe.com/data/lineasnucleos.geojson';
}
async start() {
logger.info('Starting Renfe Fleet Poller...');
// Connect to databases
await db.connect();
await redis.connect();
this.isRunning = true;
// Initial data load
await this.loadStationsAndLines();
await this.pollFleet();
// Setup polling intervals
// Fleet data every 30 seconds (it updates frequently)
this.pollInterval = setInterval(
() => this.pollFleet(),
30000
);
// Stations and lines every 6 hours (static data)
this.stationsInterval = setInterval(
() => this.loadStationsAndLines(),
6 * 60 * 60 * 1000
);
logger.info('Renfe Fleet Poller started');
}
async loadStationsAndLines() {
logger.info('Loading stations and lines from Renfe...');
try {
// Fetch stations
const stationsResponse = await fetch(this.STATIONS_URL, { timeout: 30000 });
if (stationsResponse.ok) {
const stationsGeoJSON = await stationsResponse.json();
await this.processStations(stationsGeoJSON);
}
} catch (error) {
logger.error({ error: error.message }, 'Error loading stations');
}
try {
// Fetch lines
const linesResponse = await fetch(this.LINES_URL, { timeout: 30000 });
if (linesResponse.ok) {
const linesGeoJSON = await linesResponse.json();
await this.processLines(linesGeoJSON);
}
} catch (error) {
logger.error({ error: error.message }, 'Error loading lines');
}
}
async processStations(geoJSON) {
if (!geoJSON.features || geoJSON.features.length === 0) {
logger.warn('No stations found in GeoJSON');
return;
}
const client = await db.pool.connect();
try {
await client.query('BEGIN');
let inserted = 0;
let updated = 0;
for (const feature of geoJSON.features) {
const props = feature.properties;
const coords = feature.geometry?.coordinates;
if (!props.CODIGO_ESTACION || !coords) continue;
// Map Renfe station to our schema
const stationData = {
station_id: `renfe_${props.CODIGO_ESTACION}`,
station_code: props.CODIGO_ESTACION,
station_name: props.NOMBRE_ESTACION || 'Unknown',
latitude: props.LATITUD || coords[1],
longitude: props.LONGITUD || coords[0],
station_type: this.inferStationType(props),
metadata: {
nucleo: props.NUCLEO,
nucleo_name: props.NOMBRE_NUCLEO,
lineas: props.LINEAS,
color: props.COLOR,
accesibilidad: props.ACCESIBILIDAD,
parking_bicis: props.PARKING_BICIS,
bus_urbano: props.COR_BUS?.includes('Urbano'),
bus_interurbano: props.COR_BUS?.includes('Interurbano'),
metro: props.COR_METRO,
source: 'renfe_visor',
},
};
const result = await client.query(`
INSERT INTO stations (station_id, station_code, station_name, latitude, longitude, station_type, metadata, position)
VALUES ($1, $2, $3, $4, $5, $6, $7, ST_SetSRID(ST_MakePoint($8, $9), 4326))
ON CONFLICT (station_id) DO UPDATE SET
station_name = EXCLUDED.station_name,
latitude = EXCLUDED.latitude,
longitude = EXCLUDED.longitude,
station_type = EXCLUDED.station_type,
metadata = stations.metadata || EXCLUDED.metadata,
position = EXCLUDED.position,
updated_at = NOW()
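-- (xmax = 0) is true only for rows this statement inserted, not updated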
RETURNING (xmax = 0) as is_insert
`, [
stationData.station_id,
stationData.station_code,
stationData.station_name,
stationData.latitude,
stationData.longitude,
stationData.station_type,
JSON.stringify(stationData.metadata),
stationData.longitude,
stationData.latitude,
]);
if (result.rows[0]?.is_insert) {
inserted++;
} else {
updated++;
}
}
await client.query('COMMIT');
logger.info({ inserted, updated, total: geoJSON.features.length }, 'Processed Renfe stations');
} catch (error) {
await client.query('ROLLBACK');
logger.error({ error: error.message }, 'Error processing stations');
throw error;
} finally {
client.release();
}
}
inferStationType(props) {
// Infer station importance based on available metadata
const lineas = props.LINEAS || '';
const lineCount = lineas.split(',').filter(Boolean).length;
if (lineCount >= 3 || props.COR_METRO) {
return 'MAJOR';
} else if (lineCount >= 2 || props.COR_BUS) {
return 'MEDIUM';
}
return 'MINOR';
}
async processLines(geoJSON) {
if (!geoJSON.features || geoJSON.features.length === 0) {
logger.warn('No lines found in GeoJSON');
return;
}
const client = await db.pool.connect();
try {
await client.query('BEGIN');
// Create lines table if not exists
await client.query(`
CREATE TABLE IF NOT EXISTS train_lines (
line_id VARCHAR(50) PRIMARY KEY,
line_code VARCHAR(20) NOT NULL,
line_name VARCHAR(255),
nucleo_id VARCHAR(20),
nucleo_name VARCHAR(255),
color VARCHAR(20),
geometry GEOMETRY(LineString, 4326),
metadata JSONB DEFAULT '{}',
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW()
)
`);
let processed = 0;
for (const feature of geoJSON.features) {
const props = feature.properties;
const coords = feature.geometry?.coordinates;
if (!props.CODIGO || !coords) continue;
const lineId = `${props.IDNUCLEO}_${props.CODIGO}`;
// Convert coordinates to WKT LineString
const coordsWKT = coords.map(c => `${c[0]} ${c[1]}`).join(',');
await client.query(`
INSERT INTO train_lines (line_id, line_code, line_name, nucleo_id, nucleo_name, color, geometry, metadata)
VALUES ($1, $2, $3, $4, $5, $6, ST_SetSRID(ST_GeomFromText('LINESTRING(' || $7 || ')'), 4326), $8)
ON CONFLICT (line_id) DO UPDATE SET
line_name = EXCLUDED.line_name,
color = EXCLUDED.color,
geometry = EXCLUDED.geometry,
metadata = EXCLUDED.metadata,
updated_at = NOW()
`, [
lineId,
props.CODIGO,
props.NOMBRE || props.CODIGO,
String(props.IDNUCLEO),
props.NUCLEO,
props.COLOR,
coordsWKT,
JSON.stringify({ idLinea: props.IDLINEA }),
]);
processed++;
}
await client.query('COMMIT');
logger.info({ processed, total: geoJSON.features.length }, 'Processed Renfe lines');
} catch (error) {
await client.query('ROLLBACK');
logger.error({ error: error.message }, 'Error processing lines');
} finally {
client.release();
}
}
async pollFleet() {
if (!this.isRunning) {
return;
}
this.stats.totalPolls++;
const startTime = Date.now();
try {
logger.debug('Polling Renfe fleet data...');
const response = await fetch(this.FLEET_URL, { timeout: 15000 });
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const data = await response.json();
if (!data.trenes || !Array.isArray(data.trenes)) {
logger.warn('No trains found in fleet data');
return;
}
logger.info({
trains: data.trenes.length,
updateTime: data.fechaActualizacion,
}, 'Received Renfe fleet data');
// Process and store fleet data
await this.processFleetData(data.trenes, data.fechaActualizacion);
this.stats.successfulPolls++;
this.stats.totalTrains = data.trenes.length;
this.stats.lastPollTime = new Date();
} catch (error) {
this.stats.failedPolls++;
this.stats.errors.push({
timestamp: new Date(),
message: error.message,
});
if (this.stats.errors.length > 10) {
this.stats.errors = this.stats.errors.slice(-10);
}
logger.error({
error: error.message,
duration: Date.now() - startTime,
}, 'Error polling Renfe fleet');
}
}
async processFleetData(trains, updateTime) {
// Store fleet data in Redis for quick access
// This enriches the GTFS-RT data with additional info
const promises = [];
for (const train of trains) {
// Extract train number from tripId or codTren
const trainId = train.codTren;
const fleetData = {
tripId: train.tripId,
codTren: train.codTren,
codLinea: train.codLinea,
retrasoMin: parseInt(train.retrasoMin, 10) || 0,
codEstAct: train.codEstAct,
codEstSig: train.codEstSig,
horaLlegadaSigEst: train.horaLlegadaSigEst,
codEstDest: train.codEstDest,
codEstOrig: train.codEstOrig,
porAvanc: train.porAvanc, // E = at a station, C = running
latitud: train.latitud,
longitud: train.longitud,
nucleo: train.nucleo,
accesible: train.accesible,
via: train.via,
updatedAt: updateTime,
};
// Store in Redis with 5 minute expiration
promises.push(
redis.set(
`fleet:${trainId}`,
JSON.stringify(fleetData),
{ EX: 300 }
)
);
// Also index by tripId for cross-referencing
if (train.tripId) {
promises.push(
redis.set(
`fleet:trip:${train.tripId}`,
trainId,
{ EX: 300 }
)
);
}
}
// Store the full fleet list for API access
promises.push(
redis.set(
'fleet:all',
JSON.stringify(trains),
{ EX: 60 }
)
);
promises.push(redis.set('fleet:lastUpdate', updateTime));
await Promise.all(promises);
// Save punctuality data to database for historical analysis
await this.savePunctualityData(trains, updateTime);
logger.debug({ count: trains.length }, 'Fleet data stored in Redis');
}
async savePunctualityData(trains, updateTime) {
if (!trains || trains.length === 0) return;
const client = await db.pool.connect();
try {
// Use batch insert for efficiency
const values = [];
const params = [];
let paramIndex = 1;
for (const train of trains) {
const delayMinutes = parseInt(train.retrasoMin, 10) || 0;
values.push(`($${paramIndex}, $${paramIndex + 1}, $${paramIndex + 2}, $${paramIndex + 3}, $${paramIndex + 4}, $${paramIndex + 5}, $${paramIndex + 6}, $${paramIndex + 7}, $${paramIndex + 8}, $${paramIndex + 9}, $${paramIndex + 10}, $${paramIndex + 11})`);
params.push(
train.codTren, // train_id
train.tripId || null, // trip_id
train.codLinea || null, // line_code
train.nucleo || null, // nucleo
train.codEstOrig || null, // origin_station_code
train.codEstDest || null, // destination_station_code
train.codEstAct || null, // current_station_code
train.codEstSig || null, // next_station_code
delayMinutes, // delay_minutes
train.accesible || false, // is_accessible
train.via || null, // platform
updateTime || new Date() // renfe_update_time
);
paramIndex += 12;
}
const query = `
INSERT INTO train_punctuality (
train_id, trip_id, line_code, nucleo,
origin_station_code, destination_station_code,
current_station_code, next_station_code,
delay_minutes, is_accessible, platform, renfe_update_time
) VALUES ${values.join(', ')}
`;
await client.query(query, params);
logger.debug({ count: trains.length }, 'Punctuality data saved to database');
} catch (error) {
logger.error({ error: error.message }, 'Error saving punctuality data');
} finally {
client.release();
}
}
async stop() {
logger.info('Stopping Renfe Fleet Poller...');
this.isRunning = false;
if (this.pollInterval) {
clearInterval(this.pollInterval);
this.pollInterval = null;
}
if (this.stationsInterval) {
clearInterval(this.stationsInterval);
this.stationsInterval = null;
}
await db.disconnect();
await redis.disconnect();
logger.info('Renfe Fleet Poller stopped');
}
}
// Main execution
const poller = new RenfeFleetPoller();
// Graceful shutdown
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await poller.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
// Start poller
poller.start().catch((error) => {
logger.fatal({ error }, 'Failed to start Renfe Fleet Poller');
process.exit(1);
});
export default RenfeFleetPoller;
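
A hypothetical sketch of reading the enriched fleet data back out of Redis, using the keys written by `processFleetData`; the train ID is a placeholder:

```
import redis from '../lib/redis.js';

// Per-train record (5-minute TTL)
const raw = await redis.get('fleet:RENFE-12345');
if (raw) {
  const fleet = JSON.parse(raw);
  console.log(fleet.codLinea, fleet.retrasoMin, fleet.codEstSig);
}

// Full snapshot (60-second TTL) plus its timestamp
const all = JSON.parse((await redis.get('fleet:all')) ?? '[]');
console.log(`${all.length} trains as of ${await redis.get('fleet:lastUpdate')}`);
```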

@@ -0,0 +1,292 @@
import GtfsRealtimeBindings from 'gtfs-realtime-bindings';
import fetch from 'node-fetch';
import config from '../config/index.js';
import logger from '../lib/logger.js';
import db from '../lib/db.js';
import redis from '../lib/redis.js';
class TripUpdatesPoller {
constructor() {
this.isRunning = false;
this.pollInterval = null;
this.stats = {
totalPolls: 0,
successfulPolls: 0,
failedPolls: 0,
totalUpdates: 0,
lastPollTime: null,
};
}
async start() {
logger.info('Starting Trip Updates Poller...');
await db.connect();
await redis.connect();
this.isRunning = true;
// Initial poll
await this.poll();
// Setup polling interval
this.pollInterval = setInterval(
() => this.poll(),
config.gtfsRT.pollingInterval
);
logger.info({
interval: config.gtfsRT.pollingInterval,
url: config.gtfsRT.tripUpdatesUrl,
}, 'Trip Updates Poller started');
}
async poll() {
if (!this.isRunning) return;
this.stats.totalPolls++;
const startTime = Date.now();
try {
logger.debug('Polling Trip Updates feed...');
const response = await fetch(config.gtfsRT.tripUpdatesUrl, {
timeout: 10000,
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
const buffer = await response.arrayBuffer();
const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(
new Uint8Array(buffer)
);
logger.debug({
entities: feed.entity?.length || 0,
}, 'Trip Updates feed decoded');
const updates = [];
for (const entity of feed.entity || []) {
if (entity.tripUpdate) {
const update = this.parseTripUpdate(entity);
if (update) {
updates.push(update);
}
}
}
logger.info({
updates: updates.length,
duration: Date.now() - startTime,
}, 'Processed trip updates');
if (updates.length > 0) {
await this.storeUpdates(updates);
await this.updateRedisCache(updates);
}
this.stats.successfulPolls++;
this.stats.totalUpdates += updates.length;
this.stats.lastPollTime = new Date();
} catch (error) {
this.stats.failedPolls++;
logger.error({
error: error.message,
stack: error.stack,
}, 'Error polling Trip Updates feed');
}
}
parseTripUpdate(entity) {
try {
const tu = entity.tripUpdate;
const trip = tu.trip;
const timestamp = tu.timestamp
? new Date(tu.timestamp * 1000)
: new Date();
if (!trip?.tripId) {
logger.warn({ entity: entity.id }, 'Trip update missing trip_id');
return null;
}
const update = {
trip_id: trip.tripId,
route_id: trip.routeId || null,
start_time: trip.startTime || null,
start_date: trip.startDate || null,
schedule_relationship: this.mapScheduleRelationship(trip.scheduleRelationship),
delay_seconds: tu.delay ?? null,
timestamp: timestamp,
recorded_at: new Date(),
stop_time_updates: [],
};
// Parse stop time updates
for (const stu of tu.stopTimeUpdate || []) {
const stopUpdate = {
stop_sequence: stu.stopSequence,
stop_id: stu.stopId,
arrival_delay: stu.arrival?.delay ?? null,
arrival_time: stu.arrival?.time
? new Date(stu.arrival.time * 1000)
: null,
departure_delay: stu.departure?.delay ?? null,
departure_time: stu.departure?.time
? new Date(stu.departure.time * 1000)
: null,
schedule_relationship: this.mapScheduleRelationship(
stu.scheduleRelationship
),
};
update.stop_time_updates.push(stopUpdate);
}
return update;
} catch (error) {
logger.error({
error: error.message,
entity: entity.id,
}, 'Error parsing trip update');
return null;
}
}
mapScheduleRelationship(relationship) {
const map = {
0: 'SCHEDULED',
1: 'ADDED',
2: 'UNSCHEDULED',
3: 'CANCELED',
};
return map[relationship] || 'SCHEDULED';
}
async storeUpdates(updates) {
const client = await db.pool.connect();
try {
await client.query('BEGIN');
for (const update of updates) {
// Insert trip update
const result = await client.query(`
INSERT INTO trip_updates (
trip_id, route_id, start_time, start_date,
schedule_relationship, delay_seconds, timestamp, recorded_at
)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
RETURNING id
`, [
update.trip_id,
update.route_id,
update.start_time,
update.start_date,
update.schedule_relationship,
update.delay_seconds,
update.timestamp,
update.recorded_at,
]);
const tripUpdateId = result.rows[0].id;
// Insert stop time updates
for (const stu of update.stop_time_updates) {
await client.query(`
INSERT INTO stop_time_updates (
trip_update_id, stop_sequence, stop_id,
arrival_delay, arrival_time, departure_delay,
departure_time, schedule_relationship
)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
`, [
tripUpdateId,
stu.stop_sequence,
stu.stop_id,
stu.arrival_delay,
stu.arrival_time,
stu.departure_delay,
stu.departure_time,
stu.schedule_relationship,
]);
}
}
await client.query('COMMIT');
logger.debug({ count: updates.length }, 'Trip updates stored');
} catch (error) {
await client.query('ROLLBACK');
logger.error({ error: error.message }, 'Error storing trip updates');
throw error;
} finally {
client.release();
}
}
async updateRedisCache(updates) {
try {
// Store delays in Redis for quick access
for (const update of updates) {
const key = `trip_update:${update.trip_id}`;
await redis.set(key, JSON.stringify(update), { EX: 300 }); // 5 min TTL
// If delayed or canceled, add to delayed set
if (update.delay_seconds > 0 || update.schedule_relationship === 'CANCELED') {
await redis.sAdd('trips:delayed', update.trip_id);
}
// If canceled, add to canceled set
if (update.schedule_relationship === 'CANCELED') {
await redis.sAdd('trips:canceled', update.trip_id);
}
}
logger.debug({ count: updates.length }, 'Redis cache updated');
} catch (error) {
logger.error({ error: error.message }, 'Error updating Redis cache');
}
}
async stop() {
logger.info('Stopping Trip Updates Poller...');
this.isRunning = false;
if (this.pollInterval) {
clearInterval(this.pollInterval);
this.pollInterval = null;
}
await db.disconnect();
await redis.disconnect();
logger.info('Trip Updates Poller stopped');
}
}
// Main execution
const poller = new TripUpdatesPoller();
const shutdown = async (signal) => {
logger.info({ signal }, 'Received shutdown signal');
await poller.stop();
process.exit(0);
};
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));
poller.start().catch((error) => {
logger.fatal({ error }, 'Failed to start Trip Updates Poller');
process.exit(1);
});
export default TripUpdatesPoller;
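
The fetch-and-decode step can be exercised on its own. A minimal sketch using the same gtfs-realtime-bindings call as the poller; the feed URL matches GTFS_RT_TRIP_UPDATES_URL in the compose files below, and the output shape is an assumption for illustration:

```js
import GtfsRealtimeBindings from 'gtfs-realtime-bindings';
import fetch from 'node-fetch';

// Standalone sketch: fetch the feed once and list per-trip delays.
async function fetchTripDelays(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${res.statusText}`);
  const buffer = await res.arrayBuffer();
  const feed = GtfsRealtimeBindings.transit_realtime.FeedMessage.decode(
    new Uint8Array(buffer)
  );
  return (feed.entity || [])
    .filter((e) => e.tripUpdate?.trip?.tripId)
    .map((e) => ({
      tripId: e.tripUpdate.trip.tripId,
      delaySeconds: e.tripUpdate.delay ?? null,
    }));
}

fetchTripDelays('https://gtfsrt.renfe.com/trip_updates_cercanias.pb')
  .then((trips) => console.log(`${trips.length} trips with updates`))
  .catch(console.error);
```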

311
docker-compose.prod.yml Normal file

@@ -0,0 +1,311 @@
version: '3.8'
# Docker Compose para producción
# Uso: docker compose -f docker-compose.prod.yml up -d --build
#
# Requisitos previos:
# 1. Configurar .env con valores de producción (ver .env.example)
# 2. Configurar certificados SSL en ./nginx/ssl/ o usar certbot
# 3. Configurar nginx/prod.conf con tu dominio
services:
# Base de datos PostgreSQL con extensión PostGIS
postgres:
image: postgis/postgis:16-3.4-alpine # IMPORTANTE: usar versión 16 para compatibilidad
container_name: trenes-postgres
restart: unless-stopped
environment:
POSTGRES_DB: ${POSTGRES_DB:-trenes}
POSTGRES_USER: ${POSTGRES_USER:-trenes}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
PGDATA: /var/lib/postgresql/data/pgdata
volumes:
- postgres_data:/var/lib/postgresql/data
- ./database/init:/docker-entrypoint-initdb.d
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-trenes} -d ${POSTGRES_DB:-trenes}"]
interval: 10s
timeout: 5s
retries: 5
networks:
- trenes-network
# Redis para cache
redis:
image: redis:7-alpine
container_name: trenes-redis
restart: unless-stopped
command: redis-server --appendonly yes
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- trenes-network
# Flyway - Gestor de migraciones de base de datos
flyway:
image: flyway/flyway:10-alpine
container_name: trenes-flyway
command: migrate
environment:
FLYWAY_URL: jdbc:postgresql://postgres:5432/${POSTGRES_DB:-trenes}
FLYWAY_USER: ${POSTGRES_USER:-trenes}
FLYWAY_PASSWORD: ${POSTGRES_PASSWORD}
FLYWAY_BASELINE_ON_MIGRATE: "true"
FLYWAY_BASELINE_VERSION: "0"
FLYWAY_SCHEMAS: public
FLYWAY_LOCATIONS: filesystem:/flyway/sql
FLYWAY_VALIDATE_ON_MIGRATE: "true"
FLYWAY_OUT_OF_ORDER: "false"
volumes:
- ./database/migrations:/flyway/sql
depends_on:
postgres:
condition: service_healthy
networks:
- trenes-network
profiles:
- migration
# Worker para polling GTFS-RT Vehicle Positions
worker:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-worker
restart: unless-stopped
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
GTFS_RT_URL: https://gtfsrt.renfe.com/vehicle_positions.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para sincronización GTFS Static
gtfs-static-syncer:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-gtfs-static-syncer
restart: unless-stopped
command: node src/worker/gtfs-static-syncer.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
GTFS_STATIC_URL: https://data.renfe.com/dataset/horarios-trenes-largo-recorrido-ave/resource/horarios-trenes-largo-recorrido-ave-gtfs.zip
SYNC_SCHEDULE: "0 3 * * *"
LOG_LEVEL: info
volumes:
- gtfs_static_data:/tmp/gtfs
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para polling GTFS-RT Trip Updates
trip-updates-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-trip-updates-poller
restart: unless-stopped
command: node src/worker/trip-updates-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
GTFS_RT_TRIP_UPDATES_URL: https://gtfsrt.renfe.com/trip_updates_cercanias.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para polling GTFS-RT Service Alerts
alerts-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-alerts-poller
restart: unless-stopped
command: node src/worker/alerts-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
GTFS_RT_ALERTS_URL: https://gtfsrt.renfe.com/alerts.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para datos de flota Renfe
renfe-fleet-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-renfe-fleet-poller
restart: unless-stopped
command: node src/worker/renfe-fleet-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para refrescar vistas de analytics
analytics-refresher:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-analytics-refresher
restart: unless-stopped
command: node src/worker/analytics-refresher.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
ANALYTICS_REFRESH_SCHEDULE: "*/15 * * * *"
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# API Backend
api:
build:
context: ./backend
dockerfile: Dockerfile
target: api
container_name: trenes-api
restart: unless-stopped
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://${POSTGRES_USER:-trenes}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB:-trenes}
REDIS_URL: redis://redis:6379
CORS_ORIGIN: ${CORS_ORIGIN:-https://localhost}
JWT_SECRET: ${JWT_SECRET}
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
networks:
- trenes-network
# Frontend
# IMPORTANTE: Las variables VITE_* deben pasarse como build args, no como environment
# ya que se procesan en tiempo de compilación, no en runtime
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: production
args:
VITE_API_URL: ${VITE_API_URL}
VITE_WS_URL: ${VITE_WS_URL}
container_name: trenes-frontend
restart: unless-stopped
depends_on:
- api
networks:
- trenes-network
# Nginx como reverse proxy con SSL
nginx:
image: nginx:alpine
container_name: trenes-nginx
restart: unless-stopped
volumes:
- ./nginx/prod.conf:/etc/nginx/conf.d/default.conf:ro
- letsencrypt_certs:/etc/letsencrypt:ro
- certbot_webroot:/var/www/certbot:ro
ports:
- "80:80"
- "443:443"
depends_on:
- api
- frontend
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
networks:
- trenes-network
# Certbot para renovación automática de certificados
certbot:
image: certbot/certbot
container_name: trenes-certbot
volumes:
- letsencrypt_certs:/etc/letsencrypt
- certbot_webroot:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
networks:
- trenes-network
volumes:
postgres_data:
driver: local
redis_data:
driver: local
gtfs_static_data:
driver: local
letsencrypt_certs:
driver: local
certbot_webroot:
driver: local
networks:
trenes-network:
driver: bridge
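
Two operational details: flyway sits behind the `migration` profile, so it runs on demand (e.g. `docker compose -f docker-compose.prod.yml --profile migration run --rm flyway`) instead of with the stack; and the wget healthchecks probe `GET /health`, which means the API must expose a route that answers 200 without touching its dependencies. A minimal Express sketch of such an endpoint (the real route lives in backend/src/api and may differ):

```js
import express from 'express';

const app = express();

// Liveness probe for the Docker healthcheck: deliberately cheap and
// dependency-free, so a slow database doesn't restart-loop the container.
app.get('/health', (_req, res) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(process.env.PORT || 3000);
```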

331
docker-compose.yml Normal file

@@ -0,0 +1,331 @@
version: '3.8'
services:
# Base de datos PostgreSQL con extensión PostGIS
# NOTA: Usar versión 16 para compatibilidad con datos migrados de producción
postgres:
image: postgis/postgis:16-3.4-alpine
container_name: trenes-postgres
restart: unless-stopped
environment:
POSTGRES_DB: trenes_db
POSTGRES_USER: trenes_user
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-trenes_password_change_me}
PGDATA: /var/lib/postgresql/data/pgdata
volumes:
- postgres_data:/var/lib/postgresql/data
- ./database/init:/docker-entrypoint-initdb.d
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U trenes_user -d trenes_db"]
interval: 10s
timeout: 5s
retries: 5
networks:
- trenes-network
# Redis para cache
redis:
image: redis:7-alpine
container_name: trenes-redis
restart: unless-stopped
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD:-redis_password_change_me}
volumes:
- redis_data:/data
ports:
- "6379:6379"
healthcheck:
test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
interval: 10s
timeout: 5s
retries: 5
networks:
- trenes-network
# Flyway - Gestor de migraciones de base de datos
flyway:
image: flyway/flyway:10-alpine
container_name: trenes-flyway
command: migrate
environment:
FLYWAY_URL: jdbc:postgresql://postgres:5432/trenes_db
FLYWAY_USER: trenes_user
FLYWAY_PASSWORD: ${POSTGRES_PASSWORD:-trenes_password_change_me}
FLYWAY_BASELINE_ON_MIGRATE: "true"
FLYWAY_BASELINE_VERSION: "0"
FLYWAY_SCHEMAS: public
FLYWAY_LOCATIONS: filesystem:/flyway/sql
FLYWAY_VALIDATE_ON_MIGRATE: "true"
FLYWAY_OUT_OF_ORDER: "false"
volumes:
- ./database/migrations:/flyway/sql
depends_on:
postgres:
condition: service_healthy
networks:
- trenes-network
profiles:
- migration
# Worker para polling GTFS-RT Vehicle Positions
worker:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-worker
restart: unless-stopped
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
GTFS_RT_URL: https://gtfsrt.renfe.com/vehicle_positions.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para sincronización GTFS Static
gtfs-static-syncer:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-gtfs-static-syncer
restart: unless-stopped
command: node src/worker/gtfs-static-syncer.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
GTFS_STATIC_URL: https://data.renfe.com/dataset/horarios-trenes-largo-recorrido-ave/resource/horarios-trenes-largo-recorrido-ave-gtfs.zip
SYNC_SCHEDULE: "0 3 * * *"
LOG_LEVEL: info
volumes:
- gtfs_static_data:/tmp/gtfs
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para polling GTFS-RT Trip Updates
trip-updates-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-trip-updates-poller
restart: unless-stopped
command: node src/worker/trip-updates-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
GTFS_RT_TRIP_UPDATES_URL: https://gtfsrt.renfe.com/trip_updates_cercanias.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para polling GTFS-RT Service Alerts
alerts-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-alerts-poller
restart: unless-stopped
command: node src/worker/alerts-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
GTFS_RT_ALERTS_URL: https://gtfsrt.renfe.com/alerts.pb
POLLING_INTERVAL: 30000
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para datos de flota Renfe (delay, estaciones, etc.)
renfe-fleet-poller:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-renfe-fleet-poller
restart: unless-stopped
command: node src/worker/renfe-fleet-poller.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# Worker para refrescar vistas de analytics
analytics-refresher:
build:
context: ./backend
dockerfile: Dockerfile
target: worker
container_name: trenes-analytics-refresher
restart: unless-stopped
command: node src/worker/analytics-refresher.js
environment:
NODE_ENV: production
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
ANALYTICS_REFRESH_SCHEDULE: "*/15 * * * *"
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
networks:
- trenes-network
# API Backend
api:
build:
context: ./backend
dockerfile: Dockerfile
target: api
container_name: trenes-api
restart: unless-stopped
environment:
NODE_ENV: production
PORT: 3000
DATABASE_URL: postgresql://trenes_user:${POSTGRES_PASSWORD:-trenes_password_change_me}@postgres:5432/trenes_db
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password_change_me}@redis:6379
CORS_ORIGIN: ${CORS_ORIGIN:-http://localhost:80}
JWT_SECRET: ${JWT_SECRET:-jwt_secret_change_me}
LOG_LEVEL: info
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
ports:
- "3000:3000"
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
networks:
- trenes-network
# Frontend
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: production
args:
# IMPORTANTE: VITE_WS_URL no debe incluir /ws, Socket.io añade /socket.io/ automáticamente
VITE_API_URL: ${VITE_API_URL:-http://localhost/api}
VITE_WS_URL: ${VITE_WS_URL:-http://localhost}
container_name: trenes-frontend
restart: unless-stopped
ports:
- "5173:80"
depends_on:
- api
networks:
- trenes-network
# Nginx como reverse proxy
nginx:
image: nginx:alpine
container_name: trenes-nginx
restart: unless-stopped
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- nginx_logs:/var/log/nginx
ports:
- "80:80"
- "443:443"
depends_on:
- api
- frontend
healthcheck:
test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
networks:
- trenes-network
# Adminer (opcional) - UI para gestionar PostgreSQL
adminer:
image: adminer:latest
container_name: trenes-adminer
restart: unless-stopped
ports:
- "8080:8080"
environment:
ADMINER_DEFAULT_SERVER: postgres
ADMINER_DESIGN: dracula
depends_on:
- postgres
networks:
- trenes-network
profiles:
- debug
# Redis Commander (opcional) - UI para gestionar Redis
redis-commander:
image: rediscommander/redis-commander:latest
container_name: trenes-redis-commander
restart: unless-stopped
environment:
REDIS_HOSTS: local:redis:6379:0:${REDIS_PASSWORD:-redis_password_change_me}
ports:
- "8081:8081"
depends_on:
- redis
networks:
- trenes-network
profiles:
- debug
volumes:
postgres_data:
driver: local
redis_data:
driver: local
nginx_logs:
driver: local
gtfs_static_data:
driver: local
networks:
trenes-network:
driver: bridge
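
The workers above are configured entirely through environment variables. A sketch of the shape backend/src/config/index.js might give them; names mirror the variables defined in this file, but the actual module may differ:

```js
// Hypothetical shape of the config module the workers consume.
const config = {
  databaseUrl: process.env.DATABASE_URL,
  redisUrl: process.env.REDIS_URL || 'redis://localhost:6379',
  gtfsRT: {
    url: process.env.GTFS_RT_URL,
    tripUpdatesUrl: process.env.GTFS_RT_TRIP_UPDATES_URL,
    alertsUrl: process.env.GTFS_RT_ALERTS_URL,
    pollingInterval: parseInt(process.env.POLLING_INTERVAL, 10) || 30000,
  },
  logLevel: process.env.LOG_LEVEL || 'info',
};

export default config;
```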

69
frontend/Dockerfile Normal file

@@ -0,0 +1,69 @@
# Multi-stage Dockerfile para Frontend React + Vite
FROM node:20-alpine AS build
# Argumentos de construcción
ARG VITE_API_URL
ARG VITE_WS_URL
ENV VITE_API_URL=${VITE_API_URL}
ENV VITE_WS_URL=${VITE_WS_URL}
WORKDIR /app
# Copiar archivos de dependencias
COPY package*.json ./
# Instalar dependencias
RUN npm install
# Copiar código fuente
COPY . .
# Build de producción
RUN npm run build
# ================================
# Stage de producción con Nginx
# ================================
FROM nginx:alpine AS production
# Copiar archivos compilados
COPY --from=build /app/dist /usr/share/nginx/html
# Copiar configuración de nginx personalizada
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Crear usuario no-root y configurar permisos
RUN chown -R nginx:nginx /usr/share/nginx/html && \
chown -R nginx:nginx /var/cache/nginx && \
chown -R nginx:nginx /var/log/nginx && \
chown nginx:nginx /etc/nginx/conf.d/default.conf && \
chmod 644 /etc/nginx/conf.d/default.conf && \
touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/run/nginx.pid
USER nginx
EXPOSE 80
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost/ || exit 1
CMD ["nginx", "-g", "daemon off;"]
# ================================
# Stage de desarrollo
# ================================
FROM node:20-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 5173
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0"]

16
frontend/index.html Normal file

@@ -0,0 +1,16 @@
<!DOCTYPE html>
<html lang="es">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Trenes en Tiempo Real - España</title>
<link rel="stylesheet" href="https://unpkg.com/leaflet@1.9.4/dist/leaflet.css"
integrity="sha256-p4NxAoJBhIIN+hmNHrzRCf9tD/miZyoHS5obTRR9BMY="
crossorigin=""/>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.jsx"></script>
</body>
</html>

44
frontend/nginx.conf Normal file

@@ -0,0 +1,44 @@
# Configuración de nginx para el contenedor frontend
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html;
# Logs
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/x-javascript application/xhtml+xml;
# SPA fallback - todas las rutas devuelven index.html
location / {
try_files $uri $uri/ /index.html;
}
# Cache para assets estáticos con hash
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
try_files $uri =404;
}
# No cachear index.html
location = /index.html {
expires -1;
add_header Cache-Control "no-cache, no-store, must-revalidate";
}
# Denegar acceso a archivos ocultos
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
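
The one-year immutable cache is only safe because the build emits content-hashed filenames, so any change yields a new URL, while index.html (which references those URLs) is never cached. A sketch of the hashed naming on the Vite side; these are Vite's defaults, spelled out here only to make the pairing explicit:

```js
// vite.config.js (sketch): content-hashed output names are what make
// the `Cache-Control: public, immutable` rule above safe.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  build: {
    rollupOptions: {
      output: {
        entryFileNames: 'assets/[name]-[hash].js',
        chunkFileNames: 'assets/[name]-[hash].js',
        assetFileNames: 'assets/[name]-[hash][extname]',
      },
    },
  },
});
```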

31
frontend/package.json Normal file

@@ -0,0 +1,31 @@
{
"name": "trenes-frontend",
"version": "1.0.0",
"description": "Frontend para sistema de tracking de trenes en tiempo real",
"type": "module",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview",
"lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"leaflet": "^1.9.4",
"react-leaflet": "^4.2.1",
"socket.io-client": "^4.6.1",
"date-fns": "^3.0.6",
"lucide-react": "^0.309.0"
},
"devDependencies": {
"@types/react": "^18.2.43",
"@types/react-dom": "^18.2.17",
"@vitejs/plugin-react": "^4.2.1",
"vite": "^5.0.8",
"eslint": "^8.55.0",
"eslint-plugin-react": "^7.33.2",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-react-refresh": "^0.4.5"
}
}

188
frontend/src/App.jsx Normal file

@@ -0,0 +1,188 @@
import React, { useState } from 'react';
import { TrainMap } from './components/TrainMap';
import { TrainInfo } from './components/TrainInfo';
import { Timeline } from './components/Timeline';
import { Dashboard } from './components/Dashboard';
import { useTrains } from './hooks/useTrains';
import { useTimeline } from './hooks/useTimeline';
import { Train, Activity, History, BarChart3, Map } from 'lucide-react';
function App() {
const [activeView, setActiveView] = useState('map'); // 'map' or 'dashboard'
const { trains, selectedTrain, selectTrain, isConnected, error, stats } = useTrains();
const {
isTimelineMode,
isPlaying,
isLoading: isTimelineLoading,
currentTime,
timeRange,
playbackSpeed,
timelinePositions,
toggleTimelineMode,
togglePlay,
skip,
seekTo,
changeSpeed,
} = useTimeline();
// Use timeline positions when in timeline mode, otherwise live trains
const displayTrains = isTimelineMode ? timelinePositions : trains;
const formatLastUpdate = (timestamp) => {
if (!timestamp) return 'Nunca';
const date = new Date(timestamp);
const now = new Date();
const diff = Math.floor((now - date) / 1000);
if (diff < 60) return `Hace ${diff} segundos`;
if (diff < 3600) return `Hace ${Math.floor(diff / 60)} minutos`;
return date.toLocaleTimeString('es-ES');
};
if (error && activeView === 'map') {
return (
<div className="app">
<div className="error">
<h2>Error de Conexión</h2>
<p>{error}</p>
<p style={{ fontSize: '0.9rem', marginTop: '10px' }}>
Verifica que el servidor esté ejecutándose
</p>
</div>
</div>
);
}
return (
<div className="app">
<header className="header">
<div className="header-title">
{activeView === 'dashboard' ? (
<BarChart3 size={32} />
) : isTimelineMode ? (
<History size={32} />
) : (
<Train size={32} />
)}
<span>
{activeView === 'dashboard'
? 'Dashboard de Trenes'
: isTimelineMode
? 'Reproducción Histórica - España'
: 'Trenes en Tiempo Real - España'}
</span>
</div>
<div className="nav-tabs">
<button
className={`nav-tab ${activeView === 'map' ? 'active' : ''}`}
onClick={() => setActiveView('map')}
>
<Map size={18} />
Mapa
</button>
<button
className={`nav-tab ${activeView === 'dashboard' ? 'active' : ''}`}
onClick={() => setActiveView('dashboard')}
>
<BarChart3 size={18} />
Dashboard
</button>
</div>
{activeView === 'map' && (
<div className="header-stats">
<div className="stat">
<span className="stat-label">Modo</span>
<span className="stat-value">
<span className={`status-indicator ${isTimelineMode ? 'timeline' : (isConnected ? 'active' : 'inactive')}`} />
{isTimelineMode ? 'Histórico' : (isConnected ? 'Tiempo Real' : 'Desconectado')}
</span>
</div>
<div className="stat">
<span className="stat-label">Trenes</span>
<span className="stat-value">{displayTrains.length}</span>
</div>
<div className="stat">
<span className="stat-label">{isTimelineMode ? 'Tiempo Actual' : 'Ultima Actualizacion'}</span>
<span className="stat-value">
{isTimelineMode
? new Date(currentTime).toLocaleTimeString('es-ES')
: formatLastUpdate(stats.last_update)}
</span>
</div>
</div>
)}
</header>
{activeView === 'dashboard' ? (
<Dashboard />
) : (
<main className="main-content">
<TrainMap
trains={displayTrains}
selectedTrain={selectedTrain}
onTrainClick={selectTrain}
/>
<aside className="sidebar">
<div className="sidebar-header">
<h2>
{selectedTrain ? `Tren ${selectedTrain.train_id}` : 'Información'}
</h2>
<p className="sidebar-subtitle">
{selectedTrain
? 'Detalles del tren seleccionado'
: `${displayTrains.length} trenes en el mapa`}
</p>
</div>
<div className="sidebar-content">
{!isConnected && !displayTrains.length && !isTimelineMode ? (
<div className="loading">
<Activity size={48} style={{ animation: 'pulse 2s ease-in-out infinite' }} />
<p style={{ marginTop: '10px' }}>Conectando...</p>
</div>
) : isTimelineLoading ? (
<div className="loading">
<Activity size={48} style={{ animation: 'pulse 2s ease-in-out infinite' }} />
<p style={{ marginTop: '10px' }}>Cargando histórico...</p>
</div>
) : (
<TrainInfo train={selectedTrain} onClose={() => selectTrain(null)} />
)}
</div>
</aside>
<Timeline
isTimelineMode={isTimelineMode}
isPlaying={isPlaying}
isLoading={isTimelineLoading}
currentTime={currentTime}
timeRange={timeRange}
playbackSpeed={playbackSpeed}
onToggleMode={toggleTimelineMode}
onTogglePlay={togglePlay}
onSkip={skip}
onSeek={seekTo}
onChangeSpeed={changeSpeed}
/>
</main>
)}
</div>
);
}
export default App;
// Add pulse animation
const style = document.createElement('style');
style.textContent = `
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
`;
document.head.appendChild(style);
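
App.jsx treats useTrains as a black box. A hypothetical minimal version of the contract it destructures; the real hook lives in src/hooks/useTrains.js, and the `trains:update` event name and payload shape here are assumptions:

```jsx
import { useCallback, useEffect, useState } from 'react';
import { io } from 'socket.io-client';

// Hypothetical minimal sketch of the hook App.jsx consumes.
export function useTrains() {
  const [trains, setTrains] = useState([]);
  const [selectedTrain, setSelectedTrain] = useState(null);
  const [isConnected, setIsConnected] = useState(false);
  const [error, setError] = useState(null);
  const [stats, setStats] = useState({ last_update: null });

  useEffect(() => {
    const socket = io(import.meta.env.VITE_WS_URL);
    socket.on('connect', () => { setIsConnected(true); setError(null); });
    socket.on('disconnect', () => setIsConnected(false));
    socket.on('connect_error', (err) => setError(err.message));
    // Assumed event name and payload shape, for illustration only.
    socket.on('trains:update', (payload) => {
      setTrains(payload.trains || []);
      setStats({ last_update: payload.timestamp });
    });
    return () => socket.disconnect();
  }, []);

  const selectTrain = useCallback((train) => setSelectedTrain(train), []);

  return { trains, selectedTrain, selectTrain, isConnected, error, stats };
}
```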

447
frontend/src/components/Dashboard.jsx Normal file

@@ -0,0 +1,447 @@
import React, { useMemo } from 'react';
import {
BarChart3,
Clock,
Train,
AlertTriangle,
CheckCircle,
TrendingUp,
SkipBack,
SkipForward,
Radio,
} from 'lucide-react';
import { useDashboard } from '../hooks/useDashboard';
// Mini chart component for timeline
function MiniChart({ data, dataKey, color, height = 60 }) {
if (!data || data.length === 0) return null;
const values = data.map(d => d[dataKey] || 0);
const max = Math.max(...values, 1);
const min = Math.min(...values, 0);
const range = max - min || 1;
const points = data.map((d, i) => {
const x = (i / (data.length - 1)) * 100;
const y = 100 - ((d[dataKey] - min) / range) * 100;
return `${x},${y}`;
}).join(' ');
return (
<svg viewBox="0 0 100 100" preserveAspectRatio="none" style={{ width: '100%', height }}>
<polyline
points={points}
fill="none"
stroke={color}
strokeWidth="2"
vectorEffect="non-scaling-stroke"
/>
</svg>
);
}
// Stat card component
function StatCard({ icon: Icon, label, value, subValue, color = '#3498DB', trend }) {
return (
<div className="stat-card">
<div className="stat-card-icon" style={{ backgroundColor: `${color}20`, color }}>
<Icon size={24} />
</div>
<div className="stat-card-content">
<span className="stat-card-label">{label}</span>
<span className="stat-card-value">{value}</span>
{subValue && <span className="stat-card-subvalue">{subValue}</span>}
{trend !== undefined && (
<span className={`stat-card-trend ${trend >= 0 ? 'positive' : 'negative'}`}>
{trend >= 0 ? '+' : ''}{trend}%
</span>
)}
</div>
</div>
);
}
// Progress bar component
function ProgressBar({ value, max, color, label }) {
const percentage = max > 0 ? (value / max) * 100 : 0;
return (
<div className="progress-bar-container">
<div className="progress-bar-header">
<span className="progress-bar-label">{label}</span>
<span className="progress-bar-value">{value}</span>
</div>
<div className="progress-bar-track">
<div
className="progress-bar-fill"
style={{ width: `${percentage}%`, backgroundColor: color }}
/>
</div>
</div>
);
}
// Punctuality donut chart
function PunctualityDonut({ data }) {
if (!data) return null;
const total = Object.values(data).reduce((a, b) => a + b, 0);
if (total === 0) return null;
const segments = [
{ key: 'on_time', color: '#27AE60', label: 'Puntual' },
{ key: 'minor_delay', color: '#F39C12', label: '1-5 min' },
{ key: 'moderate_delay', color: '#E67E22', label: '6-15 min' },
{ key: 'severe_delay', color: '#E74C3C', label: '>15 min' },
{ key: 'early', color: '#3498DB', label: 'Adelantado' },
];
let currentAngle = 0;
const paths = segments.map(({ key, color }) => {
const value = data[key] || 0;
const percentage = value / total;
const angle = percentage * 360;
if (value === 0) return null;
const startAngle = currentAngle;
const endAngle = currentAngle + angle;
currentAngle = endAngle;
const startRad = (startAngle - 90) * (Math.PI / 180);
const endRad = (endAngle - 90) * (Math.PI / 180);
const x1 = 50 + 40 * Math.cos(startRad);
const y1 = 50 + 40 * Math.sin(startRad);
const x2 = 50 + 40 * Math.cos(endRad);
const y2 = 50 + 40 * Math.sin(endRad);
const largeArc = angle > 180 ? 1 : 0;
return (
<path
key={key}
d={`M 50 50 L ${x1} ${y1} A 40 40 0 ${largeArc} 1 ${x2} ${y2} Z`}
fill={color}
/>
);
});
return (
<div className="donut-chart">
<svg viewBox="0 0 100 100">
{paths}
<circle cx="50" cy="50" r="25" fill="var(--bg-secondary)" />
</svg>
<div className="donut-legend">
{segments.map(({ key, color, label }) => {
const value = data[key] || 0;
if (value === 0) return null;
return (
<div key={key} className="donut-legend-item">
<span className="donut-legend-color" style={{ backgroundColor: color }} />
<span className="donut-legend-label">{label}</span>
<span className="donut-legend-value">{value}</span>
</div>
);
})}
</div>
</div>
);
}
// Lines ranking table
function LinesTable({ lines }) {
if (!lines || lines.length === 0) {
return <div className="empty-state">Sin datos de lineas</div>;
}
return (
<div className="lines-table">
<div className="lines-table-header">
<span>Línea</span>
<span>Trenes</span>
<span>Retraso Med.</span>
<span>Puntualidad</span>
</div>
{lines.slice(0, 10).map((line, index) => (
<div key={`${line.nucleo}:${line.line_code}-${index}`} className="lines-table-row">
<span className="line-code">
{line.line_code}
{line.nucleo_name && <span className="line-nucleo"> ({line.nucleo_name})</span>}
</span>
<span>{line.unique_trains}</span>
<span style={{ color: line.avg_delay > 5 ? '#E74C3C' : '#27AE60' }}>
{parseFloat(line.avg_delay).toFixed(1)} min
</span>
<span style={{ color: line.punctuality_pct >= 80 ? '#27AE60' : '#E74C3C' }}>
{line.punctuality_pct}%
</span>
</div>
))}
</div>
);
}
// Time control bar
function TimeControl({ currentTime, isLive, availableRange, onSeek, onGoLive, onSkip }) {
const formatTime = (date) => {
return date.toLocaleString('es-ES', {
day: '2-digit',
month: '2-digit',
year: 'numeric',
hour: '2-digit',
minute: '2-digit',
});
};
const handleSliderChange = (e) => {
if (!availableRange.earliest || !availableRange.latest) return;
const percentage = parseFloat(e.target.value);
const range = availableRange.latest.getTime() - availableRange.earliest.getTime();
const newTime = new Date(availableRange.earliest.getTime() + range * percentage);
onSeek(newTime);
};
const getSliderValue = () => {
if (!availableRange.earliest || !availableRange.latest) return 1;
const range = availableRange.latest.getTime() - availableRange.earliest.getTime();
if (range === 0) return 1;
return (currentTime.getTime() - availableRange.earliest.getTime()) / range;
};
return (
<div className="time-control">
<div className="time-control-header">
<div className="time-display">
<Clock size={20} />
<span className="current-time">{formatTime(currentTime)}</span>
{isLive && (
<span className="live-badge">
<Radio size={14} />
EN VIVO
</span>
)}
</div>
<div className="time-buttons">
<button onClick={() => onSkip(-60)} title="Retroceder 1 hora">
<SkipBack size={16} />
-1h
</button>
<button onClick={() => onSkip(-10)} title="Retroceder 10 min">
-10m
</button>
<button onClick={() => onSkip(10)} title="Avanzar 10 min">
+10m
</button>
<button onClick={() => onSkip(60)} title="Avanzar 1 hora">
+1h
<SkipForward size={16} />
</button>
<button
onClick={onGoLive}
className={`live-button ${isLive ? 'active' : ''}`}
title="Ver en tiempo real"
>
<Radio size={16} />
En Vivo
</button>
</div>
</div>
<div className="time-slider-container">
<input
type="range"
min="0"
max="1"
step="0.001"
value={getSliderValue()}
onChange={handleSliderChange}
className="time-slider"
/>
<div className="time-range-labels">
<span>{availableRange.earliest ? formatTime(availableRange.earliest) : '...'}</span>
<span>{availableRange.latest ? formatTime(availableRange.latest) : '...'}</span>
</div>
</div>
</div>
);
}
export function Dashboard() {
const {
isLive,
currentTime,
stats,
timeline,
linesRanking,
availableRange,
isLoading,
error,
seekTo,
goLive,
skip,
} = useDashboard();
// Calculate status totals
const statusTotal = useMemo(() => {
if (!stats?.status_breakdown) return 0;
return Object.values(stats.status_breakdown).reduce((a, b) => a + b, 0);
}, [stats]);
if (error) {
return (
<div className="dashboard-error">
<AlertTriangle size={48} />
<h3>Error al cargar el dashboard</h3>
<p>{error}</p>
</div>
);
}
return (
<div className="dashboard">
<TimeControl
currentTime={currentTime}
isLive={isLive}
availableRange={availableRange}
onSeek={seekTo}
onGoLive={goLive}
onSkip={skip}
/>
{isLoading && !stats ? (
<div className="dashboard-loading">
<div className="spinner" />
<p>Cargando datos...</p>
</div>
) : (
<>
{/* Main stats row */}
<div className="stats-row">
<StatCard
icon={Train}
label="Trenes Activos"
value={stats?.total_trains || 0}
color="#3498DB"
/>
<StatCard
icon={CheckCircle}
label="Puntualidad"
value={`${stats?.punctuality_percentage || 0}%`}
subValue="<= 5 min retraso"
color="#27AE60"
/>
<StatCard
icon={Clock}
label="Retraso Medio"
value={`${stats?.average_delay || 0} min`}
color={parseFloat(stats?.average_delay) > 5 ? '#E74C3C' : '#27AE60'}
/>
<StatCard
icon={AlertTriangle}
label="Retrasos Graves"
value={stats?.punctuality_breakdown?.severe_delay || 0}
subValue=">15 min retraso"
color="#E74C3C"
/>
</div>
<div className="dashboard-grid">
{/* Status breakdown */}
<div className="dashboard-card">
<h3>
<Train size={18} />
Estado de Trenes
</h3>
<div className="status-bars">
<ProgressBar
label="En transito"
value={stats?.status_breakdown?.IN_TRANSIT_TO || 0}
max={statusTotal}
color="#27AE60"
/>
<ProgressBar
label="Parado en estacion"
value={stats?.status_breakdown?.STOPPED_AT || 0}
max={statusTotal}
color="#E74C3C"
/>
<ProgressBar
label="Llegando a estacion"
value={stats?.status_breakdown?.INCOMING_AT || 0}
max={statusTotal}
color="#F39C12"
/>
</div>
</div>
{/* Punctuality breakdown */}
<div className="dashboard-card">
<h3>
<Clock size={18} />
Distribución de Puntualidad
</h3>
<PunctualityDonut data={stats?.punctuality_breakdown} />
</div>
{/* Timeline chart */}
<div className="dashboard-card wide">
<h3>
<TrendingUp size={18} />
Evolución Temporal (Última hora)
</h3>
<div className="timeline-charts">
<div className="timeline-chart">
<span className="chart-label">Trenes</span>
<MiniChart data={timeline} dataKey="train_count" color="#3498DB" height={50} />
</div>
<div className="timeline-chart">
<span className="chart-label">Puntualidad %</span>
<MiniChart data={timeline} dataKey="punctuality_pct" color="#27AE60" height={50} />
</div>
<div className="timeline-chart">
<span className="chart-label">Retraso Medio</span>
<MiniChart data={timeline} dataKey="avg_delay" color="#E74C3C" height={50} />
</div>
</div>
</div>
{/* Lines by activity */}
<div className="dashboard-card">
<h3>
<BarChart3 size={18} />
Trenes por Línea
</h3>
<div className="lines-bars">
{stats?.lines_breakdown &&
Array.isArray(stats.lines_breakdown) &&
stats.lines_breakdown
.slice(0, 8)
.map((line) => (
<ProgressBar
key={`${line.nucleo}:${line.line_code}`}
label={`${line.line_code} (${line.nucleo_name})`}
value={line.count}
max={stats.lines_breakdown[0]?.count || 1}
color="#3498DB"
/>
))}
</div>
</div>
{/* Lines ranking */}
<div className="dashboard-card">
<h3>
<AlertTriangle size={18} />
Ranking de Líneas (Peor Puntualidad)
</h3>
<LinesTable lines={linesRanking} />
</div>
</div>
</>
)}
</div>
);
}
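
As a worked check of the arc geometry in PunctualityDonut: angles are measured from 12 o'clock (hence the -90° shift), and each bucket becomes a pie slice from the centre (50,50) with radius 40. For a segment covering 25% of the total:

```js
// One segment spanning 25% of the circle, starting at angle 0 (the top).
const startAngle = 0;
const angle = 0.25 * 360; // 90 degrees
const endAngle = startAngle + angle;

const toPoint = (deg) => {
  const rad = (deg - 90) * (Math.PI / 180); // -90 shifts 0° to 12 o'clock
  return [50 + 40 * Math.cos(rad), 50 + 40 * Math.sin(rad)];
};

const [x1, y1] = toPoint(startAngle); // [50, 10]  (top of the circle)
const [x2, y2] = toPoint(endAngle);   // [90, 50]  (3 o'clock)
const largeArc = angle > 180 ? 1 : 0; // 0: a quarter turn is a minor arc

console.log(`M 50 50 L ${x1} ${y1} A 40 40 0 ${largeArc} 1 ${x2} ${y2} Z`);
// -> M 50 50 L 50 10 A 40 40 0 0 1 90 50 Z
```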

175
frontend/src/components/Timeline.jsx Normal file

@@ -0,0 +1,175 @@
import React from 'react';
import { Play, Pause, SkipBack, SkipForward, Clock, History, X, Loader } from 'lucide-react';
export function Timeline({
isTimelineMode,
isPlaying,
isLoading,
currentTime,
timeRange,
playbackSpeed,
onToggleMode,
onTogglePlay,
onSkip,
onSeek,
onChangeSpeed,
}) {
const formatTime = (timestamp) => {
const date = new Date(timestamp);
return date.toLocaleTimeString('es-ES', {
hour: '2-digit',
minute: '2-digit',
second: '2-digit',
});
};
const formatDate = (timestamp) => {
const date = new Date(timestamp);
return date.toLocaleDateString('es-ES', {
day: '2-digit',
month: '2-digit',
});
};
const progress = timeRange.end > timeRange.start
? ((currentTime - timeRange.start) / (timeRange.end - timeRange.start)) * 100
: 0;
const speedOptions = [1, 2, 5, 10];
return (
<div className="timeline-container">
{!isTimelineMode ? (
// Modo tiempo real - botón para activar timeline
<div className="timeline-realtime">
<div className="timeline-realtime-info">
<Clock size={18} />
<span>Modo Tiempo Real</span>
</div>
<button
className="timeline-btn timeline-btn-primary"
onClick={onToggleMode}
title="Activar modo histórico"
>
<History size={16} />
Ver Histórico
</button>
</div>
) : (
// Modo timeline
<>
<div className="timeline-header">
<div className="timeline-title-section">
<History size={18} />
<span className="timeline-title">Reproducción Histórica</span>
{isLoading && (
<Loader size={16} className="timeline-loading" />
)}
</div>
<button
className="timeline-btn timeline-btn-close"
onClick={onToggleMode}
title="Volver a tiempo real"
>
<X size={16} />
</button>
</div>
<div className="timeline-controls">
<button
className="timeline-btn"
onClick={() => onSkip(-300)}
title="Retroceder 5 minutos"
disabled={isLoading}
>
<SkipBack size={16} />
</button>
<button
className="timeline-btn"
onClick={() => onSkip(-60)}
title="Retroceder 1 minuto"
disabled={isLoading}
>
-1m
</button>
<button
className="timeline-btn timeline-btn-play"
onClick={onTogglePlay}
title={isPlaying ? 'Pausar' : 'Reproducir'}
disabled={isLoading}
>
{isPlaying ? <Pause size={18} /> : <Play size={18} />}
</button>
<button
className="timeline-btn"
onClick={() => onSkip(60)}
title="Avanzar 1 minuto"
disabled={isLoading}
>
+1m
</button>
<button
className="timeline-btn"
onClick={() => onSkip(300)}
title="Avanzar 5 minutos"
disabled={isLoading}
>
<SkipForward size={16} />
</button>
<div className="timeline-speed">
<span>Velocidad:</span>
<select
value={playbackSpeed}
onChange={(e) => onChangeSpeed(Number(e.target.value))}
disabled={isLoading}
>
{speedOptions.map(speed => (
<option key={speed} value={speed}>{speed}x</option>
))}
</select>
</div>
</div>
<div className="timeline-slider-container">
<input
type="range"
className="timeline-slider"
min={timeRange.start}
max={timeRange.end}
value={currentTime}
onChange={(e) => onSeek(parseInt(e.target.value, 10))}
disabled={isLoading}
/>
<div
className="timeline-progress"
style={{ width: `${progress}%` }}
/>
</div>
<div className="timeline-time-display">
<span className="timeline-time-label">
{formatDate(timeRange.start)} {formatTime(timeRange.start)}
</span>
<span className="timeline-time-current">
{formatTime(currentTime)}
</span>
<span className="timeline-time-label">
{formatDate(timeRange.end)} {formatTime(timeRange.end)}
</span>
</div>
{isLoading && (
<div className="timeline-loading-message">
Cargando datos históricos...
</div>
)}
</>
)}
</div>
);
}
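
The slider is a controlled input: currentTime positions the thumb, and the same linear interpolation sizes the progress overlay. A worked example with made-up timestamps:

```js
// A two-hour window with the cursor 30 minutes in fills 25% of the track.
const timeRange = {
  start: Date.parse('2025-11-27T10:00:00Z'),
  end: Date.parse('2025-11-27T12:00:00Z'),
};
const currentTime = Date.parse('2025-11-27T10:30:00Z');

const progress = timeRange.end > timeRange.start
  ? ((currentTime - timeRange.start) / (timeRange.end - timeRange.start)) * 100
  : 0;

console.log(progress); // 25
```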

310
frontend/src/components/TrainInfo.jsx Normal file

@@ -0,0 +1,310 @@
import React from 'react';
import { getTrainTypeFromId, formatTrainType as formatTrainTypeUtil } from '../utils/trainTypes';
export function TrainInfo({ train, onClose }) {
if (!train) {
return (
<div className="empty-state">
<p>Selecciona un tren en el mapa</p>
<p style={{ fontSize: '0.85rem', opacity: 0.7 }}>
Haz clic en cualquier marcador para ver información detallada
</p>
</div>
);
}
const formatTimestamp = (timestamp) => {
if (!timestamp) return 'N/A';
const date = new Date(timestamp);
return date.toLocaleString('es-ES', {
day: '2-digit',
month: '2-digit',
year: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit',
});
};
const formatSpeed = (speed) => {
if (speed == null) return 'N/A';
return `${Math.round(speed)} km/h`;
};
const formatBearing = (bearing) => {
if (bearing == null) return 'N/A';
const directions = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW'];
const index = Math.round(bearing / 45) % 8;
return `${Math.round(bearing)}° (${directions[index]})`;
};
const formatStatus = (status) => {
const statusMap = {
'INCOMING_AT': 'Llegando a estación',
'STOPPED_AT': 'Parado en estación',
'IN_TRANSIT_TO': 'En tránsito',
'UNKNOWN': 'Desconocido',
};
return statusMap[status] || status || 'N/A';
};
// Get train type - use API value if available, otherwise infer from ID
// Returns { text: string, isInferred: boolean }
const getDisplayTrainType = () => {
// If we have a valid train_type from API (not UNKNOWN), use it
if (train.train_type && train.train_type !== 'UNKNOWN') {
return { text: formatTrainTypeUtil(train.train_type), isInferred: false };
}
// Try to infer from train ID
const inferred = getTrainTypeFromId(train.train_id);
if (inferred) {
return { text: formatTrainTypeUtil(inferred.type, inferred.name), isInferred: true };
}
return { text: 'No especificado', isInferred: false };
};
const trainTypeInfo = getDisplayTrainType();
const formatOccupancy = (status) => {
const occupancyMap = {
'EMPTY': 'Vacío',
'MANY_SEATS_AVAILABLE': 'Muchos asientos libres',
'FEW_SEATS_AVAILABLE': 'Pocos asientos libres',
'STANDING_ROOM_ONLY': 'Solo de pie',
'CRUSHED_STANDING_ROOM_ONLY': 'Muy lleno',
'FULL': 'Completo',
'NOT_ACCEPTING_PASSENGERS': 'No admite pasajeros',
};
return occupancyMap[status] || status || 'N/A';
};
const formatDelay = (minutes) => {
if (minutes == null || minutes === 0) return 'Puntual';
if (minutes < 0) return `${Math.abs(minutes)} min adelantado`;
return `${minutes} min de retraso`;
};
const formatTime = (isoString) => {
if (!isoString) return null;
const date = new Date(isoString);
return date.toLocaleTimeString('es-ES', { hour: '2-digit', minute: '2-digit' });
};
// Check if we have fleet data
const hasFleetData = train.codLinea || train.retrasoMin !== undefined;
return (
<div className="train-info">
<div className="info-section">
<h3>Identificación</h3>
<div className="info-grid">
<div className="info-item">
<span className="info-label">ID Tren</span>
<span className="info-value">{train.train_id}</span>
</div>
<div className="info-item">
<span className="info-label">Tipo</span>
<span className="info-value">
{trainTypeInfo.text}
{trainTypeInfo.isInferred && (
<span style={{
marginLeft: '6px',
fontSize: '0.7rem',
color: '#888',
fontStyle: 'italic'
}}>
(inferido)
</span>
)}
</span>
</div>
{train.service_name && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Servicio</span>
<span className="info-value">{train.service_name}</span>
</div>
)}
{train.trip_id && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">ID Viaje</span>
<span className="info-value" style={{ fontSize: '0.85rem' }}>{train.trip_id}</span>
</div>
)}
{train.route_id && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Ruta</span>
<span className="info-value">{train.route_id}</span>
</div>
)}
{train.codLinea && (
<div className="info-item">
<span className="info-label">Linea</span>
<span className="info-value" style={{ fontWeight: 'bold', color: '#3498DB' }}>
{train.codLinea}
</span>
</div>
)}
{train.nucleo && (
<div className="info-item">
<span className="info-label">Nucleo</span>
<span className="info-value">{train.nucleo}</span>
</div>
)}
</div>
</div>
{hasFleetData && (
<div className="info-section">
<h3>Trayecto</h3>
<div className="info-grid">
{(train.codEstOrig || train.estacionOrigen) && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Origen</span>
<span className="info-value">
{train.estacionOrigen || train.codEstOrig}
{train.estacionOrigen && train.codEstOrig && (
<span style={{ marginLeft: '8px', color: '#888', fontSize: '0.8rem' }}>
({train.codEstOrig})
</span>
)}
</span>
</div>
)}
{(train.codEstDest || train.estacionDestino) && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Destino</span>
<span className="info-value">
{train.estacionDestino || train.codEstDest}
{train.estacionDestino && train.codEstDest && (
<span style={{ marginLeft: '8px', color: '#888', fontSize: '0.8rem' }}>
({train.codEstDest})
</span>
)}
</span>
</div>
)}
{(train.codEstAct || train.estacionActual) && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Estacion actual</span>
<span className="info-value">
{train.estacionActual || train.codEstAct}
{train.estacionActual && train.codEstAct && (
<span style={{ marginLeft: '8px', color: '#888', fontSize: '0.8rem' }}>
({train.codEstAct})
</span>
)}
</span>
</div>
)}
{(train.codEstSig || train.estacionSiguiente) && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Siguiente estacion</span>
<span className="info-value">
{train.estacionSiguiente || train.codEstSig}
{train.estacionSiguiente && train.codEstSig && (
<span style={{ marginLeft: '8px', color: '#888', fontSize: '0.8rem' }}>
({train.codEstSig})
</span>
)}
{train.horaLlegadaSigEst && (
<span style={{ marginLeft: '8px', color: '#666', fontSize: '0.9rem' }}>
- llegada: {formatTime(train.horaLlegadaSigEst)}
</span>
)}
</span>
</div>
)}
{train.via && (
<div className="info-item">
<span className="info-label">Via</span>
<span className="info-value">{train.via}</span>
</div>
)}
{train.accesible !== undefined && (
<div className="info-item">
<span className="info-label">Accesible</span>
<span className="info-value">{train.accesible ? 'Si' : 'No'}</span>
</div>
)}
</div>
</div>
)}
{train.retrasoMin !== undefined && (
<div className="info-section">
<h3>Puntualidad</h3>
<div className="info-grid">
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Estado</span>
<span className="info-value" style={{
color: train.retrasoMin > 0 ? '#E74C3C' : '#2ECC71',
fontWeight: 'bold'
}}>
{formatDelay(train.retrasoMin)}
</span>
</div>
</div>
</div>
)}
<div className="info-section">
<h3>Posición</h3>
<div className="info-grid">
<div className="info-item">
<span className="info-label">Latitud</span>
<span className="info-value">{train.latitude.toFixed(6)}</span>
</div>
<div className="info-item">
<span className="info-label">Longitud</span>
<span className="info-value">{train.longitude.toFixed(6)}</span>
</div>
</div>
</div>
<div className="info-section">
<h3>Estado</h3>
<div className="info-grid">
<div className="info-item">
<span className="info-label">Estado</span>
<span className="info-value">{formatStatus(train.status)}</span>
</div>
<div className="info-item">
<span className="info-label">Velocidad</span>
<span className="info-value">{formatSpeed(train.speed)}</span>
</div>
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Dirección</span>
<span className="info-value">{formatBearing(train.bearing)}</span>
</div>
{train.occupancy_status && (
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Ocupación</span>
<span className="info-value">{formatOccupancy(train.occupancy_status)}</span>
</div>
)}
</div>
</div>
<div className="info-section">
<h3>Tiempo</h3>
<div className="info-grid">
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Última actualización</span>
<span className="info-value" style={{ fontSize: '0.9rem' }}>
{formatTimestamp(train.timestamp)}
</span>
</div>
<div className="info-item" style={{ gridColumn: '1 / -1' }}>
<span className="info-label">Registrado</span>
<span className="info-value" style={{ fontSize: '0.9rem' }}>
{formatTimestamp(train.recorded_at)}
</span>
</div>
</div>
</div>
</div>
);
}
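
formatBearing buckets a 0-360° bearing into eight 45°-wide compass sectors, each centred on its label; the modulo catches bearings near due north that round up to 8. A worked check:

```js
// Same bucketing as formatBearing above (8 sectors, 45° each).
const directions = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW'];
const toCompass = (bearing) => directions[Math.round(bearing / 45) % 8];

console.log(toCompass(0));   // N
console.log(toCompass(95));  // E   (95 / 45 ≈ 2.11 → rounds to 2)
console.log(toCompass(350)); // N   (350 / 45 ≈ 7.78 → rounds to 8, wraps to 0)
```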

361
frontend/src/components/TrainMap.jsx Normal file

@@ -0,0 +1,361 @@
import React, { useMemo, useState } from 'react';
import { MapContainer, TileLayer, Marker, Popup, useMap, Tooltip } from 'react-leaflet';
import L from 'leaflet';
import 'leaflet/dist/leaflet.css';
import { useStations } from '../hooks/useStations';
import { getTrainTypeFromId, getTrainTypeColor } from '../utils/trainTypes';
// Fix for default marker icon
delete L.Icon.Default.prototype._getIconUrl;
L.Icon.Default.mergeOptions({
iconRetinaUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.7.1/images/marker-icon-2x.png',
iconUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.7.1/images/marker-icon.png',
shadowUrl: 'https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.7.1/images/marker-shadow.png',
});
// Create custom train icon with train SVG design and background
const createTrainIcon = (bearing, isSelected = false, customColor = null) => {
const color = customColor || (isSelected ? '#E74C3C' : '#1a1a2e');
const hasBearing = bearing !== null && bearing !== undefined;
const rotation = hasBearing ? bearing : 0;
// Train SVG icon - scaled up and with custom color
const trainSvg = `
<svg width="20" height="28" viewBox="0 0 10 14" fill="none" xmlns="http://www.w3.org/2000/svg">
<!-- Train window/top section -->
<path d="M3.87387 2.35688C3.60022 2.35688 3.37838 2.58903 3.37838 2.8754C3.37838 3.16177 3.60022 3.39392 3.87388 3.39392H6.12613C6.39978 3.39392 6.62162 3.16177 6.62162 2.8754C6.62162 2.58903 6.39978 2.35688 6.12613 2.35688H3.87387Z" fill="white"/>
<!-- Train front light -->
<path d="M4.72472 10.1042C4.8062 10.0472 4.902 10.0168 5 10.0168C5.13141 10.0168 5.25745 10.0715 5.35037 10.1687C5.44329 10.2659 5.4955 10.3978 5.4955 10.5353C5.4955 10.6379 5.46644 10.7382 5.41199 10.8234C5.35754 10.9087 5.28016 10.9752 5.18962 11.0144C5.09908 11.0536 4.99945 11.0639 4.90333 11.0439C4.80722 11.0239 4.71893 10.9745 4.64963 10.902C4.58034 10.8295 4.53314 10.7371 4.51403 10.6365C4.49491 10.5359 4.50472 10.4317 4.54222 10.3369C4.57972 10.2422 4.64323 10.1612 4.72472 10.1042Z" fill="white"/>
<!-- Train body outline -->
<path fill-rule="evenodd" clip-rule="evenodd" d="M9.05522 11.6668L9.68743 13.2847C9.82082 13.6261 9.5809 14 9.22848 14C9.02714 14 8.84578 13.8726 8.76955 13.6776L8.18555 12.1836L8.12293 12.2105C7.13363 12.6366 6.07099 12.8446 5.0014 12.8215L4.99861 12.8216C3.92902 12.8446 2.86637 12.6366 1.87707 12.2105L1.81445 12.1836L1.23042 13.6776C1.15421 13.8726 0.972893 14 0.771592 14C0.419198 14 0.179324 13.6261 0.31279 13.2848L0.944797 11.6686L0.899378 11.6364C0.585747 11.4141 0.30963 11.1388 0.0823217 10.8217L0 10.6909V2.28945C0.0276524 1.98124 0.137204 1.68727 0.31628 1.44083C0.495975 1.19353 0.738459 1.00407 1.01607 0.893983L1.01797 0.893158C2.28253 0.34503 3.63187 0.042379 5 0C6.36812 0.0424441 7.71744 0.345123 8.98202 0.893189L8.98393 0.893947C9.26151 1.00409 9.50395 1.1936 9.68364 1.44089C9.86271 1.68733 9.97228 1.98127 10 2.28945V10.6907L9.91779 10.8198C9.69046 11.1369 9.4143 11.4123 9.10062 11.6346L9.05522 11.6668ZM1.10865 2.08512C1.04029 2.18656 1.00123 2.30642 0.996121 2.43043L0.996075 2.43502C1.02193 3.6747 1.37848 5.12344 2.04792 6.26436C2.71743 7.40538 3.70721 8.24915 5 8.24915C6.29279 8.24915 7.28257 7.40538 7.95208 6.26436C8.62152 5.12344 8.97822 3.6747 9.00407 2.43502L9.00388 2.43043C8.99877 2.30642 8.95971 2.18656 8.89135 2.08512C8.82307 1.98379 8.72842 1.90511 8.61866 1.85842C7.46927 1.36263 6.24485 1.08432 5.00247 1.03687L4.99754 1.03706C3.75522 1.08451 2.53085 1.3626 1.38152 1.85835C1.27168 1.90503 1.17697 1.98374 1.10865 2.08512ZM9.00901 10.3547V6.39395L8.88325 6.617C7.99932 8.18476 6.64652 9.28619 5 9.28619C3.35348 9.28619 2.00068 8.18476 1.11675 6.617L0.990991 6.39395V10.3552L1.0085 10.3754C1.33139 10.7482 2.46073 11.7845 5 11.7845C7.53929 11.7845 8.66295 10.7529 8.99143 10.3749L9.00901 10.3547Z" fill="${color}"/>
</svg>
`;
// If we have bearing data, rotate the train icon
if (hasBearing) {
return L.divIcon({
html: `
<div style="
width: 36px;
height: 36px;
display: flex;
align-items: center;
justify-content: center;
background: white;
border-radius: 50%;
box-shadow: 0 2px 6px rgba(0,0,0,0.3);
border: 2px solid ${color};
">
<div style="transform: rotate(${rotation}deg);">
${trainSvg}
</div>
</div>
`,
className: 'train-marker',
iconSize: [36, 36],
iconAnchor: [18, 18],
});
}
// No bearing - show train icon without rotation
return L.divIcon({
html: `
<div style="
width: 36px;
height: 36px;
display: flex;
align-items: center;
justify-content: center;
background: white;
border-radius: 50%;
box-shadow: 0 2px 6px rgba(0,0,0,0.3);
border: 2px solid ${color};
">
${trainSvg}
</div>
`,
className: 'train-marker',
iconSize: [36, 36],
iconAnchor: [18, 18],
});
};
function MapUpdater({ center, zoom }) {
const map = useMap();
React.useEffect(() => {
if (center) {
map.setView(center, zoom);
}
}, [center, zoom, map]);
return null;
}
// Get station icon size based on type
const getStationSize = (stationType) => {
switch (stationType) {
case 'MAJOR': return 20;
case 'MEDIUM': return 16;
default: return 12;
}
};
// Create Cercanías station icon
const createStationIcon = (stationType) => {
const size = getStationSize(stationType);
// Cercanías logo SVG
const stationSvg = `
<svg width="${size}" height="${size}" viewBox="0 0 300 300" xmlns="http://www.w3.org/2000/svg">
<circle cx="150" cy="150" r="150" style="fill:rgb(239,44,48);"/>
<path d="M150,100C177.429,100 200,122.571 200,150C200,177.429 177.429,200 150,200C122.571,200 100,177.429 100,150L50,150C50,204.858 95.142,250 150,250C204.858,250 250,204.858 250,150C250,95.142 204.858,50 150,50L150,100Z" style="fill:white;"/>
</svg>
`;
return L.divIcon({
html: `
<div style="
width: ${size}px;
height: ${size}px;
display: flex;
align-items: center;
justify-content: center;
filter: drop-shadow(0 1px 2px rgba(0,0,0,0.3));
">
${stationSvg}
</div>
`,
className: 'station-marker',
iconSize: [size, size],
iconAnchor: [size / 2, size / 2],
});
};
export function TrainMap({ trains, selectedTrain, onTrainClick }) {
const { stations } = useStations();
const [showStations, setShowStations] = useState(true);
const center = useMemo(() => {
if (selectedTrain) {
return [selectedTrain.latitude, selectedTrain.longitude];
}
return [40.4168, -3.7038]; // Madrid center
}, [selectedTrain]);
const zoom = selectedTrain ? 12 : 6;
// Get train color based on movement status
// Green = moving, Red = stopped (STOPPED_AT or INCOMING_AT), Purple = selected
const getTrainColor = (train, isSelected) => {
if (isSelected) return '#9B59B6'; // Purple for selected
// Check if train is stopped or moving
const isStopped = train.status === 'STOPPED_AT' || train.status === 'INCOMING_AT';
if (isStopped) {
return '#E74C3C'; // Red for stopped
}
return '#27AE60'; // Green for moving
};
return (
<div className="map-container">
{/* Station toggle button */}
<div style={{
position: 'absolute',
top: '10px',
right: '10px',
zIndex: 1000,
background: 'white',
padding: '8px 12px',
borderRadius: '6px',
boxShadow: '0 2px 6px rgba(0,0,0,0.2)',
display: 'flex',
alignItems: 'center',
gap: '8px',
fontSize: '0.85rem',
}}>
<label style={{ display: 'flex', alignItems: 'center', gap: '6px', cursor: 'pointer' }}>
<input
type="checkbox"
checked={showStations}
onChange={(e) => setShowStations(e.target.checked)}
/>
Mostrar estaciones
</label>
</div>
<MapContainer
center={center}
zoom={zoom}
style={{ width: '100%', height: '100%' }}
zoomControl={true}
>
<MapUpdater center={selectedTrain ? center : null} zoom={zoom} />
<TileLayer
attribution='&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
/>
{/* Stations layer */}
{showStations && stations.map((station) => (
<Marker
key={station.station_id}
position={[station.latitude, station.longitude]}
icon={createStationIcon(station.station_type)}
>
<Tooltip direction="top" offset={[0, -10]} permanent={false}>
<div style={{ textAlign: 'center' }}>
<strong>{station.station_name}</strong>
{station.metadata?.platforms && (
<div style={{ fontSize: '0.8rem', color: '#666' }}>
{station.metadata.platforms} andenes
</div>
)}
</div>
</Tooltip>
<Popup>
<div style={{ minWidth: '220px', maxWidth: '280px' }}>
<h3 style={{ margin: '0 0 8px 0', fontSize: '1rem' }}>
{station.station_name}
</h3>
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Código:</strong> {station.station_code}
</p>
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Tipo:</strong> {station.station_type === 'MAJOR' ? 'Principal' : station.station_type === 'MEDIUM' ? 'Media' : 'Secundaria'}
</p>
{station.metadata?.nucleo_name && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Núcleo:</strong> {station.metadata.nucleo_name}
</p>
)}
{station.metadata?.lineas && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Líneas:</strong>{' '}
{station.metadata.lineas.split(',').map((linea, i) => (
<span key={i} style={{
display: 'inline-block',
background: '#3498DB',
color: 'white',
padding: '1px 6px',
borderRadius: '4px',
fontSize: '0.8rem',
marginRight: '4px',
marginBottom: '2px'
}}>
{linea.trim()}
</span>
))}
</p>
)}
{station.metadata?.metro && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Metro:</strong> {station.metadata.metro}
</p>
)}
{(station.metadata?.bus_urbano || station.metadata?.bus_interurbano) && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Bus:</strong>{' '}
{station.metadata.bus_urbano && <span>Urbano</span>}
{station.metadata.bus_urbano && station.metadata.bus_interurbano && ', '}
{station.metadata.bus_interurbano && <span>Interurbano</span>}
</p>
)}
{station.metadata?.parking_bicis && (
<p style={{ margin: '4px 0', fontSize: '0.85rem', color: '#27ae60' }}>
<strong>Bicis:</strong> {station.metadata.parking_bicis}
</p>
)}
{station.metadata?.accesibilidad && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Accesibilidad:</strong> {station.metadata.accesibilidad}
</p>
)}
{station.metadata?.platforms && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Andenes:</strong> {station.metadata.platforms}
</p>
)}
{station.metadata?.capacity && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Capacidad:</strong> {station.metadata.capacity}
</p>
)}
</div>
</Popup>
</Marker>
))}
{/* Trains layer */}
{trains.map((train) => {
const isSelected = selectedTrain?.train_id === train.train_id;
const trainColor = getTrainColor(train, isSelected);
// Offset train position slightly when stopped at a station
// so both station and train markers can be clicked
const isAtStation = train.status === 'STOPPED_AT' || train.codEstAct;
const offsetLat = isAtStation ? 0.0003 : 0; // ~30m offset north
const offsetLng = isAtStation ? 0.0003 : 0; // ~30m offset east
return (
<Marker
key={train.train_id}
position={[train.latitude + offsetLat, train.longitude + offsetLng]}
icon={createTrainIcon(train.bearing, isSelected, trainColor)}
eventHandlers={{
click: () => onTrainClick(train),
}}
>
<Popup>
<div style={{ minWidth: '200px' }}>
<h3 style={{ margin: '0 0 8px 0', fontSize: '1rem' }}>
Tren {train.train_id}
{train.codLinea && (
<span style={{ marginLeft: '8px', padding: '2px 6px', background: '#3498DB', color: 'white', borderRadius: '4px', fontSize: '0.8rem' }}>
{train.codLinea}
</span>
)}
</h3>
{train.route_id && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Ruta:</strong> {train.route_id}
</p>
)}
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Estado:</strong> {train.status || 'N/A'}
</p>
{train.speed != null && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Velocidad:</strong> {Math.round(train.speed)} km/h
</p>
)}
{train.retrasoMin !== undefined && (
<p style={{ margin: '4px 0', fontSize: '0.9rem', color: train.retrasoMin > 0 ? '#E74C3C' : '#2ECC71', fontWeight: 'bold' }}>
{train.retrasoMin === 0 ? 'Puntual' : train.retrasoMin > 0 ? `+${train.retrasoMin} min retraso` : `${train.retrasoMin} min adelantado`}
</p>
)}
{(train.estacionSiguiente || train.codEstSig) && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Próxima:</strong> {train.estacionSiguiente || train.codEstSig}
</p>
)}
{(train.estacionDestino || train.codEstDest) && (
<p style={{ margin: '4px 0', fontSize: '0.9rem' }}>
<strong>Destino:</strong> {train.estacionDestino || train.codEstDest}
</p>
)}
<p style={{ margin: '8px 0 0 0', fontSize: '0.8rem', color: '#666' }}>
{new Date(train.timestamp).toLocaleString('es-ES')}
</p>
</div>
</Popup>
</Marker>
);
})}
</MapContainer>
</div>
);
}

183
frontend/src/hooks/useDashboard.js Normal file
View File

@@ -0,0 +1,183 @@
import { useState, useEffect, useCallback, useRef } from 'react';
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3000';
export function useDashboard() {
const [isLive, setIsLive] = useState(true);
const [currentTime, setCurrentTime] = useState(new Date());
const [stats, setStats] = useState(null);
const [timeline, setTimeline] = useState([]);
const [linesRanking, setLinesRanking] = useState([]);
const [availableRange, setAvailableRange] = useState({ earliest: null, latest: null });
const [isLoading, setIsLoading] = useState(true);
const [error, setError] = useState(null);
const refreshIntervalRef = useRef(null);
// Fetch available data range
const fetchAvailableRange = useCallback(async () => {
try {
const response = await fetch(`${API_URL}/dashboard/available-range`);
if (!response.ok) throw new Error('Failed to fetch available range');
const data = await response.json();
setAvailableRange({
earliest: data.earliest ? new Date(data.earliest) : null,
latest: data.latest ? new Date(data.latest) : null,
});
} catch (err) {
console.error('Error fetching available range:', err);
}
}, []);
// Fetch current/live stats
const fetchCurrentStats = useCallback(async () => {
try {
const response = await fetch(`${API_URL}/dashboard/current`);
if (!response.ok) throw new Error('Failed to fetch current stats');
const data = await response.json();
setStats(data);
setCurrentTime(new Date(data.timestamp));
setError(null);
} catch (err) {
console.error('Error fetching current stats:', err);
setError(err.message);
}
}, []);
// Fetch snapshot at specific time
const fetchSnapshotStats = useCallback(async (timestamp) => {
try {
setIsLoading(true);
const response = await fetch(`${API_URL}/dashboard/snapshot?timestamp=${timestamp.toISOString()}`);
if (!response.ok) throw new Error('Failed to fetch snapshot stats');
const data = await response.json();
setStats(data);
setError(null);
} catch (err) {
console.error('Error fetching snapshot stats:', err);
setError(err.message);
} finally {
setIsLoading(false);
}
}, []);
// Fetch timeline data
const fetchTimeline = useCallback(async (start, end, interval = 5) => {
try {
const params = new URLSearchParams({
start: start.toISOString(),
end: end.toISOString(),
interval: interval.toString(),
});
const response = await fetch(`${API_URL}/dashboard/timeline?${params}`);
if (!response.ok) throw new Error('Failed to fetch timeline');
const data = await response.json();
setTimeline(data.data);
} catch (err) {
console.error('Error fetching timeline:', err);
}
}, []);
// Fetch lines ranking
const fetchLinesRanking = useCallback(async (timestamp, hours = 24) => {
try {
const params = new URLSearchParams({
timestamp: timestamp.toISOString(),
hours: hours.toString(),
});
const response = await fetch(`${API_URL}/dashboard/lines-ranking?${params}`);
if (!response.ok) throw new Error('Failed to fetch lines ranking');
const data = await response.json();
setLinesRanking(data);
} catch (err) {
console.error('Error fetching lines ranking:', err);
}
}, []);
// Seek to specific time
const seekTo = useCallback((timestamp) => {
setIsLive(false);
setCurrentTime(timestamp);
fetchSnapshotStats(timestamp);
// Fetch timeline for 2 hours around the timestamp
const start = new Date(timestamp.getTime() - 3600000);
const end = new Date(timestamp.getTime() + 3600000);
fetchTimeline(start, end, 5);
fetchLinesRanking(timestamp, 24);
}, [fetchSnapshotStats, fetchTimeline, fetchLinesRanking]);
// Go live
const goLive = useCallback(() => {
setIsLive(true);
setCurrentTime(new Date());
fetchCurrentStats();
// Fetch last hour timeline
const now = new Date();
const start = new Date(now.getTime() - 3600000);
fetchTimeline(start, now, 5);
fetchLinesRanking(now, 24);
}, [fetchCurrentStats, fetchTimeline, fetchLinesRanking]);
// Skip forward/backward
const skip = useCallback((minutes) => {
const newTime = new Date(currentTime.getTime() + minutes * 60000);
seekTo(newTime);
}, [currentTime, seekTo]);
// Initial load
useEffect(() => {
const init = async () => {
setIsLoading(true);
await fetchAvailableRange();
await fetchCurrentStats();
const now = new Date();
const start = new Date(now.getTime() - 3600000);
await fetchTimeline(start, now, 5);
await fetchLinesRanking(now, 24);
setIsLoading(false);
};
init();
}, [fetchAvailableRange, fetchCurrentStats, fetchTimeline, fetchLinesRanking]);
// Auto-refresh when live
useEffect(() => {
if (isLive) {
refreshIntervalRef.current = setInterval(() => {
fetchCurrentStats();
const now = new Date();
const start = new Date(now.getTime() - 3600000);
fetchTimeline(start, now, 5);
}, 10000); // Refresh every 10 seconds
} else {
if (refreshIntervalRef.current) {
clearInterval(refreshIntervalRef.current);
refreshIntervalRef.current = null;
}
}
return () => {
if (refreshIntervalRef.current) {
clearInterval(refreshIntervalRef.current);
}
};
}, [isLive, fetchCurrentStats, fetchTimeline]);
return {
isLive,
currentTime,
stats,
timeline,
linesRanking,
availableRange,
isLoading,
error,
seekTo,
goLive,
skip,
setIsLive,
};
}
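A minimal consumption sketch (illustrative only: the component name and markup are assumptions, and the exact shape of `stats` depends on what `/dashboard/current` returns):

```
import { useDashboard } from './hooks/useDashboard';

function DashboardExample() {
  const { isLive, currentTime, stats, isLoading, error, goLive, skip } = useDashboard();

  if (isLoading) return <div className="dashboard-loading">Cargando...</div>;
  if (error) return <div className="dashboard-error">{error}</div>;

  return (
    <div className="dashboard">
      <p>{isLive ? 'En directo' : currentTime.toLocaleString('es-ES')}</p>
      <button onClick={() => skip(-30)}>-30 min</button>
      <button onClick={goLive} disabled={isLive}>Live</button>
      {/* stats shape depends on the /dashboard/current response */}
      {stats && <pre>{JSON.stringify(stats, null, 2)}</pre>}
    </div>
  );
}
```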

44
frontend/src/hooks/useStations.js Normal file
View File

@@ -0,0 +1,44 @@
import { useState, useEffect } from 'react';
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3000';
export function useStations() {
const [stations, setStations] = useState([]);
const [isLoading, setIsLoading] = useState(true);
const [error, setError] = useState(null);
useEffect(() => {
const fetchStations = async () => {
try {
const response = await fetch(`${API_URL}/stations`);
if (!response.ok) {
throw new Error('Failed to fetch stations');
}
const data = await response.json();
// Convert lat/lon to numbers
const stationsWithNumbers = data.map(station => ({
...station,
latitude: parseFloat(station.latitude),
longitude: parseFloat(station.longitude),
}));
setStations(stationsWithNumbers);
setError(null);
} catch (err) {
console.error('Error fetching stations:', err);
setError(err.message);
} finally {
setIsLoading(false);
}
};
fetchStations();
}, []);
return {
stations,
isLoading,
error,
};
}

221
frontend/src/hooks/useTimeline.js Normal file
View File

@@ -0,0 +1,221 @@
import { useState, useEffect, useCallback, useRef } from 'react';
import { calculateBearing, calculateDistance, MIN_DISTANCE_FOR_BEARING } from '../utils/bearing';
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3000';
/**
* Pre-calculate bearings for historical data based on consecutive positions
* @param {Array} positions - Array of positions sorted by timestamp ASC
* @returns {Array} - Positions with calculated bearings
*/
function calculateHistoricalBearings(positions) {
// Group positions by train_id
const trainPositions = new Map();
for (const pos of positions) {
if (!trainPositions.has(pos.train_id)) {
trainPositions.set(pos.train_id, []);
}
trainPositions.get(pos.train_id).push(pos);
}
// Calculate bearings for each train's positions
const result = [];
for (const [trainId, trainPos] of trainPositions) {
// Sort by timestamp (should already be sorted, but ensure)
trainPos.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
for (let i = 0; i < trainPos.length; i++) {
const current = trainPos[i];
let bearing = current.bearing; // Use existing bearing if available
if (bearing === null || bearing === undefined) {
// Calculate bearing from previous position
if (i > 0) {
const prev = trainPos[i - 1];
const distance = calculateDistance(prev.latitude, prev.longitude, current.latitude, current.longitude);
if (distance >= MIN_DISTANCE_FOR_BEARING) {
bearing = calculateBearing(prev.latitude, prev.longitude, current.latitude, current.longitude);
}
}
}
result.push({
...current,
bearing,
});
}
}
// Re-sort by timestamp to maintain chronological order
result.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
return result;
}
export function useTimeline() {
const [isTimelineMode, setIsTimelineMode] = useState(false);
const [isPlaying, setIsPlaying] = useState(false);
const [currentTime, setCurrentTime] = useState(Date.now());
const [historyData, setHistoryData] = useState([]);
const [isLoading, setIsLoading] = useState(false);
const [playbackSpeed, setPlaybackSpeed] = useState(1); // 1x, 2x, 5x, 10x
const playIntervalRef = useRef(null);
// Time range: last hour by default
const [timeRange, setTimeRange] = useState({
start: Date.now() - 3600000, // 1 hour ago
end: Date.now(),
});
// Load all historical positions for the time range
// Accepts an optional range so callers can load a range that is not yet in state
const loadAllHistory = useCallback(async (range = timeRange) => {
setIsLoading(true);
try {
const startISO = new Date(range.start).toISOString();
const endISO = new Date(range.end).toISOString();
// Fetch all positions in the time range with a single request
const response = await fetch(
`${API_URL}/trains/history/all?from=${startISO}&to=${endISO}&limit=10000`
);
if (!response.ok) {
console.error('Failed to fetch historical data');
setIsLoading(false);
return;
}
const allHistory = await response.json();
// Calculate bearings based on consecutive positions
const historyWithBearings = calculateHistoricalBearings(allHistory);
setHistoryData(historyWithBearings);
console.log(`Loaded ${historyWithBearings.length} historical positions with bearings calculated`);
} catch (err) {
console.error('Error loading history:', err);
}
setIsLoading(false);
}, [timeRange]);
// Get positions at a specific time
const getPositionsAtTime = useCallback((timestamp) => {
if (historyData.length === 0) return [];
// Group by train_id and get the closest position to the timestamp
const trainPositions = new Map();
for (const position of historyData) {
const posTime = new Date(position.timestamp).getTime();
// Only consider positions up to the current playback time
if (posTime <= timestamp) {
const existing = trainPositions.get(position.train_id);
if (!existing || new Date(existing.timestamp).getTime() < posTime) {
// latitude/longitude already come as numbers from the API
trainPositions.set(position.train_id, position);
}
}
}
return Array.from(trainPositions.values());
}, [historyData]);
// Toggle timeline mode
const toggleTimelineMode = useCallback(() => {
if (!isTimelineMode) {
// Entering timeline mode
setIsTimelineMode(true);
setCurrentTime(timeRange.start);
loadAllHistory();
} else {
// Exiting timeline mode
setIsTimelineMode(false);
setIsPlaying(false);
setHistoryData([]);
}
}, [isTimelineMode, timeRange.start, loadAllHistory]);
// Play/pause
const togglePlay = useCallback(() => {
setIsPlaying(prev => !prev);
}, []);
// Skip forward/backward
const skip = useCallback((seconds) => {
setCurrentTime(prev => {
const newTime = prev + (seconds * 1000);
return Math.max(timeRange.start, Math.min(timeRange.end, newTime));
});
}, [timeRange]);
// Seek to specific time
const seekTo = useCallback((timestamp) => {
setCurrentTime(timestamp);
setIsPlaying(false);
}, []);
// Change playback speed
const changeSpeed = useCallback((speed) => {
setPlaybackSpeed(speed);
}, []);
// Update time range
const updateTimeRange = useCallback((start, end) => {
setTimeRange({ start, end });
setCurrentTime(start);
if (isTimelineMode) {
// Pass the new range explicitly: the memoized loadAllHistory still closes over the previous timeRange
loadAllHistory({ start, end });
}
}, [isTimelineMode, loadAllHistory]);
// Playback effect
useEffect(() => {
if (isPlaying && isTimelineMode) {
playIntervalRef.current = setInterval(() => {
setCurrentTime(prev => {
const next = prev + (1000 * playbackSpeed); // Each 100ms tick advances 1s of data time, scaled by speed
if (next >= timeRange.end) {
setIsPlaying(false);
return timeRange.end;
}
return next;
});
}, 100); // Update every 100ms for smooth animation
} else {
if (playIntervalRef.current) {
clearInterval(playIntervalRef.current);
playIntervalRef.current = null;
}
}
return () => {
if (playIntervalRef.current) {
clearInterval(playIntervalRef.current);
}
};
}, [isPlaying, isTimelineMode, playbackSpeed, timeRange.end]);
// Get current positions based on mode
const timelinePositions = isTimelineMode ? getPositionsAtTime(currentTime) : [];
return {
isTimelineMode,
isPlaying,
isLoading,
currentTime,
timeRange,
playbackSpeed,
timelinePositions,
historyData,
toggleTimelineMode,
togglePlay,
skip,
seekTo,
changeSpeed,
updateTimeRange,
};
}
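A sketch of how this hook and useTrains feed the same map (the wiring below is an assumption about App.jsx, which is not shown in this excerpt):

```
import { useTrains } from './hooks/useTrains';
import { useTimeline } from './hooks/useTimeline';
import { TrainMap } from './components/TrainMap';

function MapViewExample() {
  const { trains, selectedTrain, selectTrain } = useTrains();
  const { isTimelineMode, timelinePositions, toggleTimelineMode } = useTimeline();

  return (
    <>
      <button className="timeline-btn" onClick={toggleTimelineMode}>
        {isTimelineMode ? 'Volver a tiempo real' : 'Ver histórico'}
      </button>
      {/* In timeline mode the map renders reconstructed historical positions */}
      <TrainMap
        trains={isTimelineMode ? timelinePositions : trains}
        selectedTrain={selectedTrain}
        onTrainClick={selectTrain}
      />
    </>
  );
}
```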

230
frontend/src/hooks/useTrains.js Normal file
View File

@@ -0,0 +1,230 @@
import { useState, useEffect, useCallback, useRef } from 'react';
import { io } from 'socket.io-client';
import { calculateBearing, calculateDistance, MIN_DISTANCE_FOR_BEARING } from '../utils/bearing';
const API_URL = import.meta.env.VITE_API_URL || 'http://localhost:3000';
const WS_URL = import.meta.env.VITE_WS_URL || 'http://localhost:3000';
// Calculate bearing for trains based on previous positions
function addCalculatedBearings(newTrains, previousPositions) {
return newTrains.map(train => {
// If train already has bearing from API, use it
if (train.bearing !== null && train.bearing !== undefined) {
previousPositions.set(train.train_id, { lat: train.latitude, lon: train.longitude });
return train;
}
const prevPos = previousPositions.get(train.train_id);
let calculatedBearing = null;
if (prevPos) {
const distance = calculateDistance(prevPos.lat, prevPos.lon, train.latitude, train.longitude);
// Only calculate bearing if the train moved enough
if (distance >= MIN_DISTANCE_FOR_BEARING) {
calculatedBearing = calculateBearing(prevPos.lat, prevPos.lon, train.latitude, train.longitude);
}
}
// Update previous position
previousPositions.set(train.train_id, { lat: train.latitude, lon: train.longitude });
return {
...train,
bearing: calculatedBearing,
};
});
}
export function useTrains() {
const [trains, setTrains] = useState([]);
const [selectedTrain, setSelectedTrain] = useState(null);
const [isConnected, setIsConnected] = useState(false);
const [error, setError] = useState(null);
const [stats, setStats] = useState({
active_trains: 0,
last_update: null,
});
const socketRef = useRef(null);
// Store previous positions to calculate bearing
const previousPositionsRef = useRef(new Map());
// Initialize WebSocket connection
useEffect(() => {
console.log('Connecting to WebSocket:', WS_URL);
const socket = io(WS_URL, {
transports: ['websocket', 'polling'],
reconnection: true,
reconnectionDelay: 1000,
reconnectionAttempts: 5,
});
socket.on('connect', () => {
console.log('WebSocket connected');
setIsConnected(true);
setError(null);
});
socket.on('disconnect', () => {
console.log('WebSocket disconnected');
setIsConnected(false);
});
socket.on('connect_error', (err) => {
console.error('WebSocket connection error:', err);
setError(err.message);
setIsConnected(false);
});
socket.on('trains:update', (positions) => {
console.log('Received train updates:', positions.length);
const trainsWithBearing = addCalculatedBearings(positions, previousPositionsRef.current);
setTrains(trainsWithBearing);
});
socket.on('train:update', (position) => {
console.log('Received individual train update:', position.train_id);
setTrains((prev) => {
const [updatedPosition] = addCalculatedBearings([position], previousPositionsRef.current);
const index = prev.findIndex(t => t.train_id === position.train_id);
if (index >= 0) {
const updated = [...prev];
updated[index] = updatedPosition;
return updated;
}
return [...prev, updatedPosition];
});
});
socketRef.current = socket;
return () => {
socket.disconnect();
};
}, []);
// Fetch initial data
useEffect(() => {
const fetchInitialData = async () => {
try {
const response = await fetch(`${API_URL}/trains/current`);
if (!response.ok) {
throw new Error('Failed to fetch trains');
}
const data = await response.json();
console.log('Fetched initial trains:', data.length);
// Store initial positions (no bearing calculation for first load)
data.forEach(train => {
previousPositionsRef.current.set(train.train_id, {
lat: train.latitude,
lon: train.longitude,
});
});
setTrains(data);
} catch (err) {
console.error('Error fetching initial data:', err);
setError(err.message);
}
};
fetchInitialData();
}, []);
// Fetch stats periodically
useEffect(() => {
const fetchStats = async () => {
try {
const response = await fetch(`${API_URL}/stats`);
if (!response.ok) return;
const data = await response.json();
setStats(data);
} catch (err) {
console.error('Error fetching stats:', err);
}
};
fetchStats();
const interval = setInterval(fetchStats, 30000);
return () => clearInterval(interval);
}, []);
// Subscribe to specific train
const subscribeTrain = useCallback((trainId) => {
if (socketRef.current && trainId) {
socketRef.current.emit('subscribe:train', trainId);
}
}, []);
// Unsubscribe from specific train
const unsubscribeTrain = useCallback((trainId) => {
if (socketRef.current && trainId) {
socketRef.current.emit('unsubscribe:train', trainId);
}
}, []);
// Fetch full train details
const fetchTrainDetails = useCallback(async (trainId) => {
try {
const response = await fetch(`${API_URL}/trains/${trainId}`);
if (!response.ok) return null;
return await response.json();
} catch (err) {
console.error('Error fetching train details:', err);
return null;
}
}, []);
// Select train and fetch full details
const selectTrain = useCallback(async (train) => {
if (selectedTrain) {
unsubscribeTrain(selectedTrain.train_id);
}
if (train) {
subscribeTrain(train.train_id);
// Fetch full train details including type, service name, etc.
const details = await fetchTrainDetails(train.train_id);
if (details) {
// Merge position data with full train details
// Keep all fleet data from the original train object (codLinea, estaciones, etc.)
setSelectedTrain({
...train,
train_type: details.train_type,
service_name: details.service_name,
first_seen: details.first_seen,
last_seen: details.last_seen,
metadata: details.metadata,
// Also merge fleet_data if available from details
...(details.fleet_data && {
codLinea: details.fleet_data.codLinea,
retrasoMin: details.fleet_data.retrasoMin,
codEstAct: details.fleet_data.codEstAct,
estacionActual: details.fleet_data.estacionActual,
codEstSig: details.fleet_data.codEstSig,
estacionSiguiente: details.fleet_data.estacionSiguiente,
horaLlegadaSigEst: details.fleet_data.horaLlegadaSigEst,
codEstDest: details.fleet_data.codEstDest,
estacionDestino: details.fleet_data.estacionDestino,
codEstOrig: details.fleet_data.codEstOrig,
estacionOrigen: details.fleet_data.estacionOrigen,
nucleo: details.fleet_data.nucleo,
accesible: details.fleet_data.accesible,
via: details.fleet_data.via,
}),
});
return;
}
}
setSelectedTrain(train);
}, [selectedTrain, subscribeTrain, unsubscribeTrain, fetchTrainDetails]);
return {
trains,
selectedTrain,
selectTrain,
isConnected,
error,
stats,
};
}

10
frontend/src/main.jsx Normal file
View File

@@ -0,0 +1,10 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App.jsx';
import './styles/index.css';
ReactDOM.createRoot(document.getElementById('root')).render(
<React.StrictMode>
<App />
</React.StrictMode>,
);

857
frontend/src/styles/index.css Normal file
View File

@@ -0,0 +1,857 @@
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
overflow: hidden;
}
#root {
width: 100vw;
height: 100vh;
display: flex;
flex-direction: column;
}
.app {
width: 100%;
height: 100%;
display: flex;
flex-direction: column;
}
.header {
height: 60px;
background: #1a1a2e;
color: white;
display: flex;
align-items: center;
justify-content: space-between;
padding: 0 20px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.15);
z-index: 1000;
}
.header-title {
font-size: 1.5rem;
font-weight: bold;
display: flex;
align-items: center;
gap: 10px;
}
.header-stats {
display: flex;
gap: 20px;
font-size: 0.9rem;
}
.stat {
display: flex;
flex-direction: column;
align-items: flex-end;
}
.stat-label {
opacity: 0.7;
font-size: 0.8rem;
}
.stat-value {
font-weight: bold;
font-size: 1.1rem;
}
.main-content {
flex: 1;
display: flex;
position: relative;
overflow: hidden;
}
.map-container {
flex: 1;
position: relative;
}
.leaflet-container {
width: 100%;
height: 100%;
}
.sidebar {
width: 350px;
background: white;
border-left: 1px solid #e0e0e0;
display: flex;
flex-direction: column;
overflow: hidden;
}
.sidebar-header {
padding: 15px 20px;
border-bottom: 1px solid #e0e0e0;
background: #f5f5f5;
}
.sidebar-header h2 {
font-size: 1.2rem;
margin-bottom: 5px;
}
.sidebar-subtitle {
font-size: 0.85rem;
color: #666;
}
.sidebar-content {
flex: 1;
overflow-y: auto;
padding: 20px;
}
.train-info {
display: flex;
flex-direction: column;
gap: 15px;
}
.info-section {
background: #f9f9f9;
padding: 15px;
border-radius: 8px;
border: 1px solid #e0e0e0;
}
.info-section h3 {
font-size: 1rem;
margin-bottom: 10px;
color: #333;
}
.info-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 10px;
}
.info-item {
display: flex;
flex-direction: column;
}
.info-label {
font-size: 0.75rem;
color: #666;
text-transform: uppercase;
letter-spacing: 0.5px;
}
.info-value {
font-size: 1rem;
font-weight: 500;
color: #333;
margin-top: 2px;
}
.timeline-container {
position: absolute;
bottom: 20px;
left: 20px;
right: 370px;
background: white;
border-radius: 12px;
padding: 20px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
z-index: 1000;
}
.timeline-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 15px;
}
.timeline-title {
font-size: 1rem;
font-weight: bold;
color: #333;
}
.timeline-controls {
display: flex;
gap: 10px;
}
.timeline-btn {
background: #1a1a2e;
color: white;
border: none;
border-radius: 6px;
padding: 8px 16px;
cursor: pointer;
font-size: 0.9rem;
display: flex;
align-items: center;
gap: 5px;
transition: background 0.2s;
}
.timeline-btn:hover {
background: #2a2a3e;
}
.timeline-btn:disabled {
background: #ccc;
cursor: not-allowed;
}
.timeline-slider {
width: 100%;
height: 6px;
border-radius: 3px;
background: #e0e0e0;
outline: none;
-webkit-appearance: none;
appearance: none;
cursor: pointer;
}
.timeline-slider::-webkit-slider-thumb {
-webkit-appearance: none;
appearance: none;
width: 18px;
height: 18px;
border-radius: 50%;
background: #1a1a2e;
cursor: pointer;
}
.timeline-slider::-moz-range-thumb {
width: 18px;
height: 18px;
border-radius: 50%;
background: #1a1a2e;
cursor: pointer;
border: none;
}
.timeline-time {
text-align: center;
margin-top: 10px;
font-size: 0.9rem;
color: #666;
}
.loading {
display: flex;
justify-content: center;
align-items: center;
height: 100%;
font-size: 1.2rem;
color: #666;
}
.error {
display: flex;
justify-content: center;
align-items: center;
height: 100%;
flex-direction: column;
gap: 10px;
color: #d32f2f;
}
.empty-state {
display: flex;
justify-content: center;
align-items: center;
height: 100%;
flex-direction: column;
gap: 10px;
color: #666;
font-size: 1rem;
}
.status-indicator {
display: inline-block;
width: 8px;
height: 8px;
border-radius: 50%;
margin-right: 8px;
}
.status-indicator.active {
background: #4caf50;
box-shadow: 0 0 4px #4caf50;
}
.status-indicator.inactive {
background: #9e9e9e;
}
.status-indicator.timeline {
background: #ff9800;
box-shadow: 0 0 4px #ff9800;
}
/* Timeline Styles */
.timeline-realtime {
display: flex;
justify-content: space-between;
align-items: center;
}
.timeline-realtime-info {
display: flex;
align-items: center;
gap: 8px;
color: #666;
}
.timeline-title-section {
display: flex;
align-items: center;
gap: 10px;
}
.timeline-btn-primary {
background: #ff9800;
}
.timeline-btn-primary:hover {
background: #f57c00;
}
.timeline-btn-close {
background: transparent;
color: #666;
padding: 6px;
}
.timeline-btn-close:hover {
background: #f0f0f0;
color: #333;
}
.timeline-btn-play {
background: #4caf50;
padding: 8px 20px;
}
.timeline-btn-play:hover {
background: #43a047;
}
.timeline-slider-container {
position: relative;
margin: 15px 0;
}
.timeline-progress {
position: absolute;
top: 0;
left: 0;
height: 6px;
background: #ff9800;
border-radius: 3px;
pointer-events: none;
}
.timeline-time-display {
display: flex;
justify-content: space-between;
align-items: center;
font-size: 0.85rem;
}
.timeline-time-label {
color: #999;
}
.timeline-time-current {
font-size: 1.1rem;
font-weight: bold;
color: #ff9800;
}
.timeline-speed {
display: flex;
align-items: center;
gap: 8px;
margin-left: 10px;
padding-left: 10px;
border-left: 1px solid #e0e0e0;
}
.timeline-speed span {
font-size: 0.85rem;
color: #666;
}
.timeline-speed select {
padding: 4px 8px;
border: 1px solid #e0e0e0;
border-radius: 4px;
background: white;
font-size: 0.85rem;
cursor: pointer;
}
.timeline-loading {
animation: spin 1s linear infinite;
}
.timeline-loading-message {
text-align: center;
color: #999;
font-size: 0.85rem;
margin-top: 10px;
}
@keyframes spin {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
/* Navigation */
.nav-tabs {
display: flex;
gap: 5px;
background: rgba(255, 255, 255, 0.1);
padding: 4px;
border-radius: 8px;
}
.nav-tab {
background: transparent;
border: none;
color: rgba(255, 255, 255, 0.7);
padding: 8px 16px;
border-radius: 6px;
cursor: pointer;
font-size: 0.9rem;
display: flex;
align-items: center;
gap: 6px;
transition: all 0.2s;
}
.nav-tab:hover {
background: rgba(255, 255, 255, 0.1);
color: white;
}
.nav-tab.active {
background: white;
color: #1a1a2e;
}
/* Dashboard Styles */
.dashboard {
flex: 1;
overflow-y: auto;
padding: 20px;
background: #f5f6fa;
}
.dashboard-loading,
.dashboard-error {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
height: 100%;
gap: 15px;
color: #666;
}
.dashboard-error {
color: #E74C3C;
}
.spinner {
width: 40px;
height: 40px;
border: 3px solid #e0e0e0;
border-top-color: #3498DB;
border-radius: 50%;
animation: spin 1s linear infinite;
}
/* Time Control */
.time-control {
background: white;
border-radius: 12px;
padding: 20px;
margin-bottom: 20px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
}
.time-control-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 15px;
flex-wrap: wrap;
gap: 15px;
}
.time-display {
display: flex;
align-items: center;
gap: 10px;
font-size: 1.1rem;
color: #333;
}
.current-time {
font-weight: 600;
}
.live-badge {
display: flex;
align-items: center;
gap: 5px;
background: #E74C3C;
color: white;
padding: 4px 10px;
border-radius: 20px;
font-size: 0.75rem;
font-weight: 600;
animation: pulse-live 2s ease-in-out infinite;
}
@keyframes pulse-live {
0%, 100% { opacity: 1; }
50% { opacity: 0.7; }
}
.time-buttons {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.time-buttons button {
background: #f0f0f0;
border: none;
padding: 8px 12px;
border-radius: 6px;
cursor: pointer;
font-size: 0.85rem;
display: flex;
align-items: center;
gap: 4px;
transition: all 0.2s;
color: #333;
}
.time-buttons button:hover {
background: #e0e0e0;
}
.time-buttons .live-button {
background: #E74C3C;
color: white;
}
.time-buttons .live-button:hover {
background: #c0392b;
}
.time-buttons .live-button.active {
background: #27AE60;
}
.time-slider-container {
margin-top: 10px;
}
.time-slider {
width: 100%;
height: 8px;
border-radius: 4px;
background: #e0e0e0;
outline: none;
-webkit-appearance: none;
cursor: pointer;
}
.time-slider::-webkit-slider-thumb {
-webkit-appearance: none;
width: 20px;
height: 20px;
border-radius: 50%;
background: #3498DB;
cursor: pointer;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.2);
}
.time-slider::-moz-range-thumb {
width: 20px;
height: 20px;
border-radius: 50%;
background: #3498DB;
cursor: pointer;
border: none;
}
.time-range-labels {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: #999;
margin-top: 5px;
}
/* Stats Row */
.stats-row {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 15px;
margin-bottom: 20px;
}
.stat-card {
background: white;
border-radius: 12px;
padding: 20px;
display: flex;
align-items: flex-start;
gap: 15px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
}
.stat-card-icon {
width: 50px;
height: 50px;
border-radius: 12px;
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.stat-card-content {
display: flex;
flex-direction: column;
}
.stat-card-label {
font-size: 0.85rem;
color: #666;
}
.stat-card-value {
font-size: 1.8rem;
font-weight: 700;
color: #333;
line-height: 1.2;
}
.stat-card-subvalue {
font-size: 0.75rem;
color: #999;
}
.stat-card-trend {
font-size: 0.85rem;
font-weight: 600;
}
.stat-card-trend.positive {
color: #27AE60;
}
.stat-card-trend.negative {
color: #E74C3C;
}
/* Dashboard Grid */
.dashboard-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(350px, 1fr));
gap: 20px;
}
.dashboard-card {
background: white;
border-radius: 12px;
padding: 20px;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.08);
}
.dashboard-card.wide {
grid-column: 1 / -1;
}
.dashboard-card h3 {
font-size: 1rem;
color: #333;
margin-bottom: 15px;
display: flex;
align-items: center;
gap: 8px;
}
/* Progress Bars */
.progress-bar-container {
margin-bottom: 12px;
}
.progress-bar-header {
display: flex;
justify-content: space-between;
margin-bottom: 5px;
}
.progress-bar-label {
font-size: 0.85rem;
color: #666;
}
.progress-bar-value {
font-size: 0.85rem;
font-weight: 600;
color: #333;
}
.progress-bar-track {
height: 8px;
background: #e0e0e0;
border-radius: 4px;
overflow: hidden;
}
.progress-bar-fill {
height: 100%;
border-radius: 4px;
transition: width 0.3s ease;
}
/* Donut Chart */
.donut-chart {
display: flex;
align-items: center;
gap: 20px;
}
.donut-chart svg {
width: 120px;
height: 120px;
flex-shrink: 0;
}
.donut-legend {
flex: 1;
}
.donut-legend-item {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 8px;
font-size: 0.85rem;
}
.donut-legend-color {
width: 12px;
height: 12px;
border-radius: 3px;
flex-shrink: 0;
}
.donut-legend-label {
flex: 1;
color: #666;
}
.donut-legend-value {
font-weight: 600;
color: #333;
}
/* Timeline Charts */
.timeline-charts {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 20px;
}
.timeline-chart {
text-align: center;
}
.timeline-chart .chart-label {
font-size: 0.8rem;
color: #666;
margin-bottom: 5px;
display: block;
}
/* Lines Table */
.lines-table {
font-size: 0.85rem;
}
.lines-table-header {
display: grid;
grid-template-columns: 1fr 1fr 1fr 1fr;
padding: 10px 0;
border-bottom: 2px solid #e0e0e0;
font-weight: 600;
color: #666;
}
.lines-table-row {
display: grid;
grid-template-columns: 1fr 1fr 1fr 1fr;
padding: 10px 0;
border-bottom: 1px solid #f0f0f0;
align-items: center;
}
.lines-table-row:hover {
background: #f9f9f9;
}
.line-code {
font-weight: 600;
color: #3498DB;
}
.line-nucleo {
font-weight: 400;
color: #999;
font-size: 0.8em;
}
/* Responsive */
@media (max-width: 768px) {
.sidebar {
position: absolute;
right: 0;
top: 0;
bottom: 0;
z-index: 1001;
transform: translateX(100%);
transition: transform 0.3s;
}
.sidebar.open {
transform: translateX(0);
}
.timeline-container {
right: 20px;
}
}

53
frontend/src/utils/bearing.js Normal file
View File

@@ -0,0 +1,53 @@
/**
* Calculate bearing (direction) between two geographic points
* @param {number} lat1 - Starting latitude in degrees
* @param {number} lon1 - Starting longitude in degrees
* @param {number} lat2 - Ending latitude in degrees
* @param {number} lon2 - Ending longitude in degrees
* @returns {number} Bearing in degrees (0-360, where 0=North, 90=East, 180=South, 270=West)
*/
export function calculateBearing(lat1, lon1, lat2, lon2) {
// Convert to radians
const φ1 = lat1 * Math.PI / 180;
const φ2 = lat2 * Math.PI / 180;
const Δλ = (lon2 - lon1) * Math.PI / 180;
const y = Math.sin(Δλ) * Math.cos(φ2);
const x = Math.cos(φ1) * Math.sin(φ2) - Math.sin(φ1) * Math.cos(φ2) * Math.cos(Δλ);
let θ = Math.atan2(y, x);
// Convert to degrees and normalize to 0-360
let bearing = (θ * 180 / Math.PI + 360) % 360;
return bearing;
}
/**
* Calculate distance between two points in meters (Haversine formula)
* @param {number} lat1 - Starting latitude
* @param {number} lon1 - Starting longitude
* @param {number} lat2 - Ending latitude
* @param {number} lon2 - Ending longitude
* @returns {number} Distance in meters
*/
export function calculateDistance(lat1, lon1, lat2, lon2) {
const R = 6371000; // Earth radius in meters
const φ1 = lat1 * Math.PI / 180;
const φ2 = lat2 * Math.PI / 180;
const Δφ = (lat2 - lat1) * Math.PI / 180;
const Δλ = (lon2 - lon1) * Math.PI / 180;
const a = Math.sin(Δφ / 2) * Math.sin(Δφ / 2) +
Math.cos(φ1) * Math.cos(φ2) *
Math.sin(Δλ / 2) * Math.sin(Δλ / 2);
const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return R * c;
}
/**
* Minimum distance (in meters) to consider for bearing calculation
* Too small distances can result in inaccurate bearings
*/
export const MIN_DISTANCE_FOR_BEARING = 10;
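A quick sanity check of both helpers (the coordinates are the Madrid and Barcelona city centres, used purely for illustration; values are rounded):

```
import { calculateBearing, calculateDistance } from './bearing';

// Madrid (40.4168, -3.7038) -> Barcelona (41.3874, 2.1686)
const bearing = calculateBearing(40.4168, -3.7038, 41.3874, 2.1686);
const distance = calculateDistance(40.4168, -3.7038, 41.3874, 2.1686);

console.log(Math.round(bearing));          // ~76 degrees (east-north-east)
console.log(Math.round(distance / 1000));  // ~505 km great-circle distance
```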

126
frontend/src/utils/trainTypes.js Normal file
View File

@@ -0,0 +1,126 @@
/**
* Infer train type based on train ID ranges
* Based on official Renfe train numbering system
*/
const TRAIN_TYPE_RANGES = [
{ min: 0, max: 1999, type: 'LONG_DISTANCE', name: 'Larga Distancia' },
{ min: 2000, max: 8999, type: 'MEDIUM_DISTANCE', name: 'Media Distancia' },
{ min: 9000, max: 9099, type: 'HIGH_SPEED', name: 'Media Distancia AV / AVE LD' },
{ min: 9100, max: 9299, type: 'HIGH_SPEED', name: 'Talgo 200' },
{ min: 9300, max: 9499, type: 'HIGH_SPEED', name: 'AVE Programacion Especial' },
{ min: 9500, max: 9899, type: 'HIGH_SPEED', name: 'AV MD / AVE LD' },
{ min: 9900, max: 9999, type: 'SERVICE', name: 'Servicio Interno AV' },
{ min: 10000, max: 10399, type: 'LONG_DISTANCE', name: 'Ramas Larga Distancia' },
{ min: 14000, max: 15999, type: 'LONG_DISTANCE', name: 'Prog. Especial LD' },
{ min: 16000, max: 26999, type: 'COMMUTER', name: 'Cercanias' },
{ min: 27000, max: 29999, type: 'COMMUTER', name: 'Prog. Especial Cercanias' },
{ min: 30000, max: 30999, type: 'LONG_DISTANCE', name: 'Especiales LD' },
{ min: 31000, max: 31999, type: 'MEDIUM_DISTANCE', name: 'Especiales MD' },
{ min: 32000, max: 33999, type: 'MEDIUM_DISTANCE', name: 'Prog. Especial MD' },
{ min: 34000, max: 35999, type: 'COMMUTER', name: 'Especiales Cercanias' },
{ min: 36000, max: 36999, type: 'SPECIAL', name: 'Prog. Especial Varios Op.' },
{ min: 37000, max: 39999, type: 'SPECIAL', name: 'Especiales Varios Op.' },
{ min: 40000, max: 49999, type: 'FREIGHT', name: 'Internacional Mercancias' },
{ min: 50000, max: 50999, type: 'FREIGHT', name: 'TECOS Red Terrestre' },
{ min: 51000, max: 51999, type: 'FREIGHT', name: 'TECOS Red Maritima' },
{ min: 52000, max: 52999, type: 'FREIGHT', name: 'TECOS Internacional' },
{ min: 53000, max: 53999, type: 'FREIGHT', name: 'Petroquimicos' },
{ min: 54000, max: 54999, type: 'FREIGHT', name: 'Material Agricola' },
{ min: 55000, max: 55999, type: 'FREIGHT', name: 'Minero/Construccion' },
{ min: 56000, max: 56999, type: 'FREIGHT', name: 'Prod. Manufacturados' },
{ min: 57000, max: 57999, type: 'FREIGHT', name: 'Portacoches' },
{ min: 58000, max: 58999, type: 'FREIGHT', name: 'Siderurgicos' },
{ min: 59000, max: 59999, type: 'FREIGHT', name: 'Auto/Sider. Internacional' },
{ min: 60000, max: 60999, type: 'FREIGHT', name: 'Surcos Polivalentes' },
{ min: 61000, max: 61999, type: 'FREIGHT', name: 'Servicio Interno Merc.' },
{ min: 62000, max: 79999, type: 'FREIGHT', name: 'Mercancias Varios Op.' },
{ min: 80000, max: 80999, type: 'FREIGHT', name: 'TECO P.E.' },
{ min: 81000, max: 81999, type: 'FREIGHT', name: 'Manufacturados/Agricola' },
{ min: 82000, max: 82999, type: 'FREIGHT', name: 'Prod. Industriales' },
{ min: 83000, max: 84999, type: 'FREIGHT', name: 'Polivalentes' },
{ min: 85000, max: 85999, type: 'FREIGHT', name: 'Estacional/Militar' },
{ min: 86000, max: 86999, type: 'FREIGHT', name: 'Servicio Interno Merc.' },
{ min: 87000, max: 89999, type: 'FREIGHT', name: 'Prog. Especial Varios Op.' },
{ min: 90000, max: 90999, type: 'FREIGHT', name: 'Transportes Excepcionales' },
{ min: 91000, max: 92999, type: 'FREIGHT', name: 'Excepc. Intermodales' },
{ min: 93000, max: 94999, type: 'FREIGHT', name: 'Excepc. Prod. Industriales' },
{ min: 95000, max: 96999, type: 'FREIGHT', name: 'Excepc. Polivalente' },
{ min: 97000, max: 99999, type: 'FREIGHT', name: 'Excepc. Varios Op.' },
];
/**
* Get train type info from train ID
* @param {string|number} trainId - The train ID
* @returns {{ type: string, name: string } | null}
*/
export function getTrainTypeFromId(trainId) {
// Extract numeric part from train_id (handle formats like "15000", "AVE-15000", etc.)
const numericMatch = String(trainId).match(/\d+/);
if (!numericMatch) return null;
const numericId = parseInt(numericMatch[0], 10);
for (const range of TRAIN_TYPE_RANGES) {
if (numericId >= range.min && numericId <= range.max) {
return {
type: range.type,
name: range.name,
};
}
}
return null;
}
/**
* Format train type for display
* @param {string} type - Train type code
* @param {string} inferredName - Inferred name from ID
* @returns {string}
*/
export function formatTrainType(type, inferredName = null) {
const typeMap = {
'HIGH_SPEED': 'Alta Velocidad (AVE)',
'LONG_DISTANCE': 'Larga Distancia',
'MEDIUM_DISTANCE': 'Media Distancia',
'REGIONAL': 'Regional',
'COMMUTER': 'Cercanias',
'FREIGHT': 'Mercancias',
'SERVICE': 'Servicio Interno',
'SPECIAL': 'Especial',
'RAIL': 'Ferrocarril',
'UNKNOWN': 'No especificado',
};
// If we have an inferred name, use it for more detail
if (inferredName) {
const baseType = typeMap[type] || type;
if (inferredName !== baseType) {
return `${baseType} - ${inferredName}`;
}
return baseType;
}
return typeMap[type] || type || 'No especificado';
}
/**
* Get color for train type (for map markers)
* @param {string} type - Train type code
* @returns {string} - Hex color
*/
export function getTrainTypeColor(type) {
const colorMap = {
'HIGH_SPEED': '#E74C3C', // Red - AVE
'LONG_DISTANCE': '#3498DB', // Blue - Long distance
'MEDIUM_DISTANCE': '#9B59B6', // Purple - Medium distance
'COMMUTER': '#2ECC71', // Green - Cercanias
'FREIGHT': '#95A5A6', // Gray - Freight
'SERVICE': '#F39C12', // Orange - Service
'SPECIAL': '#E67E22', // Dark orange - Special
'REGIONAL': '#1ABC9C', // Teal - Regional
};
return colorMap[type] || '#1a1a2e'; // Default dark blue
}
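For illustration, how the inference behaves on a sample ID (the ID is made up; any string with a numeric part works):

```
import { getTrainTypeFromId, formatTrainType, getTrainTypeColor } from './trainTypes';

const info = getTrainTypeFromId('AVE-9350'); // numeric part 9350 falls in 9300-9499
// info -> { type: 'HIGH_SPEED', name: 'AVE Programacion Especial' }

formatTrainType(info.type, info.name);
// -> 'Alta Velocidad (AVE) - AVE Programacion Especial'

getTrainTypeColor(info.type); // -> '#E74C3C'

getTrainTypeFromId('sin-numero'); // -> null (no numeric part in the ID)
```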

14
frontend/vite.config.js Normal file
View File

@@ -0,0 +1,14 @@
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
export default defineConfig({
plugins: [react()],
server: {
port: 5173,
host: true,
},
preview: {
port: 5173,
host: true,
},
});

159
nginx/conf.d/default.conf Normal file
View File

@@ -0,0 +1,159 @@
# Upstream for the backend API
upstream api_backend {
server api:3000;
keepalive 32;
}
# Upstream for the frontend
upstream frontend_app {
server frontend:80;
keepalive 16;
}
# Main server configuration
server {
listen 80;
server_name localhost;
# Logs
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# API REST endpoints
location /api {
# Rewrite the URL, stripping the /api prefix
rewrite ^/api/(.*)$ /$1 break;
proxy_pass http://api_backend;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# CORS headers (if needed)
add_header Access-Control-Allow-Origin * always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
if ($request_method = 'OPTIONS') {
return 204;
}
}
# WebSocket endpoint
location /ws {
proxy_pass http://api_backend;
proxy_http_version 1.1;
# WebSocket upgrade headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Longer timeouts for WebSocket
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
# Disable buffering for WebSocket
proxy_buffering off;
}
# Socket.io endpoint
location /socket.io/ {
proxy_pass http://api_backend;
proxy_http_version 1.1;
# WebSocket upgrade headers
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Longer timeouts for WebSocket
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
# Disable buffering for WebSocket
proxy_buffering off;
}
# Frontend - SPA with fallback for React Router
location / {
proxy_pass http://frontend_app;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Static asset caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
proxy_pass http://frontend_app;
expires 1y;
add_header Cache-Control "public, immutable";
}
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
# HTTPS configuration (uncomment once you have certificates)
# server {
# listen 443 ssl http2;
# server_name localhost;
#
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# ssl_protocols TLSv1.2 TLSv1.3;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
#
# # Include the same location blocks as above
# # ...
# }
# HTTP-to-HTTPS redirect (uncomment once HTTPS is configured)
# server {
# listen 80;
# server_name localhost;
# return 301 https://$server_name$request_uri;
# }

40
nginx/nginx.conf Normal file
View File

@@ -0,0 +1,40 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 20M;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript
application/json application/javascript application/xml+rss
application/x-javascript application/xhtml+xml
image/svg+xml;
# Include site configurations
include /etc/nginx/conf.d/*.conf;
}

148
nginx/prod.conf Normal file
View File

@@ -0,0 +1,148 @@
# nginx configuration for production with SSL
#
# IMPORTANT: Replace YOUR_DOMAIN.com with your real domain
#
# Lessons learned from the deployment:
# 1. Use resolver 127.0.0.11 for Docker DNS (avoids errors at startup)
# 2. Use variables with set $backend so nginx starts even if the upstreams are not ready yet
# 3. http2 must be a separate directive, not part of listen (modern nginx)
# 4. Socket.io appends /socket.io/ automatically, so the base URL must be https://domain with no path
# Docker's DNS resolver - CRITICAL for a clean startup
resolver 127.0.0.11 valid=30s;
# HTTP-to-HTTPS redirect + ACME endpoint for Let's Encrypt
server {
listen 80;
server_name YOUR_DOMAIN.com;
# Endpoint for Let's Encrypt verification
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
# Health check (also useful for load balancers)
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Redirect everything else to HTTPS
location / {
return 301 https://$host$request_uri;
}
}
# Main HTTPS server
server {
listen 443 ssl;
http2 on; # NOTE: in modern nginx, http2 is a separate directive
server_name YOUR_DOMAIN.com;
# Let's Encrypt SSL certificates
ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN.com/privkey.pem;
# Modern, secure SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
# Logs
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Health check
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Socket.io endpoint - CRITICAL for real-time WebSocket
# NOTE: Socket.io automatically appends /socket.io/ to the base URL
# That is why VITE_WS_URL must be https://domain without /ws or /socket.io
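# Example (client side, assumed): with VITE_WS_URL=https://YOUR_DOMAIN.com,
# io(import.meta.env.VITE_WS_URL) ends up connecting to
# https://YOUR_DOMAIN.com/socket.io/, which this location block proxies.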
location /socket.io/ {
set $backend_socketio api:3000;
proxy_pass http://$backend_socketio;
proxy_http_version 1.1;
# Headers for the WebSocket upgrade
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Long timeouts for persistent WebSocket connections
proxy_connect_timeout 7d;
proxy_send_timeout 7d;
proxy_read_timeout 7d;
# Disable buffering for WebSocket
proxy_buffering off;
}
# API REST endpoints
# NOTE: when proxy_pass uses a variable, nginx does NOT rewrite the URI automatically
# That is why the explicit rewrite is needed
location /api/ {
set $backend_api api:3000;
# Rewrite the URL, stripping the /api prefix
rewrite ^/api/(.*)$ /$1 break;
proxy_pass http://$backend_api;
proxy_http_version 1.1;
# Headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffering
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
# Frontend - React SPA
location / {
set $backend_frontend frontend:80;
proxy_pass http://$backend_frontend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Static asset caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
set $backend_frontend frontend:80;
proxy_pass http://$backend_frontend;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}