LLMSystemTrading: Multi-Agent Forex Trading System

AI-powered automated forex trading system with a multi-agent LLM pipeline, APScheduler 24/7 cron jobs, MT5 bridge, and real-time WebSocket dashboard.

Solo Developer · 2026 — Ongoing · Self-hosted / Active

fastapi · python · nextjs · react · typescript · postgresql · redis · questdb

~110 API endpoints · 27 frontend pages · 15 DB tables · 3+ LLM providers

The Problem

Running forex strategies on MetaTrader 5 means staring at charts, manually deciding when signals are valid, and keeping the terminal open around the clock. Switching between MT5, a spreadsheet for tracking performance, and various news feeds is friction-heavy and doesn't scale past one account. There was no way to ask an LLM "should I enter this trade?" and get a traceable, logged answer.

LLMSystemTrading replaces that manual loop with a fully automated pipeline: LLM agents research the market, generate signals, route orders to MT5, and log every decision to a queryable database — with a full-featured dashboard to monitor and control it all.

System Architecture

The frontend and backend run in Docker alongside three databases. MT5 runs natively on Windows — python-mt5 bridges the gap over a local socket. Nginx routes all traffic: /api/** proxies to FastAPI, everything else hits the Next.js SSR server.
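A minimal sketch of the Windows-side half of that bridge, assuming the official MetaTrader5 Python package; the TCP port, the JSON line format, and the handler shape are illustrative assumptions rather than the project's actual wire protocol:

```python
# Hypothetical Windows-side bridge: MT5's Python API only works on the
# host, so a small TCP server forwards orders from the Dockerized backend.
import json
import socketserver

import MetaTrader5 as mt5  # pip install MetaTrader5 (Windows only)


class OrderHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One JSON request per line, e.g.
        # {"symbol": "EURUSD", "volume": 0.1, "side": "BUY"}
        req = json.loads(self.rfile.readline())
        tick = mt5.symbol_info_tick(req["symbol"])
        result = mt5.order_send({
            "action": mt5.TRADE_ACTION_DEAL,
            "symbol": req["symbol"],
            "volume": float(req["volume"]),
            "type": mt5.ORDER_TYPE_BUY if req["side"] == "BUY" else mt5.ORDER_TYPE_SELL,
            "price": tick.ask if req["side"] == "BUY" else tick.bid,
            "type_time": mt5.ORDER_TIME_GTC,
            "type_filling": mt5.ORDER_FILLING_IOC,
        })
        # Echo the broker ticket back so the backend can log it.
        self.wfile.write(json.dumps(
            {"retcode": result.retcode, "ticket": result.order}
        ).encode() + b"\n")


if __name__ == "__main__":
    if not mt5.initialize():  # attach to the running MT5 terminal
        raise SystemExit(f"MT5 init failed: {mt5.last_error()}")
    socketserver.TCPServer(("127.0.0.1", 9001), OrderHandler).serve_forever()
```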

Multi-Agent LLM Pipeline

Each scheduled run passes through a sequential agent chain:

| Stage | Agent | Output |
|---|---|---|
| 1. Research | Market Research Agent | News sentiment, macro context |
| 2. Analysis | Market Analysis Agent | Chart pattern + indicator summary |
| 3. Signal | Signal Generation Agent | BUY / SELL / HOLD + confidence score |
| 4. Execution | MT5 Bridge | Order placed, ticket logged |
| 5. Audit | LLM Audit Logger | Full token trace written to llm_calls + ai_journal |

APScheduler triggers the chain on configurable cron intervals per strategy. Every LLM call is logged with provider, model, token count, cost, and the full response — visible in the LLM Analytics and LLM Usage pages.
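A minimal sketch of that scheduling-plus-chain wiring, with stub agents standing in for the real LLM calls; the agent function names, the shared-context dict, and the 15-minute crontab are assumptions for illustration:

```python
# Stub agents stand in for the real LLM stages; each enriches a shared
# context dict that the next stage consumes.
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger


async def research_agent(ctx: dict) -> dict:   # stage 1: news + macro
    return {**ctx, "sentiment": "neutral"}

async def analysis_agent(ctx: dict) -> dict:   # stage 2: chart summary
    return {**ctx, "pattern": "range-bound"}

async def signal_agent(ctx: dict) -> dict:     # stage 3: decision
    return {**ctx, "signal": "HOLD", "confidence": 0.55}

PIPELINE = [research_agent, analysis_agent, signal_agent]


async def run_pipeline(strategy_id: int) -> None:
    ctx: dict = {"strategy_id": strategy_id}
    for stage in PIPELINE:
        ctx = await stage(ctx)
    # Real system: BUY/SELL goes on to the MT5 bridge, and every stage's
    # raw response, token counts, and cost are written to llm_calls.
    print(ctx)


async def main() -> None:
    scheduler = AsyncIOScheduler()
    # One cron job per strategy, at that strategy's configured interval.
    scheduler.add_job(run_pipeline, CronTrigger.from_crontab("*/15 * * * *"), args=[1])
    scheduler.start()
    await asyncio.Event().wait()  # keep the event loop alive

if __name__ == "__main__":
    asyncio.run(main())
```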

Features

| Page | Purpose |
|---|---|
| Dashboard | Live positions, P&L, account equity, WebSocket clock |
| Accounts | MT5 account credentials and trading config per account |
| Strategies | Define strategies with LLM overrides and risk parameters |
| Backtest | Run and compare historical backtests with full metrics |
| LLM Analytics | Per-agent signal accuracy, confidence distribution |
| LLM Usage | Token consumption and cost breakdown by provider/model |
| Schedule | View and control APScheduler cron jobs |
| Pipeline Logs | Step-by-step execution trace for every pipeline run |
| News | Aggregated news feed used by research agents |
| Analytics | Aggregate P&L, win rate, drawdown charts |
| Trades | Full trade history with entry/exit/ticket details |
| Settings | Global risk settings, Telegram alerts, LLM provider keys |
| Storage | QuestDB candle data storage management |
| System Usage | Host CPU, memory, and disk utilisation |

Key Technical Decisions

| Decision | Chosen | Rejected | Why |
|---|---|---|---|
| Time-series storage | QuestDB | PostgreSQL for all | OHLCV candle inserts via InfluxDB line protocol are orders of magnitude faster in a columnar TSDB |
| Job scheduling | APScheduler | Celery + RabbitMQ | Single-machine deployment; APScheduler runs in-process with zero infra overhead |
| MT5 connection | python-mt5 (native) | Docker container | MT5 is a Windows application; it cannot run inside a container |
| LLM provider | Abstracted (Anthropic / OpenAI / Gemini / OpenRouter) | Hard-coded provider | Allows hot-swapping models per task without code changes |
| LLM key storage | Fernet-encrypted in DB | Plain env vars | Keys survive container rebuilds; displayed masked in the UI |
| Frontend state | Zustand | Redux / React Query | Lightweight, no boilerplate; fits a 27-page internal dashboard |
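To make the first row concrete, here is a sketch of a line-protocol candle insert, assuming the official questdb Python client and a hypothetical candles table; the field names and HTTP port are illustrative:

```python
# Hypothetical OHLCV insert over ILP; rows are buffered and flushed in
# batches, which is where the throughput win over row-by-row Postgres
# INSERTs comes from.
from questdb.ingress import Sender, TimestampNanos

with Sender.from_conf("http::addr=localhost:9000;") as sender:
    sender.row(
        "candles",
        symbols={"pair": "EURUSD", "timeframe": "M15"},
        columns={"open": 1.0842, "high": 1.0859, "low": 1.0831,
                 "close": 1.0847, "volume": 1250.0},
        at=TimestampNanos.now(),
    )
```

The key-storage row is similarly small in practice; a minimal Fernet round-trip, assuming the Fernet key itself lives in the environment rather than the database:

```python
from cryptography.fernet import Fernet

fernet = Fernet(Fernet.generate_key())  # in practice: key loaded from env
token = fernet.encrypt(b"sk-...")       # ciphertext is what lands in Postgres
assert fernet.decrypt(token) == b"sk-..."
```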

Screenshots

Dashboard — live positions and account equity

Accounts — MT5 credentials and trading config

Analytics — P&L and win rate charts

Backtest — historical strategy performance

LLM Usage — token consumption and cost by provider

MetaTrader 5 — connected terminal

Pipeline Logs — step-by-step agent execution trace

What I'd Do Differently

Move to Celery with a Redis broker from the start. APScheduler is convenient for a single machine, but the moment a second instance needs to share the job queue — or a run needs to be offloaded to a worker process — it becomes a bottleneck. Designing around a message queue from day one would make horizontal scaling straightforward.

I'd also add authentication before building any features. The system currently has no auth layer — it's safe behind a firewall, but adding it retroactively across 27 pages and ~110 endpoints is a larger lift than wiring it in at route setup time.