Momento + Capital
Analytics Core · LLM Layer
PORTFOLIO ANALYTICS

One place for data, risk and AI.

A clean console on top of live MySQL data, with an on-prem LLM that reads your portfolios, computes core metrics and returns short, actionable answers.

ANALYTICS CONSOLE
Mode: Single / Multi
Panels: Metrics · Holdings
Trades: Top / Bottom 5
AI Panel: Narratives (next)
The analytics stack is structured in three layers: console, LLM engine, and agents on top.
Layer 1
Analytics Console
UI · Metrics · Tables
  • Controls card with single / multi mode, portfolio selector and date-range sliders wired to live MySQL dates.
  • Key Metrics panel (CAGR, cumulative return, Sharpe, Sortino, hit rate, drawdowns) for the exact interval you choose (sketched below).
  • Holdings view at the latest date in the database plus Trade Metrics for top and bottom trades.
  • A dedicated AI Analysis card ready to show LLM narratives on top of all these metrics and tables.
What this layer solves: Gives you a single, consistent place where PMs and analysts can see positions, performance and trade-level behaviour without opening spreadsheets.
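For illustration only, here is a minimal sketch of the kind of Python that could sit behind the Key Metrics panel, assuming a daily returns series for the selected interval; the function name, conventions and annualisation constant are placeholders, not the actual engine.

```python
# Illustrative sketch only -- names and conventions are assumptions,
# not the actual Momento + Capital metrics engine.
import numpy as np
import pandas as pd

TRADING_DAYS = 252

def key_metrics(daily_returns: pd.Series) -> dict:
    """Compute the Key Metrics panel values for the selected interval."""
    cumulative = (1 + daily_returns).prod() - 1
    years = len(daily_returns) / TRADING_DAYS
    cagr = (1 + cumulative) ** (1 / years) - 1 if years > 0 else np.nan

    vol = daily_returns.std() * np.sqrt(TRADING_DAYS)        # annualised volatility
    sharpe = daily_returns.mean() / daily_returns.std() * np.sqrt(TRADING_DAYS)

    downside = daily_returns[daily_returns < 0].std()         # simple downside-deviation convention
    sortino = daily_returns.mean() / downside * np.sqrt(TRADING_DAYS)

    equity = (1 + daily_returns).cumprod()
    max_drawdown = (equity / equity.cummax() - 1).min()

    hit_rate = (daily_returns > 0).mean()

    return {
        "cumulative_return": cumulative,
        "cagr": cagr,
        "annualized_vol": vol,
        "sharpe": sharpe,
        "sortino": sortino,
        "max_drawdown": max_drawdown,
        "hit_rate": hit_rate,
    }
```

In the console these values are computed over the exact date range chosen in the controls card, on the same MySQL data that drives the tables.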
Console layout
Layer 2
LLM Intelligence
On-prem LLM
  • On-prem Ollama instance that only receives compact JSONs with portfolio metrics, holdings and trade summaries (sketched below).
  • All quant calculations (Sharpe, Sortino, max drawdown, annualized volatility) are done in the engine before hitting the model.
  • The model focuses on explanations, ranking portfolios, highlighting outliers and describing regime changes.
  • Exposed via /api/ask, reusing the same data and logic as the Analytics page.
What this layer solves: Brings language around your numbers without sending portfolio data outside your infra. Every answer is grounded in the same MySQL datasets that power the console.
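As a rough illustration of that flow, the sketch below posts precomputed metrics as a compact JSON to a local Ollama instance on its default port (11434) and returns the narrative text; the model name, payload shape and helper name are assumptions, not the production code.

```python
# Illustrative sketch: compact metrics JSON -> local Ollama narrative.
# The payload shape, model name and helper name are assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"   # on-prem instance, default port

def narrate_portfolio(metrics: dict, holdings: list, trades: list) -> str:
    payload = {
        "metrics": metrics,            # precomputed: Sharpe, Sortino, drawdown, vol
        "top_holdings": holdings[:10],
        "trade_summary": trades,
    }
    prompt = (
        "You are a portfolio analyst. Using only the JSON below, explain "
        "performance, flag outliers and note regime changes in a few short "
        "sentences.\n\n" + json.dumps(payload)
    )
    r = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]
```

The same path would sit behind /api/ask, so the Analytics page and programmatic callers get identical answers.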
Engine view
Layer 3
Agents & Workflows
Decision Layer
  • Monthly review agents that generate commentary packs using the same metrics engine and LLM narratives.
  • Risk watchers that monitor drawdowns, volatility and concentration by portfolio or client (see the sketch below).
  • Compliance helpers that flag exposures and regimes using pre-defined rules plus AI-assisted explanations.
  • All agents can be triggered from the console, scheduled jobs or programmatic calls to /api/ask.
What this layer solves: Turns analytics into repeatable workflows: reviews, risk alerts and narrative reports that can be shared with PMs, ICs and clients.
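A minimal sketch of what a risk watcher could look like, assuming it consumes the same metrics dictionary as the console; the thresholds, field names and return format are illustrative, not the production agent.

```python
# Illustrative sketch of a risk watcher -- thresholds, field names and the
# alerting format are assumptions, not the production agent.
MAX_DRAWDOWN_LIMIT = -0.15      # flag if drawdown is worse than -15%
MAX_POSITION_WEIGHT = 0.20      # flag if any holding exceeds 20% of the book
MAX_ANNUAL_VOL = 0.25           # flag if annualised volatility exceeds 25%

def check_portfolio(name: str, metrics: dict, weights: dict) -> list[str]:
    """Return human-readable flags for one portfolio or client book."""
    flags = []
    if metrics["max_drawdown"] < MAX_DRAWDOWN_LIMIT:
        flags.append(f"{name}: drawdown {metrics['max_drawdown']:.1%} breaches the limit")
    if metrics["annualized_vol"] > MAX_ANNUAL_VOL:
        flags.append(f"{name}: annualised vol {metrics['annualized_vol']:.1%} is elevated")
    for ticker, weight in weights.items():
        if weight > MAX_POSITION_WEIGHT:
            flags.append(f"{name}: {ticker} is {weight:.0%} of the book")
    return flags
```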
Workflow steps
1. Metrics & data
2. LLM narrative
3. Alert / report
4. PM / IC actions
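Put together, one scheduled run of this workflow might look like the following sketch; the internal host, the /api/ask request and response fields, and the delivery step are assumptions.

```python
# Illustrative sketch of one workflow run: metrics -> narrative -> report.
# The internal host, the /api/ask request and response shape, and the
# delivery step are assumptions, not the documented API.
import json
import requests

API_ASK = "http://analytics.internal/api/ask"   # assumed internal endpoint

def run_review(portfolio: str, metrics: dict) -> dict:
    # 1. Metrics & data: `metrics` comes from the same Python engine that
    #    powers the console (cast to plain floats so it serialises cleanly).
    metrics = {k: float(v) for k, v in metrics.items()}

    # 2. LLM narrative: ask the on-prem model, grounded in the computed numbers.
    resp = requests.post(
        API_ASK,
        json={"question": f"Summarise {portfolio} for the review period.",
              "metrics": metrics},
        timeout=120,
    )
    resp.raise_for_status()
    narrative = resp.json().get("answer", "")

    # 3. Alert / report: assemble the commentary pack.
    report = {"portfolio": portfolio, "metrics": metrics, "narrative": narrative}

    # 4. PM / IC actions: hand the pack to whoever needs to act on it
    #    (email, chat, document store -- delivery is outside this sketch).
    print(json.dumps(report, indent=2))
    return report
```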
On-prem by design: The LLM runs inside your infrastructure; portfolio data never leaves your servers.
Quant-first: Metrics are computed in Python, not guessed by the model.
UI + API: Use the Analytics page or call /api/ask; same engine, same answers.