pineforge
Engine comparison · v0.2 · 50-strategy benchmark

PineForge vs PyneCore.
Reproducible, not rhetorical.

Every number on this page is generated by bash benchmarks/run_all.sh in the open-source pineforge-engine repo, against the same 41,307-bar Binance ETH/USDT 15-minute feed. Reproduce in ~3 minutes from a clean clone, zero external API calls.

50-strategy match degree

How many of the 50 strategies hit the excellent tier against TradingView.

C++ static lib
PineForge
49 / 50
Excellent 49 · Strong 1 · Moderate 0 · Weak 0
Python (PyneSys cloud-compiled)
PyneCore
46 / 50
Excellent 46 · Strong 1 · Moderate 2 · Weak 1
TypeScript (LuxAlgo)
PineTS
indicators only
Strategy backtester: — · Per-bar indicators: 10/10 match

Strategy execution is on the PineTS roadmap. We benchmark indicator precision against PineTS to triangulate floating-point divergences.

Tiers follow the canonical PineForge parity sweep: excellent = all four dimensions (count delta, entry p90, exit p90, P&L p90) within strict thresholds and ≥95% trades matched; strong within 5× strict; moderate / weak / minimal step down from there. Strategies that use TradingView’s trail_* exits get the production threshold profile (looser exit + P&L tolerances).
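The tier ladder reads naturally as a threshold cascade. Here is a minimal Python sketch of that logic, using placeholder threshold values: the four dimensions, the ≥95% matched gate, and the 5× strong multiplier come from the paragraph above, while the specific numbers and the moderate cutoff are invented for illustration (and the minimal tier is omitted).

```python
from dataclasses import dataclass

# Illustrative placeholder thresholds -- the real values live in the
# pineforge-engine parity-sweep config, not here.
STRICT = {"count_delta": 0.01, "entry_p90": 1e-4, "exit_p90": 1e-4, "pnl_p90": 1e-3}

@dataclass
class ParityResult:
    count_delta: float   # relative trade-count delta vs TradingView
    entry_p90: float     # p90 relative error of entry prices
    exit_p90: float      # p90 relative error of exit prices
    pnl_p90: float       # p90 relative error of per-trade P&L
    matched_frac: float  # fraction of TV trades matched 1:1

def tier(r: ParityResult, loosen: float = 1.0) -> str:
    """Classify one strategy. `loosen` > 1 widens the exit and P&L
    tolerances, standing in for the 'production' profile that trail_*
    strategies get. The 5x 'strong' multiplier is from the text above;
    the 25x 'moderate' cutoff is an illustrative placeholder."""
    dims = {
        "count_delta": r.count_delta,
        "entry_p90": r.entry_p90,
        "exit_p90": r.exit_p90 / loosen,
        "pnl_p90": r.pnl_p90 / loosen,
    }
    def within(mult: float) -> bool:
        return all(dims[k] <= mult * STRICT[k] for k in dims)
    if within(1.0) and r.matched_frac >= 0.95:
        return "excellent"
    if within(5.0):
        return "strong"
    if within(25.0):
        return "moderate"
    return "weak"
```

A trail_* strategy would be scored with loosen > 1, mirroring the looser production profile described above.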

The 3-strategy delta

Three strategies draw all the daylight.

On 47 of 50 reference strategies, PineForge and PyneCore both hit excellent. The 3-strategy gap is not random; every divergence falls in the same category: bracket exits, trailing stops, or partial position closes. PyneCore's broker emulator differs from TV here; PineForge mirrors TV trade-for-trade.

06-liquidity-sweep
bracket exit
PineForge: excellent (88 / 88) · PyneCore: moderate (91)
93 TV trades in window. PineForge matches 88 within strict tolerances. PyneCore generates 91 trades: a +3 count drift, plus exit-price drift on bracket-stopped exits.
07-scalping-strategy
trailing stop (production thresholds)
PineForge: excellent (412 / 429) · PyneCore: moderate (412)
429 TV trades in window. PineForge: 412 matched, all four parity dimensions inside production thresholds. PyneCore: same matched count, but exit-price p90 lands outside threshold; its broker-emulator trail_offset arithmetic diverges from TV.
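For orientation, here is a simplified model of trail_offset-style mechanics for a long position. This is a sketch of the general technique only: the bar granularity, tick rounding, and activation rule are assumptions, not PineForge's or PyneCore's actual code.

```python
def trailing_stop_path(prices, activation, offset_ticks, tick=0.01):
    """Walk a series of prices for a long position. Once price reaches
    `activation`, arm a stop `offset_ticks` ticks below the running peak
    and ratchet it upward, never downward. Returns the stop fill price,
    or None if the stop never triggers."""
    armed = False
    peak = float("-inf")
    stop = None
    for p in prices:
        if not armed and p >= activation:
            armed = True
        if armed:
            peak = max(peak, p)
            # Snap the trailing level to the tick grid. Whether (and when)
            # an engine rounds here is exactly the kind of arithmetic
            # detail that makes exit prices drift between implementations.
            candidate = round((peak - offset_ticks * tick) / tick) * tick
            stop = candidate if stop is None else max(stop, candidate)
            if p <= stop:
                return stop
    return None
```

Run over intra-bar price paths, tiny differences in when the offset is rounded or when the peak updates shift the fill price, which is precisely the divergence the exit-price p90 dimension catches.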
49-partial-exit-qty-percent
partial close (qty_percent)
PineForge: excellent (683 / 725) · PyneCore: weak (2,671)
The clearest divergence in the corpus. 725 TV trades in window; PineForge matches 683 at strict parity. PyneCore generates 2,671 trades, 3.7× the correct count. Root cause: strategy.close(qty_percent=…) in PyneCore splits each entry into per-percentage sub-exits instead of a single partial close. Open upstream issue as of this commit.
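One plausible way to model the divergence is below. Both functions are hypothetical illustrations of the two semantics, not either engine's code, and the exact splitting rule in the upstream issue may differ.

```python
def close_qty_percent_tv(open_entries, pct):
    """TV-style semantics: one strategy.close(qty_percent=pct) call
    emits a single aggregate exit for pct% of the whole position."""
    total = sum(open_entries)
    return [total * pct / 100.0]                     # one exit trade

def close_qty_percent_split(open_entries, pct):
    """Hypothetical drifting semantics: pct% is closed per open entry,
    emitting one sub-exit trade per entry fill."""
    return [q * pct / 100.0 for q in open_entries]   # len(open_entries) trades
```

Both paths close the same total quantity, so equity curves can look plausible even while the trade count, and with it every per-trade parity dimension, explodes.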
Architecture & capability matrix

What each engine actually is.

Capability | PineForge | PyneCore | PineTS
Backend language | C++17 static lib | Python (cloud-transpiled) | TypeScript
Distribution | open-source runtime + closed transpiler | open-source | open-source
50-strategy benchmark | 49 excellent + 1 strong | 46 excellent + 1 strong + 2 moderate + 1 weak | — (not in scope)
Strict TV parity | 158/162 | — (different corpus) | — (indicators only)
Determinism guarantee | byte-reproducible | best-effort | best-effort
Bracket / trail / partial-exit semantics | TV-faithful | drift documented | N/A
Bar magnifier (intra-bar fills) | 6 distributions | subset | N/A
Forward-test engine | Q3 2026 | today | N/A
Live broker integration | 2027 | today (subset) | N/A
Hosted Studio UI | Q4 2026 | CLI only | CLI only
Strategy marketplace + DRM | 2027 (designed) | — | —
Where each engine wins

We don’t hide our gaps. Neither should they.

CHOOSE PINEFORGE WHEN
  • You need byte-reproducible determinism (CI gates, audit trails, paid-parity claims to clients).
  • You need TV-faithful semantics on bracket exits, trailing stops, or partial closes. The three concrete strategies above are unambiguous on this point.
  • You need native compiled speed for parameter sweeps (Optuna across thousands of parameter combinations on 50k-bar feeds).
  • You want a hosted Studio UI later — Code · Backtest · Optimize · Compare · Reports tabs are coming Q4 2026.
  • You eventually want to sell compiled strategies to other traders. The encrypted-distribution + license-server design is in the public engine repo.
CHOOSE PYNECORE WHEN
  • You need forward-testing or live broker execution today. PineForge ships those Q3-Q4 2026; PyneCore has them now.
  • You need a fully-Python strategy execution path (deeper integration with NumPy/Pandas backtesting tooling, Jupyter-native iteration).
  • You’re comfortable with the bracket/trail/partial-exit caveats (47 of the 50 strategies don’t exercise them).
  • The fully open-source ethos matters more than the closed transpiler tradeoff. PyneCore is open end-to-end; PineForge’s runtime is OSS but the codegen is closed.
  • You’re a heavy contributor and want a project where your PRs land directly in the strategy execution path.
Indicator precision · 10 canonical TA functions

PineForge & PineTS agree to 10⁻¹⁰.
PyneCore drifts at 10⁻⁸.

On the 41,307-bar feed, PineForge and PineTS produce per-bar relative errors at the floating-point limit (≈10⁻¹⁰ p90). The PineForge↔PyneCore comparison drifts ~100× higher (~10⁻⁸ p90). Both are well below trade-execution tolerance, but the divergence is consistent, suggesting a systematic rounding-mode difference rather than randomness.
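The statistic behind both columns is easy to recompute independently. Here is a sketch of a per-bar p90 relative-error calculation over two aligned series; the benchmark harness's exact handling of near-zero bars and warm-up periods may differ.

```python
def p90_relative_error(a, b, eps=1e-15):
    """Nearest-rank p90 of per-bar relative error between two aligned
    series. Bars where both engines emit ~0 count as exact agreement."""
    errs = []
    for x, y in zip(a, b):
        scale = max(abs(x), abs(y))
        errs.append(0.0 if scale < eps else abs(x - y) / scale)
    errs.sort()
    # ceil(0.9 * n)-th smallest value, 0-indexed; integer arithmetic
    # sidesteps float rounding in the index itself.
    idx = max(0, (9 * len(errs) + 9) // 10 - 1)
    return errs[idx]
```

Note how a single bad bar in 41k does not move the p90 at all; that is why a consistent ~10⁻⁸ reading points at systematic arithmetic, not isolated glitches.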

Indicator | PF↔PineTS p90 | PF↔PyneCore p90
ema21 | 1.9e-10 | 1.9e-08
sma21 | 1.9e-10 | 1.9e-08
rsi14 | 9.7e-11 | 9.7e-09
atr14 | 2.8e-10 | 2.8e-08
macd_line | 2.3e-10 | 2.3e-08
macd_signal | 2.4e-10 | 2.4e-08
bb_basis | 0 | 0
bb_upper | 1.9e-10 | 1.9e-08

Don’t trust the table. Reproduce it.

Every number on this page is generated by the public benchmark suite. No hidden config, no API keys, no committed-snapshot tricks. ~3 minutes from a clean clone.

# 1. Clone the open-source engine + benchmark suite
git clone https://github.com/fullpass-4pass/pineforge-engine
cd pineforge-engine

# 2. Pull the LFS-tracked OHLCV (2.3 MB)
git lfs install && git lfs pull

# 3. Run the full three-engine sweep (~3 min)
bash benchmarks/run_all.sh

# 4. Read the results — same table that's on this page
cat benchmarks/results/summary.md