Solana HFT execution guide: infrastructure, feeds, and transaction landing

Written by: Maksym Bohdan
8 min read
Date: April 28, 2026
Updated on: April 28, 2026

Most HFT teams on Solana spend months on strategy and days on infrastructure. Then they hit a contested slot during a memecoin launch and realize those priorities were exactly backwards.

On Solana, the slot window is 400 milliseconds. A competing bot with identical logic but 50ms less latency lands its transaction in the target block. Your bot lands in the next one—or not at all. The difference is not code quality. It's where your server sits, what data feed it connects to, and how your transactions reach the leader.

This guide covers the full execution stack for competitive Solana HFT in 2026: hardware selection and colocation, data ingestion via ShredStream and gRPC, transaction routing through Jito and bloXroute, and parallel submission strategies that reduce p99 latency without introducing failure modes. Dysnix manages this infrastructure for over 100 active trading setups. Here's how it's built.

Why the execution layer decides HFT outcomes on Solana

Strategy quality sets the ceiling. Infrastructure determines how close you get to it.

Solana's architecture creates a specific set of latency challenges that don't exist on other chains. The network distributes blocks as shreds—1KB fragments produced by the slot leader and propagated through Turbine, a tree-based dissemination protocol. Standard p2p propagation adds 100–300ms before your node sees the full picture. In a 400ms slot, that's most of your window spent waiting on data that a colocated competitor already has.

Leader rotation compounds this. The validator responsible for the next block changes with each slot. A transaction submitted to the wrong TPU address gets forwarded—adding latency—or dropped during congestion. Knowing which validator leads the next slot and routing directly to it is the difference between first-slot inclusion and a retry.
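
As a rough illustration of leader-aware routing, the sketch below resolves the upcoming slot leaders and their advertised TPU addresses with @solana/web3.js. The RPC endpoint is a placeholder, and a production submitter would refresh this mapping continuously rather than per transaction.

```typescript
// Sketch: find which validators lead the next few slots and where their TPU
// ports are advertised, so a submitter can target the leader directly.
// The endpoint URL below is a placeholder.
import { Connection } from "@solana/web3.js";

async function upcomingLeaderTpus(connection: Connection, lookahead = 4) {
  const currentSlot = await connection.getSlot("processed");

  // Identity pubkeys of the leaders for the next `lookahead` slots.
  const leaders = await connection.getSlotLeaders(currentSlot, lookahead);

  // Map validator identity -> TPU address as advertised over gossip.
  const nodes = await connection.getClusterNodes();
  const tpuByIdentity = new Map(nodes.map((n) => [n.pubkey, n.tpu]));

  return leaders.map((leader, i) => ({
    slot: currentSlot + i,
    leader: leader.toBase58(),
    tpu: tpuByIdentity.get(leader.toBase58()) ?? null, // null if not visible in gossip
  }));
}

// Usage: const targets = await upcomingLeaderTpus(new Connection("https://your-dedicated-rpc"));
```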

The teams that win consistently on Solana HFT in 2026 share three infrastructure properties: dedicated bare-metal hardware colocated near validators, pre-confirmation data feeds that bypass standard gossip propagation, and multi-path transaction submission that hedges against single-relay failures. Everything in this guide builds toward those three properties.

Hardware foundation: what the node actually needs

Shared cloud infrastructure is not viable for production Solana HFT. Virtual machines share CPU cycles, memory bandwidth, and network capacity with other tenants. Under the sustained I/O load of shred parsing, account state updates, and transaction signing, that contention produces latency variance—and variance in HFT is loss.

Dysnix provisions bare-metal servers specifically configured for Solana's workload profile. The current standard configuration:

| Component | Specification | Why it matters for HFT |
| --- | --- | --- |
| Processor | AMD EPYC Turin (9005 series) or Genoa 9354 | High core count handles parallel shred parsing and gossip processing without thread contention |
| Memory | 512 GB to 1.5 TB DDR5 RAM | Hot account data stays in memory across sessions; eliminates I/O waits on cache misses |
| Storage | Enterprise NVMe (Samsung PM9A3 or equivalent) | Ledger writes at 1 GB/s+ without throttling; separate volumes for ledger and accounts |
| Network | 10 Gbps dedicated port | ShredStream subscriptions require ~32 Mbit/s; headroom prevents saturation under peak load |
| Tenancy | Bare metal, no virtualization | Eliminates noisy-neighbor effects on CPU and network that produce latency spikes |

Colocation: the geographic variable

The physical distance between the trading server and the current slot leader is the latency floor that no software optimization can reduce. Network packets travel at a fixed speed. A server 200ms from the nearest validator cluster loses 200ms on every transaction submission before a single line of code executes.

Dysnix colocates in Frankfurt, London, and New York facilities—OVH, Latitude, Equinix, and TeraSwitch—chosen for their peering density with Solana validator clusters and bloXroute's SWQoS relay network. These locations achieve sub-30ms round-trip latency to the nearest leader under typical epoch conditions. During leader rotations that shift the active validator geographically, multi-region redundancy maintains competitive latency regardless of which node holds the slot.

Post-provisioning tuning

Hardware performance under theoretical benchmarks rarely matches production performance without OS-level tuning. Dysnix engineers apply a standard tuning profile after provisioning:

  • TCP buffer sizing via sysctl: increases receive and send buffer sizes to handle Solana's high-throughput shred streams without packet loss
  • IRQ affinity via irqbalance: pins NIC interrupts to specific CPU cores, preventing interrupt processing from competing with application threads
  • eBPF traffic filtering: prioritizes Solana protocol traffic at the kernel level, reducing processing overhead for irrelevant packets
  • Turbine fanout optimization: adjusts validator peer configuration to improve shred reception from the leader tree
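
For reference, the socket-buffer portion of such a profile typically looks like the sysctl values below, in line with what public Solana validator setup guides recommend. The exact profile Dysnix applies is tuned per host, so treat these numbers as illustrative rather than prescriptive.

```
# Illustrative sysctl values only; production profiles are tuned per host.
# Large socket buffers keep high-rate shred/UDP traffic from being dropped.
net.core.rmem_max = 134217728
net.core.rmem_default = 134217728
net.core.wmem_max = 134217728
net.core.wmem_default = 134217728
# Higher map count for the validator's memory-mapped accounts database.
vm.max_map_count = 1000000
```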

Continuous benchmarking runs every 15 minutes against live network conditions. When leader schedule changes shift the optimal routing configuration, the system adapts automatically rather than requiring manual intervention.
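
A stripped-down version of such a probe, assuming a plain JSON-RPC round-trip measurement rather than the full shred-level benchmark, might look like this (endpoint and sample count are placeholders):

```typescript
// Sketch of a latency probe in the spirit of the benchmarking loop above:
// time a lightweight RPC call repeatedly and report percentile latencies.
import { Connection } from "@solana/web3.js";

async function probe(endpoint: string, samples = 50): Promise<{ p50: number; p99: number }> {
  const connection = new Connection(endpoint, "processed");
  const latenciesMs: number[] = [];

  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await connection.getSlot(); // cheap call; any hot-path RPC method works
    latenciesMs.push(performance.now() - start);
  }

  latenciesMs.sort((a, b) => a - b);
  const pick = (q: number) =>
    latenciesMs[Math.min(latenciesMs.length - 1, Math.floor(q * latenciesMs.length))];
  return { p50: pick(0.5), p99: pick(0.99) };
}
```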

Data feeds: getting block data before the network does

Standard Solana node operation waits for shreds to propagate through Turbine's gossip tree before the local node sees them. For a node two or three hops from the slot leader, this adds 100–300ms of unavoidable wait time. Three feed options reduce or eliminate this delay:

| Feed | Mechanism | Latency advantage | Best suited for |
| --- | --- | --- | --- |
| Jito ShredStream | Direct gRPC stream of shreds from the slot leader, bypassing gossip entirely | 200–500 ms faster than standard propagation | MEV strategies, pre-confirmation signal detection, bundle construction |
| bloXroute OFR / BDN | Global relay mesh delivering shreds via private low-latency paths | 30–50 ms faster than public propagation | Geographic diversity, leader-schedule edge cases, redundancy alongside Jito |
| Yellowstone gRPC | Filtered account and slot state streams via the Geyser plugin | Under 10 ms local latency once connected | Strategy logic that reacts to confirmed account state changes |

Jito ShredStream

ShredStream delivers raw shreds directly from the slot leader via gRPC subscription, before those shreds enter Turbine propagation. Subscribers reconstruct blocks locally from the shred stream, seeing block content hundreds of milliseconds before standard nodes receive the full block through gossip.

For HFT strategies that depend on detecting large incoming transactions before they confirm—arbitrage setups, MEV bundle construction, liquidation monitoring—ShredStream provides the signal window that makes those strategies viable. Dysnix enables ShredStream by default on all dedicated HFT node configurations.

bloXroute OFR and BDN

bloXroute operates a global relay network that distributes shreds through private peered paths rather than public Turbine propagation. The Open Fabric Relay (OFR) and Blockchain Distribution Network (BDN) collectively reduce propagation latency by 30–50ms over standard gossip on most paths.

Internal Dysnix benchmarks from late 2025 show Jito with a latency edge for most Solana setups due to direct leader connectivity. bloXroute's value in production configurations is geographic diversity—its relay mesh catches leader-schedule edge cases where the active validator is geographically distant from Jito's primary infrastructure. The two feeds complement rather than compete.

Yellowstone gRPC

Where ShredStream and bloXroute deliver raw block data, Yellowstone gRPC delivers structured, filtered state updates. Strategy logic that needs to react to specific account changes—a pool reserve crossing a threshold, a collateral ratio dropping below a target—can subscribe to exactly those accounts rather than processing the full shred stream.

Yellowstone reduces compute overhead for strategies that don't require raw block data, and its filter configuration prevents the feed from becoming a resource bottleneck during high-activity periods. Many production setups use Yellowstone for strategy signal processing and ShredStream for transaction timing.
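
A minimal subscription sketch using the open-source @triton-one/yellowstone-grpc TypeScript client is shown below. The endpoint, access token, and pool address are placeholders, and the subscribe-request fields can shift slightly between Geyser plugin releases, so verify against the version you run.

```typescript
// Sketch: stream state changes for specific accounts via Yellowstone gRPC.
// Endpoint, token, and pool address are placeholders.
import Client, { CommitmentLevel } from "@triton-one/yellowstone-grpc";

async function watchPool(endpoint: string, xToken: string | undefined, poolAddress: string) {
  const client = new Client(endpoint, xToken, undefined);
  const stream = await client.subscribe();

  stream.on("data", (update) => {
    if (update.account) {
      // React to the filtered account update, e.g. recompute a pool price.
      console.log("account update at slot", update.account.slot);
    }
  });

  // Only the named accounts are streamed, keeping bandwidth and CPU low.
  stream.write({
    accounts: { pool: { account: [poolAddress], owner: [], filters: [] } },
    slots: {},
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
    accountsDataSlice: [],
    commitment: CommitmentLevel.PROCESSED,
  });
}
```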

Transaction submission: landing in the target slot

Data arrives. Strategy fires. The transaction is constructed and signed. Now it needs to reach the slot leader before the 400ms window closes and the next validator takes over.

Standard sendTransaction routes through the RPC node's internal forwarding logic to the leader's TPU port. Under light network conditions this works. Under congestion—during a token launch, a liquidation cascade, a large coordinated market move—the public TPU port saturates and transactions queue or drop. Two submission paths avoid this:

Jito Block Engine

Jito's Block Engine accepts transaction bundles—groups of transactions that execute atomically in sequence—and routes them directly to validators running Jito-enhanced clients. The bundle tip, denominated in SOL, determines priority in the block engine's inclusion auction. Higher tips displace lower tips for the same block position, making tip calibration a continuous competitive process rather than a one-time configuration.

Jito bundles provide two properties that standard submission cannot: guaranteed execution order within the bundle (critical for multi-step strategies like arbitrage), and MEV protection by removing the transaction from the public mempool where sandwich attacks originate. Dysnix engineers configure dynamic tip calibration during onboarding, targeting the 75th–90th percentile of recent acceptance prices for the target program.
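
For orientation, a bundle submission over the Block Engine's JSON-RPC interface looks roughly like the sketch below. The endpoint is the commonly documented public one (regional variants exist; check Jito's current docs), and the transactions are assumed to be already signed, with a tip transfer to a Jito tip account included in the bundle.

```typescript
// Sketch: submit a signed bundle to the Jito Block Engine via JSON-RPC.
// Endpoint shown is the commonly documented public one; verify before use.
import bs58 from "bs58";
import { VersionedTransaction } from "@solana/web3.js";

async function sendBundle(signedTxs: VersionedTransaction[]): Promise<string> {
  const body = {
    jsonrpc: "2.0",
    id: 1,
    method: "sendBundle",
    params: [signedTxs.map((tx) => bs58.encode(tx.serialize()))],
  };

  const res = await fetch("https://mainnet.block-engine.jito.wtf/api/v1/bundles", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  const json = await res.json();
  if (json.error) throw new Error(`bundle rejected: ${JSON.stringify(json.error)}`);
  return json.result; // bundle id, used to poll acceptance status
}
```

Tip calibration itself is a separate loop: sample recently accepted tips for the target program and price new bundles toward the 75th–90th percentile of that distribution, as described above.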

bloXroute Trading API

bloXroute's Trading API propagates transactions through its private SWQoS relay network rather than public TPU ports. In Dysnix's production benchmarks, 83% of transactions submitted via the Trading API land in the first block versus 40–60% on public endpoints during congested periods. The private propagation path bypasses the public bandwidth pool that saturates during high-traffic events.

bloXroute also provides built-in MEV protection options that shield transactions from mempool-level sandwich attacks—useful for strategies involving large single transactions rather than Jito bundles. The Dysnix infrastructure stack includes both Jito and bloXroute submission paths, with routing logic that selects between them based on strategy type and current network conditions.

Parallel submission: turning network variance into reliability

No single submission path owns the fast lane on every slot. Leader rotation changes which validators are geographically closest to different relay endpoints. Network conditions shift. A relay that performs optimally during one session may be slower the next.

Parallel submission addresses this by broadcasting the same signed transaction to multiple endpoints simultaneously. The first path to reach the leader wins inclusion. Submissions on the other paths either fail silently (the transaction already landed) or are deduplicated by the network. The net effect: p99 latency drops because the worst-case single-path outcome is replaced by the best-case outcome across all paths.

Dysnix implements parallel submission natively across its infrastructure stack—dedicated node RPC, bloXroute Trading API, and Jito Block Engine fire in parallel for each qualifying transaction. Clients see p99 latency reductions of 30–50% compared to single-path submission in production conditions.

Implementation checklist

For teams implementing parallel submission independently:

  • Select 3–5 endpoints: include at least one leader-aware relay (Jito or bloXroute), your dedicated RPC, and one geographically diverse backup endpoint
  • Fan out the signed transaction: parallel POST to all endpoints with no sequencing dependency; Rust's Tokio or JavaScript async/await handle this cleanly (see the sketch after this list)
  • Track confirmation state: poll getSignatureStatuses after sendTransaction acceptance; stop retrying after first confirmed status
  • Collect latency telemetry: measure p50, p95, and p99 per endpoint under both quiet and congested conditions—the distribution matters more than the average
  • Use durable nonces for retry windows: a recent blockhash expires after 150 blocks (roughly 60–90 seconds); durable nonces allow retries over multi-minute windows for strategies that require it
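
A condensed sketch of that fan-out pattern, using plain @solana/web3.js connections as stand-ins for the real submission paths (Jito and bloXroute would sit behind their own clients), could look like this:

```typescript
// Sketch: broadcast one signed transaction to several endpoints at once,
// then poll signature status until it confirms. Endpoint URLs are placeholders.
import { Connection } from "@solana/web3.js";

const ENDPOINTS = [
  "https://your-dedicated-rpc",
  "https://backup-rpc-other-region",
  "https://relay-endpoint",
];

async function fanOutAndConfirm(rawTx: Uint8Array): Promise<string> {
  const connections = ENDPOINTS.map((url) => new Connection(url, "processed"));

  // Fire all submissions in parallel; duplicate copies are deduplicated by
  // the network, so per-path failures are tolerable.
  const results = await Promise.allSettled(
    connections.map((c) =>
      c.sendRawTransaction(rawTx, { skipPreflight: true, maxRetries: 0 })
    )
  );
  const first = results.find((r) => r.status === "fulfilled");
  if (!first || first.status !== "fulfilled") {
    throw new Error("all paths rejected the transaction");
  }

  // Poll confirmation on the dedicated node; stop at the first confirmed status.
  for (let i = 0; i < 30; i++) {
    const { value } = await connections[0].getSignatureStatuses([first.value]);
    const status = value[0];
    if (status?.confirmationStatus === "confirmed" || status?.confirmationStatus === "finalized") {
      return first.value;
    }
    await new Promise((r) => setTimeout(r, 200));
  }
  throw new Error("transaction not confirmed within the polling window");
}
```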

Trade-offs to manage

Parallel submission increases submission volume and, consequently, costs on metered endpoints. Rate limits apply per path regardless of whether a transaction lands, so submitting to five endpoints uses five times the request budget. Dysnix manages this through per-path quota monitoring with dashboard visibility before billing thresholds are reached.

Relay diversity also introduces MEV exposure variation—different relays have different relationships with block builders, and some may reorder transactions for MEV extraction. Dysnix rotates relay configurations quarterly and simulates MEV exposure in client stack reviews to maintain diversity without introducing new attack surfaces.

Recommended deployment sequence

For teams building from scratch or migrating from a shared endpoint setup, this is the sequence Dysnix follows for new HFT client deployments:

  1. Provision bare-metal servers in leader-adjacent data centers. Hardware selection, OS installation, and initial network configuration handled end-to-end by Dysnix.
  2. Apply tuning profile: TCP buffer sizing, IRQ affinity, eBPF traffic filters, Turbine fanout configuration. Validated against benchmark targets before go-live.
  3. Enable Jito ShredStream subscription. Configured by default; delivers pre-confirmation block data from slot open.
  4. Activate Yellowstone gRPC with filtered account subscriptions specific to the client's strategy target programs.
  5. Configure bloXroute OFR integration for shred redundancy and parallel submission path.
  6. Wire Jito Block Engine and bloXroute Trading API into the parallel submission layer. Dynamic tip calibration configured based on target program fee history.
  7. Establish monitoring: Grafana dashboards for slot lag, landing rate, bundle acceptance rate, and p99 latency. Alert thresholds set for sync drift over 5 seconds and landing rate below 80%.
  8. Continuous benchmarking loop activated. Dysnix probes run every 15 minutes; configuration adapts automatically as leader schedule and network conditions evolve.

Cost and trade-off reality

Competitive Solana HFT infrastructure is not cheap. A fully configured bare-metal setup with colocation, premium data feeds, and managed DevOps runs $5,000–$20,000 per month depending on hardware tier and geographic footprint. The relevant comparison is not the absolute cost—it's the cost relative to the opportunities the infrastructure makes accessible.

In Solana's DEX ecosystem, which processed over $1.6 trillion in spot volume in 2025, the difference between 60% transaction landing rate and 90% landing rate across a high-frequency strategy is not incremental. At meaningful volume, it determines whether the strategy is profitable at all.

Dysnix structures infrastructure costs transparently: fixed monthly pricing per node tier, with clear performance SLAs tied to each tier. New clients receive a benchmarked ROI estimate based on strategy type and expected transaction volume before committing to infrastructure spend.

Ready to build competitive Solana HFT infrastructure?

Dysnix provisions and manages the full HFT stack—bare-metal colocation, ShredStream, Yellowstone gRPC, Jito and bloXroute integration, parallel submission, and 24/7 DevOps. Over 100 active trading configurations on Solana. The team handles infrastructure; you focus on strategy.

Request an infrastructure audit →
dysnix.com
Maksym Bohdan
Writer at Dysnix
Author, Web3 enthusiast, and innovator in new technologies