On-chain solvers are programs that continuously listen to activity on blockchain networks and react to profitable events like MEV opportunities, liquidations, or auctions. Their edge comes down to speed—the ability to interpret, decide, and act faster than anyone else.

Achieving that speed requires minimizing friction between the solver and the chain. Native, bare-metal hardware lets solvers operate with minimal performance penalty: no virtualization layers, no noisy neighbors, no throttling. You're directly connected to compute, like a private car instead of a crowded bus.

In contrast, solvers running in the cloud face layers of abstraction and shared resource contention. Traffic from other users, virtualization overhead, and unpredictable latency can all introduce delays that cost you the race.

Traditional high-frequency trading (HFT) firms already know this. They've spent years embedding their logic directly into networking hardware, optimizing every millisecond to beat the market. That same playbook—ultra-low-latency, hardware-accelerated execution—is now making its way into Web3, where the next generation of solvers is looking to gain an edge.

In this context, even your network card becomes a battlefield. The closer your code is to the flow of information, the faster you can act. And in a zero-sum game where only one solver wins a profitable transaction, hardware is often the deciding factor.

Today, most solvers operate within a latency window of just a few seconds. But the trajectory is clear: as competition intensifies and networks evolve, that window is shrinking fast, and sub-second latency is inevitable.

The Solver Stack: Software, Networks, and Hardware

Behind every high-performing on-chain solver is a layered system of tightly integrated components. Each one plays a specific role in reacting to on-chain events with maximum speed and precision. Think of it as a stack built for a single goal: winning the race to opportunity.

A typical solver setup includes:

  • Decision Engine – The core logic layer. It ingests on-chain data and makes split-second decisions based on pre-programmed strategies.

  • Indexer – Continuously monitors blockchain state and translates raw data into actionable insights.

  • MPC Signer with Rule Engine – Responsible for transaction signing. Private keys are stored within secure hardware, and signing rules are enforced directly on the signer machines. Even if the decision engine is compromised, the system will not authorize transactions to non-whitelisted addresses.

  • RPC Server – Serves as the communication link between the solver and the blockchain, enabling both read and write access.

Because solvers operate autonomously and often manage large amounts of capital, the entire stack must run in real time without human intervention. While the mempool may sit outside the solver's infrastructure, every other component depends on hardware. This is where gains are made, or lost.

Running on dedicated machines eliminates the unpredictability of shared infrastructure. Rather than competing for resources in a virtualized environment, solvers get dedicated capacity with consistent performance.

The hardware doesn't just support the software—it enables it. And when infrastructure is purpose-built for its workload, delays from resource contention, virtualization layers, and network congestion disappear.

But the real performance unlock comes from deep integration between hardware and software. When those layers are designed together, bottlenecks vanish. Execution becomes consistent and optimized. It's the same reason Apple achieves such seamless performance: both the chip and the operating system are tuned to work together.

Top-performing teams often take this further by offering white-glove infrastructure services. These solver-ready systems are not general-purpose machines. They're built, tested, and optimized specifically to run on-chain strategies with maximum efficiency and reliability—right down to how transactions are signed and transmitted.

Inside the Hardware: What Actually Matters

Not all hardware is created equal. In the world of on-chain solvers, the right configuration can mean the difference between capturing or missing a high-value opportunity.

When building infrastructure for solver performance, four components stand out as absolutely essential:

  • CPU – The backbone of fast, low-latency execution. Solvers rely on high-speed CPUs to process decision logic in real time, where every microsecond counts.

  • RAM – Ensures smooth, uninterrupted access to memory, allowing systems to handle large volumes of data quickly and efficiently.

  • SSD – Provides fast disk access for caching, indexing, and managing logs, which are crucial for staying synced with real-time blockchain state.

  • Networking Cards – Often overlooked, but vital. Low-latency, high-throughput NICs (network interface cards) manage rapid packet transmission and reception. This becomes especially important in co-located environments or latency-sensitive setups.
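A quick way to sanity-check a box against a target spec is to inventory it with the standard library. The thresholds below are placeholders, not recommendations; real solver specs depend on the chain and strategy.

```python
import os
import shutil

# Placeholder minimums; real requirements vary by chain and workload.
MIN_CORES = 16
MIN_FREE_DISK_GB = 500

def check_box(min_cores=MIN_CORES, min_disk_gb=MIN_FREE_DISK_GB):
    """Report core count and free disk, and whether each meets the target."""
    cores = os.cpu_count() or 0
    free_gb = shutil.disk_usage("/").free / 1e9
    return {
        "cores": cores,
        "free_gb": round(free_gb, 1),
        "cores_ok": cores >= min_cores,
        "disk_ok": free_gb >= min_disk_gb,
    }

print(check_box())
```

NIC details (model, driver, offload settings) are OS-specific and sit outside a portable sketch like this; on Linux they are typically inspected with tools such as `ethtool`.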

These components don’t work in isolation. Just like muscles in a high-performance athlete, they need to function together. A top-tier CPU paired with underpowered RAM or a generic NIC will underperform. Real efficiency comes from balance—tuning each part of the system to support the rest.

One piece of hardware that tends to matter less in solver infrastructure is the GPU. While GPUs are key for AI workloads and zero-knowledge proof generation, most solver logic is sequential and latency-bound rather than massively parallel. It depends on compute speed, memory, and network responsiveness instead.

There’s also no one-size-fits-all answer to the question of single-core performance vs. multi-threading. Some solvers are optimized for fast, single-threaded execution. Others benefit from running logic across multiple threads. The best approach depends entirely on the specific implementation, which is why close coordination between engineering and infrastructure teams is essential.
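Whether threads pay off depends on where the time goes. The toy comparison below simulates I/O-bound work (like waiting on RPC responses), where a thread pool helps substantially; pure CPU-bound Python logic would see little benefit from threads because of the GIL, which is one reason the answer is implementation-specific.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    """Stand-in for an I/O-bound call, e.g. waiting on an RPC response."""
    time.sleep(0.05)
    return i * 2

tasks = range(8)

# Single-threaded: total time is roughly 8 x 50 ms.
t0 = time.perf_counter()
sequential = [fetch(i) for i in tasks]
seq_time = time.perf_counter() - t0

# Thread pool: the eight waits overlap, finishing in roughly one interval.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    threaded = list(pool.map(fetch, tasks))
thr_time = time.perf_counter() - t0

assert sequential == threaded  # same results, different wall-clock time
print(f"sequential: {seq_time:.2f}s, threaded: {thr_time:.2f}s")
```

For CPU-bound decision logic, the equivalent win usually comes from processes, pinned cores, or a compiled language rather than threads.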

This is also where custom builds outperform cloud solutions.

Cloud machines from providers like AWS or GCP are built for flexibility. They’re designed to serve a wide variety of use cases, not the precise demands of on-chain solvers. Even the most powerful cloud instances can introduce overhead and unpredictability through virtualization, shared tenancy, and general-purpose design.

Custom-built machines, by contrast, are designed for performance from the start. Configurations are tailored to reduce bottlenecks—whether that means tuning I/O throughput, streamlining operating systems, or balancing CPU and memory based on workload needs. Every detail, down to firmware and airflow, can be adjusted to serve one goal: reacting to on-chain events faster than anyone else.

Each solver workload calls for its own setup. That’s why serious teams don’t rent general-purpose infrastructure—they build their own.

How RPC Performance Amplifies or Bottlenecks Solvers

No matter how fast or well-optimized your solver is, it’s effectively blind without one thing: an RPC node.

RPC (Remote Procedure Call) endpoints are the bridge between solvers and the blockchain. They carry the continuous flow of information in both directions, fetching real-time on-chain data and broadcasting transactions back to the network. Without a reliable, low-latency RPC connection, solvers can’t “see” or “act” in time.

Latency at this layer is critical. The faster a solver receives data, the more time it has to analyze, decide, and act. It’s similar to bidding on an item during the final seconds of an eBay auction—if your notification comes even slightly later than someone else’s, the opportunity is already gone. A faster RPC not only lets you react quicker, it also gives you more time to think through your move before committing it.
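Measuring this is straightforward: time repeated calls and look at the median and tail, since any single sample is noisy. The probe below is a stub; in practice you would wrap a real request (for example, a lightweight call such as `eth_blockNumber` against your own endpoint).

```python
import time
import statistics

def measure_latency(call, samples=50):
    """Time repeated invocations of `call`; return median and p99 in ms."""
    timings = []
    for _ in range(samples):
        t0 = time.perf_counter()
        call()
        timings.append((time.perf_counter() - t0) * 1000.0)
    timings.sort()
    return {
        "median_ms": statistics.median(timings),
        "p99_ms": timings[max(int(len(timings) * 0.99) - 1, 0)],
    }

def fake_rpc_call():
    """Stub standing in for a real RPC round trip."""
    time.sleep(0.002)  # ~2 ms simulated round trip

print(measure_latency(fake_rpc_call))
```

Comparing the median to the p99 is what exposes the queuing behavior described above: a shared endpoint under load often keeps a decent median while the tail blows out.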

Not all RPCs are created equal, and understanding the difference between public and private options is essential.

  • Public RPCs are open, free to use, and sufficient for many everyday applications. But because they’re shared, they come with significant downsides. External traffic spikes can impact performance, leading to unpredictable delays and queuing—especially during high-demand events.

  • Private RPCs, on the other hand, are purpose-built for performance. They’re isolated from public congestion and can be tuned for specific workloads. Think of them like a tailor-made suit: designed for your shape, optimized for your movements. In the RPC context, this means better consistency, lower latency, and higher reliability.

One overlooked advantage of private RPCs is how they respond under pressure. During major network events or gas spikes, public RPC users can get stuck in traffic. Private users, however, might still experience congestion, but not to the same degree—like being in a taxi during a parade instead of a city bus. You may still slow down, but you’re not crammed in with a hundred other riders.

That’s why many solver teams are turning to custom or self-hosted RPC configurations. These allow for complete control—everything from throughput tuning to node placement. Solvers can position nodes closer to validators, strip away unnecessary features, and monitor performance at a granular level. All of this adds up to a tighter feedback loop between seeing a profitable opportunity and acting on it.

Geographic proximity plays a role as well. Solvers running in the same data center as their RPC node—or even close to validator clusters—can shave off precious milliseconds from their communication time. In some cases, proximity is so tight that it blurs the line between solver and node entirely.

And in latency-sensitive environments, those milliseconds aren’t just optimizations. They’re the competitive edge.

Location, Location, Location: Physical Proximity and Co-Location

In the race for on-chain execution, geography matters more than most people think.

Just like traditional high-frequency trading firms that spend millions to place their servers near stock exchanges, crypto solvers gain a measurable advantage by being physically closer to key infrastructure—especially validator nodes and RPC endpoints. The closer a solver is to the source of truth, the sooner it receives data, processes it, and reacts.

While shaving off a few milliseconds might sound trivial, in this environment, milliseconds are everything. Reduced roundtrip time means faster awareness of on-chain events and quicker response windows—advantages that compound when competing for high-value actions like MEV capture or liquidation opportunities.

This dynamic isn’t hypothetical. In centralized crypto markets, we’ve already seen it in practice. Binance, for instance, operates core infrastructure out of Tokyo. In response, many arbitrage players and trading bots moved their servers to Tokyo data centers to minimize latency when interacting with Binance APIs. Simply being geographically closer increased their odds of executing profitable trades ahead of others.
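The physics sets a hard floor here. Light in optical fiber travels at roughly two-thirds of c, so distance alone imposes a minimum round-trip time no matter how well-tuned the stack is. The distances below are rough great-circle figures, not actual routed paths, so real latencies are strictly higher.

```python
# Approximate minimum round-trip time imposed by fiber-optic physics.
C_KM_PER_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67    # light in fiber travels at roughly 2/3 c

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time over fiber for a given distance."""
    one_way_s = distance_km / (C_KM_PER_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000.0

# Rough great-circle distances (illustrative only)
for route, km in [("same data center", 0.1),
                  ("Tokyo -> Singapore", 5_300),
                  ("Tokyo -> Frankfurt", 9_400)]:
    print(f"{route}: >= {min_rtt_ms(km):.1f} ms RTT")
```

An ocean away from Tokyo, tens of milliseconds of round trip are unavoidable before any software runs, which is precisely why co-location pays.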

Validator co-location in decentralized finance is still emerging, but the same principles apply. As competition intensifies, proximity-based optimization will become more common—and more strategic.

Interestingly, physical proximity isn’t the only way to achieve these gains. Some teams are exploring how to replicate the benefits of co-location through smarter infrastructure design. By minimizing hops, optimizing packet paths, or using programmable RPCs, it’s possible to reduce latency significantly—even if you’re not in the same data center as the validator.

This reflects a broader shift in mindset: it’s not just about where your solver runs, but how efficiently it connects.

The parallels to traditional finance remain striking. Whether in Wall Street trading or DeFi infrastructure, those who optimize the physical edge often come out ahead.

Optimization in Practice: How Our Team Approaches Hardware

While much of our infrastructure is intentionally kept under wraps, what we can say is this: we’ve spent years fine-tuning every layer of our stack to gain a consistent edge in on-chain performance.

This didn’t happen overnight. It’s the product of deep collaboration, relentless iteration, and a team that blends backgrounds in traditional tech, AI, trading, and Web3. That mix of experience allows us to draw from best practices across industries—giving us a unique perspective on how to design, build, and operate high-performance solver infrastructure.

Over the years, we’ve tested countless configurations. Some worked well. Others didn’t. But that process of testing, breaking, tuning, and rebuilding is the point. It’s what keeps us ahead. Our infrastructure today is the result of that ongoing feedback loop, shaped not just by engineering talent but by a culture committed to staying at the edge of what’s possible.

We also don’t think about performance and cost as opposing forces. With bare metal, you can have both. When custom hardware is tailored to the exact workload, waste disappears. You get better throughput, higher stability, and more efficiency—not in spite of cost, but because the system is designed with purpose.

One of the biggest lessons we’ve learned came early on: Solana is tough on hardware. It can overwhelm even expensive systems if the configuration isn’t right. Some setups that looked promising on paper simply couldn’t keep up under real-world network conditions. That’s why tuning for the realities of high-throughput chains—not just theoretical specs—has become central to our approach.

Those hard-earned lessons helped shape the infrastructure we run today. We’ve gone through every major phase in Solana’s evolution and came out the other side with a resilient, high-performance stack.

Here’s a fun proof point: you can run 100,000 Ethereum validators on a single six-core bare-metal machine—if it’s architected properly. That’s the kind of precision we aim for. Not just raw power, but performance that’s engineered to fit.

The Future of Hardware for On-Chain Solvers

The infrastructure race is far from over. If anything, it’s just getting started.

As solver strategies grow more sophisticated and on-chain competition intensifies, the demands on hardware, and the rewards from a superior stack, will only rise. Staying ahead means paying close attention to the technologies shaping the next generation of performance.

One promising innovation is the rise of FPGAs (Field Programmable Gate Arrays). Unlike traditional CPUs, FPGAs can be reconfigured at the hardware level, making them highly adaptable and incredibly efficient for specialized workloads. They offer higher performance per watt and smarter communication between components—improving both bandwidth and latency inside the system itself. For solver teams looking to stay ahead of the curve, FPGAs could mark the next frontier in custom infrastructure.

There’s also increasing discussion around modular blockchains and intent-based systems, though the reality is more nuanced. While modular architectures aim to simplify the development stack, they often introduce new layers of complexity and haven’t yet demonstrated significant infrastructure savings for latency-sensitive use cases. Intent-based systems may open the door to new solver models, but it’s still too early to know how they’ll reshape the performance landscape.

What’s clear is that hardware isn’t going anywhere. Whether the advantage comes from speed, configurability, or proximity to critical infrastructure, solvers will continue to depend on well-architected machines to maintain their edge.

If you're running a solver or designing infrastructure to match your on-chain strategy, we can help.

Get in touch with our team

Sign up for our newsletter
