
Choosing Between REST APIs and Batch Processing

Isaac Lacort Magán

Mar 21, 2026

Why this architectural choice matters

In application design, code quality is not the only obstacle to building performant, maintainable systems. In my experience, the greater challenge is defining a system’s architecture and understanding how it integrates with its context.

This article reflects on when REST APIs are a poor fit for scheduled heavy workloads.

The real problem with API-by-default thinking

Not every architecture works well for every use case. APIs are often appropriate; the problem is using them by default, without evaluating workload type, isolation needs, traffic patterns, and infrastructure cost.

When batch processing is a better fit

For some workloads, exposing functionality through an API is less suitable than executing it as a scheduled or on-demand isolated process.

Batch processes are well suited to the execution of large scheduled jobs in their own isolated execution environment. This reduces the number of parallel processes that can be affected and helps contain failures. It also reduces coupling in the system and removes the need to keep an interface permanently available.
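To make the isolation concrete, here is a minimal sketch of a standalone batch job (all names are hypothetical, not from the article): it runs on its own schedule, in its own process, reads its input in chunks, and calls no shared API, so a failure stays contained to the job itself.

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: int
    amount: float

def load_pending(batch_size: int) -> list[Record]:
    # Stand-in for a chunked read from the job's own data store.
    return [Record(id=i, amount=float(i)) for i in range(batch_size)]

def process(record: Record) -> float:
    # Stand-in for the heavy per-record work (e.g. applying a tax rate).
    return record.amount * 1.21

def run_job(batch_size: int = 100) -> int:
    """Process one scheduled chunk; returns how many records were handled."""
    processed = 0
    for record in load_pending(batch_size):
        process(record)
        processed += 1
    return processed

if __name__ == "__main__":
    # Typically triggered by a scheduler (cron, Kubernetes CronJob, etc.).
    run_job()
```

Because the job owns its whole execution environment, sizing, retries, and failure handling can be tuned for the batch alone, without touching any always-on interface.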

Batch processing is useful, but its advantages can disappear when it depends on shared runtime services.

When the problem starts: batch depending on shared APIs

Reusing a shared API from a batch service can be risky.

First, you are overloading a service that is shared with other applications. That shared API must handle both its normal request load and the additional load from the batch, which, for large jobs, can be massive.

This has an obvious consequence: if you do not want the system to crash, you will have to size the infrastructure for a peak that happens only rarely.
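If a batch genuinely must call a shared API, one mitigation is client-side throttling, so the batch cannot flood the service at full speed. The sketch below shows the idea under assumed names (`Throttle`, `call_shared_api` are illustrative, not a real library API): each call waits long enough to stay under a configured rate.

```python
import time

class Throttle:
    """Caps calls to a shared API at max_per_second, client-side."""

    def __init__(self, max_per_second: float):
        self.min_interval = 1.0 / max_per_second
        self.last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to respect the configured rate.
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

def call_shared_api(item: int) -> int:
    # Stand-in for the real HTTP request to the shared service.
    return item * 2

def run_batch(items, throttle: Throttle) -> list[int]:
    results = []
    for item in items:
        throttle.wait()  # spreads the batch load over time
        results.append(call_shared_api(item))
    return results
```

Throttling trades batch duration for stability: the shared API sees a bounded request rate instead of a spike, at the cost of a longer-running job.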

Because it is a shared API, other parts of the system depend on it, so any failure or degradation can spread to dependent services, affecting other applications in the system.

In cases where resource isolation is poor, the impact can even reach unrelated processes on the same host.

You should also consider shared resources such as databases, caches, file systems, message brokers, or authentication services. Even if the API itself does not fail, any of the shared resources it depends on could become a bottleneck or fail under the extra load.

Possible solutions

One possible solution is vertical scaling of the REST service, sizing the infrastructure for a peak that happens only rarely, which means paying for capacity that sits idle most of the time.

Horizontal scaling is also a strong option, but it requires more supporting infrastructure for routing, service discovery, load balancing, health checks, and related concerns, and many organizations do not yet have that infrastructure in place.

Another option is to use reactive systems and/or event-driven systems to manage backpressure. Reactive systems can mitigate some operational pressure, but they also introduce architectural complexity and may require system-wide changes that are not realistic in legacy environments. The same applies to event-driven systems based on tools such as Kafka or RabbitMQ.
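The core mechanism behind backpressure can be sketched without any broker: a bounded buffer between producer and consumer forces the fast side to wait for the slow side. This toy example uses Python's standard library only (the names are illustrative); Kafka or RabbitMQ provide the same effect at system scale, with durability and fan-out on top.

```python
import queue
import threading

def producer(q: queue.Queue, items) -> None:
    for item in items:
        q.put(item)   # blocks when the queue is full: this IS backpressure
    q.put(None)       # sentinel: signals that no more work is coming

def consumer(q: queue.Queue, results: list) -> None:
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * item)  # stand-in for real processing

def run(items) -> list:
    q = queue.Queue(maxsize=8)  # the bound is what creates backpressure
    results: list[int] = []
    worker = threading.Thread(target=consumer, args=(q, results))
    worker.start()
    producer(q, items)
    worker.join()
    return results
```

The key design point is the `maxsize` bound: with an unbounded queue the producer never slows down and the spike just moves into memory, which is the same overload problem in a different place.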

Conclusion

For me, architecture must fit the environment, constraints, and resources.

Your current architecture should not blindly dictate all future solutions, but new solutions must still be evaluated against existing constraints, costs, and integration realities.

Good architecture should make room for adaptability and alternative integration patterns, so that not every workload is forced through a shared API model.

Software Architecture · REST API · Batch Processing · Scalability · System Design