Every web application reaches a point where some operations take too long to complete within a single HTTP request. Sending emails, generating PDF reports, processing file uploads, syncing data with third-party APIs, running calculations on large datasets: these tasks can take seconds or minutes, far longer than a user should wait for a page to load. Background job processing moves these operations out of the request lifecycle and into a queue where they execute asynchronously, keeping the application responsive while heavy work happens behind the scenes.
Laravel’s queue system is one of the framework’s most powerful features for agencies building production applications. It provides a unified API for dispatching jobs, multiple queue backend options, built-in retry and failure handling, and the primitives needed for sophisticated job orchestration. This guide covers how to implement queue-based processing effectively in agency-delivered projects.
Understanding the Queue Architecture
Laravel’s queue system consists of three components: jobs (the units of work), connections (the queue backends that store jobs), and workers (the processes that execute jobs). When the application dispatches a job, it serializes the job’s data and pushes it onto a queue backend. A separate worker process continuously monitors the queue, pulls jobs off, deserializes them, and executes their logic. If a job fails, the worker catches the exception and handles it according to the job’s retry policy.
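The dispatch side can be sketched as a minimal job class. The names here (GenerateReport, the $reportId payload) are hypothetical, but the structure follows Laravel's generated job stubs: implementing ShouldQueue is what tells dispatch() to serialize the job onto the queue instead of running it inline.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateReport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Constructor arguments become the serialized payload pushed
    // onto the queue backend — keep them small.
    public function __construct(public int $reportId) {}

    // Executed later by a worker process, not during the HTTP request.
    public function handle(): void
    {
        // ... heavy report generation here ...
    }
}

// In a controller, the request returns as soon as the payload is pushed:
// GenerateReport::dispatch($reportId);
```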
This separation between dispatching and executing is what makes queues powerful. The web application remains fast because it only needs to serialize and push a small payload. The worker process handles the heavy lifting in a separate process, on a separate timeline, and potentially on a separate server entirely.
Choosing a Queue Backend
Laravel supports multiple queue backends through a driver-based architecture. The choice of backend affects reliability, performance, and operational complexity.
Redis
Redis is the recommended queue backend for most production applications. It provides fast in-memory storage with optional persistence, supports atomic operations that prevent job duplication, and integrates seamlessly with Laravel Horizon for monitoring and management. Redis queues handle high throughput with low latency, making them suitable for applications that process thousands of jobs per minute.
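Switching to Redis is a configuration change rather than a code change. A sketch of the relevant connection entry in config/queue.php (the values shown are illustrative defaults, not recommendations):

```php
// config/queue.php — select with QUEUE_CONNECTION=redis in .env
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => env('REDIS_QUEUE', 'default'),
    // Seconds before a reserved-but-unfinished job is assumed lost
    // and released for another worker; must exceed the longest job.
    'retry_after' => 90,
    'block_for' => null,
],
```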
Amazon SQS
SQS is a fully managed queue service that eliminates the operational overhead of running queue infrastructure. It scales automatically, provides built-in dead-letter queues, and guarantees at-least-once delivery. SQS is the appropriate choice for applications deployed on AWS where the team prefers managed services over self-managed Redis. The tradeoff is higher per-message latency compared to Redis and slightly less flexibility in job prioritization.
Database
Laravel can use the application’s database as a queue backend, which requires no additional infrastructure. This is acceptable for low-volume applications and development environments but should not be used for production applications with significant queue throughput. Database queues add load to the primary database and lack the performance characteristics needed for high-volume job processing.
Designing Effective Jobs
Well-designed jobs are idempotent, focused, and failure-aware. Idempotency means that executing the same job multiple times produces the same result, which is critical because queue systems may deliver a job more than once under certain failure conditions. A job that sends a welcome email should check whether the email has already been sent before sending it again. A job that processes a payment should verify the payment has not already been recorded.
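The welcome-email check can be sketched as a guard at the top of handle(). The User model and its welcome_email_sent_at column are assumptions for illustration:

```php
// Inside the job class — re-running this job is a no-op once the
// send has been recorded, so at-least-once delivery is safe.
public function handle(): void
{
    $user = User::find($this->userId);

    if ($user === null || $user->welcome_email_sent_at !== null) {
        return; // user gone, or email already sent — nothing to do
    }

    Mail::to($user)->send(new WelcomeEmail($user));
    $user->update(['welcome_email_sent_at' => now()]);
}
```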
Each job should perform a single, well-defined task. A job called ProcessNewOrder that sends a confirmation email, updates inventory, notifies the warehouse, and generates an invoice is doing too much. If the warehouse notification fails, the entire job fails, and the email, inventory update, and invoice all need to be retried. Break this into separate jobs: SendOrderConfirmation, UpdateInventory, NotifyWarehouse, and GenerateInvoice. Each can be retried independently, and a failure in one does not block the others.
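The decomposed version might dispatch from wherever the order is created (the four job classes mirror the names above and are hypothetical):

```php
// Each job carries its own retry policy; a warehouse outage no
// longer forces the email or invoice to be re-run.
SendOrderConfirmation::dispatch($order);
UpdateInventory::dispatch($order);
NotifyWarehouse::dispatch($order);
GenerateInvoice::dispatch($order);
```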
Job Chaining, Batching, and Orchestration
Laravel provides several mechanisms for orchestrating complex job workflows that go beyond simple dispatch-and-execute patterns.
Job Chains
Job chains execute a sequence of jobs in order, where each job runs only after the previous one completes successfully. If any job in the chain fails, the remaining jobs are not executed. This is useful for workflows with sequential dependencies: parse a CSV file, validate the data, import the records, then send a completion notification. Each step depends on the previous step’s success.
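The CSV workflow described above maps directly onto Bus::chain (the job classes are hypothetical names for each step):

```php
use Illuminate\Support\Facades\Bus;

// Each job runs only after the previous one completes successfully;
// a failure anywhere skips the remaining jobs.
Bus::chain([
    new ParseCsvFile($path),
    new ValidateImportData($path),
    new ImportRecords($path),
    new SendCompletionNotification($userId),
])->catch(function (Throwable $e) {
    // Invoked once if any job in the chain fails.
})->dispatch();
```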
Job Batches
Job batches group a set of jobs that can execute in parallel, with callbacks that fire when all jobs in the batch complete, when any job fails, or when the batch is finished regardless of individual outcomes. Batches are ideal for processing large datasets in chunks: import 10,000 records by dispatching 100 jobs that each process 100 records, then trigger a completion notification when all chunks have been processed. The batch tracks progress, allowing the application to display a progress indicator to the user.
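The 10,000-record import sketched above could look like this with Bus::batch, assuming a hypothetical ImportChunk job that processes one slice of the file:

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

// 100 jobs of 100 records each, executed in parallel across workers.
$jobs = [];
for ($i = 0; $i < 100; $i++) {
    $jobs[] = new ImportChunk(offset: $i * 100, limit: 100);
}

$batch = Bus::batch($jobs)
    ->then(function (Batch $batch) {
        // All chunks completed successfully.
    })
    ->catch(function (Batch $batch, Throwable $e) {
        // First job failure within the batch.
    })
    ->finally(function (Batch $batch) {
        // Runs when the batch finishes, regardless of outcome.
    })
    ->dispatch();

// $batch->progress() yields a 0–100 percentage for a UI indicator.
```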
Rate Limiting Jobs
When jobs interact with external APIs that enforce rate limits, the queue system needs to respect those limits. Laravel’s rate limiting for jobs allows you to define how many jobs of a specific type can execute within a given time window. This prevents the queue from overwhelming external services and triggering rate limit errors that cause cascading job failures.
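A sketch using Laravel's RateLimited job middleware; the limiter name "external-api" and the 30-per-minute budget are assumptions to fit under a hypothetical provider's limit:

```php
// In a service provider's boot() method: define the limiter.
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('external-api', function (object $job) {
    return Limit::perMinute(30);
});

// On the job class: jobs over the limit are released back onto
// the queue instead of failing.
use Illuminate\Queue\Middleware\RateLimited;

public function middleware(): array
{
    return [new RateLimited('external-api')];
}
```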
Failure Handling and Retry Strategies
Job failures are not exceptional events in production systems. They are expected and must be handled gracefully. Laravel provides configurable retry policies that determine how many times a failed job should be retried and how long to wait between retries.
Exponential backoff is the standard retry strategy for jobs that interact with external services. Instead of retrying immediately, the delay between retries increases with each attempt: 1 second, then 5 seconds, then 30 seconds, then 2 minutes. This gives transient issues time to resolve without hammering the external service with rapid retry attempts. Laravel supports custom backoff schedules through a backoff method or property on job classes.
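The escalating schedule above translates to a few lines on the job class (the attempt count of 5 is an illustrative choice):

```php
// One initial attempt plus four retries, delayed 1s, 5s, 30s,
// then 120s respectively.
public $tries = 5;

public function backoff(): array
{
    return [1, 5, 30, 120];
}
```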
After all retries are exhausted, failed jobs are moved to a failed_jobs database table where they can be inspected, debugged, and retried manually. The failed job record includes the serialized job payload, the exception message and stack trace, and the timestamp of the failure. Laravel Horizon provides a web interface for managing failed jobs, making it straightforward for operations teams to monitor queue health and resolve issues without command-line access.
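Jobs can also react to their own final failure through an optional hook, which runs after retries are exhausted and the record is written to the failed jobs table; a minimal sketch:

```php
use Throwable;

// Called once, after the last retry fails — useful for alerting
// the team or reverting partial state the job may have left behind.
public function failed(Throwable $exception): void
{
    // e.g. notify an ops channel, mark the related record as errored.
}
```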
Laravel Horizon for Queue Monitoring
Laravel Horizon is a dashboard and configuration system for Redis-powered queues. It provides real-time visibility into queue throughput, job execution times, failure rates, and worker status. Horizon also manages worker processes, automatically balancing workers across queues based on workload and scaling worker counts to match demand.
For production applications, Horizon is essential. Without it, queue health is opaque and problems only become apparent when users report missing emails, incomplete imports, or other symptoms of queue failures. Horizon’s metrics and alerts provide early warning when queues are backing up, workers are failing, or specific job types are experiencing elevated error rates.
Scaling Queue Workers
Queue workers are separate processes that need their own scaling strategy. A single worker processes jobs sequentially, so throughput is limited by the execution time of individual jobs. Adding more workers increases parallelism, allowing more jobs to execute concurrently.
The scaling strategy depends on the workload pattern. Applications with steady, predictable job volumes can run a fixed number of workers. Applications with bursty workloads, such as a SaaS platform where all tenants generate monthly reports simultaneously, need the ability to scale workers up during peak periods and down during quiet periods. Horizon’s auto-scaling balances worker counts across queues, and infrastructure-level auto-scaling can adjust the total number of worker servers based on queue depth.
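Horizon's balancing described above is driven by the supervisor configuration; a sketch of one production supervisor in config/horizon.php (queue names and process counts are illustrative assumptions, not recommendations):

```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'reports'],
            'balance' => 'auto',   // shift workers toward busier queues
            'minProcesses' => 1,
            'maxProcesses' => 10,  // ceiling for auto-scaling
            'tries' => 3,
        ],
    ],
],
```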
Queue-based background processing transforms how web applications handle complex operations. For agencies delivering Laravel projects, implementing queues correctly means the application can handle growth without architectural changes. The queue system absorbs spikes in workload, isolates slow operations from user-facing requests, and provides the reliability guarantees that production applications demand. It is one of the features that most clearly distinguishes a professionally built application from a hobbyist project.