I’m working with multiple web applications running on Tomcat, deployed across several Debian 12 and 13 servers. I’ve evaluated two possible architectures for integrating BunkerWeb:
a centralized BunkerWeb instance sitting in front of all services
a full BunkerWeb stack deployed in front of each application, making every service fully isolated and avoiding any single point of failure
The second model feels more appropriate for our needs: one standalone BunkerWeb instance per application, each handling only a single service and remaining completely independent.
The challenge now is figuring out how to synchronize global parameters across dozens of completely separate instances. I want to avoid modifying every server manually whenever a configuration change is required.
I’ve explored a few ideas such as pushing config files to all servers, using shared templates, or relying on an external configuration management system. Another option would be to use a BunkerWeb manager to dispatch configuration to all the workers I deploy. However, this approach introduces issues with Let’s Encrypt HTTP validation, since the manager (not the workers) handles the ACME challenge and the validation ends up failing when DNS points traffic directly to the workers.
I’m not sure which direction makes the most sense, and it’s possible I’m approaching the problem the wrong way altogether. Any guidance, best practices, or insights from those who have managed large numbers of independent BunkerWeb instances would be extremely helpful.
Hello,
You can use the API and clustering features to synchronise the instances. The idea is to deploy the BunkerWeb service on separate devices/instances; the scheduler then connects to each one and pushes the configuration. In your case, however, each BunkerWeb instance will receive all configurations.
A member of the BunkerWeb staff will answer with more information about the setup!
In an HA / multi-instance setup you don’t need a full “stack per app”. BunkerWeb is designed so that one scheduler + one database can manage multiple BunkerWeb instances:
The scheduler is the central manager: it reads settings (incl. BUNKERWEB_INSTANCES) and pushes configuration to all listed BunkerWeb instances.
Those BunkerWeb instances act as workers in front of your apps. You just list them (IP) in BUNKERWEB_INSTANCES and the scheduler keeps them in sync.
All instances share the same database backend via DATABASE_URI, and there is only one database and one scheduler per namespace/environment to avoid conflicts (only the scheduler, web UI and API service access the database).
All API-related settings (API_HTTP_PORT, API_LISTEN_IP, API_WHITELIST_IP, API_TOKEN, etc.) must be identical on the scheduler and all instances so that API calls from the scheduler are accepted everywhere (usually API_LISTEN_IP is set to 0.0.0.0 and API_WHITELIST_IP is set to the manager's IP).
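As an illustration only, the shared API settings might look like the following on both the manager and every worker. The IP address and token are placeholders, and you should verify the exact variable names and accepted formats against the BunkerWeb documentation for your version:

```shell
# Identical on the scheduler (manager) and on every BunkerWeb worker.
# 10.0.0.10 is a hypothetical manager IP -- replace with your own.
API_HTTP_PORT=5000          # port the instance API listens on
API_LISTEN_IP=0.0.0.0       # accept API calls on all interfaces
API_WHITELIST_IP=10.0.0.10  # only the manager may call the API
API_TOKEN=change-me         # shared secret, same value everywhere
```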
So for your use case: put one scheduler + one database in “manager” role, and run X BunkerWeb workers (one per Tomcat app/server) managed centrally via BUNKERWEB_INSTANCES, instead of duplicating the whole stack on every machine.
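A rough sketch of that layout, using hypothetical hostnames, IPs and a MariaDB backend (adapt every value to your environment, and check the BunkerWeb docs for the exact syntax of each setting in your version):

```shell
# --- manager host: one scheduler + one database, serves no traffic ---
# Space-separated list of the workers the scheduler pushes config to
# (assumed format; verify against your BunkerWeb version's docs).
BUNKERWEB_INSTANCES=10.0.0.11 10.0.0.12 10.0.0.13
# Single shared database; only the scheduler, web UI and API service
# connect to it -- the workers never do.
DATABASE_URI=mariadb+pymysql://bunkerweb:secret@10.0.0.10:3306/db

# --- each worker host: one BunkerWeb in front of one Tomcat app ---
# Workers carry no scheduler and no database connection; they expose
# their API to the manager and serve the traffic DNS sends to them.
```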
Thanks for the reply, it clarified quite a bit. I’m still trying to understand the best pattern for my setup, so I have a couple of specific questions:
I can technically make a single worker serve a single service by pointing that service’s domain directly at the worker I want to handle it. But is there any supported or recommended way to operate in this model, where one worker is responsible for one service, rather than having every worker serve all services?
If this setup is acceptable, I’m unsure how the Let’s Encrypt HTTP challenge is intended to work. The domain being validated reaches the worker, but the scheduler on the manager is the component initiating the challenge. How is that traffic expected to flow, or is this scenario simply not supported?
Alternatively, if the intended design is for each worker to serve all services and rely on multi-instance mode purely for redundancy, I’m trying to understand the benefit of running multiple instances without a load balancer in front, since no traffic distribution would occur.
I apologize for the very broad questions, and thank you for taking the time to reply.