Two Types of Deployment for Subsystems

During development you sometimes have a part of the system that rarely changes, requires a lot of resources, has many prerequisites to run, and/or takes a long time to deploy. What do you normally do? You extract that part into a separate application and run it on one or several servers, having developers connect to those services. Examples include a search-engine interface, CPU-intensive operations, third-party integrations, and so on. In a way, you make your system modular.

However, there’s a drawback to this modularity – it adds complexity and points of failure. What if the communication between the modules breaks? How do you trace and debug issues, which might be caused by either of the two (or more) systems? How do you keep multiple versions of the applications’ public interfaces in sync (and do multiple versions need to co-exist)? How is the deployment of two applications handled as one atomic step? And more.

Many times you don’t need to introduce that complexity, but you still need the flexibility during development.

An option I sometimes use is to support the subsystem both as an embedded module and as a standalone one. To do that, you make a thin layer (a layer from an architectural perspective; in terms of code it consists of two classes and an interface) which handles both types of deployment. There is one interface that defines the operations performed by the module, and two implementations – one is RESTful, calling the RESTful services of a separately deployed application; the other simply wraps the class that performs the actual operation (or even is that class itself) and is present on the classpath of the application.

An example. You have an ExtractionService which has to perform some text analysis and concept extraction on an input. This requires a lot of CPU and memory, so it is not feasible for each developer to run it locally. You create:

  • an ExtractionService interface, defining the extraction operations
  • a RestfulExtractionService, which is configured with an endpoint URL and invokes the RESTful services that wrap the actual extraction implementation
  • a ClasspathExtractionService, which is either the actual extraction implementation or a simple wrapper around it

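The three types above can be sketched roughly as follows. This is a minimal illustration, not the author's actual code – the `extract()` signature and the method bodies are assumptions:

```java
import java.util.Arrays;
import java.util.List;

// The single interface defining the module's operations.
interface ExtractionService {
    List<String> extract(String text);
}

// Embedded deployment: wraps (or simply is) the actual implementation,
// present on the application's classpath.
class ClasspathExtractionService implements ExtractionService {
    @Override
    public List<String> extract(String text) {
        // Placeholder for the real CPU- and memory-heavy analysis.
        return Arrays.asList(text.split("\\s+"));
    }
}

// Standalone deployment: delegates to the separately deployed RESTful API.
class RestfulExtractionService implements ExtractionService {
    private final String endpointUrl;

    RestfulExtractionService(String endpointUrl) {
        this.endpointUrl = endpointUrl;
    }

    @Override
    public List<String> extract(String text) {
        // Would POST the text to endpointUrl and parse the response;
        // the HTTP plumbing is omitted to keep the sketch self-contained.
        throw new UnsupportedOperationException("remote call omitted in sketch");
    }
}
```

The application code only ever sees the interface, so it cannot tell which deployment mode it is running in.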
How do you organize that in terms of projects and their dependencies? You have a separate jar-packaged project with the extraction implementation, you have a war-packaged project (that depends on the jar), which wraps the implementation in a RESTful API and you have your main application, which also depends on the jar.
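A possible layout for the three projects (module names here are made up for illustration):

```
extraction-core/     jar: the actual extraction implementation
extraction-rest/     war: depends on extraction-core, wraps it in a
                     RESTful API, deployed as the standalone application
main-application/    war: depends on extraction-core; contains
                     ExtractionService, RestfulExtractionService and
                     ClasspathExtractionService
```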

How do you switch the implementations depending on whether you are in development or in production mode? It depends on the framework you use. With Spring you can simply have @Resource(name = "${service.implementation}") ExtractionService service, where service.implementation is externally configured.

Why do you need that, doesn’t it add overhead, and why not just keep the modularity in production? Because this way you can build and deploy one monolithic application (with only classpath dependencies) to production without worrying about the problems mentioned in the second paragraph. You keep your architecture simple, and that’s always a good thing. At the same time you get a lot of flexibility during development, by not having to redeploy heavy parts of the system. And all of that is achieved with just a few classes and one configuration switch. It is not always applicable, of course, but do consider it as an option if you are faced with a similar scenario.
