Resin Documentation

Deploying Web-Applications to a Cluster


When you deploy an application, Resin ensures each server in the cluster gets a copy of the new application, using a transactional store to ensure consistency.

Deployment

Cluster Deployment

To deploy an application to your cluster, use the same command-line deploy as you would for a single server. The deployment process is the same because Resin treats a standalone server as a single server in a cluster.

Example: command-line deployment
unix> resinctl deploy test.war

That command-line deploy will send the test.war to the cluster's triad-server repository, and then copy the repository to all three servers in the triad hub. If you have only two servers in the cluster, Resin will copy the application to both. Once all three triad hub servers have the deployed .war, Resin will update all the spoke servers in the cluster.

deployment replicates to triad

The cluster command-line deployment uses the <web-app-deploy> tag in the resin.xml to configure and control where the deployed application should be expanded. Typically, the deployment will use the webapps/ directory.

Example: web-app-deploy in resin.xml
<resin xmlns="http://caucho.com/ns/resin">

<cluster id="">
  ...
  <host id="">
  
    <web-app-deploy path="webapps"
                    expand-preserve-fileset="WEB-INF/work/**"/>
    
  </host>
</cluster>
</resin>

When you're using virtual hosts, you'll add a -host argument to specify the virtual host to deploy to.

By default, Resin deploys to the default host, using the war's name as the context path prefix. Both can be changed with deploy options.

Example: virtual host deployment
unix> resinctl deploy -host www.foo.com test.war

Controlling Restarts

By default, a Resin server will detect an updated application automatically and restart the web-app immediately. You can delay the restart by putting it under manual control. In manual mode, Resin will only look for a new version when you use the command-line webapp-restart.

Example: command-line to restart the web-app
unix> resinctl webapp-restart test

The manual control is configured by setting <restart-mode> to manual in the web-app-deploy:

Example: web-app-deploy in resin.xml
<resin xmlns="http://caucho.com/ns/resin">

<cluster id="">
  <host id="">
  
    <web-app-deploy path="webapps"
                 restart-mode="manual"
                 expand-preserve-fileset="WEB-INF/work/**"/>
    
  </host>
</cluster>
</resin>

Zero Downtime Deployment (Versioning)

You can configure Resin's cluster deployment in a versioning mode where users gracefully upgrade to your new application version. Since new user sessions use the new version and old user sessions use the old application version, users will not need to be aware of the version upgrade.

By default, Resin restarts the web-app on a new deployment, destroying the current user sessions before starting users on the new deployment. You can change that behavior by setting multiversion-routing to true and deploying with a -version command-line option.

Example: web-app-deploy with versioning
<resin xmlns="http://caucho.com/ns/resin">

<cluster id="">
  <host id="">
  
    <web-app-deploy path="webapps"
                 multiversion-routing="true"
                 expand-preserve-fileset="WEB-INF/work/**"/>
    
  </host>
</cluster>
</resin>

For versioning to work, you'll deploy with a named version of your application. Resin will send new sessions to the most recent version and leave old sessions on the previous version.

Example: command-line deploy with versioning
unix> resinctl deploy -version 2.1.3 test.war

Internally, the application repository has both versions active.

Example: internal repository tags
production/webapp/default/test-2.1.2
production/webapp/default/test-2.1.3
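The deploy options compose, so a versioned deployment to a virtual host would combine the -host and -version flags shown above (assuming both options are given before the archive name):

```shell
unix> resinctl deploy -host www.foo.com -version 2.1.3 test.war
```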

Deployment Reliability

Resin's deployment system is designed around several reliability requirements. Although the user-visible system is simple, the underlying architecture is sophisticated -- we're not just copying .war files.

  • Predictable - all servers run the same deployed application by design, whether the server has restarted, been taken offline for maintenance, started and stopped for dynamic load management, or started from scratch from a fresh VM image.
  • Transactional (all or nothing) - all of the update files are copied and verified in the background before the web-app restarts. While the update is occurring, Resin continues to serve the old application. Even if a network glitch occurs or a server restarts before the upgrade completes, Resin will continue to use the old web-app.
  • Replicated - all deployments are replicated to all three servers in the triad hub. If a triad server restarts, it will update itself to the latest repository version from the backup servers. As long as at least one triad server is available, the active servers will have access to the latest repository.
  • Elastic - the system supports dynamic adding and removal of servers. A new spoke server will contact the triad hub for the latest application deployment and update itself.
  • Staging, Archiving, and Versioning - the deployment system supports these through naming conventions of the deployed tag, allowing multiple versions of the same web-app to be saved in the repository and deployed as appropriate.
  • Straightforward - the user-view of cloud deployment needs to be as simple as a single-server deployment. It needs to look simpler than it is. It needs to just work.
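You can confirm what the cluster repository currently holds from the command line. The deploy-list command name here is an assumption — it appears in some Resin 4 releases, but verify it with resinctl help on your installation:

```shell
# Assumed command (verify with `resinctl help`): list the tags currently
# in the cluster repository, e.g. production/webapp/default/test
unix> resinctl deploy-list
```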

Deployment Architecture

The following is a description of the underlying architecture of Resin's deployment system. It's not necessary to understand or even read any of this section to use Resin's deployment. But for those who are curious, some details might be interesting.

Git version-control architecture

The main repository is based on the format of Git, the distributed version control system used for large programming projects like the Linux kernel. The Git format gives Resin the transactional repository that makes cloud deployment reliable.

Each file in the repository is stored by its secure document hash (SHA-1). The secure hash lets Resin verify that a file is completely copied without any corruption. If verifying the hash fails, Resin will recopy the file from the triad or from the deploy command. Since the file is not saved until it's validated, Resin can guarantee that the file contents are correct.
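As an illustrative sketch (not Resin's actual code), the verify-then-commit step can be mimicked in a few lines of shell: a transferred file is only moved into the content-addressed store once its SHA-1 matches, and is otherwise discarded for a recopy. The /tmp paths and file content are made up for the example:

```shell
# Sketch of verify-then-commit, in the spirit of Resin's repository.
mkdir -p /tmp/store
printf 'hello' > /tmp/incoming.tmp                  # simulated transfer
expected=aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d   # SHA-1 of "hello"
actual=$(sha1sum /tmp/incoming.tmp | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    mv /tmp/incoming.tmp "/tmp/store/$expected"     # commit under its hash
else
    rm -f /tmp/incoming.tmp                         # discard; recopy later
fi
```

Because the file is stored under its own hash, a partially copied file can never masquerade as a complete one.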

Files are never overwritten in Resin's repository; it's essentially write-once. Two versions of the same file are saved as two separate files: a test.jsp (version 23) which replaces a test.jsp (version 22). So there's never a case where an older version of the file can be partially overwritten.

Since the repository itself is organized as a .git self-validating file, its own updates are validated before any changes occur. Essentially, Resin verifies every file in a repository update, and then verifies every directory, and then verifies the repository itself before making any changes visible.

  1. Resin detects that a new repository version is available (it continues to use the old repository) by checking with the triad.
  2. It checks for any new file updates and copies the new files from the triad (Resin continues to use the old files and repository.)
  3. When all the new files are verified, it copies and verifies the new directories and archives from the triad (Resin continues to use the old files and repository.)
  4. It now copies and verifies the top-level repository changes. (Resin continues to use the old files and repository.)
  5. After everything is verified on the local filesystem, Resin switches to the new repository.

If at any point a server stops, the network fails, or a new file is corrupted in a partial transfer, Resin continues to use the old files. On recovery, Resin will verify and delete any partially copied files, and continue the repository update. Only the repository system itself knows that an update is in process; the rest of Resin continues to use the old repository files.

Repository tag system

Internally, the repository is organized by tags, where each tag points to an archive like a .war. The tag system enables versioning and archiving, since two tags can point to the same archive.

The current application for the "foo" web-app would have the tag production/webapp/default/foo. The tag points to a version of the archive, say the foo.war that was deployed on 2011-08-15 at 10:13:00. If you deploy a new foo.war, the same tag will point to the new foo.war that was deployed on 2011-08-16 13:43:15. The repository treats the two versions as entirely different archives and saves both of them.

The tag system lets you copy a current deployment to an archive tag, or copy a preview staged application to the production application. You can archive the current deployment by copying the production/webapp/default/foo tag to archive/webapp/default/foo-20110815. If you're familiar with Subversion's tags and branches, this is a similar system.

If you want to rollback to a previous version, you can copy the archived tag to the current production tag. Resin will run through the repository update system and ensure that all servers in the cloud see the updates.
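As a sketch of that rollback, the deploy-copy command and its -source/-target options below are assumptions modeled on the tag-copy operation described above; verify the exact syntax with resinctl help on your version:

```shell
# Assumed syntax: copy the archived tag back over the production tag,
# then let the repository update propagate through the cluster.
unix> resinctl deploy-copy \
          -source archive/webapp/default/foo-20110815 \
          -target production/webapp/default/foo
```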

Cloud Deployment

Deploying to a cloud extends the transactional repository to all the servers in a cluster. In Resin's replicated hub-and-spoke model, a deployment copies the archive first to all three servers in the triad. (If you have only two servers, it will copy to the second server.) Since all three servers have a copy of the entire repository, your system remains reliable even if one server is down for maintenance and a second server restarts unexpectedly.

deployment replicates to triad

After all three servers in the hub have received and verified the deployment update, the triad hub can send the changes to all of the spoke servers.

triad updates spoke servers

If a spoke server restarts or a new spoke server is added to the cloud dynamically, it will contact the hub for the most recent repository version. So even a new virtual-machine image can receive the most recent deployments without intervention.


Copyright © 1998-2012 Caucho Technology, Inc. All rights reserved. Resin® is a registered trademark. Quercus™ and Hessian™ are trademarks of Caucho Technology.