GlassFish & Payara Auto-Clustering: Running Jakarta EE Applications in the Cloud
Making sure that your service runs smoothly around the clock has been a hot topic in cloud hosting lately. One common solution that keeps popping up is setting up a clustered infrastructure for your project.

To lend a hand to our customers grappling with this complex setup and to free up their time for other project work, we’ve developed a special high-availability solution. It is tailored to simplify Jakarta EE application hosting, with a specific focus on embedded Auto-Clustering for the GlassFish and Payara application servers.

This solution’s unique feature is its ability to automatically link several application server instances when the topology of the application changes. This essentially implements the widely used clustering configuration.

In the article below, we delve into the workings of GlassFish and Payara auto-clustering, along with the specifics of infrastructure topology and how you can swiftly set up development and production environments within AccuWeb.Cloud PaaS.

How the Auto-Clustering for GlassFish and Payara Works

A clustered solution boils down to a set of interconnected instances that all run the same application and handle the same data. In other words, you’ve got servers that are scaled out horizontally and share user sessions.

Now, with AccuWeb.Cloud 8.3.2, we’ve added this new Auto-Clustering feature. It lets you set up clusters for GlassFish and Payara instances right from the topology wizard.

Auto Clustering

Select either the GlassFish or Payara application server from the options available on the Java tab of the wizard. After that, find and activate the Auto-Clustering switcher in the central section. Adjust the rest of the settings according to your requirements, including horizontal scaling, to ensure a dependable solution right from the beginning.

Load Balancing

Tip: You can also find the Auto-Clustering feature in a variety of other software templates like MySQL, MariaDB, PostgreSQL, Tomcat/TomEE, WildFly, Shared Storage, MongoDB, and Couchbase.

Depending on what you’re aiming for with your setup, you might want to skip using Auto-Clustering, especially during development. This way, you’ll just have regular standalone servers without setting up a cluster.

But when it comes to putting your system into production, clustering becomes pretty much essential. It ensures your application stays available and runs smoothly for your users. AccuWeb.Cloud’s Auto-Clustering feature makes it super easy to set up a reliable structure for your services without having to do much manual configuration. Here’s what happens when you use it:

If you’ve got two or more GlassFish (Payara) instances, the system adds a load balancer (LB) to your environment. This LB manages incoming requests and spreads them out among your server instances.

An additional Domain Administration Server (DAS) node gets automatically included. This node is like the control center for your cluster, managing all the other nodes and their interactions via SSH. Here are some key points about how it works:

  • The admin server is connected to all the worker nodes using a special hostname, making it easy for them to communicate.
  • To make sure everything connects properly, the system creates an SSH keypair for the DAS node and shares it with all the other instances in the cluster.
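AccuWeb.Cloud wires all of this up automatically, but on a self-managed GlassFish installation the same DAS-to-worker setup can be sketched with asadmin. The hostnames, node names, and install paths below are placeholders:

```sh
# Generate an SSH keypair on the DAS and copy the public key to a worker
# (prompts for the worker's password once).
asadmin setup-ssh --generatekey=true worker1.example.com

# Register the worker as an SSH node managed by the DAS.
asadmin create-node-ssh --nodehost worker1.example.com \
  --installdir /opt/glassfish node1

# Create a clustered server instance on that node.
asadmin create-instance --cluster cluster1 --node node1 instance1
```

On the platform, these steps run behind the scenes whenever the environment topology changes.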

GlassFish Payara Cluster Topology

Session Replication Implementation

To guarantee high availability for your GlassFish or Payara cluster, the AccuWeb.Cloud PaaS sets up session replication across worker nodes automatically. This ensures that all user session data generated during processing is distributed across every application server instance, regardless of which node handled the request.

Additionally, the load balancer’s automatic sticky sessions feature enhances reliability and improves failover capabilities within your GlassFish or Payara cluster. The exact replication mechanism varies slightly depending on the stack used, so let’s delve into the details of each approach.
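Session replication also requires the application itself to opt in: its deployment descriptor must mark it as distributable, and session attributes should be serializable. A minimal web.xml fragment (the namespace shown is for Jakarta EE 9+; older applications use the javaee namespace):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="https://jakarta.ee/xml/ns/jakartaee"
         version="5.0">
  <!-- Tells the container that HTTP sessions for this application
       may be replicated across cluster members. -->
  <distributable/>
</web-app>
```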

GlassFish Session Replication with GMS

In a GlassFish cluster, session replication is managed by the Group Management Service (GMS), an integral part of the application server. This service provides essential features such as failover protection and in-memory replication, and supports transaction and timer services for cluster instances.

GMS

GMS employs TCP, without using multicast, to identify cluster instances. When a new node joins a GlassFish cluster, the system automatically rechecks all active workers and the DAS node. This auto-discovery process is facilitated by setting the GMS_DISCOVERY_URI_LIST property to the appropriate value.
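If you ever need to inspect or adjust this property on a self-managed cluster, you can do so with asadmin. The dotted names and URI values below are illustrative and may vary between GlassFish versions:

```sh
# Inspect the current discovery list for cluster1.
asadmin get "clusters.cluster.cluster1.property.GMS_DISCOVERY_URI_LIST"

# Point discovery at the DAS and worker hosts explicitly;
# the URI format shown here is a placeholder, not a verified value.
asadmin set "clusters.cluster.cluster1.property.GMS_DISCOVERY_URI_LIST=tcp://das-host:9090,tcp://worker1:9090"
```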

GMS discovery URI LIST

Payara Session Replication with Hazelcast

Session replication within the Payara cluster utilizes Hazelcast, which offers the added benefit of being JCache compliant, ensuring persistence for embedded Web and EJB sessions. This in-memory data grid is automatically enabled on all Payara instances, allowing them to discover cluster members via TCP without requiring multicast.

To enable session replication, you need to activate web container availability first. This ensures that managed web container properties, like sessions, can be shared across multiple instances with the same configuration.

In Payara Server 4, enabling Hazelcast and configuring availability had to be done manually. In Payara 5, however, this is configured by default. If you have modified any settings, ensure that the availability service is enabled and the persistence type is set to “hazelcast” on the web container’s Availability tab.
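On a self-managed Payara server, the equivalent checks can be sketched with asadmin; note that the exact dotted name for the web container availability setting is an assumption and may differ between versions:

```sh
# Make sure the embedded Hazelcast data grid is enabled
# (the default in Payara 5; manual in Payara 4).
asadmin set-hazelcast-configuration --enabled=true

# Switch web container session persistence to Hazelcast;
# the dotted name below is illustrative.
asadmin set configs.config.server-config.availability-service.web-container-availability.persistence-type=hazelcast
```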

Persistence Type

To configure Hazelcast settings, navigate to the Administration Console and consult the Domain Data Grid configuration page. The Domain Data Grid feature in Payara leverages the Hazelcast library, offering essential functionalities such as clustering, caching, a unified CDI cluster object, and data storage monitoring for deployment groups.

Grid Configuration

Deploy Example Application for HA Testing

Let’s examine the high availability of an automatically composed cluster using a scaled GlassFish server as an example. To verify its fault tolerance, we’ll deploy a dedicated testing application that allows us to add custom session data and view detailed information about which server is handling the session. By stopping specific cluster instances, we can confirm that ongoing user sessions will continue to be processed even if the associated server fails. Now let’s watch it in action.

Step 1. To access the start page of the application server, click “Open in browser” next to your environment.

Server open in browser

On the opened page, click on Go to the Administration Console, then log in using the credentials that were emailed to you when the environment was created.

Step 2. Go to the Applications section and upload the clusterjsp.ear file to the Packaged File to Be Uploaded to the Server location.

Upload clusterjsp.ear file

Step 3. Ensure that Availability is enabled, set cluster1 as the application target, and then click OK to continue.

Set Application target

Step 4. Next, open your web browser and append /clusterjsp to the end of the environment URL. Choose a unique Name and Value for your session attribute, then click Add Session Data to save it.

Add clusterjsp to the browser

Step 5. Go back to the admin panel and find the Clusters section. From there, click on cluster1 and then go to the Instances tab. You’ll see a list of instances, find the one your session is on (look for its hostname circled in the image) and select it. Then, choose the option to Stop it.

Stop instance

Step 6. Go back to the application and click the button to Reload the page.

Cluster sample

Despite the session being managed by a different instance, our custom attribute is still being displayed, as you can observe.
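The same failover check can be repeated from a terminal with curl by reusing the session cookie across requests. The environment URL below is a placeholder, and the exact page text you grep for depends on the clusterjsp version:

```sh
# First request: the load balancer issues a JSESSIONID cookie,
# which we save to a cookie jar for later requests.
curl -s -c cookies.txt "https://env-xxxx.accuweb.cloud/clusterjsp/" | grep -i "served from"

# After adding session data in the browser and stopping the owning
# instance, replaying the request with the same cookie should show
# the session surviving on another worker.
curl -s -b cookies.txt "https://env-xxxx.accuweb.cloud/clusterjsp/" | grep -i "served from"
```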

You can find all replication settings in the server admin panel under Configurations > cluster1-config > Availability Service. By default, the following replication modes are enabled:

  • Web Container Availability
  • EJB Container Availability

Enabled Replication

Cloning Cluster for A/B Testing

When rolling out a new version of an application or making crucial updates, it’s important to evaluate how these changes might impact the service’s performance and user experience. AccuWeb.Cloud PaaS offers a seamless way to conduct this testing through its Clone Environment feature, enabling you to do so without any service interruptions and without your customers noticing.

Clone Environment

Consequently, you’ll end up with a fully prepared cluster copy, complete with all necessary adjustments. Specifically, this involves a cloned DAS node functioning alongside the corresponding cloned workers, which are pre-configured in the admin panel. Additionally, all applications from the original setup are deployed in the cloned environment. Your final task will be to review your application’s code and custom server configurations for any hardcoded IP addresses or domains and make any necessary corrections.

Test Environment without live

By doing this, you’ll be able to make adjustments to your test environment without messing with the live version.

You can then assess the performance and effectiveness of the updated application version by comparing it to the original through A/B testing. On AccuWeb.Cloud PaaS, this is achievable using the Traffic Distributor add-on.

Traffic Distributor

When using Sticky Sessions in a dual-environment setup, requests are intelligently routed based on the specified backend weights. For detailed instructions on configuring this in the Traffic Distributor, please refer to the A/B Testing guidelines.

A Few Useful Tips for GlassFish & Payara Clustering

Once you’ve set up your GlassFish or Payara cluster and confirmed it’s working correctly, consider the following tips to maximize its efficiency within AccuWeb.Cloud:

Optimize Resource Usage

Configure auto-scaling triggers in your environment settings to automatically add or remove nodes based on the incoming load. This ensures efficient resource consumption.

Database Connectivity

To connect to any database software stack, ensure the necessary libraries are integrated into the Domain Administration Server. The most popular libraries are included by default on new GlassFish/Payara nodes. For legacy instances, verify that the DAS directory /opt/glassfish/glassfish/domains/domain1/lib contains the required files; if not, upload them to this location manually.
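As an illustration, here is how a PostgreSQL driver and connection pool might be registered on a self-managed installation; the driver file name, credentials, and hostnames are placeholders:

```sh
# Copy the JDBC driver into the DAS library directory, then restart.
cp postgresql-driver.jar /opt/glassfish/glassfish/domains/domain1/lib/

# Register a connection pool and a JNDI resource for the application
# (property values are placeholders).
asadmin create-jdbc-connection-pool \
  --datasourceclassname org.postgresql.ds.PGSimpleDataSource \
  --restype javax.sql.DataSource \
  --property user=appuser:password=secret:databaseName=appdb:serverName=db-host \
  my-pg-pool
asadmin create-jdbc-resource --connectionpoolid my-pg-pool jdbc/appdb

# Verify connectivity.
asadmin ping-connection-pool my-pg-pool
```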

We hope these details help you see the benefits of using a GlassFish or Payara cluster. Try creating your own cluster on the AccuWeb.Cloud platform with a free trial period.
