High availability (HA) and disaster recovery (DR)
You can implement senhasegura's architecture using any of the options below:
- Running PAM Crypto Appliances in on-premises data centers;
- Running PAM Virtual Appliances in on-premises data centers;
- Running PAM Virtual Appliances on a cloud server.
senhasegura's architecture is designed with redundancies that can withstand hardware or software failures and minimize unplanned downtime.
The Principles of High Availability
- Eliminating single points of failure (SPOF). By building redundancies, the entire system remains operational even if a component fails.
- Reliable crossover. In redundant systems, the crossover point may become a SPOF itself. A truly reliable system must be able to switch from one component to another without losing data or impacting performance.
- Real-time failure detection. If the above two principles are observed, any failures will probably go unnoticed by users. However, maintenance teams must still be able to detect failures as soon as they occur.
Replication technologies
senhasegura's architecture includes several replication layers that ensure data is accessible in all your senhasegura instances.
Layer | Description |
---|---|
Native database replication | By default, senhasegura uses MariaDB Galera Cluster to replicate databases, even across higher-latency networks. |
File system replication using Rsync | All senhasegura instances synchronize their files with the other members of the cluster. |
Kernel layer replication* | PAM Crypto Appliances also replicate data at the block level using DRBD (Distributed Replicated Block Device). |
*Only available for PAM Crypto Appliances
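Because database replication relies on MariaDB Galera Cluster, you can verify that a member is replicating correctly by reading the standard `wsrep_*` status variables it exposes. The sketch below is a minimal, illustrative check only; the hostname and credentials are placeholders, not senhasegura defaults, and the script is not part of the product.

```python
# Minimal Galera health probe (illustrative only; the hostname and
# credentials below are placeholders, not senhasegura defaults).
import pymysql

EXPECTED_CLUSTER_SIZE = 2  # e.g. two senhasegura members


def galera_status(host: str, user: str, password: str) -> dict:
    """Return the wsrep_* status variables reported by one cluster member."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_%'")
            return dict(cur.fetchall())
    finally:
        conn.close()


def is_healthy(status: dict) -> bool:
    """A member looks healthy when it is part of the primary component,
    fully synced, and sees every expected member of the cluster."""
    return (
        status.get("wsrep_cluster_status") == "Primary"
        and status.get("wsrep_local_state_comment") == "Synced"
        and int(status.get("wsrep_cluster_size", 0)) >= EXPECTED_CLUSTER_SIZE
    )


if __name__ == "__main__":
    status = galera_status("member1.example.local", "monitor", "secret")
    print("member1 healthy:", is_healthy(status))
```

A check like this can feed an external monitoring system so that replication problems are detected as soon as they occur, in line with the real-time failure detection principle above.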
Architectures
Appliances
Implementation strategies may vary depending on the Appliances you use:
- Two PAM Virtual Appliances
- Two PAM Crypto Appliances
- PAM Crypto Appliances and Virtual Appliances Combined
- On-premises and Cloud Instances Combined
Scenarios
Two Members (no arbitrator)
Scenario | Type | Application Status | Failover | Automatic Resync |
---|---|---|---|---|
1 | Member 2 Fails | Available (Member 1) | Automatic | Available |
2 | Member 1 Fails | Available (Member 2) | Manual | Available |
3 | Connection Failure (Between sites) | Available (Member 1) | Automatic | Available |
4 | Members 1 and 2 Fail | Not available | Not available | Not available |
Examples
Scenario 1 - Member 2 Fails
- Application Status: The application continues to run on the first member
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 2 - Member 1 Fails
- Application Status: The application continues to run on the second member
- Failover: Manual
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 3 - Connection Failure (Between sites)
- Application Status: The application continues to run on the first member
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 4 - Members 1 and 2 Fail
- Application Status: The application becomes unavailable
- Failover: Not available
- Member Failure Recovery: Contact senhasegura's support team to restore these members
- If all members fail simultaneously, use the master key and credential backup procedure
Two Members (with arbitrator)
Scenario | Type | Application Status | Failover | Automatic Resync |
---|---|---|---|---|
1 | Member 2 Fails | Available (Member 1) | Automatic | Available |
2 | Member 1 Fails | Available (Member 2) | Automatic | Available |
3 | Connection Failure (Between sites) | Available (Member at the same site as the arbitrator) | Automatic | Available |
4 | Members 1 and 2 Fail | Not available | Not available | Not available |
5 | Arbitrator Fails | Available (Both Members) | Automatic | Available |
6 | Arbitrator and Any Other Member Fail | Not available | Not available | Not available |
Examples
Scenario 1 - Member 2 Fails
- Application Status: The application continues to run on the first member
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 2 - Member 1 Fails
- Application Status: The application continues to run on the second member
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 3 - Connection Failure (Between sites)
- Application Status: The application continues to run on the member that is at the same site as the Arbitrator
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 4 - Members 1 and 2 Fail
- Application Status: The application is unavailable
- Failover: Not available
- Member Failure Recovery: Contact senhasegura's support team to restore these members
- If all members fail simultaneously, use the master key and credential backup procedure
Scenario 5 - Arbitrator Fails
- Application Status: The application remains available on both senhasegura members
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 6 - Arbitrator and Any Other Member Fail
- Application Status: The application becomes unavailable
- Failover: Not available
- Member Failure Recovery: Not available; contact senhasegura's support team to restore these members
- If all members fail simultaneously, use the master key and credential backup procedure
Three Members
Scenario | Type | Application Status | Failover | Automatic Resync |
---|---|---|---|---|
1 | Member 2 Fails | Available (Members 1 and 3) | Automatic | Available |
2 | Member 1 Fails | Available (Members 2 and 3) | Automatic | Available |
3 | Member 3 Fails | Available (Members 1 and 2) | Automatic | Available |
4 | Connection Failure with one member | Available (All the other members) | Automatic | Available |
5 | Connection Failure (Between all members) | Available (Member 1) | Not available | Not available |
6 | All Members Fail | Not available | Not available | Not available |
Examples
Scenario 1 - Member 2 Fails
- Application Status: The application continues to run on members 1 and 3
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 2 - Member 1 Fails
- Application Status: The application continues to run on members 2 and 3
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 3 - Member 3 Fails
- Application Status: The application continues to run on members 1 and 2
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 4 - Connection Failure with one member
- Application Status: The application continues to run on the members still connected to the cluster
- Failover: Automatic
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 5 - Connection Failure (Between all members)
- Application Status: The application continues to run on the first member
- Failover: Not available
- Member Failure Recovery: Automatic once the instance reboots or regains connectivity
Scenario 6 - All Members Fail
- Application Status: The application becomes unavailable
- Failover: Not available
- Member Failure Recovery: Not available; contact senhasegura's support team to restore these members
- If all members fail simultaneously, use the master key and credential backup procedure
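One way to reason about the failover differences in the tables above is the majority (quorum) rule used by most cluster managers, including Galera: a group of surviving nodes keeps operating automatically only while it holds more than half of the total votes. Adding an arbitrator or a third member gives the cluster an odd number of votes, so any single failure still leaves a majority; without an arbitrator, the loss of the first member leaves no majority and requires manual intervention. The sketch below only illustrates that arithmetic, assuming one vote per member or arbitrator; it is not senhasegura code and ignores any weighting or primary-member configuration the product may apply.

```python
# Quorum arithmetic for the cluster layouts described above.
# Illustrative only: one vote per member or arbitrator, no weighting.

def has_quorum(surviving_votes: int, total_votes: int) -> bool:
    """A partition may keep operating automatically only if it holds
    strictly more than half of all votes in the cluster."""
    return surviving_votes * 2 > total_votes


layouts = {
    "two members, no arbitrator, one member lost": (1, 2),
    "two members + arbitrator, one member lost": (2, 3),
    "three members, one member lost": (2, 3),
    "three members, network split isolates every member": (1, 3),
}

for name, (surviving, total) in layouts.items():
    print(f"{name}: majority = {has_quorum(surviving, total)}")
```

Under this assumption, the lone survivor of a two-member cluster cannot reach a majority (1 of 2 votes), while a survivor plus the arbitrator (2 of 3 votes) or two of three members can, which matches the automatic-failover columns in the scenario tables.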
Disaster Recovery (DR)
Disaster recovery involves a set of policies and procedures for recovering data or restoring infrastructure after a natural or human-caused disaster. DR allows customers to reconfigure senhasegura resources in an alternative environment when the primary environment cannot be recovered as required.
Data integrity depends on the quality and speed of the connection and on the amount of data in the cluster. If any of these factors falls short of the requirements, the result may be data loss, a production environment shutdown, and activation of the DR environment. If the failure was caused by hardware problems, a manual reboot and recovery process is required.
Hot-Spare features
senhasegura instances expose monitoring and administrative URLs that report their status. Load balancers can use these endpoints to switch to a healthy instance automatically when one of the instances is down.
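As an illustration of how a load balancer or an external monitor can consume such status endpoints, the sketch below polls a health URL on each instance and reports which members look available. The `/health` path, hostnames, and success criterion are placeholders, not senhasegura's actual monitoring interface; consult the product documentation for the real endpoints.

```python
# Illustrative health poller for HA members. The "/health" path and the
# instance hostnames are placeholders, not senhasegura's real endpoints.
from urllib.request import urlopen

INSTANCES = ["https://member1.example.local", "https://member2.example.local"]


def is_up(base_url: str, timeout: float = 3.0) -> bool:
    """Treat any HTTP 200 response from the status URL as 'instance up'.
    Certificate validation is left at Python defaults."""
    try:
        with urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection, TLS, timeout, and HTTP errors
        return False


if __name__ == "__main__":
    for url in INSTANCES:
        print(url, "up" if is_up(url) else "down")
```

A real load balancer performs the equivalent of this check continuously and routes traffic only to instances that pass it.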
In-house Load balancer
You can use your own load balancer or add senhasegura's load balancer to your clustered environment. Refer to the senhasegura Load Balancer documentation to learn more.