Multi-Application Deployment Strategies: Complete Configuration Guide

When your development projects grow, you'll often need to host multiple applications on a single VPS. This comprehensive guide shows you exactly how to configure each deployment strategy and add new applications step by step.

Understanding Multi-Application Hosting: The Restaurant Complex Analogy

What you'll learn:

  • Three main strategies for hosting multiple applications
  • How to choose the right strategy for your needs
  • Step-by-step configuration for each approach
  • How to add new applications to existing setups
  • Troubleshooting common multi-app issues

Prerequisites:

  • Basic VPS server setup with Nginx installed
  • Understanding of GitHub Actions workflows
  • Domain name configured (for subdomain strategy)

The Restaurant Complex Analogy

Think of hosting multiple applications like managing different restaurants in a complex:

  • Port-Based Strategy = Different floors (Restaurant on floor 1, café on floor 2, bar on floor 3)
  • Subdomain Strategy = Different buildings (main.restaurant.com, cafe.restaurant.com, bar.restaurant.com)
  • Path-Based Strategy = Different sections of same building (restaurant.com/, restaurant.com/cafe/, restaurant.com/bar/)

Each approach has advantages depending on your needs, traffic patterns, and domain setup.

Strategy 1: Port-Based Multi-Application Setup

When to Use Port-Based Strategy

✅ Best for:

  • Different applications with distinct purposes
  • Applications that don't need to share sessions/cookies
  • Development and staging environments
  • When you don't want to buy multiple domains

❌ Avoid when:

  • Users need seamless experience between apps
  • Corporate firewalls block non-standard ports
  • SEO is important (search engines prefer standard ports)

Access Pattern:

  • Main app: http://yourdomain.com (port 80)
  • Admin panel: http://yourdomain.com:8080
  • API dashboard: http://yourdomain.com:3000
  • Tools: http://yourdomain.com:8090

Step-by-Step Port-Based Configuration

1. Plan Your Port Assignment

Create a port assignment plan to avoid conflicts:

# Create port assignment documentation
nano ~/port-assignments.md

# Add your applications:
# Port 80 - Main Website (public)
# Port 8080 - Admin Panel (internal team)
# Port 3000 - API Dashboard (monitoring)
# Port 8090 - Developer Tools
# Port 9000 - Client Portal

2. Configure Firewall for Multiple Ports

# Allow your assigned ports
sudo ufw allow 80 # Main website
sudo ufw allow 8080 # Admin panel
sudo ufw allow 3000 # API dashboard
sudo ufw allow 8090 # Developer tools
sudo ufw allow 9000 # Client portal

# Or allow a port range (if your apps use consecutive ports)
sudo ufw allow 8000:9000/tcp

# Check firewall status
sudo ufw status numbered

3. Create Application Directories

# Create directories for each application
sudo mkdir -p /var/www/{main-website,admin-panel,api-dashboard,dev-tools,client-portal}

# Set proper ownership (replace 'deploy' with your username)
sudo chown -R deploy:www-data /var/www/main-website
sudo chown -R deploy:www-data /var/www/admin-panel
sudo chown -R deploy:www-data /var/www/api-dashboard
sudo chown -R deploy:www-data /var/www/dev-tools
sudo chown -R deploy:www-data /var/www/client-portal

# Set proper permissions
sudo chmod -R 755 /var/www/

4. Configure Nginx for Each Application

Main Website (Port 80):

# Create main website configuration
sudo nano /etc/nginx/sites-available/main-website
server {
listen 80;
listen [::]:80;
server_name yourdomain.com www.yourdomain.com;

root /var/www/main-website;
index index.html index.htm;

# Logging for this application
access_log /var/log/nginx/main-website-access.log;
error_log /var/log/nginx/main-website-error.log;

# Handle React Router (client-side routing)
location / {
try_files $uri $uri/ /index.html;
}

# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}

# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;

# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript
application/x-javascript application/xml+rss
application/javascript application/json;
}

Admin Panel (Port 8080):

# Create admin panel configuration
sudo nano /etc/nginx/sites-available/admin-panel
server {
listen 8080;
listen [::]:8080;
server_name yourdomain.com _; # Accept any server name on this port

root /var/www/admin-panel;
index index.html index.htm;

# Separate logging
access_log /var/log/nginx/admin-panel-access.log;
error_log /var/log/nginx/admin-panel-error.log;

# Handle React Router
location / {
try_files $uri $uri/ /index.html;
}

# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}

# Optional: Add basic authentication for admin panel
# auth_basic "Admin Area";
# auth_basic_user_file /etc/nginx/.htpasswd;
}
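
The commented auth_basic directives above reference /etc/nginx/.htpasswd, which doesn't exist by default. If you enable them, create the password file first; a short sketch (the admin username is just an example):

# htpasswd is provided by apache2-utils on Debian/Ubuntu
sudo apt install apache2-utils -y

# Create the password file with a first user (you'll be prompted for a password)
sudo htpasswd -c /etc/nginx/.htpasswd admin

# Add more users later without -c (which would overwrite the file)
sudo htpasswd /etc/nginx/.htpasswd another-user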

API Dashboard (Port 3000):

# Create API dashboard configuration
sudo nano /etc/nginx/sites-available/api-dashboard
server {
listen 3000;
listen [::]:3000;
server_name yourdomain.com _;

root /var/www/api-dashboard;
index index.html index.htm;

# Separate logging
access_log /var/log/nginx/api-dashboard-access.log;
error_log /var/log/nginx/api-dashboard-error.log;

# Handle SPA routing
location / {
try_files $uri $uri/ /index.html;
}

# If this dashboard connects to live APIs, add CORS headers
location /api/ {
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type' always;

# Handle preflight requests
if ($request_method = 'OPTIONS') {
return 204;
}
}
}

5. Enable All Applications

# Enable all sites
sudo ln -s /etc/nginx/sites-available/main-website /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/admin-panel /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/api-dashboard /etc/nginx/sites-enabled/

# Test configuration
sudo nginx -t

# If test passes, reload Nginx
sudo systemctl reload nginx

6. Test Each Application

# Test each port locally on server
curl -I http://localhost:80 # Main website
curl -I http://localhost:8080 # Admin panel
curl -I http://localhost:3000 # API dashboard

# Test externally (replace with your domain/IP)
curl -I http://yourdomain.com:80
curl -I http://yourdomain.com:8080
curl -I http://yourdomain.com:3000

Strategy 2: Subdomain-Based Multi-Application Setup

When to Use Subdomain Strategy

✅ Best for:

  • Professional appearance (no port numbers in URLs)
  • SEO-friendly setup
  • Applications that serve different user groups
  • When you want SSL certificates for each app

❌ Avoid when:

  • You don't control DNS settings
  • Subdomain SSL certificates are expensive
  • Applications need to share cookies/sessions

Access Pattern:

  • Main app: https://yourdomain.com
  • Admin panel: https://admin.yourdomain.com
  • API dashboard: https://api.yourdomain.com
  • Tools: https://tools.yourdomain.com

Step-by-Step Subdomain Configuration

1. Configure DNS Records

In your domain registrar's DNS panel, add A records:

# DNS A Records
yourdomain.com → your-server-ip
www.yourdomain.com → your-server-ip
admin.yourdomain.com → your-server-ip
api.yourdomain.com → your-server-ip
tools.yourdomain.com → your-server-ip
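
Before moving on, it's worth confirming the records have propagated; a quick check with dig (replace the hostnames with your actual domains):

# Each subdomain should resolve to your server's IP
for host in yourdomain.com www.yourdomain.com admin.yourdomain.com api.yourdomain.com tools.yourdomain.com; do
    echo -n "$host -> "
    dig +short "$host" A
done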

2. Create Nginx Configurations

Main Domain:

sudo nano /etc/nginx/sites-available/main-domain
server {
listen 80;
listen [::]:80;
server_name yourdomain.com www.yourdomain.com;

root /var/www/main-website;
index index.html;

access_log /var/log/nginx/main-access.log;
error_log /var/log/nginx/main-error.log;

location / {
try_files $uri $uri/ /index.html;
}
}

Admin Subdomain:

sudo nano /etc/nginx/sites-available/admin-subdomain
server {
listen 80;
listen [::]:80;
server_name admin.yourdomain.com;

root /var/www/admin-panel;
index index.html;

access_log /var/log/nginx/admin-access.log;
error_log /var/log/nginx/admin-error.log;

location / {
try_files $uri $uri/ /index.html;
}

# Optional: Restrict access to admin panel
# allow 192.168.1.0/24; # Your office IP range
# deny all;
}

API Subdomain:

sudo nano /etc/nginx/sites-available/api-subdomain
server {
listen 80;
listen [::]:80;
server_name api.yourdomain.com;

root /var/www/api-dashboard;
index index.html;

access_log /var/log/nginx/api-access.log;
error_log /var/log/nginx/api-error.log;

location / {
try_files $uri $uri/ /index.html;
}
}

3. Enable Subdomain Sites

# Enable all subdomain sites
sudo ln -s /etc/nginx/sites-available/main-domain /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/admin-subdomain /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/api-subdomain /etc/nginx/sites-enabled/

# Test and reload
sudo nginx -t
sudo systemctl reload nginx

4. Configure SSL for All Subdomains

# Install Certbot if not already installed
sudo apt install certbot python3-certbot-nginx -y

# Generate SSL certificates for all domains at once
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com -d admin.yourdomain.com -d api.yourdomain.com

# Or generate certificates separately for each subdomain
sudo certbot --nginx -d admin.yourdomain.com
sudo certbot --nginx -d api.yourdomain.com

# Verify auto-renewal
sudo certbot certificates
sudo systemctl status certbot.timer
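
To confirm unattended renewal will succeed for all of these certificates, a dry run is a safe test:

# Simulates renewal against the Let's Encrypt staging environment (no real certificates issued)
sudo certbot renew --dry-run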

Strategy 3: Path-Based Multi-Application Setup

When to Use Path-Based Strategy

✅ Best for:

  • Applications that share user sessions
  • Single SSL certificate setup
  • Related applications (like admin panel for main app)
  • Simplified DNS management

❌ Avoid when:

  • Applications are completely independent
  • You need different caching strategies per app
  • Build tools don't support base paths well

Access Pattern:

  • Main app: https://yourdomain.com/
  • Admin panel: https://yourdomain.com/admin/
  • Dashboard: https://yourdomain.com/dashboard/
  • Tools: https://yourdomain.com/tools/

Step-by-Step Path-Based Configuration

1. Configure Applications for Base Paths

For React Apps (Vite):

// vite.config.js for admin panel
export default {
  base: '/admin/', // This app will be served from /admin/
  build: {
    outDir: 'dist'
  }
}

// vite.config.js for dashboard
export default {
  base: '/dashboard/', // This app will be served from /dashboard/
  build: {
    outDir: 'dist'
  }
}

For React Apps (Create React App):

// package.json for admin panel
{
  "name": "admin-panel",
  "homepage": "/admin",
  "scripts": {
    "build": "react-scripts build"
  }
}

// package.json for dashboard
{
  "name": "dashboard",
  "homepage": "/dashboard",
  "scripts": {
    "build": "react-scripts build"
  }
}
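
After building, you can sanity-check that the base path was actually applied; a minimal check, assuming Vite's default dist output (use build/ for Create React App):

# Run from the admin panel project after `npm run build`
# Asset URLs in the generated index.html should start with /admin/
grep -oE '(src|href)="[^"]*"' dist/index.html | head
# Expect values like src="/admin/assets/index-abc123.js"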

2. Create Single Nginx Configuration

sudo nano /etc/nginx/sites-available/multi-path-app
server {
listen 80;
listen [::]:80;
server_name yourdomain.com www.yourdomain.com;

# Main application (serves from root)
location / {
root /var/www/main-website;
try_files $uri $uri/ /index.html;

access_log /var/log/nginx/main-access.log;
error_log /var/log/nginx/main-error.log;
}

# Admin panel at /admin/
location /admin/ {
alias /var/www/admin-panel/;
try_files $uri $uri/ /admin/index.html;

access_log /var/log/nginx/admin-access.log;
error_log /var/log/nginx/admin-error.log;
}

# Dashboard at /dashboard/
location /dashboard/ {
alias /var/www/dashboard/;
try_files $uri $uri/ /dashboard/index.html;

access_log /var/log/nginx/dashboard-access.log;
error_log /var/log/nginx/dashboard-error.log;
}

# Tools at /tools/
location /tools/ {
alias /var/www/tools/;
try_files $uri $uri/ /tools/index.html;

access_log /var/log/nginx/tools-access.log;
error_log /var/log/nginx/tools-error.log;
}

# API endpoints (if you have backend APIs)
location /api/ {
proxy_pass http://localhost:3001/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

# Static assets caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";

# Try to serve from each app directory
try_files $uri @fallback;
}

# Fallback for static assets
location @fallback {
root /var/www/main-website;
}

# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
}

3. Deploy Applications to Correct Paths

# Create directories for path-based apps
sudo mkdir -p /var/www/{main-website,admin-panel,dashboard,tools}

# Set ownership
sudo chown -R deploy:www-data /var/www/

# Enable the multi-path configuration
sudo ln -s /etc/nginx/sites-available/multi-path-app /etc/nginx/sites-enabled/

# Test and reload
sudo nginx -t
sudo systemctl reload nginx

Adding New Applications to Existing Setup

Adding a New App to Port-Based Setup

Scenario: You want to add a "Client Portal" to your existing port-based setup.

1. Choose Available Port

# Check which ports are already in use
sudo netstat -tlnp | grep nginx
# Or check your port assignments documentation
cat ~/port-assignments.md
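
On newer distributions where netstat isn't installed by default, ss gives the same information:

# List listening TCP sockets owned by Nginx
sudo ss -tlnp | grep nginx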

2. Update Firewall

# Allow the new port (example: 9000)
sudo ufw allow 9000

# Verify firewall rules
sudo ufw status

3. Create Application Directory

# Create directory for new client portal
sudo mkdir -p /var/www/client-portal

# Set proper ownership
sudo chown deploy:www-data /var/www/client-portal

4. Create Nginx Configuration

sudo nano /etc/nginx/sites-available/client-portal
server {
listen 9000;
listen [::]:9000;
server_name yourdomain.com _;

root /var/www/client-portal;
index index.html;

access_log /var/log/nginx/client-portal-access.log;
error_log /var/log/nginx/client-portal-error.log;

location / {
try_files $uri $uri/ /index.html;
}

# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}

# Client-specific security (if needed)
# add_header Content-Security-Policy "default-src 'self';" always;
}

5. Enable and Test

# Enable the new site
sudo ln -s /etc/nginx/sites-available/client-portal /etc/nginx/sites-enabled/

# Test configuration
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

# Test the new application
curl -I http://yourdomain.com:9000

Adding a New App to Subdomain Setup

Scenario: Adding "support.yourdomain.com" to existing subdomain setup.

1. Configure DNS

Add DNS A record:

support.yourdomain.com → your-server-ip

2. Create Application Directory

sudo mkdir -p /var/www/support-portal
sudo chown deploy:www-data /var/www/support-portal

3. Create Nginx Configuration

sudo nano /etc/nginx/sites-available/support-subdomain
server {
listen 80;
listen [::]:80;
server_name support.yourdomain.com;

root /var/www/support-portal;
index index.html;

access_log /var/log/nginx/support-access.log;
error_log /var/log/nginx/support-error.log;

location / {
try_files $uri $uri/ /index.html;
}
}

4. Enable and Add SSL

# Enable the site
sudo ln -s /etc/nginx/sites-available/support-subdomain /etc/nginx/sites-enabled/

# Test configuration
sudo nginx -t && sudo systemctl reload nginx

# Add SSL certificate
sudo certbot --nginx -d support.yourdomain.com

Adding a New App to Path-Based Setup

Scenario: Adding "/support/" path to existing path-based setup.

1. Configure App Build for Base Path

// vite.config.js for support app
export default {
  base: "/support/",
  build: {
    outDir: "dist",
  },
};

2. Create Application Directory

sudo mkdir -p /var/www/support-portal
sudo chown deploy:www-data /var/www/support-portal

3. Update Existing Nginx Configuration

# Edit the existing multi-path configuration
sudo nano /etc/nginx/sites-available/multi-path-app

Add this location block to the existing server configuration:

# Add this inside the existing server block
location /support/ {
alias /var/www/support-portal/;
try_files $uri $uri/ /support/index.html;

access_log /var/log/nginx/support-access.log;
error_log /var/log/nginx/support-error.log;
}

4. Test and Reload

# Test configuration
sudo nginx -t

# Reload Nginx
sudo systemctl reload nginx

# Test the new path
curl -I http://yourdomain.com/support/

Troubleshooting Multi-Application Issues

Configuration Validation Checklist

Before deploying new applications, run through this checklist:

#!/bin/bash
# multi-app-validation.sh

echo "🔍 Multi-Application Configuration Validation"
echo "============================================="

# 1. Check for duplicate listen directives (multiple server blocks can legitimately
#    share a port with different server_name values, so treat matches as hints, not errors)
echo "📋 Checking for duplicate listen directives..."
sudo nginx -T | grep "listen" | sort | uniq -c | awk '$1 > 1 {print "⚠️ Duplicate listen directive: " $0}'

# 2. Check for duplicate server names (the same name on different ports is fine,
#    as in the port-based setup above; the same name on the same port is a conflict)
echo "📋 Checking for duplicate server names..."
sudo nginx -T | grep "server_name" | sort | uniq -c | awk '$1 > 1 {print "⚠️ Duplicate server_name: " $0}'

# 3. Verify all sites are enabled
echo "📋 Checking enabled sites..."
for site in $(ls /etc/nginx/sites-available/); do
if [ -L "/etc/nginx/sites-enabled/$site" ]; then
echo "✅ $site - enabled"
else
echo "⚠️ $site - available but not enabled"
fi
done

# 4. Test Nginx configuration
echo "📋 Testing Nginx configuration..."
if sudo nginx -t; then
echo "✅ Nginx configuration is valid"
else
echo "❌ Nginx configuration has errors"
fi

# 5. Check firewall ports
echo "📋 Checking firewall configuration..."
sudo ufw status | grep -E "(80|8080|3000|8090|9000)" || echo "⚠️ Standard ports not found in firewall rules"

echo "============================================="
echo "✅ Validation complete!"

Common Issues and Solutions

Issue: "Address already in use" when reloading Nginx

# Find what's using the port
sudo netstat -tlnp | grep :8080

# Kill the process if it's not Nginx
sudo kill -9 <process_id>

# Or restart Nginx completely
sudo systemctl restart nginx

Issue: Wrong application served on a URL

# Check Nginx configuration priority
sudo nginx -T | grep -A 10 -B 2 "server {"

# When no server_name matches exactly, Nginx falls back to the default server for
# that port (the first server block defined for it, unless one is marked default_server).
# Make sure your server_name directives are specific enough.
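
One way to see which server block actually answers for a given name is to send requests with an explicit Host header from the server itself:

# Compare responses for a known name and an unknown one on the same port
curl -sI -H "Host: yourdomain.com" http://localhost:8080 | head -n 3
curl -sI -H "Host: unknown.example" http://localhost:8080 | head -n 3
# If both return the same app, that server block is acting as the default for port 8080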

Issue: SSL certificate errors with subdomains

# Check certificate coverage
sudo certbot certificates

# Generate a new certificate that covers all subdomains
# (wildcards require the DNS-01 challenge, so the --nginx plugin can't issue them)
sudo certbot certonly --manual --preferred-challenges dns -d yourdomain.com -d "*.yourdomain.com"

# Or use individual certificates per subdomain
sudo certbot --nginx -d specific.yourdomain.com
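
If a subdomain still shows a certificate warning, inspecting what the server actually presents can pinpoint the problem (assumes HTTPS is already enabled on port 443):

# Show the subject and validity dates of the certificate served for a subdomain
echo | openssl s_client -servername admin.yourdomain.com -connect admin.yourdomain.com:443 2>/dev/null \
  | openssl x509 -noout -subject -dates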

Real-World Configuration Examples

Example 1: React Frontend + Node.js Backend (Combined Setup)

Scenario: The most common deployment pattern - a React frontend application with a Node.js backend API on the same server.

Key Architecture Points:

  • React App: Built as static files, served directly by Nginx
  • Node.js API: Runs as a background service managed by PM2
  • Nginx: Acts as both static file server and reverse proxy
  • Benefits: Fast static serving + reliable backend process management

Component Roles & Responsibilities

React App Management

Who Manages: Nginx Web Server
Why Nginx:

  • React builds into static HTML, CSS, and JavaScript files
  • Nginx is optimized for serving static content with minimal memory usage
  • No need for a JavaScript runtime after the build process
  • Handles thousands of concurrent static file requests efficiently

What Happens Behind the Scenes:

# 1. React build process creates static files
npm run build
# Creates: build/static/js/main.[hash].js, build/static/css/main.[hash].css, etc.

# 2. Nginx serves these files directly from disk
# When user visits yourdomain.com:
# -> Nginx reads /var/www/react-app/build/index.html
# -> Browser requests /static/js/main.[hash].js
# -> Nginx serves file directly from filesystem (very fast!)

Node.js API Management

Who Manages: PM2 Process Manager
Why PM2:

  • Node.js apps are long-running processes that need constant monitoring
  • PM2 automatically restarts crashed processes
  • Provides memory management, logging, and clustering capabilities
  • Handles graceful shutdowns and zero-downtime deployments

What Happens Behind the Scenes:

# 1. PM2 starts Node.js process
pm2 start server.js --name api-server

# 2. PM2 continuously monitors the process
# - Watches memory usage and CPU
# - Restarts if process crashes or uses too much memory
# - Keeps logs of all stdout/stderr output
# - Maintains process PID and status

# 3. When API request comes in:
# Browser -> Nginx (/api/users) -> PM2-managed Node.js (localhost:3001) -> Database -> Response back

Nginx Reverse Proxy Role

Why Nginx Handles Both:

  • Static Files: Direct filesystem access (fastest possible serving)
  • API Proxy: Routes API calls to PM2-managed Node.js process
  • Load Balancing: Can distribute requests across multiple Node.js instances
  • SSL Termination: Handles HTTPS certificates for both apps
  • Caching: Can cache API responses and static assets

Request Flow Behind the Scenes:

User Request Flow:

1. Static File Request (yourdomain.com/dashboard):
Browser → Nginx → Filesystem (/var/www/react-app/build/index.html) → Browser

2. API Request (yourdomain.com/api/users):
Browser → Nginx → PM2 Process (localhost:3001) → Database → Response chain back

3. React Router Navigation (yourdomain.com/profile):
Browser → Nginx (try_files rule) → Falls back to index.html → React Router handles routing
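
You can reproduce all three flows from the server with curl; the asset and API paths below are illustrative, so adjust them to your build output and routes:

# 1. Static file served straight from disk by Nginx
curl -sI http://localhost/static/js/main.js | head -n 3

# 2. API request proxied to the PM2-managed Node.js process on port 3001
curl -sI http://localhost/api/users | head -n 3

# 3. Client-side route: Nginx falls back to index.html, React Router takes over
curl -sI http://localhost/profile | head -n 3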

Application Structure:

/var/www/
├── react-app/
│   └── build/        # React production build (static files)
└── nodejs-api/       # Node.js API source code

PM2 Process Management:

# Navigate to Node.js API directory
cd /var/www/nodejs-api

# Start Node.js API with PM2
pm2 start server.js --name "api-server" --port 3001

# Save PM2 configuration
pm2 save
pm2 startup

# Check status
pm2 status

Complete Nginx Configuration:

# Combined React + Node.js Setup
# /etc/nginx/sites-available/react-nodejs-app
server {
listen 80;
listen [::]:80;
server_name yourdomain.com www.yourdomain.com;

# Serve React app (static files)
root /var/www/react-app/build;
index index.html;

# Logging
access_log /var/log/nginx/react-nodejs-access.log;
error_log /var/log/nginx/react-nodejs-error.log;

# Serve React static files
location / {
try_files $uri $uri/ /index.html; # SPA routing fallback

# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
}

# Proxy API requests to Node.js backend
location /api {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;

# Timeout settings
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}

# Health check endpoint
location /health {
proxy_pass http://localhost:3001/health;
access_log off;
}

# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
}

PM2 Ecosystem Configuration (ecosystem.config.js):

module.exports = {
  apps: [
    {
      name: "api-server",
      script: "server.js",
      instances: 1,
      exec_mode: "fork",
      env: {
        NODE_ENV: "production",
        PORT: 3001,
      },
      error_file: "./logs/err.log",
      out_file: "./logs/out.log",
      log_file: "./logs/combined.log",
      time: true,
    },
  ],
};

GitHub Actions Deployment Workflow:

name: Deploy React + Node.js Apps

on:
  push:
    branches: [main]

jobs:
  deploy-react:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "18"

      - name: Install dependencies
        run: npm install
        working-directory: ./frontend

      - name: Build React app
        run: npm run build
        working-directory: ./frontend

      - name: Deploy to VPS
        run: |
          rsync -avz --delete ./frontend/build/ user@your-server:/var/www/react-app/build/

  deploy-nodejs:
    runs-on: ubuntu-latest
    needs: deploy-react
    steps:
      - uses: actions/checkout@v3

      - name: Deploy Node.js API
        run: |
          rsync -avz --exclude node_modules ./backend/ user@your-server:/var/www/nodejs-api/
          ssh user@your-server "cd /var/www/nodejs-api && npm install && pm2 restart api-server"
Process Lifecycle & Deployment Flow

What Happens During Deployment

React App Deployment:

# 1. GitHub Actions builds React app
npm run build # Creates optimized static files

# 2. Files are transferred to server
rsync -avz ./build/ user@server:/var/www/react-app/build/

# 3. Nginx immediately serves new files (no restart needed)
# Old files are replaced, new requests get new version instantly

Node.js API Deployment:

# 1. Code is transferred to server
rsync -avz ./backend/ user@server:/var/www/nodejs-api/

# 2. Dependencies are installed
npm install --production

# 3. PM2 restarts the process
pm2 restart api-server
# Note: pm2 restart briefly stops the old process before starting the new one.
# For true zero-downtime reloads, run the app in cluster mode and use: pm2 reload api-server

Memory and Resource Management

React App (Static Files):

  • Memory Usage: ~0MB (no runtime process)
  • CPU Usage: ~0% (Nginx handles file serving)
  • Disk Usage: ~50-200MB for typical React build
  • Scaling: Limited only by Nginx's file serving capacity (thousands of concurrent users)

Node.js API (PM2 Managed):

  • Memory Usage: ~50-500MB per instance (depends on app complexity)
  • CPU Usage: Varies with request load
  • Process Management: PM2 monitors and auto-restarts if memory/CPU limits exceeded
  • Scaling: Can run multiple instances with PM2 cluster mode

// PM2 Cluster Mode Example
module.exports = {
  apps: [
    {
      name: "api-server",
      script: "server.js",
      instances: "max", // Uses all CPU cores
      exec_mode: "cluster",
      max_memory_restart: "300M", // Restart if memory exceeds 300MB
    },
  ],
};

Error Handling & Recovery

React App Failures:

  • Build Failure: GitHub Actions fails, old version remains active
  • File Corruption: Nginx returns 404, but doesn't crash
  • Recovery: Simply re-deploy working build

Node.js API Failures:

  • Process Crash: PM2 automatically restarts within seconds
  • Memory Leak: PM2 restarts when memory limit reached
  • Startup Failure: PM2 retries startup with exponential backoff
  • Recovery: PM2 handles most issues automatically

# PM2 Automatic Recovery Logs
pm2 logs api-server --lines 50
# Shows restart attempts, error messages, and recovery actions

Why This Setup Works Best:

  • Performance: Nginx serves static React files at maximum speed (10,000+ req/sec)
  • Reliability: PM2 ensures Node.js API has 99.9%+ uptime with auto-recovery
  • Scalability: static React files scale with Nginx's serving capacity, while the Node.js API scales out with PM2 clustering
  • Maintenance: Independent deployment cycles let you update the frontend without touching the backend
  • Security: Nginx provides built-in DDoS protection and request filtering
  • Monitoring: PM2 provides real-time metrics, Nginx provides access logs
  • Cost Efficiency: One server handles both apps with optimal resource usage

Example 2: SaaS Company with Multiple Services (Port-Based)

Scenario: A SaaS company needs to host their main product, admin panel, API documentation, and customer support portal.

Application Structure:

/var/www/
├── saas-product/            # Port 80   - Main SaaS application
├── admin-dashboard/         # Port 8080 - Internal admin panel
├── api-docs/                # Port 3000 - API documentation
├── support-portal/          # Port 8090 - Customer support
└── monitoring-dashboard/    # Port 9000 - System monitoring

Complete Nginx Configuration:

# Main SaaS Product (Port 80)
# /etc/nginx/sites-available/saas-product
server {
listen 80;
listen [::]:80;
server_name saas.company.com www.saas.company.com;

root /var/www/saas-product;
index index.html;

access_log /var/log/nginx/saas-access.log;
error_log /var/log/nginx/saas-error.log;

# Handle React Router
location / {
try_files $uri $uri/ /index.html;
}

# API proxy to backend
location /api/ {
proxy_pass http://localhost:4000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

# WebSocket support for real-time features
location /ws/ {
proxy_pass http://localhost:4000/ws/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}

# Admin Dashboard (Port 8080) - Restricted Access
# /etc/nginx/sites-available/admin-dashboard
server {
listen 8080;
listen [::]:8080;
server_name saas.company.com;

root /var/www/admin-dashboard;
index index.html;

access_log /var/log/nginx/admin-access.log;
error_log /var/log/nginx/admin-error.log;

# Restrict access to office IPs only
allow 203.0.113.0/24; # Office IP range
allow 198.51.100.50; # VPN IP
deny all;

# Basic auth as additional security layer
auth_basic "Admin Area";
auth_basic_user_file /etc/nginx/.htpasswd;

location / {
try_files $uri $uri/ /index.html;
}
}

# API Documentation (Port 3000)
# /etc/nginx/sites-available/api-docs
server {
listen 3000;
listen [::]:3000;
server_name saas.company.com;

root /var/www/api-docs;
index index.html;

access_log /var/log/nginx/docs-access.log;
error_log /var/log/nginx/docs-error.log;

location / {
try_files $uri $uri/ /index.html;
}

# Allow iframe embedding for documentation widgets
add_header X-Frame-Options "SAMEORIGIN";
}

GitHub Actions Workflow for SaaS Setup:

name: SaaS Multi-App Deployment

on:
  push:
    branches: [main]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      saas-product: ${{ steps.changes.outputs.saas-product }}
      admin-dashboard: ${{ steps.changes.outputs.admin-dashboard }}
      api-docs: ${{ steps.changes.outputs.api-docs }}
      support-portal: ${{ steps.changes.outputs.support-portal }}
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: changes
        with:
          filters: |
            saas-product:
              - 'apps/saas-product/**'
              - 'packages/shared-components/**'
            admin-dashboard:
              - 'apps/admin-dashboard/**'
              - 'packages/admin-utils/**'
            api-docs:
              - 'apps/api-docs/**'
              - 'api-specs/**'
            support-portal:
              - 'apps/support-portal/**'

  deploy-saas-product:
    needs: detect-changes
    if: needs.detect-changes.outputs.saas-product == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and Deploy Main Product
        run: |
          cd apps/saas-product
          npm ci && npm run build

          # Deploy to port 80 directory
          scp -r dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/saas-product/

  deploy-admin-dashboard:
    needs: detect-changes
    if: needs.detect-changes.outputs.admin-dashboard == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and Deploy Admin Dashboard
        run: |
          cd apps/admin-dashboard
          npm ci && npm run build

          # Deploy to port 8080 directory
          scp -r dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/admin-dashboard/
Example 3: E-commerce Platform (Subdomain-Based)

Scenario: E-commerce platform with separate customer store, vendor dashboard, and admin panel.

Domain Structure:

store.ecommerce.com     → Main customer store
vendors.ecommerce.com   → Vendor management dashboard
admin.ecommerce.com     → Admin control panel
api.ecommerce.com       → API gateway and documentation

DNS Configuration:

# DNS A Records
store.ecommerce.com → 192.168.1.100
vendors.ecommerce.com → 192.168.1.100
admin.ecommerce.com → 192.168.1.100
api.ecommerce.com → 192.168.1.100

Nginx Configuration for E-commerce:

# Customer Store
# /etc/nginx/sites-available/store-subdomain
server {
listen 80;
listen [::]:80;
server_name store.ecommerce.com;

root /var/www/customer-store;
index index.html;

# High-performance settings for customer store
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css application/json application/javascript;

# Cache product images aggressively
location ~* \.(jpg|jpeg|png|gif|webp)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}

location / {
try_files $uri $uri/ /index.html;
}

# Redirect to HTTPS (after SSL setup)
# return 301 https://$server_name$request_uri;
}

# Vendor Dashboard
# /etc/nginx/sites-available/vendors-subdomain
server {
listen 80;
listen [::]:80;
server_name vendors.ecommerce.com;

root /var/www/vendor-dashboard;
index index.html;

# Vendor-specific security headers
add_header X-Content-Type-Options "nosniff";
add_header X-Frame-Options "DENY";
add_header X-XSS-Protection "1; mode=block";

location / {
try_files $uri $uri/ /index.html;
}

# File upload endpoint for vendor products
location /uploads/ {
client_max_body_size 10M;
proxy_pass http://localhost:5000/uploads/;
}
}

# Admin Control Panel
# /etc/nginx/sites-available/admin-subdomain
server {
listen 80;
listen [::]:80;
server_name admin.ecommerce.com;

root /var/www/admin-panel;
index index.html;

# Strict security for admin panel
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline';" always;

location / {
try_files $uri $uri/ /index.html;
}
}

SSL Setup for Multiple Subdomains:

# Generate a wildcard certificate for all subdomains
# (wildcards require the DNS-01 challenge and can't be issued through the --nginx plugin)
sudo certbot certonly --manual --preferred-challenges dns -d ecommerce.com -d "*.ecommerce.com"

# Or generate individual certificates
sudo certbot --nginx -d store.ecommerce.com
sudo certbot --nginx -d vendors.ecommerce.com
sudo certbot --nginx -d admin.ecommerce.com

Example 4: Corporate Website with Tools (Path-Based)

Scenario: Corporate website with integrated employee tools and client portals.

URL Structure:

corporate.com/           → Main corporate website
corporate.com/tools/     → Employee productivity tools
corporate.com/clients/   → Client portal
corporate.com/hr/        → HR management system
corporate.com/docs/      → Internal documentation

Application Build Configuration:

// vite.config.js for tools app
export default {
  base: '/tools/',
  build: {
    outDir: 'dist',
    assetsDir: 'assets'
  },
  server: {
    port: 3001
  }
}

// vite.config.js for client portal
export default {
  base: '/clients/',
  build: {
    outDir: 'dist'
  }
}

// vite.config.js for HR system
export default {
  base: '/hr/',
  build: {
    outDir: 'dist'
  }
}

Single Nginx Configuration for Path-Based Setup:

# /etc/nginx/sites-available/corporate-multi-path
server {
listen 80;
listen [::]:80;
server_name corporate.com www.corporate.com;

# Main corporate website (root)
location / {
root /var/www/corporate-main;
try_files $uri $uri/ /index.html;

access_log /var/log/nginx/main-access.log;
}

# Employee Tools at /tools/
location /tools/ {
alias /var/www/employee-tools/;
try_files $uri $uri/ /tools/index.html;

access_log /var/log/nginx/tools-access.log;

# Restrict to employee IP ranges
allow 192.168.1.0/24;
allow 10.0.0.0/8;
deny all;
}

# Client Portal at /clients/
location /clients/ {
alias /var/www/client-portal/;
try_files $uri $uri/ /clients/index.html;

access_log /var/log/nginx/clients-access.log;

# Rate limiting for client portal
limit_req zone=client_portal burst=10 nodelay;
}

# HR System at /hr/
location /hr/ {
alias /var/www/hr-system/;
try_files $uri $uri/ /hr/index.html;

access_log /var/log/nginx/hr-access.log;

# Extra security for HR system
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
}

# Internal Documentation at /docs/
location /docs/ {
alias /var/www/documentation/;
try_files $uri $uri/ /docs/index.html;

access_log /var/log/nginx/docs-access.log;
}

# Shared API endpoint
location /api/ {
proxy_pass http://localhost:4000/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}

# Global static asset caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";

# Try each application directory
try_files $uri @asset_fallback;
}

location @asset_fallback {
root /var/www/corporate-main;
}
}

# Rate limiting configuration (add to nginx.conf)
http {
limit_req_zone $binary_remote_addr zone=client_portal:10m rate=1r/s;
}
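
A quick way to watch the limit kick in once the zone is active (by default Nginx answers rate-limited requests with 503):

# Fire 15 rapid requests; after the burst of 10 is exhausted you should start seeing 503s
for i in $(seq 1 15); do
    curl -s -o /dev/null -w "%{http_code}\n" http://corporate.com/clients/
done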

GitHub Actions for Corporate Path-Based Setup:

name: Corporate Multi-Path Deployment

on:
  push:
    branches: [main]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      main: ${{ steps.changes.outputs.main }}
      tools: ${{ steps.changes.outputs.tools }}
      clients: ${{ steps.changes.outputs.clients }}
      hr: ${{ steps.changes.outputs.hr }}
      docs: ${{ steps.changes.outputs.docs }}
    steps:
      - uses: actions/checkout@v3
      - uses: dorny/paths-filter@v2
        id: changes
        with:
          filters: |
            main:
              - 'apps/corporate-main/**'
            tools:
              - 'apps/employee-tools/**'
            clients:
              - 'apps/client-portal/**'
            hr:
              - 'apps/hr-system/**'
            docs:
              - 'apps/documentation/**'

  build-and-deploy:
    needs: detect-changes
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build Main Site
        if: needs.detect-changes.outputs.main == 'true'
        run: |
          cd apps/corporate-main
          npm ci && npm run build
          scp -r dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/corporate-main/

      - name: Build Employee Tools
        if: needs.detect-changes.outputs.tools == 'true'
        run: |
          cd apps/employee-tools
          npm ci && npm run build
          scp -r dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/employee-tools/

      - name: Build Client Portal
        if: needs.detect-changes.outputs.clients == 'true'
        run: |
          cd apps/client-portal
          npm ci && npm run build
          scp -r dist/* ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/client-portal/

      - name: Test All Paths
        run: |
          sleep 30
          curl -f -s http://${{ secrets.SSH_HOST }}/
          curl -f -s http://${{ secrets.SSH_HOST }}/tools/
          curl -f -s http://${{ secrets.SSH_HOST }}/clients/
          curl -f -s http://${{ secrets.SSH_HOST }}/hr/
          echo "✅ All applications accessible!"

Performance Optimization for Multi-App Setups

Shared Nginx Configuration Optimizations:

# Add to /etc/nginx/nginx.conf

# Worker process optimization (worker_processes lives at the top level;
# worker_connections belongs inside the events block)
worker_processes auto;

events {
    worker_connections 2048;
}

# The directives below go inside the http block

# Buffer optimizations for multi-app
client_body_buffer_size 128k;
client_max_body_size 20M;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;

# Connection optimization
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;

# Gzip compression for all apps
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/xml+rss
application/json;

# Cache zone for static assets
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m max_size=1g inactive=60m;
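
After editing the global configuration, validate and apply it just like the site configs, then check that the worker count matches your CPU cores:

# Validate and reload
sudo nginx -t && sudo systemctl reload nginx

# With worker_processes auto, the number of worker processes should equal the core count
nproc
ps -C nginx -o pid,cmd | grep -c "worker process"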

Monitoring Script for Multi-App Performance:

#!/bin/bash
# /home/deploy/monitor-performance.sh

echo "🔍 Multi-App Performance Monitor - $(date)"
echo "=========================================="

# Define applications with their URLs
declare -A apps=(
["Main Site"]="http://localhost:80"
["Admin Panel"]="http://localhost:8080"
["API Docs"]="http://localhost:3000"
["Support"]="http://localhost:8090"
)

for app_name in "${!apps[@]}"; do
url="${apps[$app_name]}"

# Measure response time
response_time=$(curl -o /dev/null -s -w '%{time_total}' --max-time 10 "$url" 2>/dev/null)

if [ $? -eq 0 ]; then
if (( $(echo "$response_time > 2.0" | bc -l) )); then
echo "⚠️ $app_name: SLOW (${response_time}s)"
else
echo "✅ $app_name: OK (${response_time}s)"
fi
else
echo "❌ $app_name: DOWN"
fi
done

# System resource check
echo ""
echo "📊 System Resources:"
echo "CPU: $(top -bn1 | grep 'Cpu(s)' | awk '{print $2}' | awk -F'%' '{print $1}')%"
echo "Memory: $(free | awk 'NR==2{printf "%.1f%%", $3*100/$2}')"
echo "Disk: $(df / | awk 'NR==2{print $5}')"

This comprehensive multi-application deployment guide provides you with real-world examples and practical configurations for efficiently managing multiple applications on a single VPS server. Choose the strategy that best fits your use case, follow the step-by-step instructions, and use the troubleshooting tools to maintain a healthy multi-app environment.