OpenResty is a powerful web platform that combines Nginx with LuaJIT, enabling high-performance web applications with dynamic capabilities. In this comprehensive guide, we’ll explore setting up production-ready OpenResty with advanced Lua-based metrics collection, custom monitoring, and performance optimization.
Table of Contents
- Why OpenResty?
- Architecture Overview
- Production Installation
- Lua Metrics Framework
- Custom Metrics Collection
- Performance Monitoring
- Security Configuration
- High Availability Setup
- Troubleshooting and Optimization
- Production Deployment
Why OpenResty?
OpenResty provides unique advantages for modern web applications:
- High Performance: Built on Nginx with LuaJIT for exceptional speed
- Dynamic Capabilities: Lua scripting for complex logic without external dependencies
- Built-in Libraries: Rich ecosystem of Lua modules for common tasks
- Metrics Integration: Native support for custom metrics and monitoring
- Production Ready: Battle-tested in high-traffic environments
- Microservices Friendly: Perfect for API gateways and service mesh
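As a taste of what "dynamic capabilities" means in practice, here is a minimal, hypothetical server block that answers a request entirely from Lua, with no upstream involved (the `/ping` path and response fields are illustrative):

```nginx
# Minimal example: a JSON endpoint implemented entirely in Lua
server {
    listen 8081;
    location /ping {
        default_type application/json;
        content_by_lua_block {
            local cjson = require "cjson"
            ngx.say(cjson.encode({
                message = "pong",
                worker  = ngx.worker.id(),  -- which Nginx worker served this request
                now     = ngx.now()         -- current time with millisecond resolution
            }))
        }
    }
}
```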
OpenResty vs Traditional Nginx
Feature | OpenResty | Nginx |
---|---|---|
Scripting | ✅ LuaJIT | ⚠️ Limited (njs module) |
Metrics | ✅ Built-in via Lua | ❌ Limited (stub_status) |
Dynamic Logic | ✅ Per-request Lua | ❌ Config reload required |
Complex Logic | ✅ Lua modules | ❌ C modules only |
Performance | ✅ Excellent | ✅ Excellent |
Learning Curve | Moderate | Easy |
Architecture Overview
OpenResty Components
graph TB
subgraph "OpenResty Architecture"
A[Client Requests] --> B[Nginx Core]
B --> C[LuaJIT VM]
C --> D[Lua Modules]
D --> E[Metrics Collection]
D --> F[Custom Logic]
D --> G[Cache Layer]
E --> H[Prometheus Metrics]
E --> I[Custom Dashboards]
E --> J[Alerting System]
B --> K[Upstream Services]
B --> L[Static Files]
M[Configuration] --> B
N[Lua Scripts] --> C
end
Metrics Collection Flow
sequenceDiagram
participant Client
participant OpenResty
participant LuaVM
participant Metrics
participant Prometheus
participant Grafana
Client->>OpenResty: HTTP Request
OpenResty->>LuaVM: Process Request
LuaVM->>Metrics: Collect Metrics
Metrics->>Prometheus: Export Metrics
Prometheus->>Grafana: Query Metrics
OpenResty->>Client: HTTP Response
Production Installation
1. System Requirements
# Minimum requirements
CPU: 2 cores
RAM: 4GB
Storage: 20GB SSD
OS: Ubuntu 20.04+ / CentOS 8+ / RHEL 8+
# Recommended for production
CPU: 4+ cores
RAM: 8GB+
Storage: 50GB+ SSD
OS: Ubuntu 22.04 LTS / RHEL 9
2. Installation Methods
Method 1: Package Manager Installation
# Ubuntu/Debian (apt-key is deprecated; install the key into a dedicated keyring)
wget -qO - https://openresty.org/package/pubkey.gpg | sudo gpg --dearmor -o /usr/share/keyrings/openresty.gpg
echo "deb [signed-by=/usr/share/keyrings/openresty.gpg] http://openresty.org/package/ubuntu $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/openresty.list
sudo apt update
sudo apt install -y openresty openresty-resty
# CentOS/RHEL
sudo yum install yum-utils
sudo yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo
sudo yum install openresty openresty-resty
Method 2: Source Compilation
#!/bin/bash
# install-openresty-from-source.sh
set -e
# Install dependencies
sudo apt update
sudo apt install -y build-essential libpcre3-dev libssl-dev zlib1g-dev libreadline-dev libncurses5-dev libperl-dev libgd-dev libgeoip-dev
# Download and compile OpenResty
cd /tmp
wget https://openresty.org/download/openresty-1.21.4.3.tar.gz
tar -xzf openresty-1.21.4.3.tar.gz
cd openresty-1.21.4.3
# Configure with production optimizations
./configure \
    --prefix=/usr/local/openresty \
    --with-luajit \
    --with-pcre-jit \
    --with-threads \
    --with-file-aio \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_realip_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module \
    --with-http_gunzip_module \
    --with-http_secure_link_module \
    --with-http_auth_request_module \
    --with-http_addition_module \
    --with-http_dav_module \
    --with-http_flv_module \
    --with-http_mp4_module \
    --with-http_geoip_module \
    --with-http_image_filter_module \
    --with-http_random_index_module \
    --with-http_slice_module \
    --with-http_sub_module \
    --with-http_xslt_module \
    --with-mail \
    --with-mail_ssl_module \
    --with-stream \
    --with-stream_ssl_module \
    --with-stream_ssl_preread_module \
    --with-stream_realip_module \
    --with-stream_geoip_module
# Each flag needs to appear only once. The old --with-ipv6 flag was dropped:
# nginx removed it in 1.11.5, and IPv6 is always compiled in since then.
# Compile and install
make -j$(nproc)
sudo make install
# Create systemd service
sudo tee /etc/systemd/system/openresty.service > /dev/null <<EOF
[Unit]
Description=OpenResty HTTP Server
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/openresty/nginx/logs/nginx.pid
ExecStartPre=/usr/local/openresty/nginx/sbin/nginx -t
ExecStart=/usr/local/openresty/nginx/sbin/nginx
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/bin/kill -s QUIT \$MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable openresty
sudo systemctl start openresty
Method 3: Docker Installation
# Dockerfile
FROM openresty/openresty:1.21.4.3-0-alpine
# Install additional packages
RUN apk add --no-cache \
curl \
wget \
bash \
vim
# Copy custom configuration
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
COPY lua/ /usr/local/openresty/nginx/lua/
# Create directories
RUN mkdir -p /usr/local/openresty/nginx/logs \
/usr/local/openresty/nginx/lua \
/usr/local/openresty/nginx/conf/conf.d
# Expose ports
EXPOSE 80 443 8080
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
CMD ["/usr/local/openresty/nginx/sbin/nginx", "-g", "daemon off;"]
3. Production Configuration
# /usr/local/openresty/nginx/conf/nginx.conf
user nginx;
worker_processes auto;
pid /var/run/nginx.pid;
# ngx_http_lua and ngx_stream_lua are compiled into OpenResty itself;
# no load_module directives are needed (the .so files above do not exist).
events {
worker_connections 4096;
use epoll;
multi_accept on;
worker_aio_requests 32;
}
http {
include mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
error_log /var/log/nginx/error.log warn;
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
server_tokens off;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Rate limiting
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
# Connection limiting
limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_conn_zone $server_name zone=conn_limit_per_server:10m;
# Lua configuration
lua_package_path "/usr/local/openresty/nginx/lua/?.lua;;";
lua_package_cpath "/usr/local/openresty/nginx/lua/?.so;;";
# Shared dictionaries for metrics
lua_shared_dict metrics 10m;
lua_shared_dict cache 100m;
lua_shared_dict locks 1m;
# Include server configurations
include /etc/nginx/conf.d/*.conf;
}
Lua Metrics Framework
1. Core Metrics Module
-- /usr/local/openresty/nginx/lua/metrics.lua
local _M = {}
local cjson = require "cjson"
local shared_metrics = ngx.shared.metrics
-- Metric types
local METRIC_TYPES = {
COUNTER = "counter",
GAUGE = "gauge",
HISTOGRAM = "histogram",
SUMMARY = "summary"
}
-- Map metric types to their storage keys (note the irregular plural of "summary";
-- naive metric_type .. "s" concatenation would produce the wrong key "summarys")
local PLURALS = {
counter = "counters",
gauge = "gauges",
histogram = "histograms",
summary = "summaries"
}
-- Initialize metrics storage
local function init_metrics()
if not shared_metrics:get("initialized") then
shared_metrics:set("initialized", true)
for _, key in pairs(PLURALS) do
shared_metrics:set(key, cjson.encode({}))
end
end
end
-- Get metrics data
local function get_metrics_data(metric_type)
local data = shared_metrics:get(PLURALS[metric_type])
if data then
return cjson.decode(data)
end
return {}
end
-- Set metrics data
local function set_metrics_data(metric_type, data)
shared_metrics:set(PLURALS[metric_type], cjson.encode(data))
end
-- Counter operations
function _M.counter(name, value, labels)
init_metrics()
local counters = get_metrics_data("counter")
local key = name .. (labels and ":" .. cjson.encode(labels) or "")
if not counters[key] then
counters[key] = {
name = name,
labels = labels or {},
value = 0
}
end
counters[key].value = counters[key].value + (value or 1)
set_metrics_data("counter", counters)
end
-- Gauge operations
function _M.gauge(name, value, labels)
init_metrics()
local gauges = get_metrics_data("gauge")
local key = name .. (labels and ":" .. cjson.encode(labels) or "")
gauges[key] = {
name = name,
labels = labels or {},
value = value
}
set_metrics_data("gauge", gauges)
end
-- Histogram operations
function _M.histogram(name, value, labels, buckets)
init_metrics()
local histograms = get_metrics_data("histogram")
local key = name .. (labels and ":" .. cjson.encode(labels) or "")
if not histograms[key] then
histograms[key] = {
name = name,
labels = labels or {},
buckets = buckets or {0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10},
counts = {},
sum = 0,
count = 0
}
end
local histogram = histograms[key]
histogram.sum = histogram.sum + value
histogram.count = histogram.count + 1
-- Update bucket counts
for i, bucket in ipairs(histogram.buckets) do
if value <= bucket then
histogram.counts[i] = (histogram.counts[i] or 0) + 1
end
end
set_metrics_data("histogram", histograms)
end
-- Summary operations
function _M.summary(name, value, labels, quantiles)
init_metrics()
local summaries = get_metrics_data("summary")
local key = name .. (labels and ":" .. cjson.encode(labels) or "")
if not summaries[key] then
summaries[key] = {
name = name,
labels = labels or {},
quantiles = quantiles or {0.5, 0.9, 0.95, 0.99},
values = {},
sum = 0,
count = 0
}
end
local summary = summaries[key]
table.insert(summary.values, value)
summary.sum = summary.sum + value
summary.count = summary.count + 1
-- Keep only last 1000 values for quantile calculation
if #summary.values > 1000 then
table.remove(summary.values, 1)
end
set_metrics_data("summary", summaries)
end
-- Get all metrics in Prometheus format
function _M.get_prometheus_metrics()
init_metrics()
local output = {}
-- Counters
local counters = get_metrics_data("counter")
for key, metric in pairs(counters) do
local labels_str = ""
if next(metric.labels) then
local label_parts = {}
for k, v in pairs(metric.labels) do
table.insert(label_parts, k .. '="' .. v .. '"')
end
labels_str = "{" .. table.concat(label_parts, ",") .. "}"
end
table.insert(output, "# TYPE " .. metric.name .. " counter")
table.insert(output, metric.name .. labels_str .. " " .. metric.value)
end
-- Gauges
local gauges = get_metrics_data("gauge")
for key, metric in pairs(gauges) do
local labels_str = ""
if next(metric.labels) then
local label_parts = {}
for k, v in pairs(metric.labels) do
table.insert(label_parts, k .. '="' .. v .. '"')
end
labels_str = "{" .. table.concat(label_parts, ",") .. "}"
end
table.insert(output, "# TYPE " .. metric.name .. " gauge")
table.insert(output, metric.name .. labels_str .. " " .. metric.value)
end
-- Histograms
local histograms = get_metrics_data("histogram")
for key, metric in pairs(histograms) do
local labels_str = ""
if next(metric.labels) then
local label_parts = {}
for k, v in pairs(metric.labels) do
table.insert(label_parts, k .. '="' .. v .. '"')
end
labels_str = "{" .. table.concat(label_parts, ",") .. "}"
end
table.insert(output, "# TYPE " .. metric.name .. " histogram")
table.insert(output, metric.name .. "_sum" .. labels_str .. " " .. metric.sum)
table.insert(output, metric.name .. "_count" .. labels_str .. " " .. metric.count)
for i, bucket in ipairs(metric.buckets) do
-- Splice the le label inside the existing braces (or create braces if unlabeled)
local le = 'le="' .. bucket .. '"'
local bucket_labels = labels_str == "" and "{" .. le .. "}"
or string.sub(labels_str, 1, -2) .. "," .. le .. "}"
table.insert(output, metric.name .. "_bucket" .. bucket_labels .. " " .. (metric.counts[i] or 0))
end
-- Prometheus requires a trailing +Inf bucket equal to the total observation count
local inf_labels = labels_str == "" and '{le="+Inf"}'
or string.sub(labels_str, 1, -2) .. ',le="+Inf"}'
table.insert(output, metric.name .. "_bucket" .. inf_labels .. " " .. metric.count)
end
-- Summaries
local summaries = get_metrics_data("summary")
for key, metric in pairs(summaries) do
local labels_str = ""
if next(metric.labels) then
local label_parts = {}
for k, v in pairs(metric.labels) do
table.insert(label_parts, k .. '="' .. v .. '"')
end
labels_str = "{" .. table.concat(label_parts, ",") .. "}"
end
table.insert(output, "# TYPE " .. metric.name .. " summary")
table.insert(output, metric.name .. "_sum" .. labels_str .. " " .. metric.sum)
table.insert(output, metric.name .. "_count" .. labels_str .. " " .. metric.count)
-- Calculate quantiles
if #metric.values > 0 then
table.sort(metric.values)
for _, quantile in ipairs(metric.quantiles) do
local index = math.ceil(quantile * #metric.values)
local value = metric.values[index] or metric.values[#metric.values]
-- Splice the quantile label inside the existing braces (or create braces if unlabeled)
local q_labels = labels_str == "" and '{quantile="' .. quantile .. '"}'
or string.sub(labels_str, 1, -2) .. ',quantile="' .. quantile .. '"}'
table.insert(output, metric.name .. q_labels .. " " .. value)
end
end
end
return table.concat(output, "\n")
end
-- Expose the raw accessor so other modules (e.g. the dashboard) can read metric data
_M.get_metrics_data = get_metrics_data
return _M
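A quick sketch of how the module is meant to be used (the metric names below are arbitrary examples). Note that every call round-trips the full metrics table through the shared dict as JSON, which is simple but neither cheap nor atomic across workers; under heavy traffic, a library such as nginx-lua-prometheus, which uses per-key dict operations, is the safer choice:

```nginx
location /demo {
    content_by_lua_block {
        local metrics = require "metrics"
        metrics.counter("jobs_processed_total", 1, { queue = "email" })   -- add 1
        metrics.gauge("queue_depth", 42, { queue = "email" })             -- set current value
        metrics.histogram("job_duration_seconds", 0.37, { queue = "email" })
        -- Render everything in Prometheus exposition format
        ngx.say(metrics.get_prometheus_metrics())
    }
}
```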
2. Request Metrics Module
-- /usr/local/openresty/nginx/lua/request_metrics.lua
local _M = {}
local metrics = require "metrics"
-- Track request metrics; call from log_by_lua so status and byte counts are final
function _M.track_request()
local method = ngx.var.request_method
local uri = ngx.var.uri -- normalized path; request_uri includes query strings and explodes label cardinality
local status = tostring(ngx.status)
local bytes_sent = tonumber(ngx.var.bytes_sent) or 0
-- Increment request counter
metrics.counter("http_requests_total", 1, {
method = method,
uri = uri,
status = status
})
-- Request duration: measure from ngx.req.start_time(), when the request arrived
-- (taking ngx.now() twice in the same handler would always yield ~0)
local duration = ngx.now() - ngx.req.start_time()
metrics.histogram("http_request_duration_seconds", duration, {
method = method,
uri = uri
})
-- Track response size
metrics.histogram("http_response_size_bytes", bytes_sent, {
method = method,
uri = uri
})
-- Track active connections; request *rates* are derived in Prometheus via rate(),
-- so no separate rate counter is needed
metrics.gauge("http_active_connections", tonumber(ngx.var.connections_active) or 0)
end
-- Track upstream metrics
function _M.track_upstream(upstream_name, upstream_addr, response_time, status)
metrics.counter("upstream_requests_total", 1, {
upstream = upstream_name,
server = upstream_addr,
status = status
})
metrics.histogram("upstream_response_time_seconds", response_time, {
upstream = upstream_name,
server = upstream_addr
})
end
-- Track cache metrics
function _M.track_cache(cache_status, cache_key)
metrics.counter("cache_requests_total", 1, {
status = cache_status,
key = cache_key
})
end
-- Track error metrics
function _M.track_error(error_type, error_message)
metrics.counter("errors_total", 1, {
type = error_type,
message = error_message
})
end
return _M
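None of these functions run by themselves; they must be attached to request phases. A minimal wiring, assuming the modules live on `lua_package_path`, might look like this — `log_by_lua` fires after the response is sent, so status and byte counts are final:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        log_by_lua_block {
            local request_metrics = require "request_metrics"
            request_metrics.track_request()
        }
    }
}
```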
3. System Metrics Module
-- /usr/local/openresty/nginx/lua/system_metrics.lua
local _M = {}
local metrics = require "metrics"
-- Host-level CPU, memory, and disk are not exposed as nginx variables; collect
-- those with node_exporter (or similar) rather than inventing $cpu_usage-style vars.
-- Track Nginx worker and connection metrics.
-- Call this from a request phase (e.g. log_by_lua): ngx.var is unavailable in timers.
function _M.collect_system_metrics()
metrics.gauge("nginx_worker_count", ngx.worker.count())
metrics.gauge("nginx_active_connections", tonumber(ngx.var.connections_active) or 0)
metrics.gauge("nginx_reading_connections", tonumber(ngx.var.connections_reading) or 0)
metrics.gauge("nginx_writing_connections", tonumber(ngx.var.connections_writing) or 0)
metrics.gauge("nginx_waiting_connections", tonumber(ngx.var.connections_waiting) or 0)
end
-- Track Lua VM metrics (safe to call from a timer)
function _M.collect_lua_metrics()
-- collectgarbage("count") returns the Lua heap size in kilobytes
local lua_memory_kb = collectgarbage("count")
metrics.gauge("lua_memory_usage_bytes", lua_memory_kb * 1024)
-- capacity() is the dict's total size; free_space() (OpenResty >= 1.11.7) is what remains
local dict = ngx.shared.metrics
metrics.gauge("lua_shared_dict_capacity_bytes", dict:capacity())
if dict.free_space then
metrics.gauge("lua_shared_dict_free_bytes", dict:free_space())
end
end
return _M
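Gauges like these should be sampled on a schedule rather than per request. A common pattern is a recurring timer started from `init_worker_by_lua_block`, restricted to worker 0 so values are not written once per worker (the 15-second interval is an arbitrary choice). Note that `ngx.var` is unavailable inside timer callbacks, so only timer-safe collectors belong here:

```nginx
# In the http{} block: sample Lua VM metrics every 15s from a single worker
init_worker_by_lua_block {
    if ngx.worker.id() == 0 then
        local system_metrics = require "system_metrics"
        local ok, err = ngx.timer.every(15, function()
            system_metrics.collect_lua_metrics()
        end)
        if not ok then
            ngx.log(ngx.ERR, "failed to start metrics timer: ", err)
        end
    end
}
```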
Custom Metrics Collection
1. Business Metrics
-- /usr/local/openresty/nginx/lua/business_metrics.lua
local _M = {}
local metrics = require "metrics"
-- Track user authentication (careful: per-user labels can explode metric cardinality)
function _M.track_auth(user_id, success, method)
metrics.counter("auth_attempts_total", 1, {
user_id = user_id,
success = tostring(success),
method = method
})
if not success then
metrics.counter("auth_failures_total", 1, {
user_id = user_id,
method = method
})
end
end
-- Track API usage
function _M.track_api_usage(api_key, endpoint, method, status)
metrics.counter("api_requests_total", 1, {
api_key = api_key,
endpoint = endpoint,
method = method,
status = status
})
-- Track rate limiting
if status == 429 then
metrics.counter("api_rate_limited_total", 1, {
api_key = api_key,
endpoint = endpoint
})
end
end
-- Track payment processing
function _M.track_payment(amount, currency, status, payment_method)
metrics.counter("payments_total", 1, {
currency = currency,
status = status,
method = payment_method
})
metrics.histogram("payment_amount", amount, {
currency = currency,
status = status
})
end
-- Track user activity
function _M.track_user_activity(user_id, action, resource)
metrics.counter("user_activity_total", 1, {
user_id = user_id,
action = action,
resource = resource
})
end
return _M
2. Performance Metrics
-- /usr/local/openresty/nginx/lua/performance_metrics.lua
local _M = {}
local metrics = require "metrics"
-- Track response time percentiles
function _M.track_response_time(uri, method, duration)
metrics.histogram("response_time_seconds", duration, {
uri = uri,
method = method
})
-- Track slow requests
if duration > 1.0 then
metrics.counter("slow_requests_total", 1, {
uri = uri,
method = method
})
end
end
-- Track cache hit ratio
function _M.track_cache_performance(cache_name, hit)
metrics.counter("cache_requests_total", 1, {
cache = cache_name,
hit = tostring(hit)
})
if hit then
metrics.counter("cache_hits_total", 1, {
cache = cache_name
})
else
metrics.counter("cache_misses_total", 1, {
cache = cache_name
})
end
end
-- Track database performance
function _M.track_database_query(query_type, duration, success)
metrics.histogram("database_query_duration_seconds", duration, {
query_type = query_type,
success = tostring(success)
})
metrics.counter("database_queries_total", 1, {
query_type = query_type,
success = tostring(success)
})
end
-- Track external API calls
function _M.track_external_api(service_name, endpoint, duration, status)
metrics.histogram("external_api_duration_seconds", duration, {
service = service_name,
endpoint = endpoint,
status = status
})
metrics.counter("external_api_requests_total", 1, {
service = service_name,
endpoint = endpoint,
status = status
})
end
return _M
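`track_external_api` assumes the caller measured the duration itself; with lua-resty-http that is a pair of `ngx.now()` calls around the request. The service name and URL below are placeholders:

```nginx
location /proxy-invoices {
    content_by_lua_block {
        local http = require "resty.http"
        local performance_metrics = require "performance_metrics"
        local httpc = http.new()
        httpc:set_timeout(2000)  -- ms
        local started = ngx.now()
        local res, err = httpc:request_uri("http://billing-service/invoices", { method = "GET" })
        ngx.update_time()  -- refresh the cached clock before taking the end timestamp
        performance_metrics.track_external_api("billing-service", "/invoices",
            ngx.now() - started, res and tostring(res.status) or "error")
        ngx.say(res and res.body or ("upstream error: " .. tostring(err)))
    }
}
```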
3. Security Metrics
-- /usr/local/openresty/nginx/lua/security_metrics.lua
local _M = {}
local metrics = require "metrics"
-- Track security events
function _M.track_security_event(event_type, severity, source_ip, details)
metrics.counter("security_events_total", 1, {
type = event_type,
severity = severity,
source_ip = source_ip
})
-- Track by severity
if severity == "high" then
metrics.counter("security_events_high_total", 1, {
type = event_type,
source_ip = source_ip
})
end
end
-- Track failed login attempts
function _M.track_failed_login(username, source_ip, user_agent)
metrics.counter("failed_logins_total", 1, {
username = username,
source_ip = source_ip
})
-- Track potential brute force
metrics.counter("failed_logins_by_ip_total", 1, {
source_ip = source_ip
})
end
-- Track suspicious activity
function _M.track_suspicious_activity(activity_type, source_ip, details)
metrics.counter("suspicious_activity_total", 1, {
type = activity_type,
source_ip = source_ip
})
end
-- Track rate limiting
function _M.track_rate_limit(limit_type, source_ip, endpoint)
metrics.counter("rate_limit_hits_total", 1, {
type = limit_type,
source_ip = source_ip,
endpoint = endpoint
})
end
return _M
Performance Monitoring
1. Real-time Dashboard
-- /usr/local/openresty/nginx/lua/dashboard.lua
local _M = {}
local cjson = require "cjson"
local metrics = require "metrics"
-- Get real-time metrics
function _M.get_realtime_metrics()
local counters = metrics.get_metrics_data("counter")
local gauges = metrics.get_metrics_data("gauge")
local histograms = metrics.get_metrics_data("histogram")
local realtime_data = {
timestamp = ngx.time(),
counters = counters,
gauges = gauges,
histograms = histograms,
system = {
active_connections = tonumber(ngx.var.connections_active) or 0,
waiting_connections = tonumber(ngx.var.connections_waiting) or 0,
lua_memory_kb = collectgarbage("count")
}
}
return cjson.encode(realtime_data)
end
-- Get health status
function _M.get_health_status()
local health = {
status = "healthy",
timestamp = ngx.time(),
checks = {}
}
-- Check shared-dict headroom (free_space is available in OpenResty >= 1.11.7)
local dict = ngx.shared.metrics
if dict.free_space and dict:free_space() < dict:capacity() * 0.1 then
health.status = "degraded"
health.checks.metrics_dict = "low_free_space"
else
health.checks.metrics_dict = "ok"
end
-- Check Lua VM memory (the 512MB threshold is an arbitrary example)
if collectgarbage("count") > 512 * 1024 then
health.status = "unhealthy"
health.checks.lua_memory = "high_usage"
else
health.checks.lua_memory = "ok"
end
-- Check active connections
local active_connections = ngx.var.connections_active or 0
if active_connections > 1000 then
health.status = "degraded"
health.checks.connections = "high_load"
else
health.checks.connections = "ok"
end
return cjson.encode(health)
end
return _M
2. Prometheus Integration
# /etc/nginx/conf.d/metrics.conf
server {
listen 8080;
server_name _;
# Prometheus metrics endpoint
location /metrics {
access_log off;
content_by_lua_block {
local metrics = require "metrics"
ngx.header.content_type = "text/plain; version=0.0.4"
ngx.say(metrics.get_prometheus_metrics())
}
}
# Health check endpoint
location /health {
access_log off;
content_by_lua_block {
local dashboard = require "dashboard"
ngx.header.content_type = "application/json"
ngx.say(dashboard.get_health_status())
}
}
# Real-time metrics endpoint
location /realtime {
access_log off;
content_by_lua_block {
local dashboard = require "dashboard"
ngx.header.content_type = "application/json"
ngx.say(dashboard.get_realtime_metrics())
}
}
# Nginx status endpoint
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
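For Prometheus to pull from the `/metrics` endpoint above, `prometheus.yml` needs a scrape job along these lines (the target host names are placeholders):

```yaml
scrape_configs:
  - job_name: "openresty"
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ["openresty-1:8080", "openresty-2:8080"]  # replace with real hosts
```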
3. Grafana Dashboard Configuration
{
"dashboard": {
"title": "OpenResty Production Metrics",
"tags": ["openresty", "nginx", "lua", "production"],
"timezone": "browser",
"panels": [
{
"title": "Request Rate",
"type": "graph",
"targets": [
{
"expr": "rate(http_requests_total[5m])",
"legendFormat": "{{method}} {{uri}}"
}
]
},
{
"title": "Response Time",
"type": "graph",
"targets": [
{
"expr": "histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))",
"legendFormat": "95th percentile"
},
{
"expr": "histogram_quantile(0.50, rate(http_request_duration_seconds_bucket[5m]))",
"legendFormat": "50th percentile"
}
]
},
{
"title": "Active Connections",
"type": "graph",
"targets": [
{
"expr": "http_active_connections",
"legendFormat": "Active Connections"
}
]
},
{
"title": "Error Rate",
"type": "graph",
"targets": [
{
"expr": "rate(http_requests_total{status=~\"5..\"}[5m])",
"legendFormat": "5xx Errors"
},
{
"expr": "rate(http_requests_total{status=~\"4..\"}[5m])",
"legendFormat": "4xx Errors"
}
]
},
{
"title": "Cache Performance",
"type": "graph",
"targets": [
{
"expr": "rate(cache_hits_total[5m]) / rate(cache_requests_total[5m])",
"legendFormat": "Cache Hit Ratio"
}
]
},
{
"title": "System Resources",
"type": "graph",
"targets": [
{
"expr": "system_cpu_usage_percent",
"legendFormat": "CPU Usage %"
},
{
"expr": "system_memory_usage_bytes / 1024 / 1024",
"legendFormat": "Memory Usage MB"
}
]
}
]
}
}
Security Configuration
1. Security Headers
# /etc/nginx/conf.d/security.conf
server {
listen 443 ssl http2;
server_name example.com;
# SSL configuration
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Hide server information
server_tokens off;
# Rate limiting
limit_req zone=api burst=20 nodelay;
limit_conn conn_limit_per_ip 10;
# Security monitoring
access_by_lua_block {
local security_metrics = require "security_metrics"
local client_ip = ngx.var.remote_addr
local user_agent = ngx.var.http_user_agent or "" -- header may be absent
-- Check for suspicious patterns (the fourth arg makes find() a plain-text search)
local ua = string.lower(user_agent)
if string.find(ua, "bot", 1, true) or string.find(ua, "crawler", 1, true) then
security_metrics.track_security_event("suspicious_user_agent", "medium", client_ip, user_agent)
end
-- Naive SQL-injection screen; a real WAF (e.g. lua-resty-waf or ModSecurity) is preferable
local uri = string.lower(ngx.var.request_uri)
if string.find(uri, "union", 1, true) or string.find(uri, "select", 1, true) or string.find(uri, "drop", 1, true) then
security_metrics.track_security_event("sql_injection_attempt", "high", client_ip, uri)
return ngx.exit(ngx.HTTP_FORBIDDEN)
end
}
# Log security events
log_by_lua_block {
local security_metrics = require "security_metrics"
local client_ip = ngx.var.remote_addr
local status = ngx.status
if status >= 400 then
security_metrics.track_security_event("http_error", "low", client_ip, tostring(status))
end
}
}
2. Authentication and Authorization
-- /usr/local/openresty/nginx/lua/auth.lua
local _M = {}
local cjson = require "cjson"
local metrics = require "metrics"
local business_metrics = require "business_metrics"
-- JWT token validation (load the secret from configuration; don't hard-code it)
function _M.validate_jwt(token)
local jwt = require "resty.jwt"
local jwt_obj = jwt:verify("your-secret-key", token)
-- "verified" means the signature checked out; "valid" only means the token parsed
if jwt_obj.verified then
return jwt_obj.payload
end
return nil
end
-- API key validation
function _M.validate_api_key(api_key)
-- Check against database or cache
local cache = ngx.shared.cache
local user_data = cache:get("api_key:" .. api_key)
if user_data then
return cjson.decode(user_data)
end
-- Validate against external service
local http = require "resty.http"
local httpc = http.new()
local res, err = httpc:request_uri("http://auth-service/validate", {
method = "POST",
body = cjson.encode({api_key = api_key}),
headers = {
["Content-Type"] = "application/json"
}
})
if res and res.status == 200 then
local user_data = cjson.decode(res.body)
cache:set("api_key:" .. api_key, res.body, 300) -- Cache for 5 minutes
return user_data
end
return nil
end
-- Rate limiting per user; incr() with an init value and TTL is atomic,
-- avoiding the read-then-write race of get()+incr() (needs OpenResty >= 1.13.6)
function _M.check_rate_limit(user_id, endpoint)
local cache = ngx.shared.cache
local key = "rate_limit:" .. user_id .. ":" .. endpoint
local count, err = cache:incr(key, 1, 0, 60) -- start at 0, expire after 60s
if not count then
ngx.log(ngx.ERR, "rate limit incr failed: ", err)
return true -- fail open rather than blocking all traffic
end
if count > 100 then -- 100 requests per minute
business_metrics.track_api_usage(user_id, endpoint, ngx.var.request_method, 429)
return false
end
return true
end
-- Authorization check
function _M.check_permission(user, resource, action)
-- Implement your authorization logic here
if user.role == "admin" then
return true
end
if user.role == "user" and action == "read" then
return true
end
return false
end
return _M
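Wiring these helpers into a protected location might look like the sketch below. The `X-Api-Key` header name is an assumption; use whatever your clients actually send:

```nginx
location /api/ {
    access_by_lua_block {
        local auth = require "auth"
        local api_key = ngx.var.http_x_api_key          -- from the X-Api-Key request header
        local user = api_key and auth.validate_api_key(api_key)
        if not user then
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)      -- 401
        end
        if not auth.check_rate_limit(user.id, ngx.var.uri) then
            return ngx.exit(ngx.HTTP_TOO_MANY_REQUESTS) -- 429
        end
    }
    proxy_pass http://backend;
}
```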
High Availability Setup
1. Load Balancer Configuration
# /etc/nginx/conf.d/load_balancer.conf
upstream backend {
least_conn;
server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
server 10.0.1.12:8080 max_fails=3 fail_timeout=30s;
keepalive 32;
}
server {
listen 80;
server_name _;
# Health check endpoint
location /health {
access_by_lua_block {
local health = require "health_check"
if not health.check_backend_health() then
ngx.status = 503
ngx.say("Service Unavailable")
ngx.exit(503)
end
}
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Main application
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Connection settings
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
# Retry settings
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_next_upstream_tries 3;
proxy_next_upstream_timeout 10s;
}
}
2. Health Check Module
-- /usr/local/openresty/nginx/lua/health_check.lua
local _M = {}
local http = require "resty.http"
local cjson = require "cjson"
-- Check backend health
function _M.check_backend_health()
local backends = {
"http://10.0.1.10:8080/health",
"http://10.0.1.11:8080/health",
"http://10.0.1.12:8080/health"
}
local healthy_count = 0
for _, backend in ipairs(backends) do
local httpc = http.new()
-- lua-resty-http takes timeouts via set_timeout (ms), not a request_uri option
httpc:set_timeout(1000)
local res, err = httpc:request_uri(backend, {
method = "GET",
headers = {
["User-Agent"] = "OpenResty-HealthCheck/1.0"
}
})
if res and res.status == 200 then
healthy_count = healthy_count + 1
end
end
return healthy_count > 0
end
-- Check database connectivity
function _M.check_database()
-- Implement database health check
return true
end
-- Check external services
function _M.check_external_services()
local services = {
"http://auth-service/health",
"http://payment-service/health",
"http://notification-service/health"
}
local healthy_count = 0
for _, service in ipairs(services) do
local httpc = http.new()
httpc:set_timeout(2000) -- 2 second timeout
local res, err = httpc:request_uri(service, {
method = "GET",
})
if res and res.status == 200 then
healthy_count = healthy_count + 1
end
end
return healthy_count >= #services * 0.8 -- 80% of services must be healthy
end
-- Comprehensive health check
function _M.comprehensive_health_check()
local health = {
status = "healthy",
timestamp = ngx.time(),
checks = {}
}
-- Check backend
health.checks.backend = _M.check_backend_health()
-- Check database
health.checks.database = _M.check_database()
-- Check external services
health.checks.external_services = _M.check_external_services()
-- Determine overall health
if not health.checks.backend or not health.checks.database then
health.status = "unhealthy"
elseif not health.checks.external_services then
health.status = "degraded"
end
return health
end
return _M
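`comprehensive_health_check` returns a table rather than writing a response, so it needs a location that serializes it. A minimal sketch, assuming the module above; the `/health/full` location name is an illustration, not part of the original config:

```nginx
# Hypothetical endpoint exposing health_check.comprehensive_health_check as JSON
location /health/full {
    content_by_lua_block {
        local cjson = require "cjson"
        local health = require "health_check"
        local report = health.comprehensive_health_check()
        -- "degraded" still returns 200 so load balancers keep the node in rotation
        ngx.status = (report.status == "unhealthy") and 503 or 200
        ngx.header["Content-Type"] = "application/json"
        ngx.say(cjson.encode(report))
    }
}
```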
3. Kubernetes Deployment
The Kubernetes manifests (Deployment, Service, and ConfigMaps) are covered in full under Production Deployment below.
Troubleshooting and Optimization
1. Common Issues and Solutions
High Memory Usage
-- /usr/local/openresty/nginx/lua/memory_monitor.lua
local _M = {}
function _M.check_memory_usage()
-- collectgarbage("count") returns the Lua VM heap size in KB, per worker process
local memory_usage = collectgarbage("count") * 1024
local max_memory = 100 * 1024 * 1024 -- 100MB limit
if memory_usage > max_memory then
ngx.log(ngx.WARN, "High memory usage: " .. memory_usage .. " bytes")
collectgarbage("collect")
end
return memory_usage
end
function _M.optimize_memory()
-- Force garbage collection
collectgarbage("collect")
-- Clear shared dictionaries if needed
local shared_metrics = ngx.shared.metrics
local capacity = shared_metrics:capacity()
local free_space = shared_metrics:free_space()
if free_space < capacity * 0.1 then -- Less than 10% free
ngx.log(ngx.WARN, "Shared dictionary nearly full, evicting expired keys")
shared_metrics:flush_expired() -- drop expired entries; LRU eviction handles the rest on write
end
end
return _M
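`optimize_memory` only helps if something calls it periodically. One way to wire it up is a recurring timer started from `init_worker_by_lua_block`; this is a sketch assuming the `memory_monitor` module name above, with an arbitrary 30-second interval:

```nginx
# Hypothetical wiring: run the memory checks every 30s in each worker
init_worker_by_lua_block {
    local monitor = require "memory_monitor"
    local ok, err = ngx.timer.every(30, function()
        monitor.check_memory_usage()
        monitor.optimize_memory()
    end)
    if not ok then
        ngx.log(ngx.ERR, "failed to start memory timer: ", err)
    end
}
```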
Performance Bottlenecks
#!/bin/bash
# performance-analysis.sh
# Check OpenResty performance
echo "=== OpenResty Performance Analysis ==="
# Check worker processes
echo "Worker Processes:"
ps aux | grep nginx | grep -v grep
# Check memory usage
echo -e "\nMemory Usage:"
free -h
# Check CPU usage
echo -e "\nCPU Usage:"
top -bn1 | grep "Cpu(s)"
# Check disk I/O
echo -e "\nDisk I/O:"
iostat -x 1 1
# Check network connections
echo -e "\nNetwork Connections:"
ss -tuln | grep :80
ss -tuln | grep :443
# Check error logs
echo -e "\nRecent Errors:"
tail -n 50 /var/log/nginx/error.log | grep -i error
# Check access logs for slow requests
echo -e "\nSlow Requests:"
awk '$NF > 1.0 {print $0}' /var/log/nginx/access.log | tail -10
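The slow-request filter above assumes the access log ends with `$request_time` (i.e. a `log_format` whose last field is the request time); the default combined format does not include it. A quick demonstration on sample lines:

```shell
# Demo of the slow-request filter, assuming $request_time is the last log field
printf '%s\n' \
  '10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "GET / HTTP/1.1" 200 512 0.045' \
  '10.0.0.2 - - [01/Jan/2024:00:00:01 +0000] "GET /slow HTTP/1.1" 200 1024 2.310' \
  | awk '$NF > 1.0 {print $0}'
# prints only the /slow line
```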
Lua Script Debugging
-- /usr/local/openresty/nginx/lua/debug.lua
-- Require this under a distinct name (e.g. "debug_utils") so it does not
-- shadow Lua's built-in debug library.
local cjson = require "cjson"
local _M = {}
function _M.log_debug(message, data)
if ngx.var.debug_mode == "1" then
local log_data = {
timestamp = ngx.time(),
message = message,
data = data,
request_id = ngx.var.request_id
}
ngx.log(ngx.INFO, "DEBUG: " .. cjson.encode(log_data))
end
end
function _M.trace_function(func_name, args)
if ngx.var.debug_mode == "1" then
ngx.log(ngx.INFO, "TRACE: " .. func_name .. " called with args: " .. cjson.encode(args))
end
end
function _M.measure_time(func_name, func)
ngx.update_time() -- ngx.now() returns a cached time; refresh it for accurate timing
local start_time = ngx.now()
local result = func()
ngx.update_time()
local end_time = ngx.now()
ngx.log(ngx.INFO, "TIMING: " .. func_name .. " took " .. (end_time - start_time) .. " seconds")
return result
end
return _M
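The module keys off `ngx.var.debug_mode`, which nginx has to define somewhere. One minimal sketch is a `map` in the `http` block; driving it from an `X-Debug-Mode` request header is an assumption here, and should only be allowed from trusted networks:

```nginx
# Hypothetical definition of $debug_mode for the debug module above
map $http_x_debug_mode $debug_mode {
    default 0;
    "1"     1;
}
```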
2. Performance Optimization
Nginx Configuration Tuning
# /etc/nginx/conf.d/performance.conf
# Note: worker_processes, worker_cpu_affinity, and the events block are
# main-context directives; include this file at the top level of nginx.conf,
# not inside http {}.
worker_processes auto;
worker_cpu_affinity auto;
# Optimize worker connections
events {
worker_connections 8192;
use epoll;
multi_accept on;
worker_aio_requests 32;
}
http {
# Optimize buffer sizes
client_body_buffer_size 128k;
client_header_buffer_size 1k;
client_max_body_size 10m;
large_client_header_buffers 4 4k;
# Optimize timeouts
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
# Optimize file handling
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Optimize gzip
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_comp_level 6;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/json
application/javascript
application/xml+rss
application/atom+xml
image/svg+xml;
# Optimize caching
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
Lua Performance Optimization
-- /usr/local/openresty/nginx/lua/performance.lua
local _M = {}
-- Cache frequently used modules in locals
local cjson = require "cjson"
-- ngx is already a global in OpenResty; it is not loaded via require.
-- ngx.re has no compile() function: keep regexes as PCRE strings and pass
-- the "jo" options to ngx.re.* ("o" caches the compiled regex, "j" enables PCRE JIT)
local patterns = {
email = [[^[\w.+-]+@[\w.-]+\.\w+$]],
ipv4 = [[^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$]],
uuid = [[^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$]]
}
-- Optimize JSON operations
function _M.fast_json_encode(data)
return cjson.encode(data)
end
function _M.fast_json_decode(json_string)
return cjson.decode(json_string)
end
-- Optimize string operations
function _M.fast_string_operations()
local str = "Hello, World!"
-- Use ngx.re with "jo" so the pattern is compiled once and JIT-executed
local match = ngx.re.match(str, "Hello", "jo")
-- Use string methods efficiently
local upper = string.upper(str)
local lower = string.lower(str)
return match, upper, lower
end
-- Optimize table operations
function _M.optimize_table_operations()
-- Pre-allocate array capacity with LuaJIT's table.new when the size is known
local ok, new_tab = pcall(require, "table.new")
if not ok then new_tab = function() return {} end end
local size = 1000
local t = new_tab(size, 0)
for i = 1, size do
t[i] = i
end
-- Use table.concat for string concatenation
local result = table.concat(t, ",")
return result
end
-- Memory optimization
function _M.optimize_memory()
-- Use local variables
local local_var = "local value"
-- Avoid global variables
-- Use ngx.shared for persistent data
-- Clean up large tables
local large_table = {}
-- ... use large_table
large_table = nil
collectgarbage("collect")
end
return _M
3. Monitoring and Alerting
Prometheus Alert Rules
# openresty-alerts.yaml
groups:
  - name: openresty
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is {{ $value }} errors per second"
      - alert: HighResponseTime
        expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High response time detected"
          description: "95th percentile response time is {{ $value }} seconds"
      - alert: HighMemoryUsage
        expr: system_memory_usage_bytes / 1024 / 1024 / 1024 > 8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage detected"
          description: "Memory usage is {{ $value }} GB"
      - alert: ServiceDown
        expr: up{job="openresty"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "OpenResty service is down"
          description: "OpenResty service has been down for more than 1 minute"
Grafana Dashboard Alerts
{
  "alert": {
    "name": "OpenResty High Error Rate",
    "message": "Error rate is above threshold",
    "frequency": "10s",
    "handler": 1,
    "noDataState": "no_data",
    "executionErrorState": "alerting",
    "conditions": [
      {
        "evaluator": {
          "params": [0.1],
          "type": "gt"
        },
        "operator": {
          "type": "and"
        },
        "query": {
          "params": ["A", "5m", "now"]
        },
        "reducer": {
          "params": [],
          "type": "last"
        },
        "type": "query"
      }
    ]
  }
}
Production Deployment
1. Deployment Checklist
#!/bin/bash
# production-deployment.sh
set -e
echo "=== OpenResty Production Deployment ==="
# 1. System Requirements Check
echo "Checking system requirements..."
if [ $(nproc) -lt 2 ]; then
echo "ERROR: At least 2 CPU cores required"
exit 1
fi
if [ $(free -m | awk 'NR==2{printf "%.0f", $2}') -lt 4096 ]; then
echo "ERROR: At least 4GB RAM required"
exit 1
fi
# 2. Security Hardening
echo "Applying security hardening..."
sudo ufw enable
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 8080/tcp
# 3. Install OpenResty
echo "Installing OpenResty..."
# apt-key is deprecated on newer Ubuntu releases; prefer a signed-by keyring there
wget -qO - https://openresty.org/package/pubkey.gpg | sudo apt-key add -
echo "deb http://openresty.org/package/ubuntu $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/openresty.list
sudo apt update
sudo apt install -y openresty openresty-resty
# 4. Configure OpenResty
echo "Configuring OpenResty..."
sudo mkdir -p /usr/local/openresty/nginx/lua
sudo cp nginx.conf /usr/local/openresty/nginx/conf/
sudo cp lua/*.lua /usr/local/openresty/nginx/lua/
# 5. Set up monitoring
echo "Setting up monitoring..."
sudo systemctl enable openresty
sudo systemctl start openresty
# 6. Verify installation
echo "Verifying installation..."
curl -f http://localhost:8080/health || exit 1
curl -f http://localhost:8080/metrics || exit 1
echo "Deployment completed successfully!"
2. Docker Compose Setup
# docker-compose.yml
version: '3.8'
services:
  openresty:
    build: .
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - ./nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf
      - ./lua:/usr/local/openresty/nginx/lua
      - ./ssl:/etc/ssl
    environment:
      - NGINX_WORKER_PROCESSES=auto
      - NGINX_WORKER_CONNECTIONS=4096
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alerts.yml:/etc/prometheus/alerts.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    restart: unless-stopped
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-storage:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    restart: unless-stopped
volumes:
  grafana-storage:
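The openresty service uses `build: .`, but the Dockerfile itself is not shown. A minimal sketch of what it might contain, assuming the nginx.conf and lua/ layout used throughout this guide (note the healthcheck above needs curl, which the alpine base image does not ship):

```dockerfile
# Hypothetical Dockerfile for the `build: .` service above
FROM openresty/openresty:1.21.4.3-0-alpine
# curl is required by the compose healthcheck
RUN apk add --no-cache curl
COPY nginx.conf /usr/local/openresty/nginx/conf/nginx.conf
COPY lua/ /usr/local/openresty/nginx/lua/
EXPOSE 80 443 8080
```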
3. Kubernetes Deployment
# k8s-deployment.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openresty-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openresty
  namespace: openresty-system
  labels:
    app: openresty
spec:
  replicas: 3
  selector:
    matchLabels:
      app: openresty
  template:
    metadata:
      labels:
        app: openresty
    spec:
      containers:
        - name: openresty
          image: openresty/openresty:1.21.4.3-0-alpine
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
            - containerPort: 8080
              name: metrics
          env:
            - name: NGINX_WORKER_PROCESSES
              value: "auto"
            - name: NGINX_WORKER_CONNECTIONS
              value: "4096"
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          volumeMounts:
            # Note: mounting a ConfigMap over conf/ hides the bundled files
            # (e.g. mime.types); mount individual keys with subPath if you
            # need to keep them.
            - name: nginx-config
              mountPath: /usr/local/openresty/nginx/conf
            - name: lua-scripts
              mountPath: /usr/local/openresty/nginx/lua
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: lua-scripts
          configMap:
            name: lua-scripts
---
apiVersion: v1
kind: Service
metadata:
  name: openresty-service
  namespace: openresty-system
spec:
  selector:
    app: openresty
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
    - name: metrics
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: openresty-system
data:
  nginx.conf: |
    # Include the main nginx configuration here
    user nginx;
    worker_processes auto;
    pid /var/run/nginx.pid;
    events {
        worker_connections 4096;
        use epoll;
        multi_accept on;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        # Include other configurations
        include /usr/local/openresty/nginx/conf/conf.d/*.conf;
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: lua-scripts
  namespace: openresty-system
data:
  metrics.lua: |
    -- Include the metrics.lua content here
    local _M = {}
    -- ... rest of the metrics module
    return _M
4. CI/CD Pipeline
# .github/workflows/deploy.yml
name: Deploy OpenResty
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup OpenResty
        run: |
          wget -qO - https://openresty.org/package/pubkey.gpg | sudo apt-key add -
          echo "deb http://openresty.org/package/ubuntu $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/openresty.list
          sudo apt update
          sudo apt install -y openresty openresty-resty
      - name: Test Configuration
        run: |
          sudo /usr/local/openresty/nginx/sbin/nginx -t
      - name: Test Lua Scripts
        run: |
          /usr/local/openresty/bin/resty -e 'local metrics = require "metrics"; print("Lua scripts loaded successfully")'
      - name: Run Integration Tests
        run: |
          sudo systemctl start openresty
          sleep 5
          curl -f http://localhost:8080/health
          curl -f http://localhost:8080/metrics
          sudo systemctl stop openresty
  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Production
        run: |
          echo "Deploying to production..."
          # Add your deployment commands here
5. Backup and Recovery
#!/bin/bash
# backup-openresty.sh
BACKUP_DIR="/backup/openresty"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="openresty_backup_$DATE.tar.gz"
echo "Creating OpenResty backup..."
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup configuration
tar -czf $BACKUP_DIR/$BACKUP_FILE \
/usr/local/openresty/nginx/conf \
/usr/local/openresty/nginx/lua \
/etc/systemd/system/openresty.service
# Backup logs
tar -czf $BACKUP_DIR/logs_$DATE.tar.gz /var/log/nginx/
# Cleanup old backups (keep last 7 days)
find $BACKUP_DIR -name "*.tar.gz" -mtime +7 -delete
echo "Backup completed: $BACKUP_DIR/$BACKUP_FILE"
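The backup script has no restore counterpart. A minimal helper sketch (the function name and the idea of extracting to an alternate directory for inspection are assumptions, not part of the original script):

```shell
# Hypothetical restore helper for archives created by backup-openresty.sh.
# tar strips the leading "/" when archiving, so extracting with -C /
# restores files in place; pass another directory to inspect a backup first.
restore_backup() {
  local backup_file="$1" target_dir="${2:-/}"
  tar -xzf "$backup_file" -C "$target_dir"
}
# Usage (then validate the config before reloading):
#   restore_backup /backup/openresty/openresty_backup_20240101_000000.tar.gz
#   /usr/local/openresty/nginx/sbin/nginx -t && systemctl restart openresty
```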
6. Monitoring Setup
#!/bin/bash
# setup-monitoring.sh
echo "Setting up OpenResty monitoring..."
# Install Prometheus
wget https://github.com/prometheus/prometheus/releases/download/v2.40.0/prometheus-2.40.0.linux-amd64.tar.gz
tar xzf prometheus-2.40.0.linux-amd64.tar.gz
sudo mv prometheus-2.40.0.linux-amd64 /opt/prometheus
# Create Prometheus configuration
sudo tee /opt/prometheus/prometheus.yml > /dev/null <<EOF
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'openresty'
static_configs:
- targets: ['localhost:8080']
metrics_path: /metrics
scrape_interval: 5s
EOF
# Install Grafana
wget https://dl.grafana.com/oss/release/grafana-9.3.0.linux-amd64.tar.gz
tar xzf grafana-9.3.0.linux-amd64.tar.gz
sudo mv grafana-9.3.0 /opt/grafana
# Create systemd services
sudo tee /etc/systemd/system/prometheus.service > /dev/null <<EOF
[Unit]
Description=Prometheus
After=network.target
[Service]
Type=simple
User=prometheus
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml
Restart=always
[Install]
WantedBy=multi-user.target
EOF
sudo tee /etc/systemd/system/grafana.service > /dev/null <<EOF
[Unit]
Description=Grafana
After=network.target
[Service]
Type=simple
User=grafana
ExecStart=/opt/grafana/bin/grafana-server
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# Start services
sudo systemctl daemon-reload
sudo systemctl enable prometheus grafana
sudo systemctl start prometheus grafana
echo "Monitoring setup completed!"
echo "Prometheus: http://localhost:9090"
echo "Grafana: http://localhost:3000 (admin/admin)"
Conclusion
This comprehensive guide has covered the complete setup of OpenResty in a production environment with advanced Lua-based metrics collection, monitoring, and optimization. Key takeaways:
What We’ve Accomplished
- Production-Ready Installation: Multiple installation methods with proper configuration
- Advanced Metrics Framework: Custom Lua-based metrics collection system
- Comprehensive Monitoring: Real-time dashboards, Prometheus integration, and Grafana visualization
- Security Hardening: Security headers, authentication, and threat detection
- High Availability: Load balancing, health checks, and failover mechanisms
- Performance Optimization: Tuning guidelines and best practices
- Production Deployment: Complete deployment strategies with CI/CD
Best Practices Summary
- Use Lua modules for complex business logic and metrics collection
- Implement proper monitoring with Prometheus and Grafana
- Apply security hardening from the start
- Monitor performance metrics continuously
- Use containerization for consistent deployments
- Implement proper backup and recovery procedures
- Follow the principle of least privilege for security
Next Steps
- Customize metrics for your specific use case
- Set up alerting based on your SLA requirements
- Implement additional security measures as needed
- Scale horizontally using load balancers
- Monitor and optimize continuously
OpenResty provides a powerful platform for building high-performance web applications with dynamic capabilities. With proper setup and monitoring, it can handle massive traffic loads while providing rich observability and control.
For more advanced topics, consider exploring:
- Custom Lua modules for specific business logic
- Integration with service mesh architectures
- Advanced caching strategies
- Microservices communication patterns
- Real-time data processing with Lua
Remember to always test thoroughly in staging environments before deploying to production, and maintain comprehensive documentation for your team.