Dec 12, 2024
5 min read

Deploying Astro to Your Own VPS with Nginx

A quick tutorial on how to host your own Astro website.

Deploying an Astro Website to Your Own Server with Docker, Docker Compose, and Nginx

Deploying a modern static site generator like Astro to your server or Virtual Private Server (VPS) is a powerful way to ensure full control over your web project. By combining Docker, Docker Compose, and Nginx, you can create a robust, scalable setup. This article walks you through the process step-by-step.


Prerequisites

Before you begin, ensure you have the following:

  1. A VPS or dedicated server with root access.
  2. Docker and Docker Compose installed.
  3. Basic knowledge of Docker and Nginx.

Directory Structure

For this guide, we will organize the project with the following structure:

project-directory/
|-- docker-compose.yml
|-- astro/
|   |-- Dockerfile
|   |-- (Astro project files)
|-- nginx/
    |-- Dockerfile
    |-- nginx.conf
    |-- mime.types
    |-- project.conf

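You can scaffold this layout with a few commands (the Nginx image we build later also expects mime.types and project.conf files, so placeholders for them are created here too):

```shell
# Create the project skeleton
mkdir -p project-directory/astro project-directory/nginx

# Placeholder files to be filled in during the following steps
touch project-directory/docker-compose.yml
touch project-directory/astro/Dockerfile
touch project-directory/nginx/Dockerfile \
      project-directory/nginx/nginx.conf \
      project-directory/nginx/mime.types \
      project-directory/nginx/project.conf
```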
Step 1: Write the Docker Compose Configuration

The docker-compose.yml file coordinates the services required for the deployment. Here’s the example file:

version: '3.8'

services:
  astro:
    container_name: astro
    restart: unless-stopped
    build:
      context: ./astro
      dockerfile: Dockerfile
    volumes:
      - shared-data:/usr/src/astro/shared-data
    ports:
      - "4321:4321"
    networks:
      - network1

  nginx:
    container_name: nginx
    restart: unless-stopped
    build:
      context: ./nginx
      dockerfile: Dockerfile
    volumes:
      - shared-data:/usr/src/nginx/shared-data
    ports:
      - "80:80"
    depends_on:
      - astro
    networks:
      - network1

volumes:
  shared-data:

networks:
  network1:
    name: network1
    driver: bridge

This configuration defines two services:

  • Astro: Hosts your Astro website.
  • Nginx: Acts as the reverse proxy.

Explanation of Key Elements

  • volumes: Defines a named volume (shared-data) that both containers mount.
  • networks: Places both services on the same bridge network, so Nginx can reach the Astro container by its service name.
  • depends_on: Starts Nginx after the Astro container. Note that this only controls start order, not application readiness.
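Because depends_on only orders container startup, Nginx may briefly proxy to an Astro server that is not yet listening. If you want it to wait for actual readiness, you can merge a healthcheck sketch like the following into docker-compose.yml (this assumes curl is available in the Astro image, which is the case for the Debian-based node:lts image, and a Compose implementation that follows the Compose Specification, such as Docker Compose v2):

```yaml
# Fragments to merge into the existing docker-compose.yml services
services:
  astro:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4321/"]
      interval: 10s
      timeout: 3s
      retries: 5

  # Under your Nginx/reverse-proxy service, use the conditional form:
  nginx:
    depends_on:
      astro:
        condition: service_healthy
```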

Step 2: Create the Dockerfile for the Astro Service

Inside the astro/ directory, create a Dockerfile:

FROM node:lts AS base
WORKDIR /app

# By copying only the package.json and package-lock.json here, we ensure that the following `-deps` steps are independent of the source code.
# Therefore, the `-deps` steps will be skipped if only the source code changes.
COPY package.json package-lock.json ./

# Set environment variables if you need them (placeholder values shown;
# prefer passing real secrets in at build or run time instead of hardcoding them)
ENV ASTRO_DB_REMOTE_URL=libsql://example.com
ENV ASTRO_DB_APP_TOKEN=123456789

FROM base AS prod-deps
RUN npm install --omit=dev

FROM base AS build-deps
RUN npm install

FROM build-deps AS build
COPY . .
RUN npm run build -- --remote

FROM base AS runtime
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist

ENV HOST=0.0.0.0
ENV PORT=4321
EXPOSE 4321
CMD ["node", "./dist/server/entry.mjs"]
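Note that this Dockerfile serves the site from dist/server/entry.mjs, which assumes your Astro project is built for server output with the @astrojs/node adapter in standalone mode; a minimal astro.config.mjs sketch (adapter options may vary with your Astro version):

```js
import { defineConfig } from 'astro/config';
import node from '@astrojs/node';

export default defineConfig({
  output: 'server',
  adapter: node({
    // standalone mode serves both rendered pages and static assets
    mode: 'standalone',
  }),
});
```

A purely static build (output: 'static') produces no dist/server/ directory; in that case you would serve the dist/ folder directly from Nginx instead.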

Step 3: Configure Nginx

Inside the nginx/ directory, create a Dockerfile, an nginx.conf, a mime.types file (you can copy the stock one shipped with the nginx image), and a project.conf file.

Nginx Dockerfile

FROM nginx:1.23.1

WORKDIR /usr/src/nginx
RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/
RUN rm /etc/nginx/conf.d/default.conf
COPY mime.types /etc/nginx/
COPY project.conf /etc/nginx/conf.d/
RUN mkdir -p /usr/src/nginx/shared-data /usr/src/nginx/static

Nginx Configuration Files

First, nginx.conf:

# Define the user that will own and run the Nginx server
user  nginx;
# Define the number of worker processes; recommended value is the number of
# cores that are being used by your server
worker_processes  1;
# Define the location on the file system of the error log, plus the minimum
# severity to log messages for
error_log  /var/log/nginx/error.log warn;
# Define the file that will store the process ID of the main NGINX process
pid        /var/run/nginx.pid;

# events block defines the parameters that affect connection processing.
events {
    # Define the maximum number of simultaneous connections that can be opened by a worker process
    worker_connections  1024;
}

# http block defines the parameters for how NGINX should handle HTTP web traffic
http {
    # Include the file defining the list of file types that are supported by NGINX
    include       /etc/nginx/mime.types;
    # Define the default file type that is returned to the user
    default_type  text/html;
    # Define the format of log messages.
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    # Define the location of the log of access attempts to NGINX
    access_log  /var/log/nginx/access.log  main;
    # Define the parameters to optimize the delivery of static content
    sendfile        on;
    tcp_nopush     on;
    tcp_nodelay    on;
    # Define the timeout value for keep-alive connections with the client
    keepalive_timeout  65;
    # Define the usage of the gzip compression algorithm to reduce the amount of data to transmit
    gzip  on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    # Include additional parameters for virtual host(s)/server(s)
    include /etc/nginx/conf.d/*.conf;
}

Next, project.conf, which defines the virtual host and the reverse proxy:

server {

    # listen              443 ssl;
    # server_name         se_integrations;
    # ssl_certificate     /usr/src/nginx/ssl/ca-bundle.crt;
    # ssl_certificate_key /usr/src/nginx/ssl/private-key.key;
    # ssl_protocols       TLSv1.2 TLSv1.3;

    listen 80;
    server_name portfolio;

    location / {
        proxy_pass http://astro:4321;

        # Do not change this
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Server-sent events (SSE) support
        proxy_buffering off;
        proxy_cache off;
        proxy_set_header Connection '';
        proxy_http_version 1.1;
        chunked_transfer_encoding off;
        proxy_read_timeout 24h;
    }

}
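Since Astro emits content-hashed assets under /_astro/ by default, you can let Nginx mark them as long-lived; a sketch you could add inside the server block above (adjust the path if your build is configured differently):

```nginx
location /_astro/ {
    proxy_pass http://astro:4321;
    proxy_set_header Host $host;
    # File names contain a content hash and change on every rebuild,
    # so a long cache lifetime is safe here
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```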

Explanation

  • proxy_pass: Forwards requests to the Astro container over the shared Docker network.
  • Headers: Pass the original host and client IP address through to the Astro server.

Step 4: Deploy the Setup

  1. Place all files in the appropriate directories.

  2. Build and start the services using Docker Compose (use docker-compose if you are still on the standalone v1 binary):

    docker compose up --build -d
    
  3. Verify the deployment:

    • Open your server’s IP in a browser to see the deployed Astro website.

Conclusion

By leveraging Docker, Docker Compose, and Nginx, you retain full control over your website’s capabilities and can cut hosting costs by running it on your own server or VPS. Don’t forget to secure your server (a firewall and HTTPS at a minimum), and consider putting a DNS proxy such as Cloudflare in front of it!