Build your own easy-to-migrate development environment


Prerequisites

Path definitions

Using environment variables instead of absolute paths makes the setup more adaptable.

${PANDORA} - A public folder; web services such as WebDAV can point their data directories directly at it. No credentials are stored here.

${TORONTO} - A private folder, which can be fully synchronized to another machine as a mirror.

${SEATTLE} - A folder for temporary logs or other restricted files.

Export them by adding this to ~/.bashrc

export PANDORA=/data0/pandora
export TORONTO=/data0/toronto
export SEATTLE=/data0/seattle
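
Reload your shell and create the three directories up front so they end up owned by your user; Docker would otherwise create any missing bind-mount path itself, owned by root:

source ~/.bashrc
mkdir -p ${PANDORA} ${TORONTO} ${SEATTLE}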

Fundamental software

  • Required: Docker (Ubuntu). You can check the link below; instructions for other Linux platforms are easy to find via the navigation tree on the Docker official site (a convenience-script alternative is sketched after this list).

Install Docker Engine on Ubuntu

  • Strongly recommended: zsh and Oh My Zsh. Here is the one-line install command (copied from the official site):
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

Let’s do this

Set up docker-compose

There are 5 services bundled:

  • Nginx: the main entrance to your environment.

  • WebDAV: a file transfer service implemented on top of HTTP, which also makes it the most broadly compatible option.

  • DB: a MySQL service; both Gitea and WordPress depend on it.

  • WordPress: an antique blog system written in PHP (the best language in the world).

  • Gitea: an easy-to-deploy open-source tool for hosting your own Git repositories.

Create docker-compose.yml as shown below; in the end we will use a few scripts to run it.

version: "3"

services:
  web:
    image: nginx
    volumes:
      - ${TORONTO}/nginx/templates:/etc/nginx/templates
      - ${TORONTO}/nginx/cert:/etc/nginx/cert
      - ${TORONTO}/nginx/webroot:/var/webroot
      - /etc/localtime:/etc/localtime
    ports:
      - "443:443"
      - "80:80"
    depends_on:
      - webdav
      - wordpress
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"

  webdav:
    image: bytemark/webdav
    environment:
      AUTH_TYPE: Basic
      USERNAME: your-name
      PASSWORD: your-password
    volumes:
      - ${PANDORA}:/var/lib/dav
    restart: always

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: container
      MYSQL_USER: mysql-name
      MYSQL_PASSWORD: mysql-password
      MYSQL_RANDOM_ROOT_PASSWORD: "1"
    volumes:
      - ${TORONTO}/mysql:/var/lib/mysql

  wordpress:
    image: wordpress
    restart: always
    environment:
      WORDPRESS_DB_HOST: halo-db-1
      WORDPRESS_DB_USER: mysql-name
      WORDPRESS_DB_PASSWORD: mysql-password
      WORDPRESS_DB_NAME: container
    depends_on:
      - db
    volumes:
      - ${TORONTO}/wordpress/data:/var/www/html
      - ${TORONTO}/wordpress/config/php.ini:/usr/local/etc/php/php.ini

  gitea:
    image: gitea/gitea:1.18.1
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    volumes:
      - ${TORONTO}/gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"

volumes:
  db:
  wordpress:
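
Before bringing anything up, docker compose config renders the file with the environment variables expanded, which is a quick way to catch a missing ${TORONTO} or a YAML typo:

# Run this in the directory that contains docker-compose.yml
docker compose config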

Set up Certbot

Certbot is a very easy-to-use command-line tool for managing HTTPS certificates. Let's install it first; here are the commands.

Certbot Instructions

apt install snapd
snap install core; snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot
snap set certbot trust-plugin-with-root=ok
snap install certbot-dns-cloudflare
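
A quick check that the install worked and that the DNS plugin is visible to Certbot:

certbot --version
certbot plugins   # should list dns-cloudflare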

Set up the Cloudflare plugin so you can request certificates. But first you need two things:

  • Move your domain's DNS to Cloudflare; it's easy.

  • Apply for an API token for the plugin.

# Cloudflare API token used by Certbot
# Write this line into this file -> 
# $TORONTO/.secrets/certbot/cloudflare.ini
dns_cloudflare_api_token = your-api-token
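
Certbot will complain if the credentials file is readable by other users, so it is worth locking the permissions down when you create it; a small sketch using the paths above:

mkdir -p ${TORONTO}/.secrets/certbot
chmod 700 ${TORONTO}/.secrets
# after writing the token into cloudflare.ini:
chmod 600 ${TORONTO}/.secrets/certbot/cloudflare.ini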

Finally, request the certificates. The domains below are mine; replace them with your own.

# Apply here: https://dash.cloudflare.com/profile/api-tokens
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials $TORONTO/.secrets/certbot/cloudflare.ini \
    -d dev.noahyao.me

certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials $TORONTO/.secrets/certbot/cloudflare.ini \
    -d code.noahyao.me

certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials $TORONTO/.secrets/certbot/cloudflare.ini \
    -d webdav.noahyao.me

# Copy the certificates (dereferencing symlinks) into the Nginx cert directory
rm -rf ${TORONTO}/nginx/cert && cp -LR /etc/letsencrypt/live ${TORONTO}/nginx/cert

There is a small catch here: the certificates Certbot keeps under /etc/letsencrypt/live are symbolic links, so mounting that directory straight into the container would leave the links dangling and Nginx would fail to find the files. The only workaround I have come up with so far is to copy the files with cp -LR, which dereferences the links and copies the real files.
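
Since the copy has to be repeated every time the certificates renew, one option (a sketch, not part of my original setup) is a Certbot deploy hook that re-copies the files and reloads Nginx after each successful renewal:

#!/bin/sh
# Save as /etc/letsencrypt/renewal-hooks/deploy/copy-certs.sh and make it executable.
# TORONTO is hard-coded because renewals run from a timer without your shell environment.
TORONTO=/data0/toronto
rm -rf ${TORONTO}/nginx/cert
cp -LR /etc/letsencrypt/live ${TORONTO}/nginx/cert
# The container name follows the compose project used later in this post (-p halo)
docker exec halo-web-1 nginx -s reload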

Set up the Nginx conf

Replace the server names below with your own domains.

# You should put this file at ${TORONTO}/nginx/templates/default.conf.template

server
{
    listen 443 ssl;
    server_name dev.noahyao.me;
    ssl_certificate /etc/nginx/cert/dev.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/dev.noahyao.me/privkey.pem;

    error_page 404 500 502 503 504 =200 /error_page;
    location = /error_page
    {
        default_type text/html;
        return 200 '<h1>:)</h1>';
    }
}

server
{
    listen 443 ssl;
    server_name code.noahyao.me;

    ssl_certificate /etc/nginx/cert/code.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/code.noahyao.me/privkey.pem;

    location /
    {
        proxy_read_timeout 5;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://host.docker.internal:8443;
    }
}

server
{
    listen 443 ssl;
    server_name webdav.noahyao.me;
    ssl_certificate /etc/nginx/cert/webdav.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/webdav.noahyao.me/privkey.pem;

    location /
    {
        client_max_body_size 1024M;
        proxy_pass http://halo-webdav-1;
    }
}

server
{
    listen 443 ssl;
    server_name rabbit.noahyao.me;
    ssl_certificate /etc/nginx/cert/rabbit.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/rabbit.noahyao.me/privkey.pem;

    location /
    {
        proxy_pass http://halo-wordpress-1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        client_max_body_size 100M;
    }
}

server
{
    listen 443 ssl;
    server_name git.noahyao.me;
    ssl_certificate /etc/nginx/cert/git.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/git.noahyao.me/privkey.pem;

    location /
    {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        client_max_body_size 100M;
        proxy_pass http://halo-gitea-1:3000;
    }
}

server
{
    listen 443 ssl;
    server_name demo.noahyao.me;
    ssl_certificate /etc/nginx/cert/demo.noahyao.me/cert.pem;
    ssl_certificate_key /etc/nginx/cert/demo.noahyao.me/privkey.pem;

    error_page 404 500 502 503 504 =200 /error_page;

    location /
    {
        root /var/webroot;
    }

    location = /error_page
    {
        default_type text/html;
        return 200 '<h1>:)</h1>';
    }
}
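
The official nginx image renders everything under /etc/nginx/templates into /etc/nginx/conf.d when the container starts, so template changes are picked up by restarting the container (the restart.sh script later in this post does exactly that). To validate the currently loaded configuration inside the running container (assuming the compose project name halo used below):

docker exec halo-web-1 nginx -t   # checks the rendered configuration and certificate paths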

[Optional] Code-server on host directly

code-server is a GREAT piece of software for programmers; it can replace local VS Code 99.99% of the time. There are two reasons why I decided to deploy it on the host instead of in a container.

  • Sometimes I want to use its terminal from an iPad or other devices, so a web-browser solution is great.

  • I want to learn about Docker's trickier networking setups (😟 container → host).

Installing it on the host is easy; here are the commands.

# code-server
curl -fsSL https://code-server.dev/install.sh | sh

# This password is very important; leaking it would be a disaster.
echo "bind-addr: 0.0.0.0:8443
auth: password
password: your-code-server-password
cert: false" > ~/.config/code-server/config.yaml

systemctl start code-server@$USER
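
To confirm it came up (assuming the default 8443 bind address from the config above):

systemctl status code-server@$USER
curl -I http://127.0.0.1:8443   # any HTTP response here means code-server is listening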

Create scripts

# First, bring up the services (add -d to run them in the background)
docker compose -p halo up

# Shut down and clean up the services
# docker compose -p halo down

# Restart the services
# restart.sh
docker compose -p halo stop
docker compose -p halo start

Run chmod u+x restart.sh; then whenever you change some configuration, such as the Nginx templates, you only need to run this script.
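
For reference, here is restart.sh written out as an actual script, built from the commands above; place it next to docker-compose.yml:

#!/bin/sh
# restart.sh: restart the compose project so changed configs
# (e.g. the Nginx templates) are re-rendered and picked up
docker compose -p halo stop
docker compose -p halo start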

Troubleshooting

  • UFW problem: ufw is Ubuntu's default firewall tool, so even after I changed the Vultr (a cloud service platform) firewall rules, things still did not work. Two problems confused me for over three hours (a sketch of the resulting rules is at the end of this section):

    • Docker containers could not communicate with the host.

    • The newly published port 222 for Gitea SSH did not work even after I added an accept rule in the Vultr panel.

  • Timezone solution: ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

  • Rsync problem: I planned to back up everything from the VPS to my local NAS, but I ran into permission problems: some directories/files could not be synchronized successfully. In the end I added uid = root and gid = root to /etc/rsyncd.conf to get around them, mainly for:

    • MySQL data.

    • Gitea ssh files.
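
For reference, here is the kind of ufw and rsync tweak those fixes boil down to; this is a sketch rather than an exact transcript of my rules, so adjust ports and subnets to your setup:

# Allow the ports published by the compose file through ufw
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 222/tcp    # Gitea SSH

# Let containers reach services on the host, e.g. code-server on 8443
# (172.16.0.0/12 covers Docker's default bridge subnets)
ufw allow from 172.16.0.0/12 to any port 8443 proto tcp

# /etc/rsyncd.conf additions mentioned above
# uid = root
# gid = root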