Installation Steps
All of the configuration runs on Docker and… (need better info)
Summary of what we are going to install.
To execute Docker commands, a user with sudo privileges is required. If the root user is accessible, there is no need to add the ‘sudo’ instruction.
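If you would rather run Docker as a regular user without sudo, a common approach (assuming the docker group exists, as it does with the standard Docker Engine packages) is to add the user to that group and then verify access:

# Add the current user to the docker group (log out and back in afterwards)
sudo usermod -aG docker $USER

# Verify Docker and the Compose plugin are available
docker --version
docker compose version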
In this section, we will focus on each folder to clarify its configuration.
First of all, we need to create the folder structure. Go to /opt and run the following command:
mkdir -p zw-vmmc-docker/certs \
  zw-vmmc-docker/mongo/backups \
  zw-vmmc-docker/nginx/certs \
  zw-vmmc-docker/node_container/services \
  zw-vmmc-docker/tomcat/conf \
  zw-vmmc-docker/tomcat/webapps
This creates the following folder structure:
└── zw-vmmc-docker
    ├── certs
    ├── mongo
    │   └── backups
    ├── nginx
    │   └── certs
    ├── node_container
    │   └── services
    └── tomcat
        ├── conf
        └── webapps
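To double-check the result, you can list the directories that were created (find is assumed to be available, as it is on most Linux distributions):

find /opt/zw-vmmc-docker -type d | sort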
Mongo folder
In /opt/zw-vmmc-docker/mongo you need to create mongo-init.js:
Create the file:
vim /opt/zw-vmmc-docker/mongo/mongo-init.js
And paste the following lines:
// mongo init
db.createUser(
  {
    user: "<user>",
    pwd: "<pwd>",
    mechanisms: ["SCRAM-SHA-1"],
    roles: [
      { role: "readWrite", db: "prod" },
      { role: "readWrite", db: "dev" },
      { role: "dbAdmin", db: "dev" },
      { role: "dbAdmin", db: "prod" }
    ]
  }
);
You will need to replace "<user>" and "<pwd>" with the user and password found here: mongo-user // TODO
You will also need to download the backups log-backup.gz and prod-backup.gz, which you will find here, and place them in /opt/zw-vmmc-docker/mongo/backups // TODO
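Once the backups are in place, an optional sanity check is to confirm that both archives exist and are valid gzip files, since the mongo-seed container will restore them later:

ls -lh /opt/zw-vmmc-docker/mongo/backups/
gzip -t /opt/zw-vmmc-docker/mongo/backups/prod-backup.gz /opt/zw-vmmc-docker/mongo/backups/log-backup.gz && echo "archives OK"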
The Mongo configuration will be completed later, when we focus on the docker-compose.yml file.
Tomcat folder
Here we are going to focus on the Tomcat server and Java services.
Go to /opt/zw-vmmc-docker/tomcat/conf and create the log4j2.json file:
vim /opt/zw-vmmc-docker/tomcat/conf/log4j2.json
And paste the following content:
{ "configuration": { "status": "warn", "name": "connectDWS", "packages": "psi.dws", "ThresholdFilter": { "level": "debug" }, "appenders": { "Console": { "name": "Console", "PatternLayout": { "pattern": "%d{HH:mm:ss.SSS} %-5level %X{request.id} %logger{36} - %msg%n" } } }, "loggers": { "root": { "level": "warn", "AppenderRef": { "ref": "Console" } } } } }
In the same folder, create the config.json file:
vim /opt/zw-vmmc-docker/tomcat/conf/config.json
Copy the following lines into the file you are creating:
{ "tomcatLog4j2Config": "/usr/local/tomcat/conf/log4j2.json", "dws": { "kafkaBrokers": "kafka:9092", "urlsToEnqueueHighPriority": [], "urlsToEnqueueMediumPriority": [], "wsInternalDevVer": "0.0.19.02.02", "accessServerUsernameProfiler": "dwsUser", "accessServerPasswordProfiler": "dwsPass", "enqueueProgramFilter": [ "pezQRuP6DE2" ], "gitDefaultUrlDefault": "http://localhost:8080/connectGitConfig/api/getConfig?type=ws_v2&ver=1", "gitDefaultUrlProd": "http://localhost:8080/connectGitConfig/api/getConfig?type=ws_v2&ver=1", "unsafeUrls": [ "https://18.205.64.238/" ], "reqCustomHeaders": [ { "key": "DWS-CONFIG", "urlRegex": "connectGitConfig/api", "value": "JBcLRzkItdEauNqqq91TQSjAREdzCZT2" }, { "key": "DWS-SIGNATURE", "urlRegex": ".*", "value": "dws" } ], "initCachedGitConfigUrlsDefault": [], "initCachedGitConfigUrlsStage": [], "initCachedGitConfigUrlsTest": [], "initCachedGitConfigUrlsTrain": [], "initCachedGitConfigUrlsProd": [], "servicesAvailability": { "dhis2_available": "DHIS2", "legacy_available": "LEGACY", "mongo_available": "MONGO", "poeditor_available": "POEDITOR", "replica_available": "REPLICA" } }, "routeNode": { "routeMap": { "mongoDB_Disabled": "3000", "cronWs": "http://localhost:3002", "webshot": "http://localhost:3005" } }, "connectGitConfig": { "postgreSql": "localhost:5432" }, "voucherGen": { "postgreSql": "localhost:5432", "mongoUrl": "mongo:3000" }, "voucherGenWs": { "postgreSql": "localhost:5432", "mongoUrl": "mongo:3000" } }
After that, we will need to get the services. To get the resources, access GitHub and download the war files of the following releases:
connectPWA // TODO: pending confirmation
All these war files have to be moved to /opt/zw-vmmc-docker/tomcat/webapps
The last step in the tomcat folder is to create the Dockerfile in /opt/zw-vmmc-docker/tomcat:
vim /opt/zw-vmmc-docker/tomcat/Dockerfile
With the following content:
# tomcat/Dockerfile
# Use the Tomcat Docker Official Image
FROM tomcat:9.0-jdk11-openjdk

# Use root
USER root
WORKDIR /usr/local/tomcat

# Not necessary, but install vim and bash-completion
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get install -y bash-completion
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*

# Create 'tomcat' user
RUN useradd -r -m -U -d /usr/local/tomcat -s /bin/false tomcat

# Update repositories and install necessary tools:
RUN apt-get update && apt-get install -y \
    passwd \
    && rm -rf /var/lib/apt/lists/*

# Change owner
RUN chown -R tomcat:tomcat /usr/local/tomcat

# Change to 'tomcat' user
USER tomcat

# Copy log4j2 config file
COPY conf/log4j2.json /usr/local/tomcat/conf/

# Copy config.json file
COPY conf/config.json /usr/local/tomcat/conf/

# Copy .war files to the tomcat webapps folder
COPY webapps/*.war /usr/local/tomcat/webapps/

EXPOSE 8080

# Run Tomcat
CMD ["catalina.sh", "run"]
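Docker Compose will build this image automatically later, but if you want to test the Dockerfile in isolation, a manual build from /opt/zw-vmmc-docker should work (the tag matches the image name used in docker-compose.yml):

cd /opt/zw-vmmc-docker
docker build -t tomcat ./tomcat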
Node_container folder
Here we are going to focus on the Node.js services.
Go to /opt/zw-vmmc-docker/node_container/services/ and create the .env file:
vim /opt/zw-vmmc-docker/node_container/services/.env
And paste the following content:
MONGO_LOCATION="mongo:27017"
MONGO_USER_PASS="<user>:<pwd>"
KAFKA_BROKER="kafka-broker:9092"
You will need to replace "<user>" and "<pwd>" with the same user and password as in: mongo-user // TODO
Next, create the file ecosystem.config.js in the same folder:
vim /opt/zw-vmmc-docker/node_container/services/ecosystem.config.js
Copy the following lines into the file you are creating:
module.exports = {
  apps: [
    {
      name: 'queue-service',
      script: './queue-service.js',
      instances: '1',
    },
    {
      name: 'cronWs',
      script: './connect-cron/src/app.js',
      instances: '1',
      env: {
        PORT: 3002,
      },
    },
    {
      name: 'mongo',
      script: '/usr/src/app/connectMongo/nodeJsWs/mongo.js',
      instances: '1',
      exec_mode: 'cluster',
      env: {
        PORT: 3000,
      },
    },
  ],
};
After that, we will need to get the services. To get the resources, access GitHub and download the following releases:
All these folders have to be moved to /opt/zw-vmmc-docker/node_container/services
The last step in the node_container folder is to create the Dockerfile in /opt/zw-vmmc-docker/node_container:
vim /opt/zw-vmmc-docker/node_container/Dockerfile
With the following content:
# nodeapp/Dockerfile
# Use the Node Docker Official Image
FROM node:20-alpine

# Set the folder to copy the services into
WORKDIR /usr/src/app
COPY services/ .

# Not necessary, but install vim and bash-completion
RUN apk add --no-cache bash-completion
RUN apk add --no-cache vim

# Install pm2 globally
RUN npm install pm2 -g

WORKDIR /usr/src/app/connect-cron
RUN npm install

WORKDIR /usr/src/app/connectMongo/nodeJsWs
RUN npm install

WORKDIR /usr/src/app

CMD ["sh", "-c", "cd /usr/src/app/connect-cron && npm install && cd /usr/src/app/connectMongo/nodeJsWs && npm install && cd /usr/src/app && pm2-runtime ecosystem.config.js"]

EXPOSE 3000
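As with the Tomcat image, this one can optionally be built by hand to catch problems early (for example, missing service folders under services/):

cd /opt/zw-vmmc-docker
docker build -t node_container ./node_container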
Nginx folder
Certbot - SSL Certificate
Having an SSL certificate ensures a secure connection between users and the server, and that no data is compromised while it is traveling over the internet. This allows users to connect using HTTPS protocol over port 443.
This guide assumes the server already has a domain and the necessary DNS record/s have been created.
Open ports 80 and 443, for HTTP and HTTPS respectively. Port 22 (SSH) is also required.
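How these ports are opened depends on the environment; on a cloud instance this is usually done in the provider's security group, while on a host managed with ufw it could look like the following sketch:

sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP (needed for the Certbot challenge)
sudo ufw allow 443/tcp   # HTTPS
sudo ufw status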
Connect via SSH to the server with a user with sudo privileges.
Install Certbot
sudo snap install --classic certbot
Prepare the Certbot command
sudo ln -s /snap/bin/certbot /usr/bin/certbot
Generate the certificate; this will also automatically edit the Nginx configuration to serve it.
sudo certbot --nginx
Certbot will ask some questions, such as an email address to send notifications about certificate renewals.
After the initial questions, Certbot will ask for the domain names to issue the certificate. It will try to access the server over port 80 using the domain name, so the DNS records must be already configured.
Test that Certbot is capable of renewing the certificate; otherwise, after a couple of months it will expire and users will lose access to the services.
sudo certbot renew --dry-run
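With the snap installation, renewal normally runs automatically via a systemd timer; if you want to confirm it is scheduled, you can look for it (the exact timer name may vary):

systemctl list-timers | grep -i certbot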
Official installation guide: https://certbot.eff.org/instructions?ws=nginx&os=snap
Official documentation: https://eff-certbot.readthedocs.io/en/stable/
Nginx Configuration
It is recommended to install the SSL certificate using Certbot before going through this section of the configuration.
NiFi runs on port 8443 by default; similarly, Superset runs on port 8088. However, to have a secured connection, users should only be able to connect through port 443, where the SSL certificate is served.
Nginx will act as a reverse proxy and redirect users' requests that come through port 443 to the correct destinations.
The Nginx configuration file can be edited using the following command:
vim /etc/nginx/sites-enabled/default
Search for the server block that listens on port 443; the Certbot configuration can be found there.
Edit it and add the locations to redirect traffic. For Superset:
location / {
    proxy_pass http://localhost:8088;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
}
For NiFi:
location /nifi {
    proxy_pass https://localhost:8443/nifi;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
}
The above configuration will make Superset accessible using the base domain (https://exampledomain.com) and NiFi accessible by adding /nifi as a path (https://exampledomain.com/nifi).
The following is a complete example of the file:
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

# Virtual Host configuration for example.com
#
# You can move that to a different file under sites-available/ and symlink that
# to sites-enabled/ to enable it.
#
#server {
#    listen 80;
#    listen [::]:80;
#
#    server_name example.com;
#
#    root /var/www/example.com;
#    index index.html;
#
#    location / {
#        try_files $uri $uri/ =404;
#    }
#}

server {
    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name dev.p2zwe.psidigital.org; # managed by Certbot

    location / {
        proxy_pass http://localhost:8088;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        #try_files $uri $uri/ =404;
    }

    location /nifi {
        proxy_pass https://localhost:8443/nifi;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        #try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/dev.p2zwe.psidigital.org/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/dev.p2zwe.psidigital.org/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = dev.p2zwe.psidigital.org) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name dev.p2zwe.psidigital.org;
    return 404; # managed by Certbot
}
Additionally, it is necessary to add one more parameter to the nginx main configuration file:
Edit the nginx.conf file.
vim /etc/nginx/nginx.conf
Inside the http block, add the following line.
http {
    ...
    client_max_body_size 500M;
    ...
}
This ensures there will be no problems when uploading the NiFi pipelines or the Superset dashboards.
Remember that after any modification to an Nginx configuration file, it is required to restart the service.
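It is also good practice to validate the configuration syntax first, so a typo does not take the proxy down:

sudo nginx -t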
systemctl restart nginx
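After the restart, a quick curl check can confirm that both applications respond through the proxy; this assumes Superset and NiFi are already running on their default ports, and exampledomain.com is a placeholder for your actual domain:

curl -I https://exampledomain.com/        # Superset through the reverse proxy
curl -I https://exampledomain.com/nifi    # NiFi through the reverse proxy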
zw-vmmc-docker
Finally, create the docker-compose.yml file:
vim /opt/zw-vmmc-docker/docker-compose.yml
And paste the following content:
services:
  tomcat:
    build:
      context: ./tomcat
    image: tomcat
    container_name: tomcat
    ports:
      - "8080:8080"
    volumes:
      - ./tomcat/webapps:/usr/local/tomcat/webapps/
      - ./tomcat/conf/log4j2.json:/usr/local/tomcat/conf/log4j2.json
    depends_on:
      - node_container
      - kafka-broker
    restart: unless-stopped
    networks:
      - dws-network

  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    networks:
      - dws-network
    environment:
      - MONGO_INITDB_ROOT_USERNAME=<user-mongo>
      - MONGO_INITDB_ROOT_PASSWORD=<pwd-mongo>
      - MONGO_INITDB_DATABASE=dev
    volumes:
      - ./mongo/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongo/backups:/backup

  mongo-seed:
    image: mongo:latest
    depends_on:
      - mongo
    networks:
      - dws-network
    restart: "no"
    volumes:
      - ./mongo/backups:/backup
    entrypoint: ["sh", "-c", "sleep 20 && mongorestore --host mongo --port 27017 --username <user-mongo> --password <pwd-mongo> --authenticationDatabase admin --gzip --archive=/backup/prod-backup.gz && mongorestore --host mongo --port 27017 --username gaspar --password gaspar --authenticationDatabase admin --gzip --archive=/backup/log-backup.gz && exit"]

  node_container:
    build:
      context: ./node_container
    image: node_container
    container_name: node_container
    depends_on:
      - mongo
    ports:
      - "3000:3000"
      - "3002:3002"
    volumes:
      - ./node_container/services:/usr/src/app/
    restart: unless-stopped
    networks:
      - dws-network

  kafka-broker:
    image: wurstmeister/kafka
    container_name: kafka-broker
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka-broker
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
    restart: unless-stopped
    networks:
      - dws-network

  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    restart: unless-stopped
    networks:
      - dws-network

  nginx:
    build:
      context: ./nginx
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/certs:/etc/ssl/certs  # SSL certificates
    ports:
      - "443:443"  # Expose port 443 for HTTPS
    depends_on:
      - tomcat
      - node_container
    restart: always
    networks:
      - dws-network

networks:
  dws-network:
    driver: bridge
You will need to replace <user-mongo> and <pwd-mongo>. Each one appears twice: in the mongo service environment variables and in the mongo-seed entrypoint.
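Once the placeholders are filled in, you can optionally ask Compose to parse and resolve the file before starting anything; this catches indentation and syntax mistakes:

cd /opt/zw-vmmc-docker
docker compose config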
Deploy containers
Go to /opt/zw-vmmc-docker and run:
docker compose up -d
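Once the command finishes, a few optional checks can confirm the stack is healthy; pm2 status inside node_container lists the Node services defined in ecosystem.config.js:

docker compose ps                          # all services should be "Up" (mongo-seed exits after restoring)
docker compose logs -f tomcat              # follow Tomcat logs (Ctrl+C to stop)
docker exec -it node_container pm2 status  # list the pm2-managed Node services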