$ sudo apt update
$ sudo apt install podman
$ podman -v
$ podman run docker.io/hello-world
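If the hello-world image runs fine, podman can be used much like docker. As a quick optional example (the image name and port mapping below are just an illustration, not part of the original steps):
$ podman run -d --name web -p 8080:80 docker.io/library/nginx
$ curl http://localhost:8080
$ podman stop web && podman rm web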
This is the original disk layout before the disk extension
# pvcreate /dev/sda3
# vgs
# vgextend centos /dev/sda3
# lvdisplay | grep Path
# lvextend -l +100%FREE /dev/centos/root
# lvs
# df -Th /
# xfs_growfs /
# df -Th /
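The xfs_growfs step applies here because this CentOS system uses an xfs root filesystem (the df -Th output shows the type). If the root filesystem were ext4 instead, the equivalent last step would be resize2fs against the logical volume, for example:
# resize2fs /dev/centos/root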
Manipulating and partitioning a hard disk can be a daunting task, especially for a new sysadmin, and even more so if the disk already contains data. Luckily, there is a tool that makes this task easier, and that tool is called cfdisk.
$ sudo cfdisk /dev/sda
$ sudo partprobe
$ lsblk
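To double-check the result without opening cfdisk again, the partition table can also be listed read-only, for example:
$ sudo fdisk -l /dev/sda
$ lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda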
In a previous post, we covered how to make LVM and the filesystem aware of a disk size increase that happened at the virtual machine layer. This post covers the other approach: extending the volume group by adding a brand new disk.
$ sudo lvmdiskscan
$ sudo pvcreate <path to the new disk>
$ sudo pvs
$ sudo vgextend <VG name> <PV name>
$ sudo vgs
$ sudo lvextend -l +100%FREE <LV PATH>
$ sudo lvdisplay | grep Path
$ sudo xfs_growfs <mountpoint>   # if the filesystem is xfs
$ sudo resize2fs <LV path>       # if the filesystem is ext4
$ df -Th
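As a concrete illustration of the steps above, assume the new disk shows up as /dev/sdb, the volume group is called vg0 and the root logical volume is /dev/vg0/root (all three names are made up; substitute your own from pvs, vgs and lvdisplay):
$ sudo pvcreate /dev/sdb
$ sudo vgextend vg0 /dev/sdb
$ sudo lvextend -l +100%FREE /dev/vg0/root
$ sudo xfs_growfs /              # or: sudo resize2fs /dev/vg0/root for ext4
$ df -Th /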
To increase a disk size in a virtual machine running a Linux operating system configured with LVM, below are the steps (these steps were tested using VirtualBox):
$ sudo pvresize <pv name>
$ sudo pvs
$ sudo vgs
$ sudo lvextend -l +100%FREE <LV path>
$ sudo lvdisplay | grep Path
$ df -Th
$ sudo xfs_growfs <mountpoint>   # if the filesystem is xfs
$ sudo resize2fs <LV path>       # if the filesystem is ext4
$ df -Th
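For instance, if the PV is /dev/sda3 inside the centos volume group (names are only examples) and the partition already covers the newly added space, the sequence condenses to:
$ sudo pvresize /dev/sda3
$ sudo lvextend -l +100%FREE /dev/centos/root
$ sudo xfs_growfs /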
To easily split (or cut out part of) a video using the command line, a tool called ffmpeg can be used.
$ sudo apt -y install ffmpeg
$ ffmpeg -i mymovie.mp4 -ss 00:01:00 -t 00:00:30 myeditedmovie.mp4
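Here -ss sets the start time and -t the duration of the clip. If re-encoding is not needed, adding -c copy makes the cut much faster, at the cost of the cut point snapping to the nearest keyframe (file names are just examples):
$ ffmpeg -ss 00:01:00 -i mymovie.mp4 -t 00:00:30 -c copy myeditedmovie.mp4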
Sometimes we need to test our SSL setup before we deploy it to production. If we have a development or staging environment, we can test it there. But if we do not, we can always rely on trusty old docker to test the SSL on our own machine. Please follow along to learn how to do it.
Listen 443
<VirtualHost _default_:443>
DocumentRoot "/usr/local/apache2/htdocs"
ServerName linuxwave.info
ServerAdmin me@linuxwave.info
ErrorLog /proc/self/fd/2
TransferLog /proc/self/fd/1
SSLEngine on
SSLCertificateFile "/ssl/server.crt"
SSLCertificateKeyFile "/ssl/server.key"
SSLCertificateChainFile "/ssl/server-ca.crt"
</VirtualHost>
docker run -dit --name apache -v ${PWD}:/ssl httpd
docker exec -it apache cp /usr/local/apache2/conf/httpd.conf /ssl
LoadModule ssl_module modules/mod_ssl.so
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
Include conf/extra/https.conf
docker exec -it apache cp /ssl/httpd.conf /usr/local/apache2/conf
docker exec -it apache ln -s /ssl/https.conf /usr/local/apache2/conf/extra
docker exec -it apache httpd -t
docker restart apache
docker inspect apache | grep IPAddress
echo "172.17.0.2 linuxwave.info" | sudo tee -a /etc/hosts
FortiVPN does offer two clients for Linux: one for the Red Hat family and the other for the Ubuntu/Debian family. You can download the installers from here. In this post, however, we will use the open-source client, openfortivpn, which is available from the distribution repositories.
$ sudo apt install openfortivpn
$ sudo openfortivpn myvpnserver.local:10443 -u vpnuser -p mypass
$ sudo openfortivpn -c myvpn.config
$ man openfortivpn
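The config file passed with -c is a simple key = value file. A minimal example matching the command-line flags above might look like this (the values are the same placeholders used above; see man openfortivpn for the full list of options):
host = myvpnserver.local
port = 10443
username = vpnuser
password = mypass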
In a previous post, I shared a way to check whether a UDP port is open to a Linux server using netcat and ngrep. This time we will use netcat on both ends: one instance listening on the server, and another sending from the client.
$ nc -klu 10000
$ echo "testing udp" | nc -u 10.10.10.10.10000
To run a mysql query directly from the command line, without entering interactive mode, use the -e flag, like below:
$ mysql -u user -p -e 'show tables;' mydbname
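The same flag works for any ad-hoc statement, and the --batch option prints tab-separated output that is easy to redirect to a file (the table and database names below are placeholders):
$ mysql -u user -p -e 'SELECT COUNT(*) FROM mytable;' mydbname
$ mysql -u user -p --batch -e 'SELECT * FROM mytable;' mydbname > mytable.tsv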
To test whether a UDP port is open to a Linux server, and not blocked by any firewall, we need ngrep on the server side and nc (netcat) on the client side.
$ sudo apt install ngrep -y
$ ngrep -q "accessible" udp port 10000
$ sudo apt install netcat-openbsd
$ echo "yes, accesible" | nc -u server-ip 10000
U client-ip:39062 -> server-ip:10000 #1
yes, accessible...
The default location that docker uses to store all of its components, such as images and containers, is /var/lib/docker.
$ sudo mkdir /data/docker
$ sudo systemctl stop docker
$ sudo touch /etc/docker/daemon.json
{"data-root": "/data/docker"}
$ sudo systemctl start docker
$ sudo systemctl status docker
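To confirm that docker really picked up the new location, docker info reports it as "Docker Root Dir", and a freshly pulled image should land under /data/docker:
$ docker info | grep -i "docker root dir"
$ docker pull hello-world
$ sudo ls /data/docker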
The tool that we are going to use is just curl. We need to access the URL of a website that provides geolocation information for an IP address. Let's get to it.
$ curl https://ipapi.co/ip-address/json
$ curl https://ipapi.co/8.8.8.8/json
{"ip": "8.8.8.8","network": "8.8.8.0/24","version": "IPv4","city": "Mountain View","region": "California","region_code": "CA","country": "US","country_name": "United States","country_code": "US","country_code_iso3": "USA","country_capital": "Washington","country_tld": ".us","continent_code": "NA","in_eu": false,"postal": "94043","latitude": 37.42301,"longitude": -122.083352,"timezone": "America/Los_Angeles","utc_offset": "-0800","country_calling_code": "+1","currency": "USD","currency_name": "Dollar","languages": "en-US,es-US,haw,fr","country_area": 9629091.0,"country_population": 327167434,"asn": "AS15169","org": "GOOGLE"}
$ curl https://ipapi.co/8.8.8.8/country
US
$ curl https://ipinfo.io/ip-address
$ curl https://ipinfo.io/8.8.8.8
{"ip": "8.8.8.8","hostname": "dns.google","anycast": true,"city": "Mountain View","region": "California","country": "US","loc": "37.4056,-122.0775","org": "AS15169 Google LLC","postal": "94043","timezone": "America/Los_Angeles","readme": "https://ipinfo.io/missingauth"}
$ curl https://ipinfo.io/8.8.8.8/postal
94043
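Since both services return plain JSON, the output also pipes nicely into jq for scripting (assuming jq is installed):
$ curl -s https://ipapi.co/8.8.8.8/json | jq -r '.city, .country_name, .org'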
For proper MongoDB replication, we are going to start three containers for this exercise.
docker run -dit --name mongorep1 --hostname mongorep1 mongo:6 --bind_ip_all --replSet myrepl
docker inspect mongorep1 | grep -w IPAddress
"IPAddress": "172.17.0.2",
docker run -dit --name mongorep2 --hostname mongorep2 --add-host mongorep1:172.17.0.2 mongo:6 --bind_ip_all --replSet myrepl
docker run -dit --name mongorep3 --hostname mongorep3 --add-host mongorep1:172.17.0.2 mongo:6 --bind_ip_all --replSet myrepl
docker exec -it mongorep1 mongosh
test> rs.initiate()
myrepl [direct: secondary] test> rs.add("172.17.0.3")
myrepl [direct: primary] test> rs.add("172.17.0.4")
myrepl [direct: primary] test> rs.status()
...
members: [
{
_id: 0,
name: 'mongorep1:27017',
health: 1,
state: 1,
stateStr: 'PRIMARY',
...
{
_id: 1,
name: '172.17.0.3:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
...
{
_id: 2,
name: '172.17.0.4:27017',
health: 1,
state: 2,
stateStr: 'SECONDARY',
...
myrepl [direct: primary] test> db.printSecondaryReplicationInfo()
source: 172.17.0.3:27017
{
syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
replLag: '0 secs (0 hrs) behind the primary '
}
---
source: 172.17.0.4:27017
{
syncedTo: 'Mon Dec 19 2022 15:48:01 GMT+0000 (Coordinated Universal Time)',
replLag: '0 secs (0 hrs) behind the primary '
}
docker exec -it mongorep1 mongosh
myrepl [direct: primary] test> use mynewdb
myrepl [direct: primary] mynewdb> db.people.insertOne( { name: "John Rambo", occupation: "Soldier" } )
exit
docker exec -it mongorep2 mongosh
myrepl [direct: secondary] test> show dbs
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find()
[
{_id: ObjectId("63a08880e1c97fba6959ec15"),name: 'John Rambo',occupation: 'Soldier'}]
If you encounter this error:
MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string
myrepl [direct: secondary] test> db.getMongo().setReadPref("secondary")
Do the same for the third node; the data should be the same there as well.
docker exec -it mongorep3 mongosh
myrepl [direct: secondary] test> use mynewdb
myrepl [direct: secondary] test> db.people.find()
[
  {
    _id: ObjectId("63a08880e1c97fba6959ec15"),
    name: 'John Rambo',
    occupation: 'Soldier'
  }
]
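As an optional extra test (not part of the original steps), stopping the primary shows the replica set electing a new one; the stopped member rejoins automatically once it is started again:
docker stop mongorep1
docker exec -it mongorep2 mongosh --eval "rs.status().members.forEach(m => print(m.name + ' ' + m.stateStr))"
docker start mongorep1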
One of the areas where I find Linux quite lacking is PDF editing. But a few weeks ago, a friend of mine recommended an excellent tool called Xournal++ (or xournalpp). It is actually a journalling tool, but its PDF editing feature is so good that it beats all the tools I previously used.
It can be installed in a few ways. Via snap:
$ sudo apt install snapd -y
$ sudo snap install xournalpp
Or via apt:
$ sudo apt install xournalpp -y
Or via the deb package from the project's github releases:
$ wget https://github.com/xournalpp/xournalpp/releases/download/v1.1.3/xournalpp-1.1.3-Ubuntu-focal-x86_64.deb
$ sudo apt install ./xournalpp-1.1.3-Ubuntu-focal-x86_64.deb -y
$ xournalpp
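To jump straight into annotating a PDF, the file can also be passed on the command line (the file name here is just an example):
$ xournalpp mydocument.pdf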
To change the metadata in PDF files, use a command line tool called exiftool. This tool can manipulate metadata in many file types, but in this post we will focus on changing the metadata in a pdf file.
$ sudo apt install libimage-exiftool-perl -y
$ exiftool mypdf.pdf
ExifTool Version Number         : 11.88
File Name                       : mypdf.pdf
Directory                       : .
File Size                       : 1 MB
File Modification Date/Time     : 2022:12:08 07:46:39+08:00
File Access Date/Time           : 2022:12:08 07:46:43+08:00
File Inode Change Date/Time     : 2022:12:08 07:46:39+08:00
File Permissions                : rw-rw-r--
File Type                       : PDF
File Type Extension             : pdf
MIME Type                       : application/pdf
PDF Version                     : 1.3
Linearized                      : No
Page Count                      : 15
XMP Toolkit                     : Image::ExifTool 11.88
Title                           : mypdf.pdf
Producer                        : Nitro PDF PrimoPDF
Create Date                     : 2022:09:30 16:57:06-08:00
Modify Date                     : 2022:09:30 16:57:06-08:00
Creator                         : PrimoPDF http://www.primopdf.com
Author                          : andre
$ exiftool -Author mypdf.pdf
Author                          : andre
$ exiftool -Author=john mypdf.pdf
$ exiftool -Author mypdf.pdf
Author                          : john
$ rm mypdf.pdf_original
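Several tags can also be changed in one command, and -overwrite_original skips creating the _original backup file in the first place (the tag values here are just examples):
$ exiftool -Author=john -Title="My Document" -overwrite_original mypdf.pdf
$ exiftool -Author -Title mypdf.pdf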
One of the neat features of tmux is its ability to synchronize the commands typed in one pane to all panes in the same window. This trick will help you run a command across multiple terminals while typing it only once.
Start tmux:
$ tmux
Split the window into panes:
ctrl-b "
Enable synchronization:
ctrl-b :
setw synchronize-panes on
To turn synchronization off again:
ctrl-b :
setw synchronize-panes off
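If you use this often, the toggle can be bound to a single key in ~/.tmux.conf; running setw synchronize-panes with no argument flips the current value (the choice of the y key is arbitrary):
bind-key y setw synchronize-panes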
A standard Linux system uses GRUB (GRand Unified Bootloader) to manage its boot process. To change the boot order in Linux, there is one file that you need to change, which is /etc/default/grub: set GRUB_DEFAULT to the index of the menu entry you want to boot by default (the index starts from 0).
sudo nano /etc/default/grub
GRUB_DEFAULT=2
sudo update-grub
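To see which index belongs to which menu entry before editing GRUB_DEFAULT, the generated grub.cfg can be inspected; on Ubuntu/Debian it lives at /boot/grub/grub.cfg (some distros use /boot/grub2/grub.cfg instead):
sudo grep -E "^menuentry|^submenu" /boot/grub/grub.cfg | cut -d"'" -f2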
PostgreSQL released the final minor version of the 9.6 series in November 2021, and this version is no longer supported by postgresql.org, so installing out-of-support software on a production server is not recommended. Still, if you have no choice but to run it, the packages can be downloaded manually from the PGDG repository:
wget -c https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/postgresql96-libs-9.6.22-1PGDG.rhel7.x86_64.rpm
wget -c https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/postgresql96-server-9.6.22-1PGDG.rhel7.x86_64.rpm
4. Install the packages. If any additional packages are needed, just download them from the repo URL above.
sudo yum install ./postgresql96-libs-9.6.22-1PGDG.rhel7.x86_64.rpm ./postgresql96-server-9.6.22-1PGDG.rhel7.x86_64.rpm
5. Initialize the database
sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb
6. Enable the database startup on boot, and start the service
sudo systemctl enable --now postgresql-9.6
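A quick sanity check that the service is up and answering queries (the psql path is where the PGDG 9.6 packages install their binaries):
sudo systemctl status postgresql-9.6
sudo -u postgres /usr/pgsql-9.6/bin/psql -c "SELECT version();"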
Singularity is another container platform, similar to docker. It is widely used in the high performance computing world, due to its better security and portability.
docker run --privileged --rm quay.io/singularity/singularity:v3.10.0 --version
singularity-ce version 3.10.0
docker run --privileged --rm -v ${PWD}:/home/singularity quay.io/singularity/singularity:v3.10.0 pull /home/singularity/alpine_latest.sif docker://alpine
docker run --privileged --rm -v ${PWD}:/home/singularity quay.io/singularity/singularity:v3.10.0 exec /home/singularity/alpine_latest.sif cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.15.4
PRETTY_NAME="Alpine Linux v3.15"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
This is actually very easy; just run the command below to start it:
docker run -d -p 8000:80 --mount type=bind,source="$(pwd)/htdocs",target=/var/www/html php:apache
The options are:
-d : run this container in a detached mode (in the background)
--mount : bind-mount the htdocs folder from the current directory into /var/www/html in the container (create htdocs first with mkdir, since a bind mount source is not created automatically)
-p 8000:80 : will map port 8000 in localhost to port 80 in the container
Once started, create a simple php script inside the htdocs directory
cd htdocs
cat >> index.php <<EOF
<?php
echo "This is my php script";
?>
EOF
Then browse to http://localhost:8000 with a normal web browser. You should see "This is my php script" shown in the browser.
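If the page does not load, the container's Apache/PHP logs are the first place to look (the container ID or name comes from docker ps):
docker ps
docker logs <container-id>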
First, we need to pull the postgresql image from dockerhub
singularity pull docker://postgres:14.2-alpine3.15
cat >> pg.env <<EOF
export TZ=Asia/Kuala_Lumpur
export POSTGRES_USER=pguser
export POSTGRES_PASSWORD=mypguser123
export POSTGRES_DB=mydb
export POSTGRES_INITDB_ARGS="--encoding=UTF-8"
EOF
mkdir pgdata
mkdir pgrun
singularity run -B pgdata:/var/lib/postgresql/data -B pgrun:/var/run/postgresql -e -C --env-file pg.env postgres_14.2-alpine3.15.sif
singularity exec postgres_14.2-alpine3.15.sif psql -h localhost -p 5432 -d mydb
mydb=#
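If the connection is rejected, it is usually because psql defaults to the local OS username; connecting explicitly as the user defined in pg.env should work (it will prompt for the password set above):
singularity exec postgres_14.2-alpine3.15.sif psql -h localhost -p 5432 -U pguser -d mydb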
mkdir web
cat >> web/index.html <<EOF
<html><h1>This is my index</h1></html>
EOF
singularity pull docker://nginx
sudo singularity run -B web/:/usr/share/nginx/html --writable-tmpfs nginx_latest.sif
In this example, we will use the nginx web server image from docker hub.
singularity pull docker://nginx
sudo singularity run --writable-tmpfs docker://nginx web
curl localhost
<!DOCTYPE html>
10.22.0.1 - - [05/Mar/2022:15:45:10 +0800] "GET / HTTP/1.1" 200 615 "-" "curl/7.68.0" "-"
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
...
4. We can also use a web browser and browse to localhost
One of the advantages of singularity is that it does not require any service to run containers. And the images that you download are saved as normal files in your filesystem, rather than in some cache directory like docker uses.
To run dockerhub's hello-world image using singularity:
1. Pull the image from dockerhub
$ singularity pull docker://hello-world
2. The image will be saved as hello-world_latest.sif
$ ls
hello-world_latest.sif
3.1 To run a container based on that image, just use "singularity run" against the sif file
$ singularity run hello-world_latest.sif
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
3.2 Or, since the sif file is executable, run it directly
$ ./hello-world_latest.sif
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
$ sudo apt update
$ wget https://github.com/sylabs/singularity/releases/download/v3.9.7/singularity-ce_3.9.7-bionic_amd64.deb
$ sudo apt install ./singularity-ce_3.9.7-bionic_amd64.deb
$ singularity version
3.9.7-bionic
Go is a programming language created by engineers at Google in 2007 to build dependable and efficient software. Go is modeled most closely after C.
To install Go on Linux, the steps are very easy.
1. Download go package from https://go.dev/dl/
$ wget https://go.dev/dl/go1.18.linux-amd64.tar.gz
2. Extract the tar package
$ tar xvf go1.18.linux-amd64.tar.gz
3. Include the go bin directory into PATH
echo "export PATH=\$PATH:/home/user/go/bin" ~/.bashrc
source ~/.bashrc
$ go version
go version go1.18 linux/amd64
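As a quick check that the toolchain can actually compile and run something, here is a throwaway hello-world (the file name is arbitrary):
$ cat > hello.go <<'EOF'
package main

import "fmt"

func main() {
	fmt.Println("hello from go")
}
EOF
$ go run hello.go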
SSL is an important part of web application security nowadays. Many tools are available to test our SSL configuration, but almost all of them are web based. One great tool that I found that can be used from a terminal is called testssl.sh.
$ wget https://testssl.sh/testssl.sh-3.0.7.tar.gz
And extract it anywhere on your Linux machine
$ tar xvf testssl.sh-3.0.7.tar.gz
Make it easier to access
$ ln -s testssl.sh-3.0.7 testssl
And we are good to go. To use it, just run the script and give it the URL we want to test:
$ cd testssl
$ ./testssl.sh https://mysslwebsite.com
Once we have the result, just fix the "NOT ok" findings and rerun the above command. Rinse and repeat until you are fully satisfied with your SSL configuration.
To get visually better results with grading, run the Qualys SSL Server Test once you have fully tuned your SSL configuration with testssl.sh.
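testssl.sh also has flags to narrow a run down to just the parts you care about; two that I find useful (check ./testssl.sh --help for your version's full list):
$ ./testssl.sh --protocols https://mysslwebsite.com
$ ./testssl.sh --vulnerable https://mysslwebsite.com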
To increase nginx security, one of the things we can configure is disabling old TLS versions. At the moment, TLSv1.3 is the gold standard, and TLSv1 and TLSv1.1 should not be enabled on a production nginx.
To disable TLSv1 and TLSv1.1, just go to /etc/nginx/nginx.conf, find the ssl_protocols line and change it to look like below:
ssl_protocols TLSv1.2 TLSv1.3;
Test your configuration for any syntax error
sudo nginx -t
And restart your nginx to activate the setting
sudo systemctl restart nginx
In order to quickly check that our nginx no longer supports TLSv1 and TLSv1.1, use the nmap command as below:
nmap --script ssl-enum-ciphers -p 443 www.mytlssite.com
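Another quick check is to attempt a handshake with openssl directly; the first command should now fail while the second succeeds (assuming the local openssl build still supports TLSv1.1 for testing purposes):
openssl s_client -connect www.mytlssite.com:443 -tls1_1 < /dev/null
openssl s_client -connect www.mytlssite.com:443 -tls1_2 < /dev/null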
Or, we can use one of the free web-based SSL test tools.