Command Line Interface (CLI): Your Gateway to Power Computing
⚠️ Disclaimer: This wiki is a work in progress and currently under active development. Content may be incomplete, reorganized, or updated frequently. Contributions, corrections, and suggestions are welcome as this guide evolves.
💡 If you'd like to contribute, visit the GitHub repository: https://github.com/keilaash/cli-wiki
What is a CLI?
A Command Line Interface (CLI) is a text-based way to interact with your computer using typed commands instead of clicking buttons and icons. Think of it as having a direct conversation with your computer using a specific language it understands.
While modern computers come with beautiful graphical interfaces (GUIs) with windows, buttons, and menus, the CLI represents the original and most powerful way to control a computer system.
Why Use the Command Line?
Speed and Efficiency
- Execute complex tasks with a single command
- No need to navigate through multiple menus and windows
- Chain multiple operations together seamlessly
Precision and Control
- Fine-tune exactly what you want to do
- Access advanced features not available in graphical interfaces
- Modify system settings with surgical precision
Automation Power
- Write scripts to automate repetitive tasks
- Schedule operations to run automatically
- Batch process hundreds of files with one command
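As a small taste of that automation power, here is a minimal sketch that archives a folder with today's date in its name. The names `demo_notes` and `backups` are throwaway examples created just for this run, not standard paths:

```shell
# Create a throwaway folder to archive (example data only)
mkdir -p demo_notes
echo "meeting notes" > demo_notes/monday.txt

# Archive it into a dated .tar.gz -- one line replaces many GUI clicks
mkdir -p backups
tar -czf "backups/notes-$(date +%Y-%m-%d).tar.gz" demo_notes
ls backups
```

Dropping a command like this into a scheduled job (e.g., cron) turns it into a fully automatic daily backup.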
Universal Access
- Works on any computer, anywhere in the world
- Connect to remote servers and manage them as if they were local
- Essential for server administration and cloud computing
Resource Efficiency
- Uses minimal system resources
- Works even on older or limited hardware
- Faster than loading heavy graphical applications
Real-World Examples
Instead of clicking through folders to find a file, you can type:
find /home -name "*.pdf" -type f
This instantly finds all PDF files in your home directory.
Instead of using a file manager to copy files, you can type:
cp *.jpg /backup/photos/
This copies all JPEG images to your backup folder in one command.
Who Uses CLI?
- Developers: Writing code, managing projects, and deploying applications
- System Administrators: Managing servers, networks, and user accounts
- Data Scientists: Processing large datasets and running analyses
- DevOps Engineers: Automating deployments and monitoring systems
- Power Users: Anyone who wants to work more efficiently with their computer
Getting Started is Easier Than You Think
The CLI might seem intimidating at first, but it's like learning to drive a manual transmission car: once you master it, you have much more control and capability. Most people start with just a few basic commands and gradually build their skills.
What You'll Learn in This Guide
This guide will take you from complete beginner to confident CLI user, covering:
- Basic navigation and file operations
- Text processing and file manipulation
- System monitoring and management
- Automation with scripts
- Advanced tools and techniques
- Best practices and productivity tips
Whether you're a student, professional, or curious enthusiast, mastering the CLI will fundamentally change how you interact with computers and dramatically increase your productivity.
CLI Navigation
Understanding how to move around and manage files from the command line is fundamental in Unix-like systems such as Linux and macOS. Whether you're managing servers, using Git, writing shell scripts, or transferring files securely, basic CLI navigation is a universal prerequisite.
How Navigation Works in Unix
Unix-like systems use a hierarchical file structure, beginning with a single root directory: /
From there, everything is organized like a tree:
/
├── bin  → essential binaries
├── home → user directories (e.g., /home/alice)
├── etc  → configuration files
├── var  → variable data (logs, databases)
├── tmp  → temporary files
├── usr  → user-installed software
└── dev  → system devices
Each user typically operates within their own home directory (e.g., /home/username), which is represented by ~ (tilde).
Paths can be:
- Absolute (start with /) → /etc/nginx/nginx.conf
- Relative (from current directory) → ../logs/access.log
You use commands like cd (change directory), pwd (print working directory), and ls (list contents) to navigate and inspect the filesystem.
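A short, hypothetical session tying those three commands together (the directory name `demo` is just an example):

```shell
mkdir -p demo/src    # create something to explore
cd demo              # change into it
pwd                  # print the absolute path of where you now are
ls                   # list what's inside: src
cd ..                # go back up one level
```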
Common File Management Tools
Once you can move around, you'll often want to work with files:
Tool | Purpose |
---|---|
cp | Copy files or directories |
mv | Move or rename files and folders |
rm | Delete files or directories |
mkdir | Create new folders |
rmdir | Remove empty folders |
These tools respect file structure and permissions, and many support options for working recursively (e.g., with entire folders), prompting confirmation, or preserving metadata.
Why This Matters
Knowing how to navigate and manage the filesystem from the command line helps you:
- Run Git or SCP commands in the right folder
- Move or rename config files securely
- Understand system layouts when debugging
- Write shell scripts for automation
- Work effectively over SSH or in headless environments
Even when using advanced tools, CLI navigation is the bedrock of working productively in any Unix-like system.
cd
Command Guide for Moving Between Directories (Folders)
This guide helps you understand and use the cd (change directory) command, the core way to navigate the filesystem in any Unix-like environment.
What is cd, and Why Use It?
The cd command lets you move between directories (folders) in the terminal. It's the fundamental way to navigate the Unix file system, which is organized as a tree starting from the root (/).
Why Learn cd?
- Navigate the system: Move to configuration directories, user folders, or project files.
- Access files faster: Quickly reach your working directory.
- Enable other commands: Tools like ls, git, scp, or cp rely on being in the right location.
- Work remotely: On SSH sessions or servers without a GUI, cd is essential.
Without cd, you're stuck in one folder, limiting your ability to explore or manage the system.
1. Basic Usage
cd /path/to/folder
- Changes the working directory to the specified absolute path.
Example:
cd /etc/nginx/
2. Navigate Using Relative Paths
cd ..
- .. means "go up one level" from the current folder.
cd ../logs
- Moves up and into a sibling folder.
3. Shortcut to Home Directory
cd ~
- ~ refers to your home directory (e.g., /home/yourname on Linux).
cd
- Also takes you to your home directory.
4. Move Back to Previous Directory
cd -
- Returns you to the directory you were in before the last cd command.
5. Combine with ls for File Awareness
After navigating, use:
ls
- Lists the contents of the current directory so you can see what's inside.
Related Navigation Tools
Understanding cd is only the beginning. You'll commonly use these commands together:
Command | Description |
---|---|
ls | List contents of a directory |
pwd | Show the current working directory path |
cp | Copy files or directories |
mv | Move or rename files/folders |
rm | Delete files or directories |
mkdir | Create a new directory |
rmdir | Remove an empty directory |
Unix Filesystem Basics
Here's a quick overview of the standard structure you'll be navigating:
/
├── bin  → essential system programs
├── home → user folders (e.g., /home/alice)
├── etc  → system configuration files
├── var  → logs, spool files, cache
├── usr  → shared and user-installed software
└── tmp  → temporary runtime files
Every directory is part of this tree. You navigate it using cd, inspect it using ls, and interact with its contents using tools like cp, mv, and rm.
Navigation Cheatsheet
Command | Description |
---|---|
cd path/ | Move into a directory |
cd .. | Go up one directory level |
cd ~ | Go to your home directory |
cd - | Return to previous directory |
pwd | Show current directory path |
ls | List files and folders |
cp a b | Copy file a to b |
mv a b | Move or rename file a to b |
rm a | Delete file a |
mkdir name/ | Make a new folder |
mv
Command Guide for Moving and Renaming Files
This guide helps you understand and use the mv (move) command to move or rename files and directories in Unix-like systems.
What is mv, and Why Use It?
The mv command moves files or directories from one location to another, or renames them.
Why Learn mv?
- File organization: Move files into appropriate folders.
- Renaming: Quickly rename files without needing a dedicated rename tool.
- Efficient: Unlike cp, mv doesn't duplicate; it relocates or renames in place.
Because mv replaces files at the destination by default, it's often paired with safety flags like -i to avoid accidental overwrites.
1. Rename a File
mv oldname.txt newname.txt
- Changes the file name without changing its location.
2. Move a File to a Directory
mv file.txt /path/to/destination/
- Moves file.txt into the specified folder.
3. Move Multiple Files into a Directory
mv file1.txt file2.txt /destination/
- Places both files into the target directory.
4. Move a Directory
mv my_folder /new/location/
- Moves the folder and everything inside it to a new path.
5. Prompt Before Overwriting
mv -i file.txt /path/
- -i (interactive) asks before replacing any existing files.
6. Verbose Mode (Show What's Happening)
mv -v file.txt /path/
- -v (verbose) displays each operation, which is helpful when working with many files.
Move Cheatsheet
Command | Description |
---|---|
mv a.txt b.txt | Rename a.txt to b.txt |
mv file.txt /dir/ | Move file to directory |
mv file1 file2 /dir/ | Move multiple files into a folder |
mv -i file /dir/ | Ask before overwriting |
mv -v file /dir/ | Show each move operation |
mv folder /new/location/ | Move directory and its contents |
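A common real-world use of mv is a batch rename in a loop. This sketch, assuming a scratch directory with a couple of sample files, renames every .jpeg file to .jpg:

```shell
touch photo1.jpeg photo2.jpeg        # sample files for the demo
for f in *.jpeg; do
    mv -v "$f" "${f%.jpeg}.jpg"      # ${f%.jpeg} strips the old extension
done
ls
```

The `${f%.jpeg}` parameter expansion removes the .jpeg suffix, so each file keeps its base name while gaining the new extension.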
cp
Command Guide for Copying Files and Directories
This guide helps you understand and use the cp (copy) command, which lets you duplicate files and directories within a Unix-like file system.
What is cp, and Why Use It?
The cp command copies files or folders from one location to another. It's a fundamental tool when you want to:
- Back up a file before modifying it
- Duplicate a configuration or script
- Copy data from one directory or disk to another
Why Learn cp?
- Efficient backups: Quickly save a copy of critical files.
- Non-destructive: Unlike mv, cp leaves the original intact.
- Safety options: For example, asking before overwriting.
You'll frequently use cp when working with system files, scripts, or when preparing deployment directories.
1. Copy a File
cp source.txt destination.txt
- Duplicates source.txt as destination.txt, in the same or a different location.
2. Copy into a Directory
cp source.txt /path/to/directory/
- Copies source.txt into the specified folder.
3. Copy Multiple Files into a Directory
cp file1.txt file2.txt /target/folder/
- All listed files will be copied into the given folder.
4. Copy a Directory (Recursively)
cp -r my_folder/ /target/location/
- -r or --recursive copies a whole folder, including its contents and subfolders.
5. Prompt Before Overwriting
cp -i file.txt /path/
- -i (interactive) asks you before replacing an existing file at the destination.
6. Preserve Timestamps and Permissions
cp -p file.txt /path/
- -p keeps the original file's modification time, access time, and mode (permissions).
7. Verbose Mode (Show What's Happening)
cp -v file.txt /path/
- -v (verbose) prints each file copied, useful for confirming what's happening.
Copy Cheatsheet
Command | Description |
---|---|
cp a b | Copy file a to b |
cp a /dir/ | Copy a into the specified directory |
cp file1 file2 /dir/ | Copy multiple files into a folder |
cp -r folder /dir/ | Recursively copy a folder and contents |
cp -i file /dir/ | Prompt before overwriting |
cp -p file /dir/ | Preserve timestamps and permissions |
cp -v file /dir/ | Show whatβs being copied |
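One pattern worth internalizing is backing up a file before editing it. A minimal sketch (app.conf is a made-up example file):

```shell
echo "port=80" > app.conf        # stand-in for a real config file
cp -p app.conf app.conf.bak      # -p preserves timestamps and permissions
ls app.conf*
```

If an edit goes wrong, restoring is a single `cp app.conf.bak app.conf`.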
rm
Command Guide for Deleting Files and Directories
This guide explains how to use the rm (remove) command to delete files and directories safely in Unix-like systems.
What is rm, and Why Use It?
The rm command permanently deletes files or directories from the file system.
Why Learn rm?
- Free up space by removing unnecessary files
- Clean up scripts, logs, or temporary data
- Essential for file management in any Unix/Linux system
⚠️ Warning: rm does not move files to the trash; it deletes them immediately. Use it with care.
1. Remove a Single File
rm file.txt
- Deletes file.txt from the current directory.
2. Remove Multiple Files
rm file1.txt file2.txt
- Deletes both files at once.
3. Prompt Before Deleting (Safer)
rm -i file.txt
- -i (interactive) asks for confirmation before each deletion.
4. Remove a Directory and Its Contents
rm -r my_folder/
- -r (recursive) deletes the folder and all files and subfolders inside it.
5. Force Delete Without Prompt
rm -rf some_folder/
- -f (force) skips confirmation; -r ensures folders and their contents are deleted.
⚠️ Only use rm -rf when you're absolutely sure; it cannot be undone.
6. Verbose Mode (Show What's Being Deleted)
rm -v file.txt
- -v (verbose) prints each file as it's removed.
Remove Cheatsheet
Command | Description |
---|---|
rm file.txt | Delete a single file |
rm file1.txt file2.txt | Delete multiple files |
rm -i file.txt | Confirm before deletion |
rm -r folder/ | Delete folder and contents recursively |
rm -rf folder/ | Force delete a folder (no confirmation) |
rm -v file.txt | Show file(s) as they are deleted |
For safer usage, many users create an alias like alias rm='rm -i' to avoid accidents. Always double-check what you're removing, especially with wildcards like * or recursive flags.
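A safe habit with wildcards: preview the match with ls before handing the same pattern to rm. A sketch using throwaway example files:

```shell
touch a.log b.log keep.txt
ls *.log          # preview: a.log  b.log -- confirm this is what you expect
rm -v *.log       # then delete exactly that set
ls                # keep.txt survives
```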
mkdir
Command Guide for Creating Directories
This guide helps you use the mkdir (make directory) command to create new folders in your Unix/Linux file system.
What is mkdir, and Why Use It?
The mkdir command creates new directories (folders) in your current or specified location.
Why Learn mkdir?
- Organize files into logical structures
- Set up project folders quickly via CLI
- Automate directory creation in scripts or build systems
Directories are the backbone of how files are structured in Unix. Mastering mkdir is essential for working efficiently in the terminal.
1. Create a Single Directory
mkdir my_folder
Creates my_folder in the current working directory.
2. Create a Directory with a Full Path
mkdir /home/user/projects/my_folder
Makes the directory inside the specified path.
3. Create Parent Directories Automatically
mkdir -p path/to/my_folder
- -p creates any missing parent directories along the way.
Without -p, mkdir will fail if intermediate directories don't already exist.
4. Show Messages When Creating
mkdir -v my_folder
- -v (verbose) prints a message for each directory created.
5. Combine Options
mkdir -pv path/to/my_folder
Creates all necessary parent folders and prints confirmation.
mkdir Cheatsheet
Command | Description |
---|---|
mkdir folder | Create a single directory |
mkdir -p a/b/c | Create nested directories recursively |
mkdir -v folder | Verbose output when creating a directory |
mkdir -pv path/to/folder | Create all folders in path + show output |
💡 Tip: Use mkdir together with cd to create and move into a new directory in one line:
mkdir -p ~/projects/new && cd $_
$_ references the last argument of the previous command, which is handy for quick navigation.
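Combining these options, you can scaffold a small project tree in a single command; the folder names below are purely illustrative:

```shell
# Create three sibling directories (with any missing parents) in one go
mkdir -pv myapp/src myapp/docs myapp/tests
ls myapp
```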
rmdir
Command Guide for Removing Empty Directories
This guide covers how to use rmdir to delete empty directories in Unix/Linux systems.
What is rmdir, and Why Use It?
The rmdir (remove directory) command deletes empty directories. It's a safer alternative to rm -r when you're sure the folder is empty and want to avoid accidental data loss.
Why Use rmdir?
- Safe deletion: Only removes empty directories
- Useful in scripts: Ensures no unintended data removal
- Encourages cleanup: Helps you maintain a tidy filesystem
If the directory is not empty, rmdir will not delete it.
1. Remove a Single Empty Directory
rmdir my_folder
Deletes my_folder if it contains no files or subdirectories.
2. Remove Multiple Empty Directories
rmdir folder1 folder2 folder3
Attempts to delete all listed folders, skipping any that aren't empty.
3. Remove Directory Tree (Only if All Are Empty)
rmdir -p path/to/my_folder
- -p removes the directory and its parents if all are empty.
Example:
rmdir -p projects/demo/2025
Will remove:
- projects/demo/2025
- projects/demo (if empty)
- projects (if empty)
rmdir Cheatsheet
Command | Description |
---|---|
rmdir folder | Remove a single empty directory |
rmdir folder1 folder2 | Remove multiple empty directories |
rmdir -p path/to/folder | Remove folder and its empty parent dirs |
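To see the safety property in action, this sketch (throwaway directory names) shows rmdir refusing a non-empty directory and succeeding once it is empty:

```shell
mkdir -p tree/branch
touch tree/branch/leaf.txt
rmdir tree/branch || echo "refused: directory not empty"   # rmdir fails here
rm tree/branch/leaf.txt       # empty it out
rmdir -p tree/branch          # now both levels are empty and are removed
```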
Git & GitHub
This section covers tools and practices for working with Git and GitHub:
- Git Commands: Common Git workflows and commands.
- GitHub CLI: Setup and usage of GitHub CLI including SSH and GPG integration.
Git Commands Guide
A practical, beginner-friendly reference for essential Git commands and workflows.
What is Git?
Git is a tool that lets you track changes to files (usually code) and collaborate with others. It's called a version control system, and it's widely used in software development.
With Git, you can:
- Save your work at different points
- Go back to earlier versions
- Work with a team without overwriting each other's changes
Git works locally on your computer but also connects to remote services like GitHub.
1. Install Git
Before using Git, install it:
- Linux (Debian/Ubuntu):
sudo apt update
sudo apt install git
- macOS (with Homebrew):
brew install git
- Windows: Download from https://git-scm.com/downloads
After installation, check that it worked:
git --version
It should show the currently installed version.
2. Set Up Git
Tell Git who you are (this shows up in commits):
git config --global user.name "Your Name"
git config --global user.email "[email protected]"
You only need to do this once.
3. Start a Project
Option 1: Clone an Existing Project
If the code already exists online (like on GitHub):
git clone https://github.com/user/repo.git
cd repo
Option 2: Start a New Local Project
If you're creating a new project:
mkdir my-project
cd my-project
git init
This creates a hidden .git folder that tracks changes.
4. Check Project Status
See which files have changed:
git status
- Untracked: Git doesn't know about the file yet
- Staged: Marked for the next commit (usually via git add)
- Modified: Changed but not yet staged
5. Stage and Commit Changes
Save a snapshot of your work:
git add . # Stage all changes
git commit -m "Describe your change"
For git commit, use a clear message to explain what you did.
You can also add a detailed description:
git commit -m "Add login button" -m "Includes styles and click handler"
The first -m is the subject line; the second -m adds more detail.
You can sign commits using GPG keys (which can be created locally) by adding -S:
git commit -S -m "Add login button" -m "Includes styles and click handler"
6. Connect to a Remote (if the repo was created with git init, not cloned)
If you started with git init, add a remote:
git remote add origin https://github.com/yourname/project.git
git remote -v # Confirm remote is set
Push for the first time:
git push -u origin main
- -u sets this remote branch as the default upstream for future pushes.
7. Push Changes
After the first push, you can just use:
git push
This sends your committed changes to the remote branch.
8. Pull Updates
Get changes from the remote repository (pull before pushing further updates to the repository):
git pull
Always pull before pushing to avoid conflicts.
9. Branching
Create a new branch to work separately from main:
git checkout -b feature-login   # feature-login is the branch name
-b tells git checkout to create the branch before switching to it.
Example use case: building a new feature that isn't ready to be merged into the main branch.
While working on the feature branch, switch back to the main branch with:
git checkout main
10. Merging
Merge changes from one branch into another. First check out the target branch (for example, main), then merge the feature branch into it:
git checkout main
git merge feature-login
Resolve any conflicts if prompted.
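The whole branch-and-merge cycle can be rehearsed in a throwaway local repository; every name below (demo, feature, app.txt) is an example:

```shell
git init demo && cd demo
git config user.name "Demo" && git config user.email "[email protected]"
echo "v1" > app.txt
git add . && git commit -m "Initial commit"

git checkout -b feature        # branch off
echo "v2" > app.txt
git commit -am "Update app"    # -a stages already-tracked files automatically

git checkout -                 # back to the original branch
git merge feature              # brings the v2 change in
cat app.txt                    # prints v2
```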
10a. Merge Conflicts
When: You and someone else changed the same line.
Git will mark the file like:
<<<<<<< HEAD
your version
=======
their version
>>>>>>> branch-name
Steps to fix:
- Edit the file to resolve the conflict
- Stage and commit:
git add resolved_file
git commit -m "Resolve conflict"
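These steps can be practiced safely too. This sketch manufactures a conflict on purpose in a throwaway repository and then resolves it (all names are examples):

```shell
git init cdemo && cd cdemo
git config user.name "Demo" && git config user.email "[email protected]"
echo "base" > f.txt && git add . && git commit -m "base"

git checkout -b topic
echo "their version" > f.txt && git commit -am "topic change"

git checkout -
echo "your version" > f.txt && git commit -am "main change"

git merge topic || true        # fails: both branches edited the same line
grep "<<<<<<<" f.txt           # conflict markers are now in the file

echo "resolved version" > f.txt   # edit the file to resolve the conflict
git add f.txt
git commit -m "Resolve conflict"
```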
11. View History
See what's been done and when:
git log # Detailed history
git log --oneline --graph # Compact visual format
12. Stash Temporary Work
Save unfinished changes:
git stash
Bring them back later into your working tree:
git stash pop
13. Undo Mistakes
Revert a commit (safe):
git revert <commit-hash>
What is a Commit Hash?
Every Git commit has a unique ID called a commit hash: a long string of letters and numbers that looks like:
a3c9e83b2d7ff1e9855a6e4b9b7297f0637b59f8
- It uniquely identifies a snapshot of your project.
- You can use it to undo, view, or refer to specific commits.
How to Find the Commit Hash
Run:
git log
You'll see output like:
commit a3c9e83b2d7ff1e9855a6e4b9b7297f0637b59f8
Author: Your Name <[email protected]>
Date: Fri Jun 14 12:00:00 2024
Add login feature
The long string after commit is the commit hash.
Reset to a previous state (dangerous):
git reset --hard a3c9e83
- Moves your branch back to that commit
- Deletes all commits after it
- Use only if you're sure (and haven't pushed yet)
Be careful: this permanently removes history.
After a reset, you will need to force-push, because you are rewriting history that the remote already has:
git push --force
Real-World Examples
Start a new project and push it
mkdir project
cd project
git init
git remote add origin https://github.com/you/project.git
# Add files and commit
git add .
git commit -m "Initial commit"
git push -u origin main
Fix a typo and push
nano README.md
git add README.md
git commit -m "Fix typo"
git push
Create, Push, and Merge a Feature Branch
git checkout -b feature/login # Create and switch to a new feature branch
# Make changes to your code
git add . # Stage all changes
git commit -m "Add login feature" # Commit your work
git push -u origin feature/login # Push the branch and set upstream
After finishing your feature:
git checkout main # Switch to the main branch
git pull origin main # Make sure main is up to date
git merge feature/login # Merge the feature branch into main
git push origin main                  # Push the updated main branch
Revert a broken commit
git log # Find the hash
git revert abc1234
Stash changes to update main
git stash
git pull
git stash pop
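A stash round-trip can also be tried locally. This sketch (throwaway repository, example file names) shows changes disappearing and then coming back:

```shell
git init sdemo && cd sdemo
git config user.name "Demo" && git config user.email "[email protected]"
echo "v1" > notes.txt && git add . && git commit -m "base"

echo "work in progress" >> notes.txt
git stash                      # working tree is clean again
cat notes.txt                  # prints v1

git stash pop                  # the unfinished change returns
cat notes.txt
```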
Git Cheatsheet
Command | What it Does |
---|---|
git init | Start a new Git repo |
git clone <URL> | Copy a remote repo |
git status | Show current file states |
git add . | Stage all changes |
git commit -m "" | Save changes with a message |
git checkout -b <branch> | Create and switch to a new branch |
git push -u origin <branch> | Push a branch and set upstream (first push) |
git pull origin main | Fetch and merge changes from remote main |
git merge <branch> | Merge another branch into the current one |
git push origin main | Push your changes to the remote main branch |
git stash | Temporarily store local changes |
git stash pop | Re-apply stashed changes |
git reset --hard <commit> | Remove commits and changes (destructive) |
git revert <commit> | Undo a commit safely by creating a new one |
git log | View full commit history |
git log --oneline --graph | Compact history with branch visualization |
git remote -v | Show connected remotes |
git remote add origin <URL> | Add a new remote to your repo |
GitHub CLI Workflow Guide (with SSH, PRs & Issues)
A beginner-friendly guide to working with GitHub CLI (gh), including SSH setup, repositories, pull requests, and issues.
What is GitHub and Why Use It?
GitHub is a web-based platform built on top of Git, the version control system. It allows developers and teams to:
- Host code repositories online
- Collaborate with others through pull requests and reviews
- Track issues and bugs with a built-in issue tracker
- Deploy and automate with GitHub Actions
- Secure and manage code using permissions, signed commits, and access controls
Why Use GitHub?
Benefit | What It Means |
---|---|
Centralized Hosting | Your code lives online, accessible anywhere |
Collaboration | Teams can work together without overwriting work |
Issue Tracking | Manage bugs and feature requests with visibility |
Pull Requests | Code review workflow ensures clean, quality code |
Version History | Go back to any version of your code at any time |
CI/CD Integration | Automate tests, builds, and deployments |
GitHub makes it easy to scale from solo projects to large enterprise teams.
1. Install GitHub CLI
macOS:
brew install gh
Ubuntu/Debian:
sudo apt install gh
Check it's working:
gh --version
It will show the currently installed version.
2. Authenticate with GitHub (SSH)
gh auth login
Follow the prompts:
- Select: GitHub.com
- Choose protocol: SSH
- Login method: Login with browser
3. Generate SSH Key (if needed)
ssh-keygen -t ed25519 -C "[email protected]"
-t selects the key type (algorithm) and -C adds a comment, usually your email.
Save it to the default path and set a secure passphrase.
4. Add SSH Key to SSH Agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
5. Add SSH Key to GitHub
gh ssh-key add ~/.ssh/id_ed25519.pub --title "My Dev Machine"
6. (Optional) Add GPG Key for Signed Commits
If you've created a GPG key:
gpg --armor --export <GPG_KEY_ID> | gh gpg-key add -
Find your GPG key ID:
gpg --list-secret-keys --keyid-format=long
7. Verify Authentication
gh auth status # View login method
ssh -T [email protected] # Test SSH
8. Create and Push a New Repo
mkdir my-app && cd my-app
git init
gh repo create my-app --public --source=. --remote=origin --push
9. Clone a Repo
gh repo clone username/repo-name
cd repo-name
10. Feature Branch + PR Workflow
git checkout -b feature/login
# Make changes
git add .
git commit -m "Add login feature"
git push -u origin feature/login
Create a Pull Request
gh pr create --base main --head feature/login --title "Add login" --body "Implements login form"
Merge PR (when approved)
gh pr merge 123 --merge # Standard merge
# or
gh pr merge 123 --squash # Combine all commits
You can also use gh pr merge with no number if you're on the pull request's branch.
11. Manage Issues
Create a New Issue
gh issue create --title "Bug: Login fails on Safari" --body "Steps to reproduce..."
View Open Issues
gh issue list
View a Specific Issue
gh issue view 42
Comment on an Issue
gh issue comment 42 --body "Looking into this now!"
Close an Issue
gh issue close 42
12. Git Commands Reference
Basic Git commands are not covered here. See full Git guide β git-commands.md
GitHub CLI Cheatsheet
Command | Description |
---|---|
gh auth login | Log in to GitHub |
gh repo create | Create a new repo (and push local files) |
gh repo clone user/repo | Clone a repo using SSH |
gh pr create | Create a pull request |
gh pr list | List open pull requests |
gh pr view [id] | View a pull request |
gh pr merge [id] | Merge a pull request |
gh issue create | Open a new issue |
gh issue list | View issues |
gh issue comment | Add comment to issue |
gh issue close | Close an issue |
gh ssh-key add | Add SSH key to GitHub |
gh gpg-key add | Add GPG key to GitHub |
Security
Guides related to security tools and practices, including:
Access Git-based repositories securely
Secure access to GitHub is essential for maintaining the integrity, authenticity, and confidentiality of your code and development environment. Below are the recommended methods:
1. SSH Key Authentication
- Generate an SSH key pair and add the public key to your GitHub account.
- Authenticate securely without entering your GitHub credentials each time.
- Ideal for: Frequent Git users and automation scripts.
2. GPG Key Setup (Recommended for Commit Signing)
- Generate and configure GPG keys to cryptographically sign your commits.
- Signed commits verify that changes came from a trusted sourceβyou.
- Ideal for: Open source contributors, teams with strict audit/compliance needs.
3. Personal Access Tokens (PATs)
- Used as an alternative to passwords when accessing GitHub over HTTPS.
- Scoped tokens provide controlled access to repositories or actions.
- Ideal for: CI/CD tools, automated scripts, or API integrations.
4. Two-Factor Authentication (2FA)
- Add an extra layer of security to your GitHub login using SMS or TOTP apps.
- Reduces risk from password compromise.
- Essential for: All users.
Why Choose GPG for commit signing?
- Verified Identity: Signed commits show a verified badge on GitHub.
- Audit Trail: Helps teams track who made what changes, securely.
- Defense Against Tampering: Ensures the commit wasn't modified by a malicious actor before being merged or pulled.
While GPG doesn't secure repository access like SSH does, it secures the integrity and authorship of the content itself, making it especially useful in collaborative or regulated environments.
GPG Key Setup Guide for Git Commit Signing
This guide helps you generate and configure a GPG key so you can sign your Git commits and prove authorship.
What is a GPG Key, and Why Use It?
GPG (GNU Privacy Guard) is a tool for secure communication and file encryption. In the context of Git:
- A GPG key is a digital identity made of a public and private key pair.
- You use your private key to digitally sign Git commits.
- Others (like GitHub) can verify your signature using your public key.
Why Sign Commits?
- Proves authorship: Confirms the commit really came from you.
- Increases trust: The "Verified" badge on GitHub helps collaborators trust your changes.
- Protects integrity: Ensures commits weren't tampered with after you made them.
Without GPG signing, anyone could technically commit using your name and email, and Git wouldn't know the difference.
1. Install GPG
macOS (with Homebrew)
brew install gnupg
Ubuntu/Debian
sudo apt update && sudo apt install gnupg
Confirm installation:
gpg --version
2. Generate a GPG Key
Run:
gpg --full-generate-key
Choose:
- Key type: 1 (RSA and RSA)
- Key size: 4096
- Expiration: Set as needed (e.g., 1y, or 0 for no expiry)
- Input your name, email, and a strong passphrase
3. List Your GPG Keys
gpg --list-secret-keys --keyid-format LONG
Look for:
sec rsa4096/0123ABCD4567EFGH 2025-06-12 [SC]
uid [ultimate] Your Name <[email protected]>
Copy the key ID after rsa4096/ (e.g., 0123ABCD4567EFGH).
4. Export Your Public GPG Key
gpg --armor --export 0123ABCD4567EFGH
Copy the entire output; it begins with:
-----BEGIN PGP PUBLIC KEY BLOCK-----
5. Add GPG Key to GitHub
- Go to GitHub → Settings → SSH and GPG keys
- Click "New GPG key"
- Paste your exported key
- Click "Add GPG key"
6. Tell Git to Use Your GPG Key
git config --global user.signingkey 0123ABCD4567EFGH
git config --global commit.gpgsign true
If using GPG ≥ 2.1:
echo "use-agent" >> ~/.gnupg/gpg.conf
macOS only: Pinentry fix (for keychain prompts)
brew install pinentry-mac
echo "pinentry-program /opt/homebrew/bin/pinentry-mac" >> ~/.gnupg/gpg-agent.conf
killall gpg-agent
7. Test a Signed Commit
git commit -S -m "Test signed commit"
GitHub will show a "Verified" label if the commit is properly signed.
GPG Key Cheatsheet
Command | Description |
---|---|
gpg --full-generate-key | Generate new GPG key |
gpg --list-secret-keys | View your private keys |
gpg --armor --export <ID> | Export public key |
git config --global user.signingkey <ID> | Use key in Git |
git commit -S -m "msg" | Sign a Git commit |
git config --global commit.gpgsign true | Auto-sign all commits |
gpg --edit-key <ID> | Manage a key |
gpg --delete-keys <ID> | Remove a public key |
gpg --delete-secret-keys <ID> | Remove a private key |
Server Tools
This section covers essential tools and commands for managing remote servers, including:
- SSH: Securely connect to remote machines.
- SCP: Transfer files between local and remote systems using the secure copy protocol.
SSH Key Setup Guide for Git and Remote Access
This guide helps you generate and use an SSH key for secure authentication with GitHub and remote servers.
What is an SSH Key, and Why Use It?
SSH (Secure Shell) keys are a secure way to authenticate without using a password. Like GPG keys, SSH uses a key pair:
- Your private key stays on your computer.
- Your public key is shared with remote services (like GitHub or a server).
Why Use SSH Keys?
- Secure logins without typing passwords
- Authenticate Git operations (clone, push, pull) securely
- Access remote servers with one command
GitHub and many servers prefer SSH over HTTPS for better security and ease of use.
1. Generate a New SSH Key
ssh-keygen -t ed25519 -C "[email protected]"
- Use ed25519 (modern, faster, and more secure than RSA)
- When prompted, save to the default path: ~/.ssh/id_ed25519
- Optionally set a passphrase for extra protection
2. Add SSH Key to Agent
Start the agent and add your key:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
If using a custom filename, replace with your actual key path.
3. Add SSH Key to GitHub
- Copy the public key:
cat ~/.ssh/id_ed25519.pub
- Go to GitHub → Settings → SSH and GPG keys
- Click "New SSH Key", give it a title, and paste your public key
4. Test GitHub SSH Connection
ssh -T [email protected]
If successful, you'll see a message like:
Hi your-username! You've successfully authenticated.
5. Connect to Remote Server
To SSH into a server:
ssh user@host
Replace user with your server username and host with the server's IP address or domain name.
6. Copy SSH Key to Server
Quickly add your public key to a remote server:
ssh-copy-id user@host
Adds your key to the server's ~/.ssh/authorized_keys file.
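Under the hood, ssh-copy-id essentially appends your public key to that file. The server-side steps, simulated here in a temporary directory (the key string is a truncated placeholder):

```shell
# Simulate what ssh-copy-id does on the server: create the .ssh directory with
# correct permissions and append the key only if it is not already present.
server_ssh=$(mktemp -d)          # stands in for ~/.ssh on the remote server
pubkey='ssh-ed25519 AAAAC3placeholder [email protected]'
chmod 700 "$server_ssh"
auth="$server_ssh/authorized_keys"
touch "$auth" && chmod 600 "$auth"
grep -qxF "$pubkey" "$auth" || printf '%s\n' "$pubkey" >> "$auth"
grep -qxF "$pubkey" "$auth" || printf '%s\n' "$pubkey" >> "$auth"   # second call is a no-op
wc -l < "$auth"
```

The `grep -qxF` guard makes the operation idempotent, which is why running ssh-copy-id twice does not duplicate your key.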
7. Use SSH Config File for Convenience
Set up shortcuts and default identities.
Edit: ~/.ssh/config
Host github.com
HostName github.com
User git
IdentityFile ~/.ssh/id_ed25519
Host my-server
HostName example.com
User ubuntu
IdentityFile ~/.ssh/id_ed25519
Now you can connect with:
ssh github.com
ssh my-server
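Config entries like these can also be generated from a script. A sketch that appends a Host block only if it is not already defined, writing to a temporary file for demonstration (point cfg at ~/.ssh/config in real use; the host and user values are illustrative):

```shell
# Idempotently add a Host shortcut to an SSH config file.
cfg=$(mktemp)
add_host() {
  # $1 = alias, $2 = hostname, $3 = user
  if ! grep -q "^Host $1\$" "$cfg" 2>/dev/null; then
    printf 'Host %s\n    HostName %s\n    User %s\n    IdentityFile ~/.ssh/id_ed25519\n' \
      "$1" "$2" "$3" >> "$cfg"
  fi
}
add_host my-server example.com ubuntu
add_host my-server example.com ubuntu   # duplicate call changes nothing
grep -c '^Host my-server$' "$cfg"
```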
SSH Cheatsheet
Command | Description |
---|---|
ssh-keygen -t ed25519 | Create new SSH key pair |
ssh-add <key> | Add key to SSH agent |
ssh -T [email protected] | Test GitHub SSH auth |
ssh user@host | Connect to remote server |
ssh-copy-id user@host | Install public key on server |
cat ~/.ssh/id_ed25519.pub | View public key content |
~/.ssh/config | Define shortcuts and defaults |
SCP Key Setup Guide for Secure File Transfers
This guide helps you use SCP (Secure Copy Protocol) to securely transfer files between your local machine and a remote server, or between two remote systems.
What is SCP, and Why Use It?
SCP is a secure command-line tool to copy files over SSH. It encrypts both the files and any authentication information during transfer.
Why Use SCP?
- Securely transfer files over the internet
- Works with any server that supports SSH
- Simple syntax for fast file operations
- No need to set up FTP or other services
Prerequisites
Before using SCP, you must:
1. Ensure SSH is Installed
On Linux/macOS (usually pre-installed):
ssh -V
On Windows (PowerShell 7+ or Git Bash):
ssh -V
If not installed, install OpenSSH from Windows Features or Git for Windows
2. Generate an SSH Key (if needed)
ssh-keygen -t ed25519 -C "[email protected]"
When prompted for a passphrase, you can either:
- Leave it empty (less secure, but no password prompts), or
- Set a passphrase (recommended), and follow step 3 to avoid repeated prompts.
3. Add SSH Key to SSH Agent
This step ensures your key is unlocked once per session, and avoids repeated password prompts.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
If your key has a passphrase, you'll be asked to enter it once here. After that, the agent keeps it in memory.
4. Copy Your SSH Key to the Remote Server
ssh-copy-id user@server-ip
This adds your public key to the remote server so you can connect without a password.
1. Basic Syntax
scp [options] source destination
You can copy:
- From local → remote
- From remote → local
- Between two remote servers
2. Copy a File from Local to Remote
scp file.txt user@server-ip:/path/to/destination/
Sends file.txt from your computer to a directory on the remote machine.
3. Copy a File from Remote to Local
scp user@server-ip:/path/to/file.txt /local/destination/
Downloads a file from the remote machine to the specified local directory.
4. Copy a Directory Recursively
scp -r my_folder user@server-ip:/path/to/destination/
-r copies an entire folder and its contents recursively.
5. Use a Specific Port
scp -P 2222 file.txt user@server-ip:/path/
If your SSH server runs on a non-default port (not 22), specify it with -P.
6. Copy Between Two Remote Hosts
scp user1@host1:/path/file.txt user2@host2:/path/
Useful for transferring files directly between two remote servers
7. Preserve File Attributes
scp -p file.txt user@server-ip:/path/
Keeps the original timestamps and modes (permissions)
8. Increase Verbosity (Debugging)
scp -v file.txt user@server-ip:/path/
Shows detailed output of the copy process, which is helpful for troubleshooting.
SCP Cheatsheet
Command | Description |
---|---|
scp file user@server-ip:/path/ | Copy file to remote |
scp user@server-ip:/file ./ | Copy file from remote |
scp -r dir user@server-ip:/path/ | Recursively copy directory |
scp -P 2222 file user@server-ip:/path/ | Use specific port |
scp -p file user@server-ip:/path/ | Preserve timestamps and permissions |
scp -v file user@server-ip:/path/ | Verbose output for debugging |
Containers
This section covers containerisation technologies and tools.
Overview
Containerisation is a method of packaging applications and their dependencies into isolated, lightweight units called containers. Unlike virtual machines, containers share the host system's kernel, making them more resource-efficient and faster to start.
Why Containers?
- Consistency: Run the same app in dev, staging, and production.
- Isolation: Apps run in separate environments without interfering.
- Portability: Move containers across environments or cloud providers with ease.
- Scalability: Ideal for microservices and dynamic workloads.
Topics Covered
- Conceptual understanding of containers
- Docker as a container platform
- Container orchestration tools (e.g., Docker Swarm, Kubernetes)
- Best practices for working with containerised applications
Docker
Docker is a popular platform that simplifies building, running, and managing containers. It provides:
- A standard format for container images
- Tools for defining and managing container lifecycles
- Support for networks, volumes, and multi-container apps
You'll find Docker used widely in both development and production environments.
For detailed usage, syntax, and examples, see the dedicated Docker sections.
Kubernetes and Orchestration
As applications grow, managing containers manually becomes inefficient. Orchestration tools help automate:
- Deployment
- Scaling
- Networking
- Rolling updates and fault recovery
We cover:
- Docker Swarm: A simple, native Docker orchestration system.
- Kubernetes: A powerful, production-grade orchestration platform widely used in cloud-native environments.
Best Practices
When working with containers:
- Use minimal base images (e.g., alpine) to reduce attack surface and size.
- Separate concerns: one service per container.
- Handle secrets securely: don't bake credentials into images.
- Scan images regularly for vulnerabilities.
- Use orchestration for production workloads.
- Clean up unused resources to avoid clutter and reclaim space.
Further Topics
- Container registries (e.g., Docker Hub, GitHub Container Registry)
- CI/CD (continuous integration / continuous deployment) pipelines with container integration
- Comparing containers to virtual machines
- Container security principles and tooling
Containers form the backbone of modern DevOps, microservices, and cloud-native architecture. Understanding them sets the stage for efficient, scalable, and reliable application delivery.
Docker Command Guide
A comprehensive guide to installing, managing, and using Docker containers.
What is Docker?
Docker is a tool that lets you package applications into containers: lightweight, portable environments that run consistently across machines.
With Docker, you can:
- Package code, dependencies, and system tools into one unit
- Run the same app on your laptop, server, or cloud
- Avoid "it works on my machine" problems
- Test, scale, and deploy software faster
Docker helps developers isolate applications and ship them with confidence.
Think of a container as a box with everything your app needs to run, regardless of where it's deployed.
Familiarity with Linux CLI
If you've used the Linux command line before, many Docker commands will feel familiar. Docker's CLI follows similar patterns for file navigation, process control, and permission handling. This makes learning Docker more intuitive for users with a Linux or Unix-like background.
Installation
Ubuntu/Debian
sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
CentOS/RHEL/Fedora
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
Arch Linux
sudo pacman -Syu docker
Void Linux
sudo xbps-install -S docker
sudo ln -s /etc/sv/docker /var/service/docker
Alpine Linux
sudo apk update
sudo apk add docker
sudo rc-update add docker boot
Gentoo
sudo emerge --ask app-containers/docker
sudo rc-update add docker default
sudo /etc/init.d/docker start
NixOS
Enable Docker in your configuration file (/etc/nixos/configuration.nix):
virtualisation.docker.enable = true;
Then apply the changes:
sudo nixos-rebuild switch
OpenSUSE (Tumbleweed / Leap)
sudo zypper refresh
sudo zypper install docker
Generic Linux (via Docker convenience script)
For unsupported or minimal distros:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
macOS
Install Docker Desktop from https://docker.com/products/docker-desktop
Windows
Install Docker Desktop from https://docker.com/products/docker-desktop
Post-Installation Setup
Add your user to docker group (Linux):
sudo usermod -aG docker $USER
newgrp docker
Start Docker service:
sudo systemctl start docker
sudo systemctl enable docker
Verify installation:
docker --version # shows currently installed version
docker run hello-world
Basic Container Operations
Run a container
docker run ubuntu
Run container interactively
docker run -it ubuntu bash
Run container in background (detached)
docker run -d nginx
Run with custom name
docker run --name my-nginx nginx
Run with port mapping
docker run -p 8080:80 nginx
This means the container's internal port 80 (where the Nginx server is listening) is mapped to port 8080 on your local machine. When this container is running, users can access the web server by visiting http://localhost:8080 on the host system. Without this mapping, the service inside the container would not be reachable from outside the container. Port mapping is essential for exposing containerised services to the host or public network.
Run with volume mounting
docker run -v /host/path:/container/path ubuntu
Run with environment variables
docker run -e MYSQL_ROOT_PASSWORD=mypassword mysql
Run and remove when stopped
docker run --rm ubuntu echo "Hello World"
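These flags combine freely. As a sketch, here is a composed invocation built from variables and printed rather than executed, so it can be reviewed first (the container name, ports, and mount path are illustrative):

```shell
# Combine common `docker run` flags into one reviewable command string.
set -- docker run -d --rm --name my-nginx \
  -p 8080:80 \
  -e TZ=UTC \
  -v "$PWD/html:/usr/share/nginx/html" \
  nginx
cmd="$*"          # join the arguments into a single printable string
echo "$cmd"       # inspect, then run "$@" against a live Docker daemon
```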
Container Management
List running containers
docker ps
ps stands for process status
List all containers (including stopped)
docker ps -a
Stop a container
docker stop container_name
Start a stopped container
docker start container_name
Restart a container
docker restart container_name
Remove a container
docker rm container_name
Remove all stopped containers
docker container prune
Force remove a running container
docker rm -f container_name
Execute command in running container
docker exec -it container_name bash
The command docker exec -it container_name bash runs an interactive shell inside a running container. exec executes a command without restarting the container, -i keeps input open, and -t allocates a terminal. Running bash starts a shell session, useful for debugging, managing files, or exploring the container much like an SSH session.
Copy files to/from container
docker cp file.txt container_name:/path/to/destination
docker cp container_name:/path/to/file.txt ./local/path
View container logs
docker logs container_name
Follow logs in real-time
docker logs -f container_name
View container resource usage
docker stats
Inspect container details
docker inspect container_name
Image Management
List local images
docker images
Pull an image from registry
docker pull ubuntu:20.04
Remove an image
docker rmi image_name
Remove unused images
docker image prune
Remove all unused images
docker image prune -a
Build image from Dockerfile
docker build -t my-app .
Build with custom Dockerfile name
docker build -t my-app -f custom.dockerfile .
Tag an image
docker tag source_image:tag target_image:tag
Push image to registry
docker push username/image_name:tag
Search for images
docker search nginx
View image history
docker history image_name
Docker Compose
docker compose vs docker-compose
There are two ways to use Docker Compose:
- docker-compose (hyphenated): the older Python-based standalone command.
- docker compose (space-separated): the newer plugin-based command integrated into the Docker CLI.
Key Differences
Feature | docker-compose | docker compose |
---|---|---|
CLI style | Hyphenated: docker-compose up | Subcommand: docker compose up |
Installation | Separate binary | Bundled with Docker (v20.10+) |
Written in | Python | Go (as a Docker CLI plugin) |
Lifecycle | Being deprecated | Actively maintained |
Performance | Slightly slower | Faster and better Docker integration |
Compatibility | Widely supported | Preferred for modern setups |
Which Should You Use?
Use docker compose (with a space) if you're using Docker v20.10+, as it is the officially supported method going forward. The old docker-compose is being phased out and may not receive future updates.
Tip: If you're still using docker-compose, consider upgrading your Docker CLI and migrating to docker compose.
Migration Note
Both commands support the same docker-compose.yml file format, so switching typically requires no changes to your YAML files; only the command syntax changes.
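During a migration, scripts can support both commands with a small shim. A sketch that prefers the plugin and falls back to the standalone binary (on a machine with neither installed, it reports none):

```shell
# Pick whichever Compose command is available: plugin first, then standalone.
if docker compose version >/dev/null 2>&1; then
  compose="docker compose"
elif command -v docker-compose >/dev/null 2>&1; then
  compose="docker-compose"
else
  compose=""        # neither is installed on this machine
fi
echo "using: ${compose:-none}"
# Later in the script: $compose up -d
```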
Start services defined in docker-compose.yml
Note: the docker-compose command is deprecated. Use docker compose instead.
docker compose up
Start services in background
docker compose up -d
Stop services
docker compose down
View running services
docker compose ps
View logs
docker compose logs
Follow logs
docker compose logs -f
Rebuild and start services
docker compose up --build
Scale a service
docker compose up --scale web=3
Execute command in service
docker compose exec service_name bash
Networking
List networks
docker network ls
Create a network
docker network create my-network
Connect container to network
docker network connect my-network container_name
Disconnect container from network
docker network disconnect my-network container_name
Run container on specific network
docker run --network my-network nginx
Remove network
docker network rm my-network
Inspect network
docker network inspect my-network
Volumes
List volumes
docker volume ls
Create a volume
docker volume create my-volume
Remove a volume
docker volume rm my-volume
Remove unused volumes
docker volume prune
Inspect volume
docker volume inspect my-volume
Run container with named volume
docker run -v my-volume:/data ubuntu
Run container with bind mount
docker run -v /host/path:/container/path ubuntu
Common Docker Setups
Web Server (Nginx)
docker run -d --name web-server -p 80:80 -v /path/to/html:/usr/share/nginx/html nginx
Database (MySQL)
docker run -d --name mysql-db -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=mydb -p 3306:3306 mysql:8.0
Database (PostgreSQL)
docker run -d --name postgres-db -e POSTGRES_PASSWORD=password -e POSTGRES_DB=mydb -p 5432:5432 postgres:13
Redis Cache
docker run -d --name redis-cache -p 6379:6379 redis:alpine
WordPress with MySQL
docker run -d --name wordpress-db -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=wordpress mysql:8.0
docker run -d --name wordpress-site --link wordpress-db:mysql -p 8080:80 -e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_NAME=wordpress -e WORDPRESS_DB_PASSWORD=password wordpress
Node.js Application
docker run -d --name node-app -p 3000:3000 -v /path/to/app:/usr/src/app -w /usr/src/app node:16 npm start
Dockerfile Examples
Basic Node.js Dockerfile
Create a file named Dockerfile:
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Build and run:
docker build -t my-node-app .
docker run -p 3000:3000 my-node-app
Basic Python Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
Multi-stage Build Example
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Docker Compose Examples
Basic docker-compose.yml
version: '3.8'
services:
web:
image: nginx
ports:
- "80:80"
volumes:
- ./html:/usr/share/nginx/html
db:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: myapp
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
WordPress with Docker Compose
version: '3.8'
services:
wordpress:
image: wordpress:latest
ports:
- "8080:80"
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: password
WORDPRESS_DB_NAME: wordpress
depends_on:
- db
db:
image: mysql:8.0
environment:
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
MYSQL_PASSWORD: password
MYSQL_ROOT_PASSWORD: rootpassword
volumes:
- db_data:/var/lib/mysql
volumes:
db_data:
Note: storing secrets such as passwords in a Docker Compose file is not secure. Use a secrets manager (e.g., Docker secrets) or keep them in a .env file that is excluded from version control.
System Maintenance
View Docker system info
docker system info
View disk usage
docker system df
Clean up everything
docker system prune -a
Clean up with volumes
docker system prune -a --volumes
Remove all containers
docker rm -f $(docker ps -aq)
Remove all images
docker rmi -f $(docker images -q)
Update all images
docker images --format "table {{.Repository}}:{{.Tag}}" | tail -n +2 | xargs -L1 docker pull
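The update-all one-liner above is plain text processing wrapped around `docker pull`. Here is the same idea demonstrated on canned `docker images` output, with the pull echoed instead of executed so it runs without a Docker daemon:

```shell
# Strip the header row, then turn each image:tag into a pull command.
sample='REPOSITORY:TAG
nginx:1.21-alpine
redis:alpine
ubuntu:22.04'
cmds=$(printf '%s\n' "$sample" | tail -n +2 | sed 's/^/docker pull /')
printf '%s\n' "$cmds"
```

Against a live daemon, `xargs -L1 docker pull` (as in the one-liner above) executes one pull per line instead of printing it.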
Registry Operations
Login to Docker Registry
docker login
Login to private registry
docker login registry.example.com
Tag for registry
docker tag my-app username/my-app:v1.0
Push to registry
docker push username/my-app:v1.0
Pull from private registry
docker pull registry.example.com/my-app:v1.0
Troubleshooting
Container won't start
Check logs:
docker logs container_name
Permission denied errors
Fix ownership (Linux):
sudo chown -R $USER:$USER /path/to/files
Port already in use
Find process using port:
sudo lsof -i :8080
Kill process:
sudo kill -9 PID
Out of disk space
Clean up Docker:
docker system prune -a --volumes
Container runs but app doesn't work
Check if container is listening on correct port:
docker exec container_name netstat -tlnp
Can't connect to Docker daemon
Start Docker service:
sudo systemctl start docker
Add user to docker group:
sudo usermod -aG docker $USER
Security Best Practices
Run containers as non-root user
docker run --user 1000:1000 ubuntu
Limit container resources
docker run --memory="256m" --cpus="1.0" ubuntu
Use read-only filesystem
docker run --read-only ubuntu
Drop capabilities
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
Scan images for vulnerabilities
docker scout quickview image_name
Use official images when possible
docker pull nginx:alpine
Performance Tips
Use specific image tags
Instead of:
docker pull nginx:latest
Use:
docker pull nginx:1.21-alpine
Use multi-stage builds
Reduces final image size by excluding build dependencies.
Use .dockerignore
Create a .dockerignore file to exclude unnecessary files:
node_modules
.git
*.log
.env
Optimise layer caching
Place frequently changing instructions at the bottom of the Dockerfile so earlier layers stay cached.
Use Alpine images
docker pull node:16-alpine
Smaller size than full distributions.
Common Workflows
Development Workflow
# 1. Create Dockerfile
nano Dockerfile
# 2. Build image
docker build -t my-app .
# 3. Run container
docker run -d -p 8080:80 --name my-app-container my-app
# 4. Check logs
docker logs my-app-container
# 5. Update and rebuild
docker stop my-app-container
docker rm my-app-container
docker build -t my-app .
docker run -d -p 8080:80 --name my-app-container my-app
Production Deployment
# 1. Pull latest code
git pull origin main
# 2. Build production image
docker build -t my-app:prod .
# 3. Stop old container
docker stop my-app-prod
# 4. Remove old container
docker rm my-app-prod
# 5. Start new container
docker run -d --name my-app-prod -p 80:8080 my-app:prod
Further Reading
Quick Reference
Command | Description |
---|---|
docker run -it | Run interactively |
docker run -d | Run in background |
docker ps | List running containers |
docker ps -a | List all containers |
docker stop | Stop container |
docker rm | Remove container |
docker images | List images |
docker rmi | Remove image |
docker exec -it | Execute command in container |
docker logs | View container logs |
docker build | Build image from Dockerfile |
docker compose up | Start services |
docker compose down | Stop services |
docker system prune | Clean up unused resources |
Docker Swarm
Docker Swarm is Docker's native clustering and orchestration tool. It lets you manage a group of Docker hosts (nodes) as a single, virtual Docker engine, enabling scaling, high availability, and fault tolerance.
With Docker Swarm, you can:
- Run the same services across multiple machines
- Automatically distribute containers (called services) across nodes
- Scale services up or down easily
- Deploy a full application stack using a single file (docker-compose.yml)
- Maintain zero-downtime updates via rolling deployments
Think of Swarm as a self-organising, load-balanced Docker cluster.
Swarm Basics
Initialise a Swarm
docker swarm init
- Promotes your current machine to a manager node
- Outputs a docker swarm join command for other nodes
Join a Swarm (Worker or Manager)
docker swarm join --token <token> <manager-ip>:2377
Get the token using:
docker swarm join-token worker
docker swarm join-token manager
Leave a Swarm
docker swarm leave
Force-leave if manager:
docker swarm leave --force
Node Management
List Swarm Nodes
docker node ls
Inspect a Node
docker node inspect <node-id>
Drain a Node (prevent it from running containers)
docker node update --availability drain <node-id>
Re-enable a Node
docker node update --availability active <node-id>
Promote/Demote Manager/Worker
docker node promote <node-id>
docker node demote <node-id>
Services
Services are the units of scheduling in Swarm: like containers, but designed to scale and self-heal across the cluster.
Create a Service
docker service create --name web --replicas 3 -p 80:80 nginx
List Services
docker service ls
Inspect a Service
docker service inspect <service-name>
View Service Tasks (like individual containers)
docker service ps <service-name>
Update a Service
docker service update --image nginx:alpine web
Scale a Service
docker service scale web=5
Remove a Service
docker service rm web
Further Reading
Docker Swarm Cheat Sheet
Task | Command |
---|---|
Init swarm | docker swarm init |
Join swarm | docker swarm join |
List nodes | docker node ls |
Drain a node | docker node update --availability drain <id> |
Promote node | docker node promote <id> |
Demote node | docker node demote <id> |
Create service | docker service create --name web -p 80:80 nginx |
Scale service | docker service scale web=3 |
Remove service | docker service rm web |
Stack Basics
Docker Stacks let you deploy multi-service applications using a single docker-compose.yml file in Swarm mode.
Requires Docker Swarm to be initialised with docker swarm init.
Deploy a Stack
docker stack deploy -c docker-compose.yml mystack
List Stacks
docker stack ls
List Services in a Stack
docker stack services mystack
List Tasks (containers) in a Stack
docker stack ps mystack
Remove a Stack
docker stack rm mystack
Example docker-compose.yml for a Stack
version: '3.8'
services:
web:
image: nginx
ports:
- "80:80"
deploy:
replicas: 3
restart_policy:
condition: on-failure
placement:
constraints: [node.role == worker]
db:
image: mysql:8.0
environment:
MYSQL_ROOT_PASSWORD: password
volumes:
- db_data:/var/lib/mysql
deploy:
placement:
constraints: [node.role == manager]
volumes:
db_data:
Deploy This Stack
docker stack deploy -c docker-compose.yml mystack
Stack Tips
- Use the deploy: section to define Swarm-specific settings (replicas, placement, restart policies, etc.)
- Avoid build: in production stacks; use pre-built images.
- Ensure all nodes have access to required networks, volumes, or images.
- Stacks do not support all Docker Compose features, e.g., no depends_on or condition: directives in Swarm.
Docker stack deployment vs Compose
Feature | docker compose | docker stack deploy |
---|---|---|
Designed for | Local development | Swarm clusters |
Deploys to | Single Docker daemon | Multiple Swarm nodes |
Orchestration | No | Yes (replication, updates) |
Rolling updates | No | Yes |
Service restart logic | Minimal | Advanced |
Networking | Bridge | Overlay |
Further Reading
Docker Stack Cheat Sheet
Task | Command |
---|---|
Deploy a stack | docker stack deploy -c docker-compose.yml mystack |
List stacks | docker stack ls |
List services in a stack | docker stack services mystack |
List running containers/tasks | docker stack ps mystack |
Remove a stack | docker stack rm mystack |
Docker CI/CD Guide
A practical guide to integrating Docker into your Continuous Integration and Deployment pipelines.
What is CI/CD?
CI/CD stands for Continuous Integration and Continuous Delivery (or Continuous Deployment). It is a modern software development practice that enables teams to deliver code changes more frequently and reliably through automation.
-
Continuous Integration (CI) is the practice of automatically building and testing code every time a team member commits changes to version control. This helps catch bugs early and ensures that the application is always in a deployable state.
-
Continuous Delivery (CD) automates the process of delivering code to staging or production environments after successful tests. With CD, code changes can be pushed to production safely and quickly.
-
Continuous Deployment takes CD a step further by automatically deploying every change that passes all stages of the pipeline to production without manual intervention.
Benefits of CI/CD
- Faster development cycles
- Higher software quality
- Reduced manual errors
- Quicker feedback and recovery
- Consistent and repeatable deployments
CI/CD is a cornerstone of DevOps practices, promoting collaboration between development and operations teams, and enabling organisations to respond to market changes faster.
Why Docker for CI/CD?
Docker simplifies CI/CD pipelines by offering:
- Consistent environments across build/test/deploy stages
- Isolation for dependencies and tools
- Portability across local, staging, and production
- Faster testing and deployment with containerised builds
CI/CD Workflow Overview
- Build Docker image from source
- Test application inside a container
- Push image to registry (Docker Hub, GitHub Container Registry, etc.)
- Deploy container to target environment
Prerequisites
- Docker installed locally or in your CI/CD environment
- A Dockerfile at the project root
- Docker Hub (or private registry) account
- GitHub/GitLab/CI tool of choice
Sample Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
CI Example with GitHub Actions
Create .github/workflows/docker.yml:
name: Docker CI
on:
push:
branches:
- main
jobs:
build-and-push:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
push: true
tags: yourdockerhubuser/yourapp:latest
Save DockerHub credentials in your GitHub repo under Settings → Secrets as DOCKER_USERNAME and DOCKER_PASSWORD.
CI Example with GitLab CI/CD
Create .gitlab-ci.yml:
stages:
- build
- deploy
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
build:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
deploy:
stage: deploy
script:
- echo "Deploy logic goes here (e.g. docker-compose up, SSH to server, etc.)"
GitLab will automatically set $CI_REGISTRY, $CI_REGISTRY_IMAGE, and $CI_REGISTRY_USER.
Deploy with Docker Compose
Use docker-compose.yml on the target server:
version: '3'
services:
app:
image: yourdockerhubuser/yourapp:latest
ports:
- "80:3000"
restart: always
Deploy:
docker compose pull
docker compose up -d
Directory Structure
project/
βββ Dockerfile
βββ .dockerignore
βββ .github/
β βββ workflows/
β βββ docker.yml
βββ .gitlab-ci.yml
βββ src/
Best Practices
- Pin image versions (node:18-alpine, not node:latest)
- Use .dockerignore to keep builds small
- Scan images for vulnerabilities (docker scout quickview)
- Use secrets in CI environments, not hardcoded passwords
- Automate deployment with version tags
Bonus: Version Tagging
In your pipeline, you can tag builds by commit or semver:
docker tag yourapp:latest yourapp:v1.0.3
docker push yourapp:v1.0.3
Or dynamically in CI (example for GitHub):
tags: youruser/yourapp:latest, youruser/yourapp:${{ github.sha }}
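The short-SHA part of such a tag is simple string slicing. A sketch using a sample SHA (CI systems expose the real one via variables such as GITHUB_SHA or CI_COMMIT_SHA):

```shell
# Derive a short tag from a commit SHA (sample value for demonstration).
sha=4f2a9c1d8b7e6f5a4d3c2b1a0f9e8d7c6b5a4f3e
short=$(printf '%.7s' "$sha")    # first 7 characters, as git's short form
echo "yourapp:$short"
```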
Further Reading
Docker Security Best Practices
Securing your Docker containers and host is critical in production environments. Below are some best practices to help keep your Docker environments secure.
1. Use Minimal Base Images
Choose minimal images like alpine to reduce the attack surface.
2. Run Containers as Non-Root Users
Avoid running containers as the root user. Use the USER directive in your Dockerfile.
FROM node:alpine
USER node
3. Use Docker Content Trust (DCT)
Enable DCT to ensure images are signed and verified.
export DOCKER_CONTENT_TRUST=1
4. Regularly Scan Images
Use tools like docker scout, Trivy, or Clair to scan for vulnerabilities. (The older docker scan command has been replaced by docker scout.)
docker scout quickview your-image
5. Use Docker Secrets for Sensitive Data (Swarm Only)
Avoid hardcoding secrets in environment variables or files. Docker Swarm provides a secure way to manage sensitive data like API keys, passwords, and TLS certificates.
Step 1: Initialise Docker Swarm
docker swarm init
Step 2: Create a Secret
echo "supersecretvalue" | docker secret create my_secret -
Step 3: Reference the Secret in docker-compose.yml
version: '3.8'
services:
app:
image: myapp:latest
secrets:
- my_secret
environment:
SECRET_VALUE_FILE: /run/secrets/my_secret
secrets:
my_secret:
external: true
Warning: Never store secrets directly in the docker-compose.yml using plain environment: variables. These values can be exposed in logs, command output, or version control. Use Docker secrets or an external secrets manager (e.g., Vault, AWS Secrets Manager) for secure secret management.
6. Limit Container Capabilities
Drop unnecessary Linux capabilities:
cap_drop:
- ALL
7. Use Read-Only Filesystems
Prevent modification by running containers with read-only file systems:
docker run --read-only ...
8. Avoid Mounting Docker Socket
Mounting /var/run/docker.sock gives root access to the host. Avoid this unless absolutely necessary.
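For illustration, this is the pattern to avoid (the image name is a placeholder):

```shell
# DANGEROUS: anything inside this container can drive the host's
# Docker daemon, which is equivalent to root on the host.
docker run -v /var/run/docker.sock:/var/run/docker.sock some-image
```

If a tool genuinely needs the daemon (e.g., a CI runner that builds images), consider a dedicated, isolated host or a socket proxy instead.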
9. Set Resource Limits
Limit memory and CPU usage per container.
docker run --memory=512m --cpus="1.0" ...
10. Network Segmentation
Use custom networks and avoid exposing unnecessary ports.
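As a sketch, with hypothetical container names: create a user-defined bridge network and attach only the containers that need to reach each other, publishing no ports to the host:

```shell
# db and api can resolve each other by name on the backend network;
# neither is reachable from outside because no -p flags are used.
docker network create backend
docker run -d --network backend --name db postgres:16
docker run -d --network backend --name api myapp:latest
```

Containers on the default bridge can all talk to each other; user-defined networks let you segment services into groups with no cross-talk.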
11. Keep Docker Updated
Always use the latest stable Docker release to benefit from the latest security patches.
Docker Performance Tips
Improve the efficiency and speed of your Docker containers and build processes with these performance tips.
1. Use Multi-Stage Builds
Multi-stage builds help reduce image size by keeping only what is necessary in the final image.
FROM golang:alpine as builder
WORKDIR /app
COPY . .
RUN go build -o myapp
FROM alpine
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]
2. Optimise Dockerfile Layers
Group related commands together to reduce the number of layers and cache misses.
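For example, on a Debian-based image, a single RUN that updates, installs, and cleans up keeps the layer count down and avoids baking stale package lists into the image (the package choice is illustrative):

```dockerfile
# Update, install, and clean up in one layer so the apt lists
# never persist in the final image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```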
3. Leverage Build Cache
Order Dockerfile instructions from least to most frequently changed to maximise cache efficiency.
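A common pattern, sketched here for a hypothetical Node.js project: copy the dependency manifests before the rest of the source so dependency installation stays cached across source-only changes:

```dockerfile
FROM node:alpine
WORKDIR /app
# Manifests change rarely; this layer (and npm ci below) is reused
# until package.json or the lockfile actually changes.
COPY package.json package-lock.json ./
RUN npm ci
# Source changes often; only the layers from here down are rebuilt.
COPY . .
CMD ["node", "server.js"]
```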
4. Use .dockerignore
Prevent unnecessary files from being sent to the Docker daemon during build.
node_modules
.git
*.log
5. Use Lightweight Base Images
Smaller base images like alpine reduce image size and pull time.
6. Limit Logging Overhead
Use appropriate log drivers and avoid excessive logging.
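For example, the default json-file driver can be capped so container logs never fill the host disk (the image name is a placeholder):

```shell
# Keep at most three rotated 10 MB log files for this container.
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:latest
```

The same options can be set daemon-wide in /etc/docker/daemon.json so every container inherits them.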
7. Use Volumes for Persistent Data
Volumes are faster and more reliable than bind mounts.
docker run -v mydata:/data ...
8. Avoid Large Layers
Install packages and remove caches in a single RUN instruction.
RUN apk add --no-cache curl && rm -rf /var/cache/*
9. Tune Resource Limits
Appropriately configure --memory, --cpus, and --ulimit to avoid resource contention.
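An example combining all three, with illustrative values and a placeholder image name:

```shell
# 512 MB memory cap, one CPU's worth of time, and soft/hard
# open-file limits of 1024/2048 for processes in the container.
docker run --memory=512m --cpus="1.0" --ulimit nofile=1024:2048 myapp:latest
```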
10. Use Performance Monitoring
Use tools like cAdvisor, Prometheus, and Grafana to monitor container resource usage.
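Before wiring up a full monitoring stack, the built-in docker stats command gives a quick snapshot:

```shell
# One-shot table of CPU, memory, network, and block I/O for every
# running container (omit --no-stream for a live, updating view).
docker stats --no-stream
```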
Entertainment
This section covers entertainment-related tools and applications.
Topics include:
- Media downloading tools
- Streaming applications
- Content management
yt-dlp Command Wiki
A comprehensive guide to downloading videos and audio from YouTube and other sites using yt-dlp.
What is yt-dlp and Why Use It?
yt-dlp is a powerful command-line tool for downloading videos, audio, and playlists from YouTube and hundreds of other websites. It is a modern fork of the now-inactive youtube-dl, with numerous additional features, bug fixes, and better performance.
Why use yt-dlp?
- Actively maintained with frequent updates
- Supports more sites and modern streaming formats
- More powerful filtering options (e.g., resolution, codec, duration)
- Improved audio and subtitle support
- Faster and more reliable than youtube-dl
- Advanced customization for filenames, metadata, and output
Installation
Windows
Using pip (Python required):
pip install yt-dlp
Update:
pip install -U yt-dlp
Standalone Binary (No Python required):
- Download yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/latest
- Move it to a folder in your PATH (e.g., C:\Windows)
- Run from Command Prompt or PowerShell:
yt-dlp https://youtube.com/...
macOS
Using Homebrew:
brew install yt-dlp
Update:
brew upgrade yt-dlp
Using pip (Python required):
pip install yt-dlp
Linux
Universal (standalone binary for all distros):
sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
sudo chmod a+rx /usr/local/bin/yt-dlp
Debian / Ubuntu / Linux Mint
sudo apt update
sudo apt install python3-pip
pip3 install --user yt-dlp
Fedora
sudo dnf install yt-dlp
If not available in your version, use the pip or binary method above.
Arch Linux / Manjaro
sudo pacman -S yt-dlp
Alpine Linux
apk add yt-dlp
Requires the community repository to be enabled.
OpenSUSE
sudo zypper install yt-dlp
Using pip (portable, works on any distro)
pip install --user yt-dlp
Update:
pip install -U yt-dlp
Basic Usage
Download any video (default quality)
yt-dlp https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download best quality available
yt-dlp -f best https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download specific quality (1080p max)
yt-dlp -f "best[height<=1080]" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download 720p max
yt-dlp -f "best[height<=720]" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Audio Only Downloads
Download as MP3
yt-dlp -x --audio-format mp3 https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download as MP3 with best quality
yt-dlp -x --audio-format mp3 --audio-quality 0 https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download as WAV (lossless)
yt-dlp -x --audio-format wav https://www.youtube.com/watch?v=dQw4w9WgXcQ
Download as M4A (good for Apple devices)
yt-dlp -x --audio-format m4a https://www.youtube.com/watch?v=dQw4w9WgXcQ
File Naming
Clean filename (no video ID)
yt-dlp -o "%(title)s.%(ext)s" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Include uploader name
yt-dlp -o "%(uploader)s - %(title)s.%(ext)s" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Include upload date
yt-dlp -o "%(upload_date)s - %(title)s.%(ext)s" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Organize by uploader folder
yt-dlp -o "%(uploader)s/%(title)s.%(ext)s" https://www.youtube.com/watch?v=dQw4w9WgXcQ
Playlist Downloads
Download entire playlist
yt-dlp https://www.youtube.com/playlist?list=...
Download playlist as MP3s
yt-dlp -x --audio-format mp3 https://www.youtube.com/playlist?list=...
Download only one video from playlist URL
yt-dlp --no-playlist "https://www.youtube.com/watch?v=...&list=..."
Download specific range from playlist (videos 1β5)
yt-dlp --playlist-start 1 --playlist-end 5 https://www.youtube.com/playlist?list=...
Download only new videos (skip already downloaded)
yt-dlp --download-archive downloaded.txt https://www.youtube.com/playlist?list=...
Metadata and Thumbnails
Add metadata to audio files
yt-dlp -x --audio-format mp3 --add-metadata https://www.youtube.com/watch?v=...
Embed thumbnail as album art
yt-dlp -x --audio-format mp3 --embed-thumbnail https://www.youtube.com/watch?v=...
Download thumbnail separately
yt-dlp --write-thumbnail https://www.youtube.com/watch?v=...
All metadata options combined
yt-dlp -x --audio-format mp3 --add-metadata --embed-thumbnail --write-thumbnail https://www.youtube.com/watch?v=...
Subtitles
Download video with subtitles
yt-dlp --write-subs https://www.youtube.com/watch?v=...
Download auto-generated subtitles
yt-dlp --write-auto-subs https://www.youtube.com/watch?v=...
Download specific language subtitles
yt-dlp --write-subs --sub-langs en https://www.youtube.com/watch?v=...
Embed subtitles in video
yt-dlp --embed-subs https://www.youtube.com/watch?v=...
Error Handling
Ignore errors and continue
yt-dlp -i https://www.youtube.com/playlist?list=...
Retry failed downloads
yt-dlp --retries 3 https://www.youtube.com/watch?v=...
Skip unavailable videos
yt-dlp --ignore-errors https://www.youtube.com/playlist?list=...
Advanced Options
Limit download speed (1MB/s)
yt-dlp --limit-rate 1M https://www.youtube.com/watch?v=...
Download only videos shorter than 10 minutes
yt-dlp --match-filter "duration < 600" https://www.youtube.com/playlist?list=...
Download only videos uploaded after specific date
yt-dlp --dateafter 20240101 https://www.youtube.com/playlist?list=...
Use proxy
yt-dlp --proxy http://proxy.example.com:8080 https://www.youtube.com/watch?v=...
Bash Tips and Tricks
Handle URLs with special characters
yt-dlp "https://www.youtube.com/watch?v=...&t=30s"
Escape special characters in filenames
yt-dlp -o '%(title)s.%(ext)s' https://www.youtube.com/watch?v=...
Download multiple URLs at once
yt-dlp https://www.youtube.com/watch?v=... https://www.youtube.com/watch?v=...
Use a file with URLs
yt-dlp -a urls.txt
Background downloads
nohup yt-dlp https://www.youtube.com/playlist?list=... > download.log 2>&1 &
Check if command succeeded
if yt-dlp https://www.youtube.com/watch?v=...; then
echo "Download successful"
else
echo "Download failed"
fi
Common Issues and Solutions
"ERROR: unable to download video data"
pip install -U yt-dlp
Age-restricted videos
yt-dlp --cookies-from-browser chrome https://www.youtube.com/watch?v=...
Geo-blocked videos
yt-dlp --proxy socks5://127.0.0.1:1080 https://www.youtube.com/watch?v=...
Very long filenames
yt-dlp --trim-filenames 100 https://www.youtube.com/watch?v=...
Useful Combinations
Perfect MP3 download with metadata
yt-dlp -x --audio-format mp3 --audio-quality 0 --add-metadata --embed-thumbnail -o "%(title)s.%(ext)s" https://www.youtube.com/watch?v=...
Archive entire channel
yt-dlp -f "best[height<=1080]" --download-archive archive.txt -o "%(uploader)s/%(upload_date)s - %(title)s.%(ext)s" https://www.youtube.com/@channelname
Download with subtitles and metadata
yt-dlp --write-subs --embed-subs --add-metadata --embed-thumbnail -o "%(title)s.%(ext)s" https://www.youtube.com/watch?v=...
Quick Reference
| Command | Description |
| --- | --- |
| -f best | Download best quality |
| -f "best[height<=720]" | Download max 720p |
| -x --audio-format mp3 | Extract audio as MP3 |
| -o "%(title)s.%(ext)s" | Clean filename |
| --add-metadata | Add title, artist info |
| --embed-thumbnail | Add album art |
| --write-subs | Download subtitles |
| --no-playlist | Download single video only |
| -i | Ignore errors |
| --download-archive file.txt | Skip previously downloaded items |