Commit. Push. Regret.

April 12, 2025
read time: 20 min

TL;DR: I built my own local Git hosting for the price of a GitHub subscription — and got full control over my data, eliminated corporate dependencies, and now run a Git server with better performance than any of those GitHub-Lab-Berg-type platforms or whatever corporate Git flavor is trending today. No surprise terms, no subscriptions, no hidden costs — just full privacy and control.

I've long been an advocate for self-hosted solutions, almighty open source, and digital independence. Of course, within the bounds of what's technically feasible, sane, and not a total waste of time. But lately, this has shifted from a hobby or weekend pet project into something closer to a necessity.

Not too long ago, I set out to find a solution to a long-standing problem — one kindly invented for me by capitalism — drowning me in its glorious "benefits", now even in the world of version control, all for just $19.99/month™, powered by Enterprise Cloud Solutions®, with Intelligent Auto-Merge Technology© and AI-enhanced telemetry!

I used to be a big GitHub fan. It really was a slick, convenient platform. Great UI for its time, solid team collaboration features, and the illusion of freedom — all wrapped in the quiet understanding that eventually, it would come to an end...

Years of Disillusionment: From GitHub to Self-Hosting

2018 — Microsoft Acquires GitHub

Everyone knew, deep down, how this would end. The moment the acquisition deal was announced, it was already clear: time to start migrating projects elsewhere. Big Tech has a long history of buying out beloved platforms, draining them dry, and discarding what’s left — by any means necessary.

Back then, a lot of people sounded the alarm. I remember my classmates saying: “Come on, 4001, what could possibly go wrong?” Turns out — everything went exactly how it always does. Predictable. Inevitable. A familiar script we’ve all seen before.

Because reasons™

2019–2020 — Censorship and Early Warning Signs

Problems began with the blocking of developers from Iran, Syria, and other sanctioned countries, and with the removal of youtube-dl after DMCA takedown requests from major corporations. I remember how a repository with a reverse engineering of a popular messenger suddenly disappeared — “ToS violation,” no explanation given. Purges of unwanted contributors happened without proper moderation checks, and then you’d spend months proving you hadn’t broken any rules.

2021 — The Launch of Copilot

GitHub, OpenAI (and others) used millions of repositories to train AI models without asking anyone. My code, your code, even licensed code — all thrown into the grinder without warning. And sure, it would be one thing if they only took open-source code, but the Big Cheese went full rogue and literally stole people’s private repos. I still can’t understand how the community let this slide so easily.

Meanwhile, if you so much as take something from Big Tech without permission, oh friend, that’s piracy — hoist the black flag and start reading the fine print on page 6348, where it’s obviously written that you’re handing over your immortal soul.

2022 — Lawsuits and Scandals

A class-action lawsuit was filed against GitHub and OpenAI. The lawyers argue it is a direct violation of the DMCA and copyright law. Amidst the corporatization, an independent fork of Gitea emerged: Forgejo.

2023 — Monetization Intensifies

GitHub started aggressively pushing paid plans. Actions received stricter limits for free users. Git LFS became paid starting from the very first gigabyte. New features appeared behind paywalls — GitHub Copilot Chat, GitHub Copilot Business, and more. Suddenly, repositories containing cryptographic software began to be “moderated” — several encryption projects were removed without warning.

Still remember that Git itself is open and free? Yeah? Then let’s move on.

2024 — Forgejo Becomes a Hard Fork

Development is moving faster, the community is growing, and I decided: enough feeding Big Tech — it’s time to take control myself.

At first, it felt like venturing into a dark forest and making a tough decision. But thanks to an amazing community, it turned out to be easier than setting up a new phone.


But before we dive deep into Forgejo, its advantages, hardware setups, and so on, let’s take a closer look at GitHub itself — and figure out why 80% of developers really need to make this switch.

GitHub: How They Turn You Into a Free Resource

  • Copilot and Code Theft

    • 54 million repositories, 179 GB of Python code — that's what trained Copilot.
    • The code was licensed, but the AI outputs it without attribution.
    • Even after repositories were deleted, the data remained in the training set.
    • Private repositories are private. But not for them.
  • DMCA and Censorship

    • Removal of youtube-dl via DMCA takedown.
    • User blocks based on geolocation: Iran, Crimea, Cuba, Syria.
    • Repository removals following complaints from banks and corporations.
  • Git LFS and “Random” Limits

    • Quotas suddenly exceeded right before deadlines.
    • CI/CD minutes were removed and reintroduced with limits.
  • Closed Ecosystem and Lock-In

    • Issue Forms, Projects, GitHub Actions — convenient but closed. The deeper the integration, the harder it is to leave.
  • Monetization at Your Expense

    • Copilot costs $10–19/month. The product it sells is code the community wrote for free.
    • GitHub is no longer "just a code hosting platform," but an ecosystem designed to profit... from you.

Oh, btw — isn’t the US the loudest about equality? So where is it for dozens of countries, hit with sanctions and developer bans just for being “in the wrong place”?

Yeah, double standards, I know, I know. But in my country, that’s still called fascism.

Why Not Codeberg or Gitea

Codeberg

  • Dependence on third-party hosting — you don’t control the system, they do. Or someone else does. Clearly not you.
  • Slow performance — the web interface lags during peak hours, and compared to self-hosted solutions, the difference is noticeable immediately.
    (On local Forgejo, I initially thought only file names loaded, not their contents. Turns out... that's how fast it can be.)
  • No CI/CD or federation support.
  • Uncertainty when scaling — volunteer-run projects have little reserve capacity, so FOSS platforms often end up with limits and invite-only access. I understand why it’s necessary, but still.

Gitea

  • After 2022, Gitea underwent commercialization, shifting focus towards profit-driven goals.
  • Governance, previously community-led and transparent, became more closed and centralized.
  • The creation of Gitea Ltd. introduced potential conflicts of interest between the company’s commercial aims and the open-source community’s values.
  • This shift led to concerns about reduced community influence on project direction and priorities.
  • Some community members felt sidelined as decision-making became less inclusive, raising questions about the project's long-term openness and independence.

Forgejo vs Codeberg vs GitHub

| Criterion | Forgejo | Codeberg | GitHub |
|---|---|---|---|
| Governance | Non-commercial | Non-commercial, external service | Microsoft (corporation) |
| License | 100% FOSS | 100% FOSS (based on Gitea) | Proprietary |
| Performance | Excellent (local) | Moderate, some lag | Varies by region, often uneven |
| CI/CD | Woodpecker, Drone, any CI | None | GitHub Actions |
| AI code use | No, strictly forbidden | No | Used for Copilot |
| DMCA takedowns | Only by court or legal demand | Possible | By request, automated |
| Geo-blocking | None | None | Full blocks: Iran (partial), N. Korea, Cuba, Syria, Crimea, DPR/LPR. Restrictions: no Enterprise Server, Copilot, RU/BY payments. Payment issues: China, some African countries. Temporary blocks: China (2013, 2015), Turkey (2016), India |
| Payment restrictions | None | None | RU/BY cards blocked, no RU billing, sanctioned accounts blocked |
| Bypass restrictions | Not required | Not required | VPN banned in ToS; appeals or billing change only |
| Federation | In development | Not supported | Not planned |
| Gitea compatibility | Full up to v1.22 | Yes (based on Gitea) | No |
| Infrastructure control | Full (if self-hosted) | No | No |

Forgejo

  • Forgejo is a community-driven fork of Gitea, focused on openness and independence.
  • 100% FOSS — no compromises, all code is free and accessible.
  • Regular development reports, open infrastructure, and transparent governance without corporate influence.
  • Emphasizes stability, performance, and ease of self-hosting.
  • Compatible with Gitea up to version 1.22, making migration easy.
  • While federation (decentralization) is still in development, the self-hosted approach already gives users full control over their data and infrastructure.

It’s a lightweight self-hosted platform for Git repositories that can be easily deployed on almost any machine. Running on a Raspberry Pi? A small cloud instance? No problem!
But Forgejo is much more than just Git hosting.

Key Features of Forgejo:

  • Project Management: Beyond Git hosting, Forgejo offers issues, pull requests, wiki, kanban boards, and much more to coordinate with your team.
  • Package Publishing: Got something to share? Use releases to host your software for downloads, or use the package registry to publish to Docker, npm, and many other package managers (see the short example after this list).
  • Customizability: Want to change the look? Tweak settings? There are plenty of configuration toggles to make Forgejo work exactly how you want.
  • Power: Organizations and team permissions, CI integration, code search, LDAP, OAuth, and more. If you have advanced needs, Forgejo has you covered.
  • Privacy: From update checks to default settings, Forgejo is built with privacy as a priority for you and your team.
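
For example, the package registry can double as a private container registry. A minimal sketch, assuming the instance from this guide is reachable over HTTPS at forgejo.yourdomain.com and that you log in with a personal access token (the image name myapp is just a placeholder):

# Log in to the Forgejo container registry (username + personal access token)
docker login forgejo.yourdomain.com

# Tag and push an image into a user's (or organization's) package registry
docker tag myapp:latest forgejo.yourdomain.com/john_doe/myapp:latest
docker push forgejo.yourdomain.com/john_doe/myapp:latest

The pushed image then shows up under the owner's Packages tab, next to releases and npm packages.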

Here’s Codeberg if you want to see how it looks “out of the box.”

Architecture and Technical Requirements

Forgejo is built on a modern tech stack:

  • Backend: Written in Go, ensuring high performance and low resource usage.
  • Database: Supports SQLite, PostgreSQL, MySQL/MariaDB.
  • Containerization: Provides container images for use with Docker or other container tools.
  • Architecture: Cross-platform — runs on Linux, macOS, Windows, and ARM.

Minimum System Requirements:

  • RAM: 512MB (1GB+ recommended)
  • CPU: 1 core (2+ recommended)
  • Disk Space: 50GB+
  • Network: well, yes?

In short, it can run even on a microwave — no worries about that.


My Way

Building the Hardware — Easy as Pie

Assembling a Raspberry Pi 5 with an M.2 HAT is so simple a kid could do it — the “hardest” part is screwing in 4 bolts and snapping the cooler into place.

My System Components

Main parts:

  • Raspberry Pi 5 (8GB) — the main board
  • Official M.2 HAT+ — connects an SSD underneath
  • ADATA Legend 800 1TB NVMe — primary storage
  • Official Active Cooler — snap-on cooling
  • Official 27W USB-C PSU — stable power supply

Step-by-Step Assembly

  1. Installing the standoffs:

    • Screw in the four M2.5 metal standoffs in the corners of the board.
    • Just use a regular screwdriver — that’s all you need.
  2. Installing the SSD:

    • Insert the ADATA Legend 800 into the M.2 slot at a ~30° angle.
    • Gently press it down and secure it with an M2x3 screw.
  3. Mounting the Active Cooler:

    • Attaches with two spring-loaded push pins that clip into the holes next to the GPIO header — no screws required.
    • Plug the fan cable into the board’s dedicated 4-pin fan connector.

Metal Case & WiFi

Originally used the metal case — looks great, but completely blocks WiFi (Faraday cage effect: 10/10).
Ended up leaving it completely uncased in a clean, dry spot where nothing can spill on it.


Installing and Configuring the System

Preparing Raspberry Pi OS

# Flash the OS image to a microSD card
sudo rpi-imager

# Select: Raspberry Pi OS Lite (64-bit)
# Configure: SSH, WiFi, user via Advanced options

First boot & update:

# Connect via SSH
ssh pi@192.168.1.100

# Update the system
sudo apt update && sudo apt full-upgrade -y

# Enable PCIe for SSD detection
sudo nano /boot/firmware/config.txt

PCIe and NVMe Setup

Append to /boot/firmware/config.txt:

# Enable PCIe interface
dtparam=pciex1

# Force PCIe Gen 3 (critical for ADATA Legend 800)
dtparam=pciex1_gen=3

# Fan control via GPIO
dtoverlay=gpio-fan,gpiopin=18,temp=65000

Why PCIe Gen 3 matters for the ADATA Legend 800: the Raspberry Pi 5 negotiates PCIe Gen 2 by default, so forcing Gen 3 roughly doubles the link bandwidth and lifts sequential reads from ~450 MB/s to ~900 MB/s on this drive.

# Reboot to apply changes
sudo reboot

# Check SSD presence
lsblk
# You should see /dev/nvme0n1

# Check PCIe link speed
sudo lspci -vvv | grep -A 20 "Non-Volatile"
# Speed: 8GT/s (PCIe Gen 3)

Formatting and Mounting NVMe

# Create ext4 filesystem
sudo mkfs.ext4 /dev/nvme0n1

# Mount point
sudo mkdir /mnt/nvme

# Get UUID
sudo blkid /dev/nvme0n1

# Auto-mount via fstab
echo 'UUID=your-uuid-here /mnt/nvme ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab

# Mount and verify
sudo mount -a
df -h

System Migration to NVMe

# Clone the running system to the NVMe drive
# (rpi-clone is a third-party script; install it first from its GitHub repo)
sudo rpi-clone nvme0n1

# Edit bootloader to boot from NVMe
sudo rpi-eeprom-config --edit

# Set:
BOOT_ORDER=0xf416

# Save and reboot
sudo reboot

Installing Forgejo

System Preparation

# Update and install basic packages
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget git ufw fail2ban htop

# Set a static IP (via dhcpcd; on newer Raspberry Pi OS releases that use
# NetworkManager, do the equivalent with nmtui or nmcli)
sudo nano /etc/dhcpcd.conf

# Example:
interface eth0
static ip_address=192.168.1.100/24
static routers=192.168.1.1
static domain_name_servers=1.1.1.1 8.8.8.8

Docker Installation

# Add Docker’s official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/debian $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group
sudo usermod -aG docker $USER
newgrp docker

Docker Compose Configuration

# Create project directory on NVMe
mkdir -p /mnt/nvme/forgejo
cd /mnt/nvme/forgejo

# Generate env vars
cat > .env << EOF
DB_PASSWORD=$(openssl rand -base64 32)
DOMAIN=forgejo.yourdomain.com
EOF
# docker-compose.yml
version: '3.8'

networks:
  forgejo:
    external: false

services:
  database:
    image: postgres:15-alpine
    container_name: forgejo-db
    restart: always
    environment:
      - POSTGRES_USER=forgejo
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=forgejo
    volumes:
      - ./postgres:/var/lib/postgresql/data
    networks:
      - forgejo

  server:
    image: codeberg.org/forgejo/forgejo:11
    container_name: forgejo
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - FORGEJO__database__DB_TYPE=postgres
      - FORGEJO__database__HOST=database:5432
      - FORGEJO__database__NAME=forgejo
      - FORGEJO__database__USER=forgejo
      - FORGEJO__database__PASSWD=${DB_PASSWORD}
    restart: always
    networks:
      - forgejo
    volumes:
      - ./forgejo:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    depends_on:
      - database

Launch and Initial Setup

# Start containers
docker compose up -d

# Check logs
docker compose logs -f

# Navigate to web interface: http://192.168.1.100:3000
# Select PostgreSQL as database (settings will be auto-configured)
# Create admin account

Remote Access and Network Architecture

Working from home, I occasionally "touch grass". Therefore, it was critically important to ensure access to projects from anywhere. The solution turned out to be surprisingly simple — port forwarding on the router.

Network Access Configuration

# The static IP was already configured in /etc/dhcpcd.conf during system
# preparation (interface eth0 → 192.168.1.100/24); if you haven't rebooted
# since, restart the network service so it takes effect:
sudo systemctl restart dhcpcd

Port forwarding on router:

  • 80/443 → 192.168.1.100:80/443 (HTTP/HTTPS)
  • 222 → 192.168.1.100:222 (SSH for Git)
  • 3000 → 192.168.1.100:3000 (Web UI, if not using Nginx — a reverse-proxy sketch follows below)
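
Since 80/443 are forwarded to the Pi rather than exposing port 3000 directly, something has to terminate TLS and proxy requests to the Forgejo container. A minimal Nginx sketch for that, assuming Nginx runs on the Pi itself and certificates come from Let's Encrypt (domain and certificate paths are placeholders):

# /etc/nginx/sites-available/forgejo
server {
    listen 80;
    server_name forgejo.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name forgejo.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/forgejo.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/forgejo.yourdomain.com/privkey.pem;

    # Large pushes and LFS uploads
    client_max_body_size 512m;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Symlink it into sites-enabled, run nginx -t, and reload Nginx; after that the 3000 forward on the router can be dropped entirely.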

Dynamic DNS configuration:

# Install ddclient for automatic DNS updates
sudo apt install ddclient

# Configuration for Cloudflare
sudo nano /etc/ddclient.conf
protocol=cloudflare
zone=yourdomain.com
ttl=120
login=your-email@domain.com
password=your-api-token
forgejo.yourdomain.com
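
Before trusting the daemon, it doesn't hurt to run ddclient once in the foreground to confirm the record actually updates:

# One-off run with verbose output (no daemon)
sudo ddclient -daemon=0 -debug -verbose -noquiet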

User Management and Collaboration

Previously, I would show code on request but didn't upload it to "carnivorous platforms." With this, I gained the ability to let colleagues into my network by simply creating accounts for them.

Creating Users via Admin Panel

Web admin interface:

  1. Login as admin → Site Administration → User Accounts
  2. Create User Account:
    • Username: colleague_username
    • Email: colleague@email.com
    • Password: Temporary_Password123
    • ✓ Must Change Password (force change on first login)

CLI user creation:

# Create user via command line
docker exec -it forgejo forgejo admin user create \
  --username "john_doe" \
  --email "john@company.com" \
  --password "TempPass123!" \
  --must-change-password \
  --admin=false

# There is no admin CLI subcommand for organization membership; add the
# user to one of the organization's teams via the API instead (TEAM_ID is a placeholder):
curl -X PUT \
  -H "Authorization: token YOUR_ADMIN_TOKEN" \
  "https://forgejo.yourdomain.com/api/v1/teams/TEAM_ID/members/john_doe"

Organization and Access Rights Configuration

Creating organization for team:

# Via web interface: + → New Organization
# Or via the admin API (there is no `forgejo admin org create` CLI command):
curl -X POST \
  -H "Authorization: token YOUR_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username": "dev-team", "visibility": "private"}' \
  "https://forgejo.yourdomain.com/api/v1/admin/users/admin/orgs"

Repository access rights configuration (a curl example follows the list):

  • Read: View code, issues, wiki
  • Write: + create branches, commits, PR
  • Admin: + repository settings, access management
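
Per-repository access can also be granted over the API by adding a collaborator with one of these permission levels. A small curl sketch (the admin token and the dev-team/backend repository are placeholders):

# Grant john_doe write access to dev-team/backend
curl -X PUT \
  -H "Authorization: token YOUR_ADMIN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"permission": "write"}' \
  "https://forgejo.yourdomain.com/api/v1/repos/dev-team/backend/collaborators/john_doe"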

Bulk invitation via script:

#!/usr/bin/env python3
import requests
import csv

FORGEJO_URL = "https://forgejo.yourdomain.com"
ADMIN_TOKEN = "your_admin_token"

def create_user(username, email, full_name):
    url = f"{FORGEJO_URL}/api/v1/admin/users"
    headers = {"Authorization": f"token {ADMIN_TOKEN}"}
    
    data = {
        "username": username,
        "email": email,
        "full_name": full_name,
        "password": "ChangeMe123!",
        "must_change_password": True,
        "send_notify": True
    }
    
    response = requests.post(url, json=data, headers=headers)
    return response.status_code == 201

# Read from CSV file team.csv
with open('team.csv', 'r') as file:
    reader = csv.DictReader(file)
    for row in reader:
        if create_user(row['username'], row['email'], row['full_name']):
            print(f"✓ Created user: {row['username']}")
        else:
            print(f"✗ Failed to create: {row['username']}")

Result: All colleagues can view my code and we collaborate just like on regular GitHub, only many times faster and more conveniently. Git clone finishes in seconds instead of minutes, push is instant, and there are no limits on Actions. No money involved. Almost. They owe me at least one beer tho.


Hardware and Configuration

System Components

Main platform:

  • Raspberry Pi 5 (8GB) — $80
  • M.2 NVMe SSD ADATA Legend 800 1TB — $120
  • M.2 HAT+ for Raspberry Pi 5 (bottom mount) — $25
  • Active Cooler for RPi5 — $15
  • Official 27W USB-C PSU — $12

Case and Cooling

  • No case — open-air placement (see the WiFi note above)

# Check CPU temperature
vcgencmd measure_temp

# Configure the fan via device tree
sudo nano /boot/firmware/config.txt

# Add (if not already present from the PCIe setup section):
dtoverlay=gpio-fan,gpiopin=18,temp=65000

Power Consumption and Monitoring

Consumption measurements:

  • Idle: 4-6W (RPi5 only)
  • Load (Git operations): 8-12W
  • Peak (CI/CD + database): 18-25W
  • SSD: +2-3W constantly

# Power monitoring via I2C
sudo apt install i2c-tools
i2cdetect -y 1

# Install power consumption monitoring
pip install rpi-power-monitor

# Create monitoring script
cat > power_monitor.py << 'EOF'
#!/usr/bin/env python3
import os
import subprocess
import time

def get_power_consumption():
    # Rough estimate from the 1-minute load average
    # (no INA219 or USB-C power meter attached)
    try:
        load = os.getloadavg()[0] / (os.cpu_count() or 1)  # roughly 0..1 per core
        base_power = 4.5                  # idle draw in watts
        load_power = min(load, 1.0) * 15  # extra draw under full load
        return base_power + load_power
    except OSError:
        return 0.0

while True:
    power = get_power_consumption()
    temp = subprocess.check_output(['vcgencmd', 'measure_temp']).decode().strip()
    print(f"Power: {power:.1f}W, {temp}")
    time.sleep(10)
EOF

chmod +x power_monitor.py

Performance Optimization

NVMe SSD optimization:

# PCIe Gen 3 should already be enabled in /boot/firmware/config.txt
# (dtparam=pciex1, dtparam=pciex1_gen=3; see the PCIe setup section)

# Check SSD speed
sudo hdparm -tT /dev/nvme0n1

# Example output (numbers vary with the drive and PCIe link speed):
# /dev/nvme0n1:
#  Timing cached reads:   2500 MB in  2.00 seconds = 1250 MB/sec
#  Timing buffered disk reads: 1200 MB in  3.00 seconds = 400 MB/sec

Swap configuration on SSD:

# Disable the old swap file (dphys-swapfile on Raspberry Pi OS)
sudo swapoff /var/swap
sudo systemctl disable --now dphys-swapfile

# Create new swap on NVMe
sudo fallocate -l 4G /mnt/nvme/swapfile
sudo chmod 600 /mnt/nvme/swapfile
sudo mkswap /mnt/nvme/swapfile
sudo swapon /mnt/nvme/swapfile

# Add to fstab
echo '/mnt/nvme/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Autostart and Monitoring

Systemd service for autostart:

sudo nano /etc/systemd/system/forgejo-stack.service

[Unit]
Description=Forgejo Docker Stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/mnt/nvme/forgejo
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target

sudo systemctl enable forgejo-stack.service
sudo systemctl start forgejo-stack.service

Result: The system runs stably 24/7, automatically recovers after reboots, consumes minimal electricity and provides better performance than GitHub for local teams! Yaaay!!!


Aftermath Actions: Security, Backups, Migration & CI/CD

Security Configuration

UFW (Uncomplicated Firewall)

# Basic firewall setup
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 222/tcp  # SSH for Git
sudo ufw enable

Fail2ban

# Create Forgejo jail configuration (use tee: the redirect in `sudo cat > file`
# runs as your user, not root)
sudo tee /etc/fail2ban/jail.d/forgejo.conf > /dev/null << 'EOF'
[forgejo]
enabled = true
port = http,https
filter = forgejo
# Forgejo log as seen from the host (the ./forgejo volume maps to /data)
logpath = /mnt/nvme/forgejo/forgejo/log/forgejo.log
maxretry = 5
findtime = 3600
bantime = 3600
EOF

# Create filter
sudo tee /etc/fail2ban/filter.d/forgejo.conf > /dev/null << 'EOF'
[Definition]
failregex = .*Failed authentication attempt for .* from <HOST>
ignoreregex =
EOF

sudo systemctl restart fail2ban

WireGuard VPN

For maximum security, I recommend configuring Forgejo access only through VPN:

# Install WireGuard
sudo apt install -y wireguard

# Generate server keys
sudo wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod 600 /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
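
Keys alone aren't enough; the server still needs an interface definition. A minimal sketch of /etc/wireguard/wg0.conf, assuming a 10.8.0.0/24 tunnel subnet and UDP port 51820 (both arbitrary choices for this example):

# /etc/wireguard/wg0.conf (server side)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <contents of /etc/wireguard/private.key>

[Peer]
# One block per client device
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32

Forward UDP 51820 on the router, bring the tunnel up with sudo wg-quick up wg0, and enable it at boot with sudo systemctl enable wg-quick@wg0. Once UFW only accepts Forgejo traffic from the 10.8.0.0/24 subnet, the instance disappears from the open internet.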

Automatic Backups

It's critically important to set up automatic backups:

#!/bin/bash
# backup-forgejo.sh

BACKUP_DIR="/backup/forgejo"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_NAME="forgejo_backup_$DATE"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Stop the whole stack for a consistent backup (Forgejo and PostgreSQL)
cd /mnt/nvme/forgejo
docker compose stop

# Create data archive
tar -czf "$BACKUP_DIR/$BACKUP_NAME.tar.gz" \
    --exclude='./forgejo/log' \
    ./forgejo ./postgres

# Start the stack again
docker compose start

# Encrypt backup
# Encrypt backup (--pinentry-mode loopback is needed for --passphrase-file with GnuPG >= 2.1)
gpg --cipher-algo AES256 --compress-algo 1 --s2k-mode 3 \
    --s2k-digest-algo SHA512 --s2k-count 65536 --force-mdc \
    --quiet --no-greeting --batch --yes \
    --pinentry-mode loopback \
    --passphrase-file /home/pi/.backup-passphrase \
    --symmetric --output "$BACKUP_DIR/$BACKUP_NAME.tar.gz.gpg" \
    "$BACKUP_DIR/$BACKUP_NAME.tar.gz"

# Remove unencrypted archive
rm "$BACKUP_DIR/$BACKUP_NAME.tar.gz"

# Send to cloud (Backblaze B2)
b2 sync "$BACKUP_DIR" b2://your-bucket-name/forgejo-backups/

# Clean old local backups (older than 7 days)
find "$BACKUP_DIR" -name "*.gpg" -mtime +7 -delete

echo "Backup completed: $BACKUP_NAME.tar.gz.gpg"
# Add to crontab
chmod +x backup-forgejo.sh
crontab -e
# Add: 0 2 * * * /home/pi/backup-forgejo.sh >> /var/log/forgejo-backup.log 2>&1
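
A backup is only as good as a tested restore. A quick sketch of the reverse path, using the same passphrase file and paths as the script above (the archive name is a placeholder):

# Decrypt and unpack a backup, then bring the stack back up
cd /mnt/nvme/forgejo
docker compose down

gpg --batch --yes --pinentry-mode loopback \
    --passphrase-file /home/pi/.backup-passphrase \
    --output restore.tar.gz --decrypt \
    /backup/forgejo/forgejo_backup_YYYYMMDD_HHMMSS.tar.gz.gpg

tar -xzf restore.tar.gz   # restores ./forgejo and ./postgres in place
docker compose up -d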

GitHub Migration

Forgejo has a built-in migration tool that makes it easy to migrate external repositories.

Creating GitHub Token

  1. Go to GitHub Settings → Developer settings → Personal access tokens
  2. Create new token with permissions: repo, read:user, user:email

Migration Process

Log into Forgejo. Click the plus sign (+) in the top right corner and select New Migration. Enter GitHub repository URL and your access token, then click Migrate Repository button.

For bulk migration, you can use this script:

#!/usr/bin/env python3
# mass-migrate.py

import requests
import os
import time
from urllib.parse import urljoin

GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')
GITHUB_USER = os.getenv('GITHUB_USER')
FORGEJO_URL = os.getenv('FORGEJO_URL')
FORGEJO_TOKEN = os.getenv('FORGEJO_TOKEN')

def get_github_repos():
    headers = {'Authorization': f'token {GITHUB_TOKEN}'}
    repos = []
    page = 1
    
    while True:
        url = f'https://api.github.com/user/repos?page={page}&per_page=100'
        response = requests.get(url, headers=headers)
        data = response.json()
        
        if not data:
            break
            
        repos.extend(data)
        page += 1
    
    return repos

def migrate_to_forgejo(repo):
    url = urljoin(FORGEJO_URL, '/api/v1/repos/migrate')
    headers = {'Authorization': f'token {FORGEJO_TOKEN}'}
    
    data = {
        'auth_token': GITHUB_TOKEN,
        'clone_addr': repo['clone_url'],
        'repo_name': repo['name'],
        'service': 'github',
        'wiki': True,
        'issues': True,
        'pull_requests': True,
        'releases': True,
        'milestones': True,
        'labels': True,
        'private': repo['private']
    }
    
    response = requests.post(url, json=data, headers=headers)
    return response.status_code == 201

def main():
    repos = get_github_repos()
    
    for repo in repos:
        print(f"Migrating {repo['name']}...")
        
        if migrate_to_forgejo(repo):
            print(f"✓ Successfully migrated {repo['name']}")
        else:
            print(f"✗ Failed to migrate {repo['name']}")
        
        time.sleep(2)  # Rate limiting

if __name__ == '__main__':
    main()

CI/CD with Forgejo Actions

Forgejo Actions is compatible with GitHub Actions, making pipeline migration easier:

# .forgejo/workflows/build.yml
name: Build and Test

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    
    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      
    - name: Setup Node.js
      uses: actions/setup-node@v4
      with:
        node-version: '18'
        cache: 'npm'
        
    - name: Install dependencies
      run: npm ci
      
    - name: Run tests
      run: npm test
      
    - name: Build project
      run: npm run build
      
    - name: Deploy to staging
      if: github.ref == 'refs/heads/develop'
      run: |
        echo "Deploying to staging..."
        # Deploy commands

Setting up Self-hosted Runners

For ARM-based CI/CD on your Raspberry Pi:

# Install Forgejo runner
wget https://code.forgejo.org/forgejo/runner/releases/download/v4.0.1/forgejo-runner-4.0.1-linux-arm64
chmod +x forgejo-runner-4.0.1-linux-arm64
sudo mv forgejo-runner-4.0.1-linux-arm64 /usr/local/bin/forgejo-runner

# Create runner directory
mkdir -p ~/forgejo-runner
cd ~/forgejo-runner

# Register runner with your Forgejo instance
forgejo-runner register \
  --instance https://forgejo.yourdomain.com \
  --token YOUR_RUNNER_TOKEN \
  --name "rpi5-runner" \
  --labels "ubuntu-latest:docker://node:18-bullseye,native:host"

# Create systemd service
sudo tee /etc/systemd/system/forgejo-runner.service << 'EOF'
[Unit]
Description=Forgejo Runner
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/forgejo-runner
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable forgejo-runner
sudo systemctl start forgejo-runner

Advanced CI/CD Pipeline Example

# .forgejo/workflows/advanced-build.yml
name: Advanced Build Pipeline

on:
  push:
    branches: [ main, develop ]
    tags: [ 'v*' ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: forgejo.yourdomain.com
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: native
    steps:
    - name: Checkout
      uses: actions/checkout@v4
      
    - name: Setup Node.js
      uses: actions/setup-node@v4
      with:
        node-version: '18'
        cache: 'npm'
        
    - name: Install dependencies
      run: npm ci
      
    - name: Run linting
      run: npm run lint
      
    - name: Run tests
      run: npm run test:coverage
      
    - name: Upload coverage
      uses: actions/upload-artifact@v4
      with:
        name: coverage-report
        path: coverage/

  build:
    needs: test
    runs-on: native
    outputs:
      image: ${{ steps.image.outputs.image }}
      digest: ${{ steps.build.outputs.digest }}
    steps:
    - name: Checkout
      uses: actions/checkout@v4
      
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
        tags: |
          type=ref,event=branch
          type=ref,event=pr
          type=semver,pattern={{version}}
          type=semver,pattern={{major}}.{{minor}}
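
    # Log in to the Forgejo container registry before pushing
    # (assumes a REGISTRY_TOKEN secret has been configured for this repository)
    - name: Log in to registry
      uses: docker/login-action@v3
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.REGISTRY_TOKEN }}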
          
    - name: Build and push
      id: build
      uses: docker/build-push-action@v5
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        platforms: linux/arm64

  deploy:
    needs: build
    runs-on: native
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
    - name: Deploy to production
      run: |
        echo "Deploying ${{ needs.build.outputs.image }}"
        # Deployment commands here

Financial Analysis

Cost Comparison (2 years):

GitHub Team (5 users):

  • $4/month × 5 users × 24 months = $480
  • GitHub Actions (additional): ~$200/year × 2 = $400
  • Total: ~$880

Self-hosted solution:

  • Raspberry Pi 5 (8GB): $80
  • Samsung 1TB SSD: $120
  • Case + cooling + cables: $50
  • Electricity (~10W average draw, 24/7, at $0.15/kWh): ~$13/year × 2 = $26
  • Total: ~$275

Savings: more than $600. Even more if you still live with your parents.


Additional benefits:

  • Full control over data
  • Unlimited private repositories
  • Unlimited CI/CD minutes
  • DevOps practices learning
  • Ability to host other services

Troubleshooting

Common Issues:

1. High CPU consumption:

# Check processes
docker exec forgejo top
# Optimize Git GC
docker exec forgejo git config --global gc.auto 256

2. SSH problems:

# Check SSH keys
ssh-keyscan -p 222 your-domain.com
# Add to known_hosts
ssh-keyscan -p 222 your-domain.com >> ~/.ssh/known_hosts

3. Slow loading:

# Clean logs
docker exec forgejo sh -c 'find /data/log -name "*.log" -mtime +7 -delete'
# Optimize database
docker exec forgejo-db vacuumdb -U forgejo -d forgejo -z

Conclusion

The transition to Forgejo gave me not only financial savings, but complete independence from corporate platforms. The main difference between Gitea and Forgejo now is not functionality, but project vision and governance. By choosing Forgejo, I support community-driven development and get:

  • Performance: Faster than GitHub thanks to local hosting
  • Security: Full control over data and infrastructure
  • Autonomy: Independence from external services
  • Flexibility: Ability to customize for any needs
  • Learning: Hands-on experience with Docker, nginx, monitoring

Self-hosting isn't about complexity, it's about freedom. If you value control over your data and want to break free from the Big Tech ecosystem, Forgejo on Raspberry Pi 5 is an excellent solution to start your journey toward technological independence.

Next steps:

  1. Set up federation when the feature becomes stable
  2. Integrate with Woodpecker CI for more powerful CI/CD
  3. Add Grafana monitoring dashboards (a simple health-check sketch below can tide you over)
  4. Configure automatic scaling as team grows
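
As a stopgap for the monitoring point above, Forgejo exposes a health endpoint that a cron job can poll. A small sketch, assuming the domain used throughout this guide:

#!/bin/bash
# forgejo-healthcheck.sh: log and exit non-zero if the instance stops answering
URL="https://forgejo.yourdomain.com/api/healthz"

if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
    echo "$(date '+%F %T') Forgejo health check failed" >> /home/pi/forgejo-health.log
    exit 1
fi

Run it from cron every few minutes (e.g. */5 * * * *) and hook it into whatever notification channel you already use.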

Remember: every self-hosted service is a step toward a decentralized internet and technological independence.

Commit. Push. Regret. | 4001