06 January 2026

How I Got Unlimited GitLab CI/CD Minutes for $200: Self-Hosted Runner on Beelink Mini S12 Pro

I've been using GitLab.com for my Android app projects, and the 400 free CI/CD minutes per month have been great, but they don't take long to run out. Turns out, Android builds are hungry beasts. A single pipeline run for my app takes about 8-10 minutes, which means I was burning through those 400 minutes in less than a week. GitLab wanted another $10 for every 1,000 additional minutes after that.

So I did the math: at my build frequency, I'd be looking at $30-50/month just for CI/CD. That's $360-600 per year. For a side project. Yeah, no thanks.

This got me thinking: what if I just ran my own GitLab Runner locally? I already had a Beelink mini S12 Pro sitting around (more on why this specific hardware is perfect in a minute), and I'd been wanting to try out Proxmox anyway. Spoiler alert: it works beautifully, and I've been running unlimited builds 24/7 for about two months now.

Here's how I set it all up, what worked, what didn't, and whether this approach makes sense for you.

Why Self-Host a GitLab Runner Anyway?

Before we get into the technical stuff, let me clarify what self-hosting actually means here. You're not running your own GitLab instance (that's a different rabbit hole). Your code still lives on GitLab.com, your pipelines still show up in the GitLab UI exactly like before, and everything works identically to cloud runners.

The only difference? Instead of GitLab spinning up a cloud VM to run your builds, your local hardware does the work. The runner polls GitLab.com for jobs, executes them locally, and reports results back. From GitLab's perspective, it just sees another runner executing jobs.

What you get:

  • Unlimited free CI/CD minutes (no more rationing builds)
  • Faster builds (no queue times, and local caching helps)
  • Full control over the build environment
  • Same GitLab.com experience (green checkmarks, artifacts, logs, everything)

What you need:

  • Hardware that can run 24/7 without destroying your electricity bill
  • A decent internet connection (upload speed matters for pushing artifacts)
  • Basic Linux knowledge (you're running commands in a terminal)

Why I Chose the Beelink Mini S12 Pro

Let's talk hardware because this is where most people get stuck. You need something that:

  1. Can run 24/7 without sounding like a jet engine or melting your wallet
  2. Has enough power to actually compile code (especially Android builds)
  3. Won't break the bank upfront

I went with the Beelink mini S12 Pro, and after two months of running builds non-stop, I'm convinced this is the sweet spot for self-hosted CI/CD.

The Specs That Matter

  • Intel N100 processor (4 cores, 3.4GHz boost)
  • 16GB DDR4 RAM (upgradeable)
  • 500GB NVMe SSD
  • Tiny footprint (12cm x 11cm x 4cm)
  • Silent operation (fanless or ultra-quiet, depending on model)

Why These Specs Work for GitLab Runners

The N100 processor is surprisingly capable. It's a newer Alder Lake-N chip that punches way above its weight for compilation tasks. My Android builds that took 8-10 minutes on GitLab's cloud runners now finish in 6-8 minutes locally. The 4 cores handle Docker-in-Docker builds without breaking a sweat.

16GB RAM is the minimum I'd recommend if you're doing Android builds. Docker containers need room to breathe, and Android Gradle builds are notoriously memory-hungry. With 16GB, I can run builds comfortably with headroom for other services.

The SSD matters more than you'd think. Pulling Docker images, caching Gradle dependencies, and writing build artifacts all benefit from fast local storage. The 500GB model gives plenty of room for Docker images and build caches without constant cleanup.

The Real Kicker: Power Consumption

This is where the Beelink S12 Pro shines for always-on workloads:

  • Idle power draw: 5-10W
  • Under build load: 15-25W
  • Annual electricity cost: $10-20/year (at $0.12/kWh)

Compare that to GitLab.com's pricing: if you're burning through 3,000-4,000 CI minutes per month, you're paying $30-40/month, or $360-480/year. The Beelink pays for itself in 6-7 months just on avoided CI costs, plus you have a capable mini PC you can use for other homelab projects.

Beelink mini S12 Pro on Amazon

Full disclosure: I'm recommending this specific hardware because it genuinely works well for this use case. If you buy through affiliate links in this post, I might earn a small commission at no extra cost to you. I only recommend stuff I actually use and would buy again.

Alternatives I Considered (and Why I Didn't Choose Them)

Raspberry Pi 4 (8GB): Too slow for Android builds. ARM architecture means no x86 Docker images. Pass.

Old laptop: Power consumption is terrible (50-80W idle), and you're stuck with whatever specs it has. Plus, laptop parts aren't designed for 24/7 operation.

Larger mini PC (Intel i5/i7): Overkill for CI/CD, higher power draw (30-50W idle), and costs $500+. Diminishing returns.

AWS/DigitalOcean VM: Monthly costs add up fast. A 2vCPU/4GB instance costs $20-40/month. You never own the hardware, and you're still paying forever.

The Beelink S12 Pro sits in the Goldilocks zone: powerful enough, efficient enough, cheap enough.

The Setup: Proxmox + LXC Container

I wanted to keep the setup clean and resource-efficient, so I went with Proxmox as the host OS and ran the GitLab Runner inside an LXC container instead of a full VM.

Why LXC Instead of a Full VM?

LXC containers are lightweight:

  • Uses 100-200MB RAM idle vs 500MB+ for a VM
  • Boots in seconds
  • Near-native performance (no hypervisor overhead)
  • Can still run Docker inside with proper configuration
  • Good isolation from the Proxmox host

For a single-purpose service like a GitLab Runner, LXC is perfect. You get the benefits of containerization without the bloat of a full virtual machine.

Step-by-Step Setup Guide

This is going to be fairly detailed because I spent a few hours troubleshooting AppArmor and Docker permission issues that aren't well-documented anywhere. If you follow these steps exactly, you'll save yourself that headache.

1. Create the LXC Container in Proxmox

Download Ubuntu 22.04 LTS template:

  1. In Proxmox web UI, go to your storage (usually local)
  2. Click CT Templates, then click the Templates button
  3. Search for "ubuntu-22.04" and download

Create the container:

  1. Click Create CT button (top right)
  2. General tab:
    • Hostname: gitlab-runner
    • Set a strong root password
    • Important: Uncheck "Unprivileged container" (Docker needs privileged mode)
  3. Template tab: Select Ubuntu 22.04
  4. Disks tab: Set disk size to 20GB (sufficient for runner + Docker images)
  5. CPU tab: Allocate 2 cores (adjust based on available resources)
  6. Memory tab:
    • Memory: 2048 MB (2GB minimum)
    • Swap: 512 MB
  7. Network tab: Use default bridge (vmbr0), DHCP or static IP

Don't start the container yet! We need to configure it for Docker support first.
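If you prefer the Proxmox shell to all the clicking, the same container can be created with a single pct command. This is only a sketch: the container ID, template filename, and storage names are assumptions, so adjust them to whatever your Proxmox install actually uses.

# Shell equivalent of the wizard above (values are examples)
pct create 100 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
  --hostname gitlab-runner \
  --cores 2 --memory 2048 --swap 512 \
  --rootfs local-lvm:20 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 0 \
  --password 'choose-a-strong-password'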

2. Configure Container for Docker Support

This is the part that tripped me up initially. LXC containers need specific settings to run Docker properly, or you'll get cryptic AppArmor errors.

In the Proxmox shell, run:

# Replace 100 with your actual container ID
pct set 100 -features nesting=1,keyctl=1

Then edit the container config:

nano /etc/pve/lxc/100.conf

Add these lines at the end:

lxc.apparmor.allow_incomplete: 1
lxc.mount.auto: proc:rw sys:rw
lxc.cap.drop:
lxc.apparmor.profile: unconfined

Save and exit (Ctrl+X, Y, Enter).

Now start the container:

pct start 100

You might see an AppArmor warning when starting. This is normal and can be ignored.

What these settings do:

  • nesting=1 allows running containers inside containers
  • keyctl=1 enables keyring support that Docker needs
  • lxc.apparmor.profile: unconfined runs without AppArmor restrictions
  • The other lines give Docker the filesystem access it needs

Without these, Docker will fail with permission errors or refuse to start containers.

3. Install Docker Inside the Container

Access the container console:

  1. In Proxmox UI, select your container
  2. Click Console
  3. Login as root with your password

Update the system:

apt update && apt upgrade -y

Install basic tools:

# lsb-release is used by the Docker repository command below
apt install -y curl wget git nano lsb-release

Install Docker:

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Set up Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker
apt update
apt install -y docker-ce docker-ce-cli containerd.io

# Start and enable Docker
systemctl start docker
systemctl enable docker

# Verify Docker works
docker run hello-world

If you see "Hello from Docker!", you're golden. If you get permission errors, double-check the LXC config from step 2.

4. Install GitLab Runner

# Download the official GitLab Runner repository script
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | bash

# Install GitLab Runner
apt install -y gitlab-runner

# Verify installation
gitlab-runner --version

5. Register the Runner with Your GitLab Project

Get your registration token from GitLab.com:

  1. Go to your project → Settings → CI/CD → Runners
  2. Click New project runner
  3. Configure settings (tags, whether to run untagged jobs)
  4. Click Create runner
  5. Copy the registration token (starts with glrt-...)

Register the runner:

gitlab-runner register

Answer the prompts:

  • GitLab URL: https://gitlab.com (just press Enter)
  • Registration token: paste the glrt-... token
  • Description: Beelink S12 Pro LXC (or whatever you want)
  • Tags: beelink,android (optional, comma-separated)
  • Executor: docker (important!)
  • Default Docker image: alpine:latest
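If you'd rather not answer prompts interactively, the same registration can be done in one go. A sketch, assuming the newer glrt- authentication token (which is what the steps above produce); the token and description are placeholders:

# Non-interactive registration (token value is a placeholder)
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com" \
  --token "glrt-XXXXXXXXXXXXXXXXXXXX" \
  --executor "docker" \
  --docker-image "alpine:latest" \
  --description "Beelink S12 Pro LXC"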

6. Configure Runner for Android Builds

Edit the runner config:

nano /etc/gitlab-runner/config.toml

Find your runner section and adjust:

[[runners]]
  name = "Beelink S12 Pro LXC"
  url = "https://gitlab.com"
  token = "YOUR_RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true  # Required for Docker-in-Docker
    disable_cache = false
    volumes = ["/cache"]
    pull_policy = "if-not-present"

The privileged = true setting is crucial for Android builds that use Docker-in-Docker (like custom build containers).

Restart the runner:

gitlab-runner restart

7. Verify Everything Works

Check runner status:

gitlab-runner status

Should show "Service is running."

Check in GitLab.com:

  • Go to Settings → CI/CD → Runners
  • Your runner should have a green circle (online)

Test with a real build:

  1. Push a commit to your GitLab project
  2. Watch the pipeline start in GitLab UI
  3. Click on a job to see logs streaming from your Beelink
  4. Check that artifacts upload properly

If everything works, congratulations! You now have unlimited CI/CD minutes.

How It Actually Works (the Magic Explained)

The runner constantly polls GitLab.com (every few seconds) asking "got any jobs for me?" When a job arrives:

  1. Runner pulls the specified Docker image from your GitLab registry (or Docker Hub)
  2. Starts a container using that image
  3. Clones your code into the container
  4. Runs the commands from your .gitlab-ci.yml
  5. Streams logs back to GitLab.com in real-time
  6. Uploads artifacts (like APK files) to GitLab.com
  7. Reports job status (passed/failed) back

From GitLab.com's perspective, everything looks identical to cloud runners. Green checkmarks on commits, real-time logs, downloadable artifacts, everything works exactly the same.

Your .gitlab-ci.yml requires zero changes. Whatever pipeline you had running on cloud runners will work as-is on your self-hosted runner.
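For reference, here's roughly what a pipeline that lands on this runner looks like. This is just a sketch, not my actual pipeline: the image, job names, and Gradle commands are placeholders, and the tags entry only matters if you registered the runner with tags and left "run untagged jobs" disabled.

# Minimal .gitlab-ci.yml sketch; jobs whose tags match the runner get picked up by it
stages:
  - test
  - build

unit-tests:
  stage: test
  image: openjdk:11-jdk
  tags: [beelink, android]
  script:
    - ./gradlew test

assemble-debug:
  stage: build
  image: openjdk:11-jdk
  tags: [beelink, android]
  script:
    - ./gradlew assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/debug/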

Real-World Performance and Costs

I've been running this setup for about two months now, building an Android app with fairly typical CI/CD workflows. Here's what I've learned:

Build Performance

My typical Android pipeline:

  • Checkout code
  • Run unit tests
  • Lint checks
  • Build debug APK
  • Build release APK (signed)
  • Upload artifacts

Cloud runner time: 8-10 minutes
Beelink S12 Pro time: 6-8 minutes

The local runner is actually faster because:

  • No queue time (it's dedicated hardware)
  • Gradle dependencies cache locally between builds (see the cache snippet below)
  • Docker images stay cached
  • No network latency for artifact uploads (only final upload to GitLab)
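To actually get that Gradle caching, the job needs a cache declaration; with the Docker executor the runner keeps it in the /cache volume configured earlier. A minimal sketch, with paths that assume a typical Android project layout:

# Gradle cache snippet for .gitlab-ci.yml (paths are assumptions; adjust to your project)
variables:
  GRADLE_USER_HOME: "$CI_PROJECT_DIR/.gradle"

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .gradle/wrapper
    - .gradle/caches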

Power and Cost Analysis

Hardware cost: ~$200 for the Beelink mini S12 Pro

Electricity cost:

  • Idle: 7W average
  • Under load: 20W average
  • Assume 8 hours/day of active building: (7W × 16h + 20W × 8h) / 24 ≈ 11.3W average
  • Annual electricity: 11.3W × 24h × 365 days ≈ 99 kWh/year
  • At $0.12/kWh: about $12/year

GitLab.com cost (my previous usage):

  • ~3,500 CI minutes/month (so ~3,100 paid after the 400 free)
  • 3,100 paid minutes = $31/month
  • Annual cost: $372/year

Payback period: $200 / ($372 - $12 per year) ≈ 0.56 years, or roughly 6.7 months

After 6-7 months, the hardware is paid off, and you're saving $30/month indefinitely. Plus, you have a capable mini PC you can repurpose for other homelab projects (Pi-hole, Home Assistant, Plex, whatever).

Beelink mini S12 Pro on Amazon

Honest Pros and Cons

Pros:

  • Unlimited builds (seriously, go nuts)
  • Faster build times (no queue, local caching)
  • One-time cost vs recurring monthly fees
  • Hardware you own and can repurpose
  • Great learning experience with Proxmox/LXC/Docker
  • Low power consumption (guilt-free 24/7 operation)

Cons:

  • Upfront hardware cost (~$200)
  • Requires basic Linux/Docker knowledge
  • Your internet goes down = no builds
  • You're responsible for maintenance/updates
  • Single point of failure (though you can still fall back to cloud runners)
  • Takes a few hours to set up initially

Who this makes sense for:

  • Developers running frequent CI/CD jobs (3,000+ minutes/month)
  • Homelab enthusiasts who want another project
  • Teams/small companies tired of cloud runner costs
  • Anyone who values learning infrastructure skills

Who should stick with cloud runners:

  • Infrequent builders (400 free minutes is plenty)
  • People who don't want to manage infrastructure
  • Teams needing enterprise SLA guarantees
  • Projects requiring massive parallel builds (you'd need beefier hardware)

What I'd Do Differently

Looking back after two months, here's what I'd change:

Start with 4GB RAM instead of 2GB. My Android builds occasionally hit memory limits with 2GB, especially when Gradle decides to be aggressive with parallel tasks. Upgrading to 4GB eliminated all memory pressure.
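If you hit the same memory pressure, the fix is one command on the Proxmox host. The container ID here is an assumption:

# Bump the container's memory and swap (run on the Proxmox host; 100 is the container ID)
pct set 100 -memory 4096 -swap 1024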

Use a static IP from the start. I started with DHCP, and while it worked fine, having a static IP makes it easier to SSH into the container and check logs remotely.
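Switching later is easy enough, though; something like this on the Proxmox host does it (the addresses are examples for my network, not yours):

# Give the container a static IP instead of DHCP
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.1.50/24,gw=192.168.1.1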

Set up automatic backups immediately. Proxmox makes this easy with vzdump, and I really should have configured scheduled backups from day one. Rebuilding the runner config would be annoying if the container got corrupted somehow.
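For a one-off backup while you get scheduled backups sorted (Datacenter → Backup in the Proxmox UI), something like this works; the storage name is an assumption:

# Snapshot-mode backup of the container to local storage
vzdump 100 --storage local --mode snapshot --compress zstd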

Test with a small project first. I jumped straight to my main Android project, which has a complex multi-stage pipeline. Starting with a simpler project would have made troubleshooting easier.

Is This Worth It for You?

If you're burning through GitLab's free CI minutes and looking at monthly costs of $20+, absolutely. The Beelink mini S12 Pro setup pays for itself in 6-7 months, and after that you're saving $30-40/month indefinitely.

Beyond the cost savings, there's real value in owning your build infrastructure. You learn Proxmox, LXC, Docker, and CI/CD architecture deeply. These are valuable skills that transfer to professional DevOps work.

And honestly? It just feels good to run git push and know that your own hardware is crunching through those builds, not some mysterious cloud VM you're renting by the minute.

If you're on the fence, consider this: the Beelink S12 Pro is useful beyond just GitLab Runners. Once you have Proxmox set up, you can run:

  • Pi-hole for network-wide ad blocking
  • Home Assistant for smart home automation
  • Plex or Jellyfin for media streaming
  • Development/staging environments
  • Whatever other homelab projects you dream up

The initial investment unlocks all of that, not just CI/CD.

Beelink mini S12 Pro on Amazon

Final Thoughts

This setup has been running rock-solid for two months. The Beelink S12 Pro hasn't skipped a beat, Proxmox is stable, and the LXC container just works. I haven't touched the configuration since initial setup except to occasionally run apt update inside the container.

If you're doing frequent Android (or any) builds on GitLab.com and the CI minute limits are cramping your style, I genuinely recommend trying this approach. The upfront effort is a few hours, the upfront cost is ~$200, and the ongoing cost is basically nothing.

Plus, you'll learn a ton about infrastructure along the way. That's worth something on its own.

If you end up setting this up, drop a comment and let me know how it goes. I'd love to hear what works (or doesn't work) for your specific use case.

Questions or stuck on something? Drop them in the comments and I'll do my best to help troubleshoot.


Affiliate Disclosure: This post contains affiliate links to the Beelink mini S12 Pro. If you purchase through these links, I may earn a small commission at no extra cost to you. I only recommend hardware I personally use and would buy again. Your support helps me keep writing these guides.

11 June 2025

Kotlin script running on GitHub Actions

I wanted to run a script regularly and didn't really want to run it on my local machine each time. It turns out you can run scripts on a schedule with GitHub Actions, and what's even better, it's super simple to set up. Of course there are limits to how many minutes you can run per GitHub's billing page; currently the free tier is 2,000 minutes per month.

First I created a fresh repository with one single file in it. I named that file Test.main.kts and added the following contents:

val helloName = "Bob"

println("Hello $helloName from the kotlin script")

I committed that to my repo and then created a GitHub Action. The workflow file can be named whatever you want; mine is named main.yml (it lives in .github/workflows/), and here are the contents:

name: Just a test

# configure manual trigger
on:
  workflow_dispatch:

jobs:
  just-a-test:
    name: Run a Kotlin script 
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@v2
      - name: Run Kotlin script
        run: kotlinc -script ./Test.main.kts

This checks out your repo onto an Ubuntu runner and runs your script. Super simple, and it will run whenever you manually trigger it from the GitHub web UI. You can also put it on a schedule if needed, as shown below.
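For example, to also run it every morning you could add a cron trigger alongside the manual one (the schedule itself is just an example):

on:
  workflow_dispatch:
  schedule:
    - cron: "0 6 * * *"   # every day at 06:00 UTC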

The slight complication comes when you want imports; this is where you need to balance the local setup against what runs on GitHub. I've been using IntelliJ, and IntelliJ doesn't seem to play super well with scripts yet, though maybe I'm missing something. You can add dependencies to the script file using @file:DependsOn() (see the example below), but how you get those recognised in a local project is still a bit confusing to me. It might be that IntelliJ doesn't really support this lightweight, script-based style of project yet.
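For what it's worth, here's a small sketch of what a script with a dependency looks like; the library and version are just an example, not something the original script needs:

// Example Test.main.kts with a dependency resolved from Maven Central at run time
@file:DependsOn("com.squareup.okhttp3:okhttp:4.12.0")

import okhttp3.OkHttpClient
import okhttp3.Request

val client = OkHttpClient()
val request = Request.Builder().url("https://example.com").build()
client.newCall(request).execute().use { response ->
    println("Status: ${response.code}")
}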

01 March 2023

GitLab pipeline - Container registry configuration

This one has taken me a few hours and is pretty stupid. If you try running your GitLab pipeline and you get this error:

$ docker pull --quiet $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG || true
invalid reference format
$ docker build --cache-from $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
invalid argument ":appname" for "-t, --tag" flag: invalid reference format

Then I believe the problem is your container registry. This is a random setting you need to enable, buried deep in the GitLab settings.

Go to:

Settings -> General -> Visibility, project features, permissions

The random setting you're looking for is:

Container registry - Every project can have its own space to store its Docker images


https://docs.gitlab.com/ee/user/packages/container_registry/index.html

https://stackoverflow.com/a/57729959


22 January 2023

AWS S3 API call using babbel's okhttp-aws-signer to check for incomplete multipart uploads

Sometimes AWS is hard work!

For a long time I've had some photos backed up in an S3 Glacier bucket. Glacier is long-term storage: very cheap, but slow to access, meaning you shouldn't rely on getting regular access to the files. In my case they're for worst-case scenarios only.

The problem is that every month my bill looks a bit strange. I have a Glacier entry, and I also have an S3 entry. We aren't talking much money here, just a few pennies, but it bothers me: there shouldn't really be anything under S3, it should all be under Glacier. So I gave every bucket a tag so I could see which bucket was the culprit. Tags are really useful because I can group spending by tag in the AWS Cost Explorer, so this should have identified what was going on. Unfortunately, no such luck: the S3 spending always shows under "No tag key". After what was probably longer than I should admit, curiosity got the better of me and I emailed support. Support came back saying this is probably multipart uploads that have failed.

The plot thickens! I can't confirm this is indeed the cause yet, but I learnt a few things along the way, so here are some general AWS related findings.

Incomplete Multipart Uploads

There's been a fair amount written about this, and if you're reading this blog you probably get it already, so I'll be brief. A large file can't be sent across the internet in one chunk, so it gets broken down and sent bit by bit. What support is implying is that I started uploading files, some parts succeeded and some failed, and that left incomplete chunks sitting in my S3 account.

AWS's solution

They have written a blog post about this, but it's not very up to date, and personally I couldn't get Storage Lens to show incomplete multipart uploads.

https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/

Listing incomplete multipart uploads

It's absurd that this isn't simple from the console. However, I really wanted to confirm whether I actually had any incomplete MPUs before applying a rule that deletes them. So I set about dusting off my old REST API skills to see if I could write a script that lists incomplete MPUs. I discovered there's an API method for this:

https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html

I really wanted to use Kotlin for this, but I wasn't set up to run standalone Kotlin apps, so it was faster to make this an Android "app". I used an awesome library, babbel's okhttp-aws-signer, to sort out the AWS Signature Version 4 stuff, as I know how frustrating that can be:

https://github.com/babbel/okhttp-aws-signer

This library is fantastic. So much easier than trying to work out the AWS Signature Version 4 nonsense yourself.

Here is the script I used to generate and make an AWS S3 REST API request:



val url = "http://" + bucketname + "." + serviceName + "." + region + ".amazonaws.com/?uploads=1"
val dateused = SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'", Locale.US).format(Date())

val signer = OkHttpAwsV4Signer(region, serviceName)

val request = Request.Builder()
    .url(url)
    .build()

val newRequest = request.newBuilder()
    .addHeader("host", request.url.host)
    .addHeader("x-amz-date", dateused)
    .addHeader("x-amz-content-sha256", "".hash())
    .build()

val signed = signer.sign(newRequest, accessKeyId, secretAccessKey)


You'll note that I had to use ?uploads=1 to get the babbel library to work: it requires query params to come in pairs, and you get an NPE if they don't. You'll also notice I had to add a (new to me) header, x-amz-content-sha256, which is just the hash of an empty string when the body is empty, and since this is a GET request, of course it is.

You'll need your access key and your access key secret which you can generate in the security credentials area of your account.

So after all that, and quite a lot of work, I was able to prove that there were in fact a few incomplete multipart uploads in my S3 account. I've created a lifecycle policy to delete them; let's see if that works.
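For anyone wanting to do the same, the rule can also be applied from the AWS CLI rather than the console. A sketch, with a placeholder bucket name and a fairly arbitrary seven-day window:

# Abort incomplete multipart uploads after 7 days (bucket name is a placeholder)
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-mpu",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }]
  }'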

I really recommend babbel's okhttp-aws-signer, as it makes things like this quite simple and helps you answer a few AWS questions without too much drama.

Hope this has helped you with something AWS related!












19 December 2022

Android docker compose image with Gitlab for CI/CD

This was a right pain, and I do love to blog about things that cause me trouble and strife! I created a new GitLab project and noticed that you can create from a template. How exciting! So I selected Android and was delighted to spot that a Dockerfile and a yml file had been generated for CI/CD, so it integrates nicely with GitLab pipelines. Amazing! I can run my unit tests and do a build on every commit; this is fantastic.

GitLab pipelines are a great little tool that builds and runs your project and can be configured to do all sorts of clever things. What's more, there's no hardware config involved: it uses Docker, so you just commit the Dockerfile and let some hardware somewhere in the world build it for you. So simple.

Naturally this builds nicely from the template but the first thing I did with this file was upgrade the compile and target versions in Android's gradle file to the latest version so my app would be compatible with the Google Play Store.

Fail!

Warning: License for package Android SDK Platform 33 not accepted.

FAILURE: Build failed with an exception.

What went wrong:

A problem occurred configuring project ':app'.

Failed to install the following Android SDK packages as some licences have not been accepted.

platforms;android-33 Android SDK Platform 33

To build this project, accept the SDK license agreements and install the missing components using the Android Studio SDK 
Manager.

What's happening here is that the GitLab pipeline is creating a Docker image from our new Dockerfile and running the Android build using that image. However, the image is out of date, so when the build looks for Android SDK 33 it can't find it and tries to install it, which fails because there's no accepted license for that SDK. Not a big deal; I've played with sdkmanager before, so we can easily update this Docker image.

Looking at our newly generated Dockerfile, I can see it's badly out of date. It's using the old android-sdk tool, which has been replaced by sdkmanager, along with SDK 28 and some old build tools. It's also running on JDK 8, which I'm sure we can improve on.

I won't go through every step I took as this would be a really long post, but I'll post some of the things I learnt along the way.

Firstly, playing with Docker was really fun, but it was also quite frustrating. Constantly building and re-building took a while. What I found faster was to install Docker Desktop and build the image locally. That also means you can run the image and open a bash shell inside the container to debug your environment.

I also learnt that a Dockerfile is not Docker Compose. A Dockerfile is just that: the file you use to create an image with the docker build command. Docker Compose is the yml or yaml file, which lets you define multi-container applications; think of it a bit like config for the containers that run your images.

Another thing I learnt is that the Android sdkmanager is fussy, and you need to be quite precise with paths and directories or the toys and the pram part ways. The license problem mentioned above ended up being really frustrating. What I eventually found was that I had to make sure the SDK wasn't in the root folder, and just giving /sdk write permissions wasn't enough: I had to explicitly grant the /sdk/licenses folder write permissions. After many, many runs that straightened it all out.

Below is my new dockerfile for Gitlab. You'll notice the following big differences from the stock file I got from Gitlab:

  • It now uses JDK11
  • It uses SDK 33
  • It uses sdkmanager 
  • It updates sdkmanager as it goes
  • It should (hopefully) be able to install the latest SDK and accept the license at build time
I'm wondering if I should pass the SDK and Android build tools versions in from the yaml file (there's a sketch of that after the Dockerfile). I'm also wondering about installing Gradle in the Dockerfile, as downloading it takes a few seconds on each build.


# This Dockerfile creates a static build image for CI

FROM openjdk:11-jdk

# Just matched `app/build.gradle`
ENV ANDROID_COMPILE_SDK "33"
# Just matched `app/build.gradle`
ENV ANDROID_BUILD_TOOLS "30.0.0"
# Version from https://developer.android.com/studio/releases/sdk-tools
ENV ANDROID_SDK_TOOLS "8512546_latest"
ENV ANDROID_HOME /opt/sdk
ENV ANDROID_SDK_PATH /opt/sdk

# install OS packages
RUN apt-get --quiet update --yes
RUN apt-get --quiet install --yes wget apt-utils tar unzip lib32stdc++6 lib32z1 build-essential ruby ruby-dev

# We use this for xxd hex->binary
RUN apt-get --quiet install --yes vim-common
# create sdk directory, install Android SDK
# https://dl.google.com/android/repository/commandlinetools-linux-8512546_latest.zip
RUN mkdir -p /opt/sdk
RUN wget --quiet --output-document=android-sdk.zip https://dl.google.com/android/repository/commandlinetools-linux-${ANDROID_SDK_TOOLS}.zip && \
mv android-sdk.zip /opt/sdk && \
cd /opt/sdk/ && \
unzip android-sdk.zip

#Sort out the mess - https://stackoverflow.com/a/65262939
RUN mkdir /opt/sdk/cmdline-tools/tools && \
mv /opt/sdk/cmdline-tools/bin /opt/sdk/cmdline-tools/tools/ && \
mv /opt/sdk/cmdline-tools/lib /opt/sdk/cmdline-tools/tools/ && \
mv /opt/sdk/cmdline-tools/NOTICE.txt /opt/sdk/cmdline-tools/tools/ && \
mv /opt/sdk/cmdline-tools/source.properties /opt/sdk/cmdline-tools/tools/ && \
export PATH="${PATH}:/opt/sdk/cmdline-tools/tools/bin:${ANDROID_HOME}"

RUN chmod -R 777 /opt/sdk

RUN /opt/sdk/cmdline-tools/tools/bin/sdkmanager --licenses
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager --update
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "platforms;android-${ANDROID_COMPILE_SDK}"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "platform-tools"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "build-tools;${ANDROID_BUILD_TOOLS}"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "extras;m2repository;com;android;support;constraint;constraint-layout;1.0.2"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "extras;android;m2repository"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "extras;google;google_play_services"
RUN echo y | /opt/sdk/cmdline-tools/tools/bin/sdkmanager "patcher;v4" "tools"
RUN chmod -R 777 /opt/sdk
RUN chmod -R 777 /opt/sdk/licenses
RUN yes | /opt/sdk/cmdline-tools/tools/bin/sdkmanager --licenses
RUN chmod -R 777 /opt/sdk

# install FastLane
COPY Gemfile.lock .
COPY Gemfile .
RUN gem install bundler -v 1.16.6
RUN bundle install
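On the earlier question of passing the versions in from the yaml file, here's a sketch of how that could look with build args; the image tag and variable wiring are assumptions, not something I've battle-tested:

# In the Dockerfile, replace the hard-coded ENV lines with ARGs that have defaults
ARG ANDROID_COMPILE_SDK=33
ARG ANDROID_BUILD_TOOLS=30.0.0
ENV ANDROID_COMPILE_SDK=${ANDROID_COMPILE_SDK}
ENV ANDROID_BUILD_TOOLS=${ANDROID_BUILD_TOOLS}

Then pass them in when the image is built:

docker build --build-arg ANDROID_COMPILE_SDK=33 --build-arg ANDROID_BUILD_TOOLS=30.0.0 -t android-ci-image .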


Anyway, I really hope this post is of use. I've certainly enjoyed learning about Docker.











05 April 2022

Git on Windows with Putty

This is something that's super simple on Linux, but every couple of years I have to set up a new Windows machine with Git and PuTTY (Pageant, to be exact). Every time I do it I seem to forget one of the steps and spend a few hours trying to remember what I've missed. So here is the note to myself so I can remember how to do it.

Components to install:

  1. PuTTY, including of course PuTTYgen and Pageant

  2. Git Bash


1) Generate an SSH key

I won't go into details here as there are a million tutorials already on the internet, but you want to use PuTTYgen to create a new private/public key pair. You can save this as a .ppk file on your local machine.


2) Upload the public key to GitLab / Github

Copy the public key from inside Puttygen to GitLab or GitHub. You'll need to find the page on your account where you can add an SSH key and paste it in there. Be sure this is the public key and not the private one.


3) Run pageant 

Run pageant and it should appear in your system tray. Now open it and add the .ppk to your keys.


4) Add a GIT_SSH environment variable

Using this for advice:

https://stackoverflow.com/a/43313491

Create an environment variable that points to plink. This is probably the step I forget the most.
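For example, with a default PuTTY install it's something like this from a command prompt (the path is an assumption; point it at wherever plink.exe actually lives, and open a new terminal afterwards so it picks up the variable):

setx GIT_SSH "C:\Program Files\PuTTY\plink.exe"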

5) Configure GIT

You now need to ensure the correct URL is added to your git repo. A few times I've used the HTTP one, or the wrong URL, or something. Carefully copy the SSH URL (git@gitlab.com...) from GitLab or GitHub and add it to your git project as the remote repo URL. You can check you've done this correctly by typing git remote -v.
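For example (the repository path is a placeholder):

git remote set-url origin git@gitlab.com:yourusername/yourproject.git
git remote -v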


That's it. You should definitely restart Git Bash, and restarting your PC wouldn't hurt either; just make sure you start Pageant again. Another thing I forget is how to get it to run on startup. Oh, it's a complicated life!

24 January 2022

Publishing Android Jacoco test results to SonarQube

This is something I really thought would be pretty straightforward: I wanted my TeamCity build server to push my unit test coverage to SonarQube.

SonarQube is a really cool application that shows you all sorts of really interesting insights and problems with your code. Inefficiencies, security problems and tech debt are all highlighted by SonarQube in a nicely presented dashboard. It works with all sorts of languages but of course with Android we're focusing on Java and Kotlin.

I'd managed to get SonarQube setup and the basic code details pushed over fairly easily, but I couldn't get the test coverage over.

There are of course other ways around, I could have installed a better plugin on Team City which would have helped I'm sure, or I could have installed a Sonar Gradle plugin, but I didn't want to do that.

The trick ended up being to get XML test coverage. Once I had that sorted, I could push the XML report to Sonar and it started showing my test coverage.

Nothing I did in the standard build.gradle file seemed to work; the standard JaCoCo plugin ignored my pleas for XML coverage reports. Eventually I found a really great little script that helped me get there.

https://medium.com/wandera-engineering/android-kotlin-code-coverage-with-jacoco-sonar-and-gradle-plugin-6-x-3933ed503a6e

Big thanks go to the author of this piece for figuring this out.
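For reference, once the XML report exists, the piece that ties it to SonarQube is a single analysis property. The report path below is an assumption; point it at wherever your JaCoCo task actually writes the XML:

# In sonar-project.properties (or passed as a -D flag to the Sonar scanner)
sonar.coverage.jacoco.xmlReportPaths=app/build/reports/jacoco/jacocoTestReport/jacocoTestReport.xml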