A dive into the trinity

A joke that originated from my roommate Andrew goes something like this:

nixos trinitarianism

Naturally, as with all things cursed, seeing that compelled me to try it. As with anything new, it seemed scary at first - a massive departure from the typical package management paradigm. However, having used it on-and-off for a couple weeks or so, I can confidently say there's a lot to like about it!

The motivation

So why NixOS over all the other distros out there? NixOS definitely pales in comparison to the mainstay distros (Debian, Ubuntu, RHEL, etc) in terms of official support from software vendors.

The big motivation for NixOS is avoiding DLL hell. You think Linux doesn't suffer from DLL hell? Think again! Think of the multiple times you've seen apt or pacman fail, leaving you with a system full of garbage in your /usr/bin! Think of the multitude of precompiled Linux binaries that will never run on your machine because of a different libc! Think of the number of library packages you installed for schoolwork once and never touched again! That's what Nix is trying to avoid, in the big picture.

Of course, there have been other efforts to prevent this - most notably, Fedora's Silverblue aims to do something similar, with an immutable root and Flatpaks to manage a package's dependencies. However, NixOS's approach differs from Silverblue's significantly.

NixOS attempts to avoid this DLL (or rather, package) hell by taking inspiration from the world of functional programming - the seminal paper that describes Nix as a whole lays out the functional Nix language that NixOS uses to determine package dependencies and settings.

The DevOps and sysadmin people will be immediately drawn to the reproducibility that NixOS provides - the "godfile" configuration.nix is all that is needed to re-create an OS (sans any user data, obviously), and a typical NixOS development environment strives to be completely reproducible - the same source code should yield the same binary.
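To give a flavor of what that "godfile" looks like, here's a minimal configuration.nix sketch. The package choices and options below are purely illustrative assumptions on my part, not a complete or real system config:

```nix
{ config, pkgs, ... }:

{
  # System packages are declared here, not installed ad-hoc;
  # rebuilding from this file reproduces the same package set.
  environment.systemPackages = with pkgs; [
    git
    vim
  ];

  # Services are options too, instead of hand-edited files in /etc.
  services.openssh.enable = true;
}
```

Everything from the bootloader to user accounts can be declared this way, and `nixos-rebuild switch` realizes the whole thing atomically.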

Great, how do I get started?

There is no substitute for hands-on experience! Trust me, unlike other distros, this isn't something that you can learn by osmosis! It's very much equivalent to learning your first functional language!

You'd want to get the NixOS ISO and write it to a bootable USB, then follow the instructions to install it. These instructions are very sparse, so don't be afraid to Google!

Some installation notes and stumbling blocks:

  1. The live ISO automatically assumes DHCP. While this is to be expected, it's a bit of a stumbling block for people without DHCP (ahem, ACM UMN server closet). To get around this, we first give the live system the static IP configuration:

    ifconfig <dev name> <static ip> netmask <netmask>
    route add default gw <gateway ip>
    echo "nameserver <dns ip>" >> /etc/resolv.conf

    Then, in configuration.nix, we need to specify the same parameters.

        networking.hostName = "hostname";
        networking.useDHCP = false;
        networking.defaultGateway = {
            address = "";
            interface = "eth0";
        };
        networking.interfaces.eth0.ipv4.addresses = [{
            address = "";
            prefixLength = 24;
        }];
        networking.nameservers = [ "" "" ];
  2. Early KMS - with early KMS, the display driver and KMS are initialized during the initramfs stage, which has some benefits in terms of laptop power savings and DRI2, plus some alleged kernel-space power savings. To enable early KMS on a NixOS configuration, edit configuration.nix:

    boot.initrd.availableKernelModules = [ "i915" ];
    boot.initrd.kernelModules = [ "i915" ];

    Note that i915 is the Intel kernel module! Run lsmod to investigate which modules your hardware actually needs rather than blindly using i915!

Even more to talk about

There is a lot to the Nix ecosystem, as you can see from the trinity at the top of the post. We've only scratched the surface of NixOS, and NixOS itself is just one piece of the wider Nix ecosystem! I really do encourage you to try it out on your own, and do stay tuned for my NixOS updates!

Til next time, Shaun.

Sysadmining is fun!


A big part of being a sysadmin is documenting how your systems work. I've never been much of a fan of the "oh, I just know how to do it in my head" mentality that most sysadmins have, but I don't blame them - I do it too, sometimes.

I just learned that a "rite" of being the sysadmin of ACM is that I get to pave over everything on every system and do things my way - being the "benevolent dictator" for the year I'm sysadmin. I don't quite agree with that, and I intend for the next sysadmin to know the ins and outs of ACM UMN's server closet.


Peg DHCP (RFC 2322) is a joke RFC describing a hilariously impractical way to hand out IP addresses using clothespins. It also works surprisingly well in describing how ACM UMN's IP block is set up. Yep, no DHCP, no dynamic allocation of IP addresses.

The spreadsheet detailing which IP is assigned to whom is hilariously outdated, and the seemingly agreed-upon way of tracking IPs was "find an IP in our range; if it doesn't respond to ping, assign it." I've finally created a new spreadsheet detailing which IPs are in use, but it seems like everyone else is as clueless as me about what some IPs are being used for...
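For the curious, the "ping scan" convention above can be sketched in a few lines of Python. The 10.99.99.0/30 slice and the helper names here are my own illustrative choices, not ACM's actual tooling, and the `-W` timeout flag assumes Linux iputils ping:

```python
import ipaddress
import subprocess

def candidate_ips(cidr: str) -> list[str]:
    """All host addresses in the given block, as strings."""
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

def is_responding(ip: str, timeout_s: int = 1) -> bool:
    """True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        # -c 1: one probe; -W: per-probe timeout in seconds (Linux iputils)
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # A tiny /30 slice just for demonstration. Anything that doesn't
    # answer ping counts as "free" under the old convention - which is
    # exactly the problem, since powered-off machines look free too.
    for ip in candidate_ips("10.99.99.0/30"):
        if not is_responding(ip):
            print("free:", ip)
```

That last comment is the whole reason the spreadsheet exists: ping-based allocation silently collides with any machine that happens to be off during the scan.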

A deep dive into some of ACM UMN's systems

  1. argo: This is the main system of ACM UMN. Members, on request, can get a free VM with up to 4 cores and up to 4GB of RAM, as well as a public IP under UMN's IP range. This is a 24-core Sandy Bridge based machine with roughly 6TB of storage. It runs Rocky Linux, a downstream of RHEL.

  2. vm: Nicknamed "vehicular manslaughter", this is a new-ish system (Broadwell IIRC?) that was supposed to be the replacement for argo, but it now just sits there being used as a NixOS build machine. Work is ongoing in migrating all active student VMs to vm, but it's going to be a while, as I need to get used to NixOS and also document everything I do and why.

  3. medusa: A machine held together with duct tape and magic. Currently runs Debian 11 and has roughly 6TB of space, which will be helpful when we start an ActivityPub thing as a club.

  4. garlic: Our all-purpose CUDA machine! Currently has a GTX 1080 and an RTX 3080, and has docker-nvidia for all sorts of CUDA shenanigans!

The big mess

It seems that the previous sysadmins knew their security craft well, because I can't actually get into the management interfaces of the various switches we use. Mind you - this is mostly a matter of communication (TL;DR: I haven't actually asked how they did it).

What I do know: A Raspberry Pi named WOPR (Wargames 👀) hosts sshuttle, a VPN-over-SSH server that allows me to transparently access most of the management IPs in the 10.99.99.X range.

What I don't know: how to actually access the Cisco switch management interfaces. These switches have not been updated in a long time, and Chrome/Firefox refuse to work with the web interfaces of these managed switches. :\ (This is a subtle reminder for you all to update the firmware on your various networking devices.)

The TO-DOs

  1. Gaining access to the network switch management interfaces.

  2. Re-working the networking setup in the server closet such that we don't rely on Peg DHCP anymore. (This is a project in the works with multiple people on the new ACM UMN Systems Committee. Stay tuned to see how this goes.)