Have you ever noticed it's way simpler to just say what you want—trusting the person will figure it out—rather than writing out detailed instructions, monitoring the process, and then staying on top of it forever after?
A person is the manager of their own system: they monitor it (especially if it's Linux) and orchestrate how it works at every level. But everything has its limits: when you're talking to another person, especially a subordinate, things go much more smoothly if you tell them "make a presentation about moles" rather than "wake up → come to work → sit at your workstation → log in → open PowerPoint → blah blah blah".
Annoyed 'bout that?
Good news everyone! This problem has been solved, and today we'll explore where this is already being applied, how successfully, and why 90% of routine work will look exactly like this in the future.
This article walks through the main milestones. Yes, I know there were languages before Assembly. Yes, I know I'm skipping a lot of theory. That's fine: the history isn't the point of the article, just a quick sketch of how things evolved.
low-lvl & imperative past
Back in the day—and I'm talking way back, like when computers were the size of rooms and debugging meant literally finding bugs in vacuum tubes—everything was imperative. You had to tell the machine exactly what to do, step by step, like explaining to a particularly dense toddler how to tie their shoes.
; Initialize data section
.data
num1 DW 5 ; First number
num2 DW 3 ; Second number
result DW 0 ; Storage for result
buffer DB 10 DUP(0) ; Buffer for output
.code
; load first number into AX register
MOV AX, [num1] ; AX = 5
; load second number into BX register
MOV BX, [num2] ; BX = 3
; Perform addition
ADD AX, BX ; AX = AX + BX = 5 + 3 = 8
; store result in memory
MOV [result], AX ; result = 8
; prepare for output (convert to ASCII)
MOV DL, AL ; Copy low byte of result
ADD DL, 30h ; Convert to ASCII ('0' = 30h)
MOV [buffer], DL ; Store ASCII digit in buffer
MOV BYTE PTR [buffer+1], '$' ; DOS string output expects a '$' terminator
; sys call to display result
MOV AH, 09h ; DOS function: display string
LEA DX, buffer ; Load address of buffer
INT 21h ; Call DOS interrupt
; exit
MOV AH, 4Ch ; DOS function: terminate program
MOV AL, 0 ; Exit code
INT 21h ; Call DOS interrupt
What's happening here? We're manually orchestrating every single step: declare memory locations, load values into CPU registers, perform arithmetic operations, convert numbers to displayable format, make system calls for output, and explicitly terminate the program. Two dozen lines of explicit instructions just to add two numbers and show the result.
This wasn't a design choice; it was survival. Memory was measured in kilobytes, CPU cycles were precious, and abstraction was a luxury nobody could afford. You wrote assembly because that's what the machine understood, period.
Every. Single. Operation. Had to be micromanaged. Error handling was scattered everywhere. Handling one more value meant touching code in half a dozen places. It was maintenance hell.
The cost of this micromanagement wasn’t just time. It was bugs. Security holes. A single MOV in the wrong place and you’ve got a buffer overflow that takes down your whole system—or opens it to attackers.
so why does assembly still live?
Despite its reputation as arcane and painful, assembly never really died. In fact, it's still very much alive—just not where most developers live.
Assembly is still used when:
- Performance is absolutely critical — think signal processing, game engines, embedded real-time systems.
- Memory layout must be tightly controlled — like in kernel code, firmware, and cryptographic primitives.
- You're writing code for platforms without higher-level runtimes — bare metal microcontrollers, bootloaders, BIOS.
- You’re doing reverse engineering, security research, or exploit dev — because bugs live at the bottom.
Even in 2025, every Rust or Python program ultimately runs on something built with assembly (or compiled to it). It’s the invisible foundation. But the key difference? You don’t write it by hand unless you really, really have to.
Assembly today is like soldering: essential, but only done directly when precision matters more than convenience.
Most modern developers don’t want to micromanage how memory is loaded or how a register is saved mid-context-switch. Instead, they want to focus on what their system should do—not how the CPU should do it.
That brings us to the next step in this evolutionary chain: high-level imperative languages, like C, Python, or Go. They still rely on detailed instructions, but now you're telling the machine "sort this list" instead of "store this value in AX, compare it to BX, jump if greater...".
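To get a taste of that jump in abstraction, here's "sort this list" as an actual one-liner in Python:

numbers = [5, 3, 8, 1]
numbers.sort()      # "sort this list": no registers, no compares, no jumps
print(numbers)      # [1, 3, 5, 8]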
and then it evolved
Assembly was brutal, but it taught us the rules of the game. Eventually, we realized: we don’t need to tell the machine every twitch of its fingers. We can give it commands, not just movements.
enter high-level imperative languages
These languages—starting with C, then later C++, Java, Python, Go—let you describe logic in compact, readable form. You're still responsible for how the job gets done, but you're no longer shuffling bytes by hand.
#include <stdio.h>
int main() {
int num1 = 5;
int num2 = 3;
int result = num1 + num2;
printf("Result: %d\n", result);
return 0;
}
This tiny C snippet replaces the two dozen instructions from our assembly example. You don’t care about registers or memory segments anymore. The compiler figures it out. You focus on logic.
But don’t get too comfortable—you're still micromanaging the flow: declaring variables, ordering operations, managing scope, handling memory (in C, at least).
And if you want the computer to update a record in a file or spin up a background task, you still have to spell it out step by step.
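For instance, "update a record in a file" done imperatively still looks something like this (a minimal Python sketch; the file name and fields are made up for illustration):

import json

# Imperative record update: every step spelled out by hand.
# "records.json" and the "status" field are invented for this example.
with open("records.json") as f:
    records = json.load(f)              # 1. read the whole file yourself

for record in records:
    if record["id"] == 42:              # 2. find the right row yourself
        record["status"] = "done"       # 3. mutate it yourself

with open("records.json", "w") as f:
    json.dump(records, f, indent=2)     # 4. write everything back yourself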
High-level imperative code is basically assembly with better nouns
It was a huge upgrade. But it still wasn’t the dream.
but it didn’t jump straight to declarative
There was no magical leap from mov eax, ebx to terraform apply. Between those extremes, a whole ecosystem of intermediate paradigms evolved — each nudging us closer to what we want, and further from how to make it happen.
These middle steps included:
1. structured programming
We gave up goto, embraced control flow blocks: if, while, for, switch. Programs became less like tangled string and more like flowcharts.
for (int i = 0; i < 10; i++) {
printf("%d\n", i);
}
2. object-oriented programming (AKA OOP)
Then came objects — packaging state and behavior together. Instead of just what to do, you started thinking in terms of who should do it.
class Dog {
void bark() {
System.out.println("woof woof *dog noises*");
}
}
But you still had to tell it what to do, method by method. Just now, it was the dog doing it.
3. event-driven & reactive
In UIs, backends, even spreadsheets, reactivity crept in. Code responded to changes, events, signals — not just ran top-down. Think JavaScript, spreadsheets, later: React.
button.onclick = () => alert("clicked");
You weren't writing instructions. You were wiring up behavior.
4. configuration languages
Things like .ini, .toml, .yaml, even Dockerfiles — they weren’t full programming languages. But they declared state. You didn’t write a loop to install packages — you listed them.
FROM node:18
COPY . .
RUN npm install
CMD ["node", "index.js"]
A step away from scripts. A step toward intent.
5. declarative subsystems inside imperative hosts
Even inside imperative code, declarative ideas were taking hold:
- Regex is declarative. You don’t say how to match — you describe the pattern (quick example below).
- CSS is declarative. You don’t say draw a red box — you say this thing is red and boxy.
- SQL? Super declarative. You say what data you want. The engine figures out how to get it.
SELECT name FROM users WHERE active = true;
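And the regex point, in a couple of lines of Python: you hand over a pattern, the engine decides how to scan for it (the text and pattern here are just illustrative):

import re

# Declare the shape of what you want; the regex engine decides how to search.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", "ping bob@example.com and ann@test.org")
print(emails)   # ['bob@example.com', 'ann@test.org']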
So what happened?
Over time, we started to realize: every time we stop micromanaging how, we unlock massive leverage.
And that’s how we got to declarative systems.
the declarative breakthrough
So here we are. After decades of telling computers exactly how to breathe, we finally figured out the secret: just tell them what you want.
Declarative programming flips the script entirely. Instead of writing a recipe, you write a specification. Instead of micromanaging every step, you describe the end state and let the system figure out how to get there.
# Kubernetes deployment - pure declaration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest
          ports:
            - containerPort: 8080
What are we saying here? "I want 3 copies of my app running, accessible on port 8080." We're not saying how to start containers, how to handle failures, how to distribute them across nodes. Kubernetes figures that out.
Compare this to the imperative equivalent:
# The old way - imperative hell
docker pull my-app:latest
docker run -d -p 8080:8080 --name app1 my-app:latest
docker run -d -p 8081:8080 --name app2 my-app:latest
docker run -d -p 8082:8080 --name app3 my-app:latest
# Set up health checks
while true; do
if ! docker ps | grep app1; then
docker run -d -p 8080:8080 --name app1 my-app:latest
fi
if ! docker ps | grep app2; then
docker run -d -p 8081:8080 --name app2 my-app:latest
fi
# ... and so on, forever
sleep 30
done
One is twenty lines of "what I want." The other is infinite lines of "how to babysit it."
the magic happens in the controller
But how does this actually work? The secret sauce is the control loop pattern. Declarative systems constantly ask: "What did they want? What do I have? How do I make them match?"
// Simplified Kubernetes controller logic
for {
desired := getDesiredState() // Read the YAML spec
current := getCurrentState() // Check what's actually running
if desired != current {
reconcile(desired, current) // Make it so
}
time.Sleep(30 * time.Second) // Check again in a bit
}
This is why Kubernetes "just works." You say "I want 3 replicas," one crashes, and within 30 seconds there's a new one. You didn't write any crash handling code. You just declared your intent.
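What does that reconcile step actually do? Roughly this sort of thing. Here's a toy Python sketch (not real Kubernetes code; the pod helpers are stand-ins):

def start_pod(name):
    print(f"starting {name}")          # stand-in for scheduling a new container

def stop_pod(name):
    print(f"stopping {name}")          # stand-in for terminating a surplus container

def reconcile(desired_replicas, running):
    diff = desired_replicas - len(running)
    if diff > 0:                       # too few: start the missing ones
        for i in range(diff):
            start_pod(f"app-{len(running) + i}")
    elif diff < 0:                     # too many: stop the extras
        for name in running[diff:]:
            stop_pod(name)
    # diff == 0: reality already matches the declaration, nothing to do

reconcile(3, ["app-0"])                # one pod survived a crash: starts two more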
terraform: infrastructure as shopping list
Terraform took this idea and ran with it for infrastructure. Instead of writing scripts to create servers, databases, and networks, you write a shopping list:
# What I want, not how to get it
resource "aws_instance" "web" {
ami = "ami-0c55b159cbfafe1d0"
instance_type = "t2.micro"
count = 2
tags = {
Name = "web-server"
}
}
resource "aws_db_instance" "main" {
engine = "postgres"
engine_version = "13.7"
instance_class = "db.t3.micro"
allocated_storage = 20
}
Run terraform apply and it works out the dependency order for you: networks before subnets, security groups before the instances that use them. Change the count from 2 to 5? It creates 3 more. Delete a resource from the file? It destroys it.
The imperative equivalent would be hundreds of lines of AWS CLI commands, error handling, rollback logic, and state tracking. Nightmare fuel.
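The core trick is just a diff between what you declared and what exists. Here's the shape of that "plan" step as a toy Python sketch (nothing to do with Terraform's actual engine):

# Toy "plan" step: diff the declared resources against what actually exists
desired = {"web-0", "web-1", "web-2", "web-3", "web-4"}   # count = 5 in the config
actual = {"web-0", "web-1"}                               # what's deployed right now

to_create = desired - actual        # {"web-2", "web-3", "web-4"}
to_destroy = actual - desired       # empty set: nothing gets torn down

print(f"Plan: {len(to_create)} to add, {len(to_destroy)} to destroy")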
react: UI that thinks for itself
React brought declarative thinking to user interfaces. Instead of manually manipulating DOM elements, you declare what the UI should look like for any given state:
function TodoApp({ todos }) {
return (
<div>
<h1>My Todos ({todos.filter(t => !t.done).length} left)</h1>
<ul>
{todos.map(todo => (
<li key={todo.id} className={todo.done ? 'done' : 'pending'}>
{todo.text}
</li>
))}
</ul>
</div>
);
}
Change the todos array, and React figures out exactly which DOM elements to update, create, or remove. You describe the final UI, not the transitions.
Compare to the old jQuery days:
// Imperative DOM manipulation hell
function updateTodoList(todos) {
const list = document.getElementById('todo-list');
list.innerHTML = ''; // Clear everything
todos.forEach(todo => {
const li = document.createElement('li');
li.textContent = todo.text;
li.className = todo.done ? 'done' : 'pending';
li.setAttribute('data-id', todo.id);
list.appendChild(li);
});
// Update counter
const counter = document.getElementById('counter');
const remaining = todos.filter(t => !t.done).length;
counter.textContent = `${remaining} left`;
}
Every state change required manually orchestrating DOM updates. Miss a step? Bugs. Forget to clean up? Memory leaks.
sql: the original declarative language
SQL was way ahead of its time. Since the 1970s, you've been able to say what data you want without worrying about how to get it:
-- I want customer names and their order totals
-- Figure out how to join, aggregate, and optimize
SELECT
c.name,
SUM(o.amount) as total_spent
FROM customers c
JOIN orders o ON c.id = o.customer_id
WHERE o.created_at > '2024-01-01'
GROUP BY c.id, c.name
HAVING SUM(o.amount) > 1000
ORDER BY total_spent DESC;
The query planner figures out whether to use indexes, which table to scan first, whether to sort or hash-join. You just describe the result you want.
Imagine writing this imperatively:
# The horror of manual query execution
customers = load_all_customers()
orders = load_all_orders()
result = []
for customer in customers:
customer_orders = []
for order in orders:
if (order.customer_id == customer.id and
order.created_at > datetime(2024, 1, 1)):
customer_orders.append(order)
total = sum(order.amount for order in customer_orders)
if total > 1000:
result.append((customer.name, total))
result.sort(key=lambda x: x[1], reverse=True)
This is O(n²), has no optimization, and probably runs out of memory on large datasets. The SQL version? The database figures out the most efficient execution plan.
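For contrast, here's roughly what the planner's hash join buys you: the same aggregation in single passes instead of the nested loop above. A Python sketch, reusing the field names and the assumed load_all_* helpers from the example:

from collections import defaultdict
from datetime import datetime

def top_customers(customers, orders, since=datetime(2024, 1, 1), minimum=1000):
    # one pass over orders: group totals by customer_id (what a hash join enables)
    totals = defaultdict(float)
    for order in orders:
        if order.created_at > since:
            totals[order.customer_id] += order.amount

    # one pass over customers: map id -> name
    names = {c.id: c.name for c in customers}

    result = [(names[cid], total) for cid, total in totals.items() if total > minimum]
    return sorted(result, key=lambda row: row[1], reverse=True)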
the pattern emerges
Notice what's happening across all these examples:
- You declare intent — what the end state should look like
- The system has a controller — something that knows how to achieve that state
- Reconciliation happens automatically — the system continuously works to match reality to your declaration
- You don't handle the edge cases — the controller deals with failures, race conditions, and optimization
This is the declarative pattern. And once you see it, you can't unsee it. It's everywhere.
the final frontier: declarative operating systems
We've seen declarative thinking conquer applications, infrastructure, and user interfaces. But there's one last bastion of imperative chaos that's been holding out: the operating system itself.
Think about how you set up a new machine today. You install the OS, then you run a bunch of commands:
# The traditional way - imperative system management
sudo apt update
sudo apt install git vim nodejs npm docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
sudo usermod -aG docker $USER
git clone https://github.com/myuser/dotfiles
cd dotfiles && ./install.sh
# ... 50 more commands
# ... pray nothing breaks
# ... repeat on every machine
What happens when you want to replicate this setup? You write a script. What happens when the script breaks halfway through? You debug, fix, and run it again. What happens when you want to remove something? You manually hunt down all the pieces it installed.
It's imperative hell, but for your entire operating system.
enter nixos: the os that thinks declaratively
NixOS looked at this mess and said: "What if we could declare our entire operating system?"
Instead of running commands to configure your system, you write a single configuration file that describes what your system should look like:
# /etc/nixos/configuration.nix - your entire system in one file
{ config, pkgs, ... }:
{
# sys packages I want installed
environment.systemPackages = with pkgs; [
git
nodejs
pnpm
docker
librewolf
neovim
emacs
zathura
];
# services I want running
services = {
openssh.enable = true;
docker.enable = true;
nginx = {
enable = true;
virtualHosts."mysite.local" = {
root = "/var/www/mysite";
};
};
};
# some additional variables
environment.sessionVariables = {
XCURSOR_THEME = "Adwaita";
XCURSOR_SIZE = "24";
HYPRCURSOR_THEME = "Adwaita";
PATH = [
"$HOME/.config/emacs/bin"
# "$HOME/.local/bin"
];
};
# users I want to exist
users.users.alice = {
isNormalUser = true;
extraGroups = [ "wheel" "docker" ];
packages = with pkgs; [
tree
curl
];
};
# firewall rules
networking.firewall = {
enable = true;
allowedTCPPorts = [ 22 80 443 ];
};
}
That's it. Your entire system configuration in one declarative file. Want to apply it? sudo nixos-rebuild switch. Want to replicate it on another machine? Copy the file and run the same command.
But here's where it gets really wild...
The shift from imperative to declarative isn't just about cleaner code. It fundamentally changes what's possible.
flakes: nixos grows up and gets reproducible
NixOS was already mind-bending, but it had a problem: your configuration could work perfectly on your machine and fail mysteriously on someone else's. Why? Because it depended on whichever revision of the nixpkgs channel your machine happened to be on.
Enter Nix flakes — the grown-up version that says "let's pin everything to exact versions and make this actually reproducible."
# flake.nix - your system, locked to specific versions
{
description = "my_bulletproof_config";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
home-manager.url = "github:nix-community/home-manager/release-23.11";
};
outputs = { self, nixpkgs, home-manager }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./configuration.nix
home-manager.nixosModules.home-manager
];
};
};
}
Now your entire system — every package, every service, every configuration file — is locked to specific git commits. Same input, same output, every time. No more "works on my machine."
Want to upgrade? Change the input URLs and rebuild. Want to roll back? sudo nixos-rebuild switch --rollback. Want to try someone else's setup? Point nixos-rebuild --flake at their repo and build it.
the time machine effect
But here's where NixOS breaks your brain completely. Because everything is declarative and purely functional, every system configuration is a separate entity. When you rebuild your system, you're not modifying the old one — you're creating a new one entirely.
# Your system configurations are like git commits
$ sudo nixos-rebuild switch --flake /etc/nixos#nixos
building...
activating the configuration...
$ nixos-rebuild list-generations
4 2024-01-15 10:30:45 (current)
3 2024-01-14 15:22:13
2 2024-01-12 09:45:32
1 2024-01-10 16:18:27
Each generation is a complete, bootable system configuration. Upgrade broke something? Boot into the previous generation from GRUB. Want to compare what changed between two configurations? nix store diff-closures.
You can literally time-travel your operating system. Try doing that with Fedora xoxo.
development environments that just work
Flakes took this reproducibility superpower and applied it to development environments. Instead of "install Node 18, Python 3.11, and hope your system doesn't explode," you declare exactly what you need:
# flake.nix for a web project
{
inputs.nixpkgs.url = "nixpkgs/nixos-unstable";
outputs = { nixpkgs, ... }: let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
in {
devShells.${system}.default = pkgs.mkShell {
buildInputs = [
pkgs.nodejs_18
pkgs.python311
pkgs.postgresql_15
pkgs.redis
];
shellHook = ''
echo "Dev environment loaded!"
echo "Node: $(node --version)"
echo "Python: $(python --version)"
'';
};
};
}
Now anyone can clone your repo and run nix develop. Boom — they have exactly the same development environment as you, down to the patch version of every dependency. No Docker containers, no virtual machines, no "did you remember to install Redis?"
the network effect kicks in
Once you have reproducible environments, magic starts happening. Your CI/CD pipeline runs in exactly the same environment as your laptop. Your production servers are built from the same flake as your development box. Your coworker's machine is bit-for-bit identical to yours.
# One flake, multiple outputs
outputs = { nixpkgs, ... }: let
  pkgs = nixpkgs.legacyPackages.x86_64-linux;
in {
  # Development environment
  devShells.x86_64-linux.default = /* ... */;
  # Production Docker image
  packages.x86_64-linux.docker = pkgs.dockerTools.buildImage {
    /* ... exact same dependencies ... */
  };
  # NixOS module for deployment
  nixosModules.myapp = /* ... */;
};
The old world had "development works, staging is broken, production is a mystery." The new world has "if it builds, it works everywhere, identically."
downsides
You can check out the downsides in this video. Yeah, basically all the so-called downsides are just a skill issue. Some people complain that you have to put a lot of work into the system up front, that "everything needs to be ready right away," and stuff like that. But damn, how much more ready can it get? Complete reproducibility, for crying out loud!
Working on Arch, things broke so often I can't even put it into words. I ran it for several years, customized everything possible, even built a custom kernel to run it on a uConsole. Damn, it's such a complex product...
| Distribution | Stability | Ease of Use | Package Management | Target User |
|---|---|---|---|---|
| Ubuntu | High | Very Easy | APT (Stable) | Beginners, general use |
| Debian | Very High | Moderate | APT (Conservative) | Servers, stability-focused |
| Arch Linux | Low | Hard | Pacman (Bleeding edge) | Advanced users, DIY |
| Fedora | High | Easy | DNF (Current) | Developers, latest features |
| openSUSE | High | Easy | Zypper | Enterprise, KDE users |
| Mint | High | Very Easy | APT (Stable) | Windows migrants, simplicity |
| CentOS/RHEL | Very High | Moderate | YUM/DNF | Enterprise servers |
| Manjaro | Moderate | Easy | Pacman (Delayed) | Arch benefits, easier setup |
What's specifically wrong with Arch:
- Rolling release instability - Updates can break your system at any time since you're always on the bleeding edge
- Manual maintenance required - You need to constantly monitor news, manually resolve conflicts, and babysit updates
- Fragile customizations - Heavy customization makes the system even more prone to breaking with updates
- Time-consuming troubleshooting - When things break (and they will), you spend hours debugging instead of being productive
- Documentation dependency - Everything requires reading wikis and forums; nothing "just works" out of the box
- Package conflicts - AUR packages and custom builds often conflict with official packages
- No safety net - Unlike stable distros, there's no fallback when experimental packages cause issues
The "skill issue" argument misses the point - even skilled users shouldn't have to constantly fix their OS when they could be doing actual work.
| Feature | Arch Linux | NixOS | Winner |
|---|---|---|---|
| System Stability | Low (rolling, breaks often) | High (atomic updates) | NixOS |
| Reproducibility | None (manual setup) | Perfect (declarative config) | NixOS |
| Rollbacks | Manual snapshots only | Built-in, instant | NixOS |
| Package Manager | Pacman (traditional) | Nix (functional) | NixOS |
| Learning Curve | Steep (Linux knowledge) | Very Steep (functional concepts) | Arch |
| Documentation | Excellent Wiki | Good but scattered | Arch |
| Package Availability | Large (AUR included) | Large (nixpkgs) | Tie |
| Customization | Manual, fragile | Declarative, robust | NixOS |
| Development Environments | Manual setup | Perfect isolation | NixOS |
| System Recovery | Difficult, time-consuming | Easy rollback | NixOS |
| Community Size | Large, established | Growing, passionate | Arch |
| Enterprise Use | Rare | Increasing adoption | NixOS |
Key differences
Arch's problems that NixOS solves:
- No reproducibility - Arch setups are unique snowflakes that can't be replicated
- Update anxiety - Every pacman -Syu is a gamble that might break your system
- Dependency hell - Conflicting packages and manual resolution
- Time waste - Constant maintenance instead of productive work
NixOS advantages:
- Atomic updates - Either everything works or nothing changes
- Instant rollbacks - Broken update? Reboot into the previous generation
- Declarative configuration - Your entire system in one config file
- Perfect dev environments - Isolated, reproducible project setups
Where Arch still wins:
- Simpler mental model - Traditional package management is easier to understand
- Better documentation - The Arch Wiki is unmatched
- Faster initial setup - NixOS requires learning functional programming concepts
The bottom line: NixOS largely eliminates the "skill issue" because every change is atomic and reversible; a broken update can't strand you when the previous generation is always one reboot away. But... some people do manage to make it work. Yes, NixOS isn't the easiest to learn if you're switching from Arch. But after going through the period of suffering, you understand what you were fighting for.
the declarative future
The computing world is inevitably moving toward declarative systems. We've already seen this shift in infrastructure with Docker, Kubernetes, and Terraform - tools that let you describe what you want rather than how to build it. NixOS simply applies this same principle to your entire operating system.
Traditional imperative systems like Arch represent the old way of thinking: manually executing commands, hoping nothing breaks, and crossing your fingers with each update. It's the equivalent of managing servers by SSH-ing in and running random commands.
Declarative systems are the future because they solve fundamental problems that imperative systems can't: reproducibility, reliability, and predictability. When your entire system configuration is code, you get version control, testing, and collaboration for free.
The initial learning curve is steep, but it's an investment in never having to debug a broken system again. While others are reinstalling their OS or spending weekends fixing update conflicts, you're simply rolling back to a working state or reproducing your exact setup on any machine.
The question isn't whether declarative systems will dominate - it's how long it will take for everyone else to catch up. Those who make the switch now are just getting ahead of the inevitable.
Yeah, it’s a time investment, no doubt. Buying an iPhone, installing Windows, and handing over all your personal data to a corporation — that’s faster and easier, I’ll give you that. But once you decide to dive into Arch or NixOS, the “time issue” comes with the territory. Still, it’s not a second job like some people make it out to be — it’s a hobby. I’m not stressed when I’m ricing my config or poking around under the hood — that’s how I unwind. But if that feels like work to you... just install Windows tho.
thanks for the time honeybun :*