When I began Gurp, my primary criterion was “must do everything my current Ansible setup does”. With a successful, hands-free rebuild of all my zones, this goal is achieved, and I’m cutting release 1.0.0.
Let’s have a look at some charts. I love charts. Gurp has a `--metrics-to` option, which makes it push run-summary metrics to my VictoriaMetrics instance, in InfluxDB format.
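For reference, InfluxDB line protocol is a simple text format: a measurement name, optional tags, then fields and a timestamp. A hypothetical run summary in that shape (the measurement and field names here are my invention, not Gurp’s actual schema) would look something like:

```text
gurp_run,zone=www,host=shark resources=42i,corrected=3i,duration_ms=870i 1718000000000000000
```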
Here’s the apply time. In each of these zones Gurp is managing somewhere between thirty and eighty resources.
The outliers have lots of packages and some Ruby Gems. If it doesn’t need to shell out to `pkg` or `gem`, Gurp can fire up, compile your machine definition, and assert (and possibly correct) the state of dozens of resources in under a second. When I maintained these same configs with Puppet, it was taking at least a couple of minutes per zone. Ansible was worse. Gurp can do thirty zones in thirty seconds.
How about resource usage? Ansible runs maxed out the CPU on my box, and Chef used so much memory most of my zones wouldn’t let it run.
That’s percentage CPU (I need to fix that axis), and none of the spikes correlate with Gurp runs. How about memory?
Flat. You can’t see Gurp at all.
What I Got Right

- Janet Config. The more config I describe with my Janet DSL, the more I like it. It’s clear and simple, and it’s so easy to write modules that I started doing it without even realising. I have fewer problems with malformed Janet than I used to have with YAML. I use Helix with janet-lsp, so it’s easy to spot mismatched parens, and if you have any, Janet (and therefore Gurp) gives you pretty helpful errors. I also have Spork installed on my dev box, which gives Helix a `:fmt` command that makes Janet look lovely. Lisp may not be everyone’s cup of tea, but I like it.
- No variable/attribute hierarchy. Chef et al have complicated ways of automatically inheriting variables. I chose not to do this at all, and instead make the user fetch things with Janet. In my environment, this works brilliantly. I have a `globals.janet` file containing a bunch of `(def)`s, all using an appropriate data structure. My modules `(import)` this and use `(get)`. It couldn’t be clearer.
- Keeping it Simple. If I had tried to cover everything I could think of, rather than everything I need, I’d probably still be writing the `pkg` doer.
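As a sketch of that globals pattern (the file layout and key names here are invented for illustration, not copied from my repo), the globals file is nothing but `def`s over plain Janet data, and a module pulls out what it needs:

```janet
# globals.janet -- shared values as plain Janet data
(def dns-servers ["10.0.0.1" "10.0.0.2"])
(def packages {:web ["nginx" "certbot"]
               :db  ["postgresql"]})

# a module then imports it and fetches what it wants:
#   (import ./globals)
#   (def web-pkgs (get globals/packages :web))
```

There is no magic resolution order to reason about: if a value is in scope, it is because you imported it.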
What I Got Wrong

- Janet -> Rust. I wasted far too much time parsing Janet structures with Rust. Using JSON as an interchange format between front and back made life so much easier.
- References. I put quite a bit of effort into allowing Gurp resources to reference properties of other resources, and I never use it. It’s cleaner and easier to stick a `(def)` at the top of the file and refer to it in both places.
- Focusing on being “correct”. “Global mutable state” might be as dirty a string of words as programming has, but using it in the Janet front-end allowed me to throw out a stack of over-complicated and flaky semi-functional experiments and helper macros, and simply get the job done. Maybe it’s a shame Janet doesn’t have something like Clojure’s `atom` to sweeten the pill, but in a single-threaded, one-and-done process, I don’t even need that.
- Focusing on speed. I made horrible, oversimplified doers by trying to minimise my calls to `pkg`, `pkgin`, and `gem`. Much of that horribleness is still there, but with more on top to get around the limitations of the original decisions. The truth is, there’s no way to get around how long these external tools take, but it still galls me that my lovely fast program stops dead while `pkg` breaks out the Python.
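The JSON interchange idea can be sketched in a few lines (a minimal illustration, not Gurp’s actual schema): the Janet side reduces everything to plain data and encodes it, so the Rust side only ever deserialises JSON and never has to understand a Janet structure. Spork ships a JSON module for this:

```janet
(import spork/json)

# a compiled resource as plain Janet data; string keys map
# directly onto JSON object keys
(def resource {"type" "pkg"
               "name" "nginx"
               "ensure" "installed"})

# the back-end receives only this JSON text
(print (json/encode resource))
```

On the Rust side, serde can then deserialise that straight into a typed struct, which is far less fiddly than walking a foreign language’s data structures.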
What I’m Still Not Sure About

- Not having a generic doer. I very nearly put this in the first section because, as of today, I like the idea that Gurp will not let you run an arbitrary command. It’s safer, it lets you do more reasonable no-op planning, and it means that important operations (i.e. anything you want to do to a live system) require some thought and testing. It makes everything “official”. I realise this would limit adoption, but as I have no ambition or expectation that anyone beyond myself will run Gurp, that’s fine.
- Explicit Dependencies. So far I have not needed any explicit `before` or `after` type markers. Just having Gurp do resource types in a particular order has been enough. But, again, there’s something at the back of my head telling me I’ll need it one day.
- Secrets. Anything involving secrets gets complicated fast, and avoiding complexity has been my top priority on Gurp. I have a `gurp-config` Git repo which contains all my host, role, and module definitions, and the aforementioned `globals.janet`. The modules that need to also include `secrets.janet`, which is outside the repo and not under version control at all. As it’s just a plain text file, I obviously wouldn’t recommend that approach for any real situation, but it’s fine in my home lab, and that’s the itch I need to scratch. Gurp has the wherewithal to use a CLI tool like SOPS to decrypt secrets, but running `gurp compile --format=json` would expose them as plaintext. There’s never a nice way to solve any secrets problem and, given I don’t have anything which really needs to be secret, I’m not in a rush to tackle it.
What Next
The whole codebase could probably use a refactor. So many changes of plan have left vestigial tails everywhere, and some of the Janet is proper spaghetti. I’ve got good test coverage for the front-end, so refactoring that ought not to be too risky, and I always feel very safe refactoring Rust.
There’s a branch with `bhyve` zone support, but I’m a bit ambivalent about it. It was one thing adding LX support, but `bhyve`, with cloud-init, is a lot messier, and Gurp offers no way whatsoever to configure a “real” Linux or BSD instance. Gurping an Ubuntu `bhyve` zone means you need Puppet or something to configure it, and that feels like it makes this whole exercise a bit pointless.
As I mentioned when I wrote about bootstrapping zones, I have a rough idea about client-server Gurp. A central instance would have access to all your files and configs, and clients would request their data from it. The first step would be to make `file`’s `:from` accept URIs, which would be useful anyway. The question is, would the client or the server compile the Janet? If the server compiles, clients only have to fetch the compiled JSON data, but you couldn’t have any host-specific logic in your front-end code, because you’d be inspecting the server. (I don’t do this at all, but I can easily see cases where it would make sense.) If the client compiles, you’ve likely got to pull a whole load of files down and assemble them somehow. Offering both probably puts you in some sort of awful, confusing compromise, like pull-mode Ansible. And once you start doing HTTP, you’ve got to do HTTPS, and then we’re in the world of certificates, and we’re back to secrets management, and I really don’t think I can be bothered with it.
I’ve covered most of illumos’s OS primitives, but in a very basic way. There are some fairly fundamental things Gurp simply can’t do, like selecting package mediators or configuring network interfaces. Some things it covers, like managing packages, it does in the most limited way possible. So there’s plenty of room for improvement and plenty of scope for new features. But as of this moment, I’m not sure how much of this I want to take on. I wanted to see if I could replicate my Ansible usage with Rust and Janet, and I could. That might be enough for a little while.