I’ve been blogging since 2005, but not all old posts have been imported here.
I’ve just released Consfigurator 1.3.0, with some readtable enhancements. So now instead of writing
(firewalld:has-policy "athenet-allow-fwd"
  #>EOF><?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
  <ingress-zone name="trusted"/>
  <egress-zone name="internal"/>
</policy>
EOF)
you can write
(firewalld:has-policy "athenet-allow-fwd" #>>~EOF>>
  <?xml version="1.0" encoding="utf-8"?>
  <policy priority="-40" target="ACCEPT">
    <ingress-zone name="trusted"/>
    <egress-zone name="internal"/>
  </policy>
  EOF)
which is a lot more readable when it appears in a list of other properties. In addition, instead of writing
(multiple-value-bind (match groups)
    (re:scan-to-strings "^uid=(\\d+)" (connection-connattr connection 'id))
  (and match (parse-integer (elt groups 0))))
you can write just (#1~/^uid=(\d+)/p (connection-connattr connection 'id)).
On top of the Perl-inspired syntax, I’ve invented the new trailing option p
to attempt to parse matches as numbers.
Another respect in which Consfigurator’s readtable has become much more useful
in this release is that I’ve finally taught Emacs about these reader macros,
such that unmatched literal parentheses within regexps or heredocs don’t cause
Emacs (and especially Paredit) to think that the code couldn’t be valid Lisp.
Although I was mostly able to reuse propertising algorithms from the built-in perl-mode, I did have to learn a lot more about how parse-partial-sexp really works, which was pretty cool.
The emacsclient(1) program is used to connect to Emacs running as a daemon. emacsclient(1) can go in your EDITOR/VISUAL environment variables so that you can edit things like Git commit messages and sudoers files in your existing Emacs session, rather than starting up a new instance of Emacs. It’s not only that this is usually faster, but also that it means you have all your session state available – for example, you can yank text from other files you were editing into the file you’re now editing.
Another, somewhat different use of emacsclient(1) is to open new Emacs frames
for arbitrary work, not just editing a single, given file. This can be in a
terminal or under a graphical display manager. I use emacsclient(1) for this
purpose about as often as I invoke it via EDITOR/VISUAL. I use emacsclient -nc to open new graphical frames and emacsclient -t to open new text-mode frames, the latter when SSHing into my work machine from home, or similar. In
each case, all my buffers, command history etc. are available. It’s a real
productivity boost.
Some people use systemd socket activation to start up the Emacs daemon. That
way, they need only ever invoke emacsclient, without any special options, and the daemon will be started if not already running. In my case, instead, emacsclient on PATH is a wrapper script that checks
whether a daemon is running and starts one if necessary. The main reason I
have this script is that I regularly use both the installed version of Emacs
and in-tree builds of Emacs out of emacs.git, and the script knows how to
choose what to launch and what to try to connect to. In particular, it
ensures that the in-tree emacsclient(1) is not used to try to connect to the
installed Emacs, which might fail due to protocol changes. And it won’t use
the in-tree Emacs executable if I’m currently recompiling Emacs.
I’ve recently enhanced my wrapper script to make it possible to have the primary Emacs daemon always running under gdb. That way, if there’s a seemingly-random crash, I might be able to learn something about what happened. The tricky thing is that I want gdb to be running inside an instance of Emacs too, because Emacs has a nice interface to gdb. Further, gdb’s Emacs instance – hereafter “gdbmacs” – needs to be the installed, optimised build of Emacs, not the in-tree build, such that it’s less likely to suffer the same crash. And the whole thing must be transparent: I shouldn’t have to do anything special to launch the primary session under gdb. That is, if right after booting up my machine I execute
% emacsclient foo.txt
then gdbmacs should start, it should then start the primary session under gdb, and finally the real emacsclient(1) should connect to the primary session and request editing foo.txt. I’ve got that all working now, and there are some nice additional features. If the primary session hits a breakpoint, for example, then emacsclient requests will be redirected to gdbmacs, so that I can still edit files etc. without losing the information in the gdb session. I’ve given gdbmacs a different background colour, so that if I request a new graphical frame and it pops up with that colour, I know that the main session is wedged and I might like to investigate.
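The colour change itself is simple enough to sketch. Something like the following in the shared init file would do it, assuming gdbmacs ends up running as a named daemon so that daemonp returns its name; the details here are illustrative rather than my exact configuration:

;; Sketch only: give gdbmacs's frames a distinctive background.
;; `daemonp' returns the daemon's name when one was given.
(when (equal (daemonp) "gdbmacs")
  (add-to-list 'default-frame-alist '(background-color . "#3a2626")))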
First attempt: remote attaching
My first attempt, which was running for several weeks, had a different
architecture. Instead of having gdbmacs start up the primary session, the
primary session would start up gdbmacs, send over its own PID, and ask gdbmacs
to use gdb’s functionality for attaching to existing processes. In
after-init-hook I had code to check whether we are an Emacs that had just started up out of my clone of emacs.git, and if so, we invoke
% emacsclient --socket-name=gdbmacs --spw/installed \
      --eval '(spw/gdbmacs-attach <the pid>)'
The --spw/installed option asks the wrapper script to start up gdbmacs using the Emacs binary on PATH, not the one in emacs.git/. (We can’t use the server-eval-at function because we need the wrapper script to start up gdbmacs if it’s not already running.)
Over in gdbmacs, the spw/gdbmacs-attach function then did something like this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
  (gdb (format "gdb -i=mi --pid=%d src/emacs" pid))
  (gdb-wait-for-pending (lambda () (gud-basic-call "continue"))))
Having gdbmacs attach to the existing process is more robust than having
gdbmacs start up Emacs under gdb. If anything goes wrong with attaching, or
with gdbmacs more generally, you’ve still got the primary session running
normally; it just won’t be under a debugger. More significantly, the wrapper
script doesn’t need to know anything about the relationship between the two
daemons. It just needs to be able to start up both in-tree and installed
daemons, using the --spw/installed option to determine which. The
complexity is all in Lisp, not shell script (the wrapper is a shell script
because it needs to start up fast).
The disadvantage of this scheme is that the primary session’s stdout and
stderr are not directly accessible to gdbmacs. There is a function
redirect-debugging-output to deal with this situation, and I experimented
with having the primary session call this and send the new output filename to
gdbmacs, but it’s much less smooth than having gdbmacs start up the primary
session itself.
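To illustrate the sort of thing I tried, here is a sketch; the helper name, the temporary file and the find-file-noselect call on the gdbmacs side are assumptions for the example, not the code I actually ran:

;; Sketch only: redirect this session's stderr to a file and then tell
;; gdbmacs where to find it.
(defun spw/redirect-stderr-towards-gdbmacs ()
  (let ((file (make-temp-file "emacs-stderr-")))
    (redirect-debugging-output file)
    (call-process "emacsclient" nil nil nil
                  "--socket-name=gdbmacs" "--spw/installed"
                  "--eval" (format "(find-file-noselect %S)" file))))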
I think most people would probably prefer this scheme. It’s definitely cleaner to have the two daemons start up independently, and then have one attach to the other. But I decided that I was willing to complexify my wrapper script in order to have the primary session’s stdout and stderr attached to gdbmacs in the normal way.
Second attempt: daemons starting daemons
In this version, the relevant logic is shifted out of Lisp into the wrapper
script. When we execute emacsclient foo.txt, the script first determines
whether the primary session is already running, using something like this:
[ -e /run/user/1000/emacs/server \
  -a -n "$(ss -Hplx src /run/user/1000/emacs/server)" ]
The ss(8) tool is used to determine if anything is listening on the socket.
The script also uses flock(1) to have other instances of the wrapper script
wait, in case they are going to cause the daemon to exit, or something. If
the daemon is running, then we can just exec emacs.git/lib-src/emacsclient
to handle the request. If not, we first have to start up gdbmacs:
installed_emacsclient=$(PATH=$(echo "$PATH" \
        | sed -e "s#/directory/containing/wrapper/script##") \
    command -v emacsclient)
"$installed_emacsclient" -a '' -sgdbmacs --eval '(spw/gdbmacs-attach)'
spw/gdbmacs-attach now does something like this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
  (gdb "gdb -i=mi --args src/emacs --fg-daemon")
  (gdb-wait-for-pending
   (lambda ()
     (gud-basic-call "set cwd ~")
     (gdb-wait-for-pending
      (lambda ()
        (gud-basic-call "run"))))))
"$installed_emacsclient"
exits as soon as spw/gdbmacs-attach
returns,
which is before the primary session has started listening on the socket, so
the wrapper script uses inotifywait(1) to wait until /run/user/1000/server
appears. Then it is finally able to exec ~/src/emacs/lib-src/emacsclient
to
handle the request.
A particular kind of complexity
The wrapper script must be highly reliable. I use my primary Emacs session
for everything, on the same laptop that I do my academic work. The main way I
get at it is via a window manager shortcut that executes emacsclient -nc to request a new frame, such that if there is a problem, I won’t see any error output until I open an xterm and tail ~/.swayerr or ~/.xsession-errors. And
as starting gdbmacs and only then starting up less optimised, debug in-tree
builds of Emacs is not fast, I would have to wait at least ten seconds without
any Emacs frame popping up before I could suppose that something was wrong.
This is where the first scheme, where the complexity is all in Lisp, really seems attractive. My emacsclient(1) wrapper script has several other facilities and convenience features, some of which are general and some of which are only for my personal usage patterns, and the code for all those is now interleaved with the special cases for gdbmacs and the primary session that I’ve described in this post. There’s a lot that could go wrong, and it’s all in shell, and its output isn’t readily visible to the user. I’ve done a lot of testing, and I’m pretty confident in the script in its current form, but if I need to change or add features, I’ll have to do a lot of testing again before I can deploy to my usual laptop.
Single-threaded, readily interactively-debuggable Emacs Lisp really shines for
this sort of “do exactly what I mean, as often as possible” code, and you find
a lot of it in Emacs itself, third party packages, and peoples’ init.el
files. You can add all sorts of special cases to your interactive commands to
make Emacs do just what is most useful, and have confidence that you can
manage the resulting complexity. In this case, though, I’ve got piles of just
this sort of complexity out in an opaque shell script. The ultimate goal,
though, is debugging Emacs, such that one can run yet more DJWIM (“do just what I mean”) Emacs Lisp,
which perhaps justifies it.
I’ve come up with a new reprepro wrapper for adding rebuilds of existing Debian packages to a local repository: reprepro-rebuilder. It should make it quicker to update local rebuilds of existing packages, patched or unpatched, working wholly out of git. Here’s how it works:
1. Start with a git branch corresponding to the existing Debian package you want to rebuild. Probably you want dgit clone foo.

2. Say reprepro-rebuilder unstable, and the script will switch you to a branch PREFIX/unstable, where PREFIX is a short name for your reprepro repository, and update debian/changelog for a local rebuild. If the branch already exists, it will be updated with a merge.

3. You can now do any local patching you might require. Then, say reprepro-rebuilder --release. (The command from step (2) will offer to release immediately for the case that no additional patching is required.)

4. At this point, your reprepro will contain a source package corresponding to your local rebuild. You can say reprepro-rebuilder --wanna-build to build any missing binaries for all suites, for localhost’s Debian architecture. (Again, the command from step (3) will offer to do this immediately after adding the source package.)
Additionally, if you’re rebuilding for unstable, reprepro-rebuilder will offer to rebuild for backports, too, and there are a few more convenience features, such as offering to build binaries for testing between steps (2) and (3). You can leave the script waiting to release while you do the testing.
I think that the main value of this script is keeping track of the distinct
steps of a relatively fiddly, potentially slow-running workflow for you,
including offering to perform your likely next step immediately. This means
that you can be doing something else while the rebuilds are trundling along:
you just start reprepro-rebuilder unstable
in a shell, and unless additional
patching is required between steps (2) and (3), you just have to answer script
prompts as they show up and everything gets done.
If you need to merge from upstream fairly regularly, and then produce binary
packages for both unstable and backports, that’s quite a lot of manual steps
that reprepro-rebuilder takes care of for you. But the script’s command line
interface is flexible enough for the cases where more intervention is
required, too. For example, for my Emacs snapshot builds, I have another
script to replace steps (1) and (2), which merges from a specific branch that
I know has been manually tested, and generates a special version number. Then
I say reprepro-rebuilder --release
and the script takes care of preparing
packages for unstable and bullseye-backports, and I can have my snapshots on
all of my machines without a lot of work.
The ThinkPad x220 that I had been using as an ssh terminal at home finally developed one too many hardware problems a few weeks ago, and so I ordered a Raspberry Pi 4b to replace it. Debian builds minimal SD card images for these machines already, but I wanted to use the usual ext4-on-LVM-on-LUKS setup for GNU/Linux workstations. So I used Consfigurator to build a custom image.
There are two key advantages to using Consfigurator to do something like this:
As shown below, it doesn’t take a lot of code to define the host, it’s easily customisable without writing shell scripts, and it’s all declarative. (It’s quite a bit less code than Debian’s image-building scripts, though I haven’t carefully compared, and they are doing some additional setup beyond what’s shown below.)
You can do nested block devices, as required for ext4-on-LVM-on-LUKS, without writing an intensely complex shell script to expand the root filesystem to fill the whole SD card on first boot. This is because Consfigurator can just as easily partition and install an actual SD card as it can write out a disk image, using the same host definition.
Consfigurator already had all the capabilities to do this, but as part of this project I did have to come up with the high-level wrapping API, which didn’t exist yet. My first SD card write wouldn’t boot because I had to learn more about kernel command lines; the second wouldn’t boot because of a minor bug in Consfigurator regarding /etc/crypttab; and the third build is the one I’m using, except that the first boot runs into a bug in cryptsetup-initramfs. So as far as Consfigurator is concerned I would like to claim that it worked on my second attempt, and had I not been using LUKS it would have worked on the first :)
The code
(defhost erebus.silentflame.com ()
  "Low powered home workstation in Tucson."
  (os:debian-stable "bullseye" :arm64)
  (timezone:configured "America/Phoenix")
  (user:has-account "spwhitton")
  (user:has-enabled-password "spwhitton")
  (disk:has-volumes
   (physical-disk
    (partitioned-volume
     ((partition
       :partition-typecode #x0700 :partition-bootable t :volume-size 512
       (fat32-filesystem :mount-point #P"/boot/firmware/"))
      (partition
       :volume-size :remaining
       (luks-container
        :volume-label "erebus_crypt"
        :cryptsetup-options '("--cipher" "xchacha20,aes-adiantum-plain64")
        (lvm-physical-volume :volume-group "vg_erebus"))))))
   (lvm-logical-volume
    :volume-group "vg_erebus"
    :volume-label "lv_erebus_root" :volume-size :remaining
    (ext4-filesystem :volume-label "erebus_root" :mount-point #P"/"
                     :mount-options '("noatime" "commit=120"))))
  (apt:installed "linux-image-arm64" "initramfs-tools"
                 "raspi-firmware" "firmware-brcm80211"
                 "cryptsetup" "cryptsetup-initramfs" "lvm2")
  (etc-default:contains "raspi-firmware"
                        "ROOTPART" "/dev/mapper/vg_erebus-lv_erebus_root"
                        "CONSOLES" "ttyS1,115200 tty0"))
and then you just insert the SD card and, at the REPL on your laptop,
CONSFIG> (hostdeploy-these laptop.example.com
           (disk:first-disk-installed-for nil erebus.silentflame.com #P"/dev/mmcblk0"))
There is more general information in the OS installation tutorial in the Consfigurator user’s manual.
Other niceties
Configuration management that’s just as easily applicable to OS installation as it is to the more usual configuration of hosts over SSH drastically improves the ratio of cost-to-benefit for including small customisations one is used to.
For example, my standard Debian system configuration properties (omitted from the code above) meant that when I was dropped into an initramfs shell during my attempts to make an image that could boot itself, I found myself availed of my custom Space Cadet-inspired keyboard layout, without really having thought at any point “let’s do something to ensure I can have my usual layout while I’m figuring this out.” It was just included along with everything else.
As compared with the ThinkPad x220, it’s nice how the Raspberry Pi 4b is silent and doesn’t have any LEDs lit by default once it’s booted. A quirk of my room is that one plug socket is controlled by a switch right next to the switch for the ceiling light, so I’ve plugged my monitor into that outlet. Then when I’ve finished using the new machine I can flick that switch and the desk becomes completely silent and dark, without actually having to suspend the machine to RAM, thereby stopping cron jobs, preventing remote access from the office to fetch uncommitted files, etc.
I’d like to share some pointers for using Gnus together with notmuch rather than notmuch together with notmuch’s own Emacs interface, notmuch.el. I set about this because I recently realised that I had been poorly reimplementing lots of Gnus features in my init.el, primarily around killing threads and catching up groups, supported by a number of complex shell scripts. I’ve now switched over, and I’ve been able to somewhat simplify what’s in my init.el, and drastically simplify my notmuch configuration outside of Emacs. I’m always more comfortable with less Unix and more Lisp when it’s feasible.
The basic settings are gnus-search-default-engines and gnus-search-notmuch-remove-prefix, explained in (info "(gnus) Searching"), and an entry for your maildir in gnus-secondary-select-methods, explained in (info "(gnus) Maildir"). Then you will have G G and G g in the group buffer to make and save notmuch searches.

I think it’s important to have something equivalent to notmuch-saved-searches configured programmatically in your init.el, rather than interactively adding each saved search to the group buffer. This is because, as notmuch users know, these saved searches are more like permanent, virtual inboxes than searches. You can learn how to do this by looking at how gnus-group-make-search-group calls gnus-group-make-group. I have some code running in gnus-started-hook which does something like this for each saved search:

(if (gnus-group-entry group)
    (gnus-group-set-parameter group 'nnselect-specs ...)
  (gnus-group-make-group ...))
The idea is that if you update your saved search in your init.el, rerunning this code will update the entries in the group buffer. An alternative would be to just kill every nnselect search in the group buffer each time, and then recreate them. In addition to reading gnus-group-make-search-group, you can look in ~/.newsrc.eld to see the sort of nnselect-specs group parameters you’ll need your code to produce. I have very complicated generation of my saved searches from some variables, but that’s something I had when I was using notmuch.el, too, so perhaps I’ll describe some of the ideas in there in another post.
You’ll likely want to globally bind a function which starts up Gnus if it’s not already running and then executes an arbitrary notmuch search (there’s a sketch of such a command below). For that you’ll want (unless (gnus-alive-p) (gnus)), and not (unless (gnus-alive-p) (gnus-no-server)). This is because you need Gnus to initialise nnmaildir before doing any notmuch searches. Gnus passes --output=files to notmuch and constructs a summary buffer of results by selecting mail that it already knows about with those filenames.

When you’re programmatically generating the list of groups, you might also want to programmatically generate a topics topology. This is how you do that:
(with-current-buffer gnus-group-buffer
  (gnus-topic-mode 0)
  (setq gnus-topic-alist nil
        gnus-topic-topology nil)
  ;; Now push to those two variables.  You can also use
  ;; `gnus-topic-move-matching' to move nnmaildir groups into, e.g.,
  ;; "misc".
  (gnus-topic-mode 1)
  (gnus-group-list-groups))
If you do this in gnus-started-hook, the values for those variables Gnus saves into ~/.newsrc.eld are completely irrelevant and do not need backing up/syncing.

When you want to use M-g to scan for new mail in a saved search, you’ll need to have Gnus also rescan your nnmaildir inbox, else it won’t know about the filenames returned by notmuch and the messages won’t appear. This is similar to the gnus vs. gnus-no-server issue above. I’m using :before advice to gnus-request-group-scan to scan my nnmaildir inbox each time any nnselect group is to be scanned.

If you are used to linking to mail from Org-mode buffers, the existing support for creating links works fine, and the standard gnus: links already contain the Message-ID. But you’ll probably want opening the link to perform a notmuch search for id:foo rather than trying to use Gnus’s own jump-to-Message-ID code. You can do this using :around or :override advice for org-gnus-follow-link: look at gnus-group-read-ephemeral-search-group to do the search, and then call gnus-summary-goto-article.
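Here is the sketch of a globally bound search command promised above. The command name and the key I bind are purely illustrative, and you might prefer a different entry point for running the search itself:

(require 'gnus)

;; Sketch only: `my/gnus-notmuch-search' and the C-c s binding are
;; illustrative, not part of Gnus.
(defun my/gnus-notmuch-search ()
  "Start Gnus if it isn't already running, then prompt for a notmuch search."
  (interactive)
  ;; `gnus', not `gnus-no-server', so that nnmaildir is initialised
  ;; before any notmuch search runs.
  (unless (gnus-alive-p)
    (gnus))
  (call-interactively #'gnus-group-read-ephemeral-search-group))

(global-set-key (kbd "C-c s") #'my/gnus-notmuch-search)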
I don’t think that the above is especially hacky, and don’t expect changes to Gnus to break any of it. Implementing the above for your own notmuch setup should get you something close enough to notmuch.el that you can take advantage of Gnus’ unique features without giving up too much of notmuch’s special features. However, it’s quite a bit of work, and you need to be good at Emacs Lisp. I’d suggest reading lots of the Gnus manual and determining for sure that you’ll benefit from what it can do before considering switching away from notmuch.el.
Reading through the Gnus manual, it’s been amazing to observe the extent to which I’d been trying to recreate Gnus in my init.el, quite oblivious that everything was already implemented for me so close to hand. Moreover, I used Gnus ten years ago when I was new to Emacs, so I should have known! I think that back then I didn’t really understand the idea that Gnus for mail is about reading mail like news, and so I didn’t use any of the features, back then, that more recently I’ve been unknowingly reimplementing.
I recently released Consfigurator 1.0.0 and I’m now returning to my Common Lisp reading. Building Consfigurator involved the ad hoc development of a cross between a Haskell-style functional DSL and a Lisp-style macro DSL. I am hoping that it will be easier to retain lessons about building these DSLs more systematically, and making better use of macros, by finishing my studying of macrology books and papers only after having completed the ad hoc DSL. Here’s my current list:
Finishing off On Lisp and Let Over Lambda.
Richard C. Waters. 1993. “Macroexpand-All: an example of a simple lisp code walker.” ACM SIGPLAN Lisp Pointers 6 (1).
Michael Raskin. 2017. “Writing a best-effort portable code walker in Common Lisp.” In Proceedings of 10th European Lisp Symposium (ELS2017).
Culpepper et al. 2019. “From Macros to DSLs: The Evolution of Racket”. Summit on Advances in Programming Languages (SNAPL).
One thing that I would like to understand better is the place of code walking in macro programming. The Raskin paper explains that it is not possible to write a fully correct code walker in ANSI CL. Consfigurator currently uses Raskin’s best-effort portable code walker. Common Lisp: The Language 2 includes a few additional functions which didn’t make it into the ANSI standard that would make it possible to write a fully correct code walker, and most implementations of CL provide them under one name or another. So one possibility is to write a code walker in terms of ANSI CL + those few additional functions, and then use a portability layer to get access to those functions on different implementations (e.g. trivial-cltl2).
However, On Lisp and Let Over Lambda, the two most substantive texts on CL macrology, both explicitly put code walking out-of-scope. I am led to wonder: does the Zen of Common Lisp-style macrology involve doing without code walking? One key idea with macros is to productively blur the distinction between designing languages and writing code in those languages. If your macros require code walking, have you perhaps ended up too far to the side of designing whole languages? Should you perhaps rework things so as not to require the code walking? Then it would matter less that those parts of CLtL2 didn’t make it into ANSI. Graham notes in ch. 17 of On Lisp that read macros are technically more powerful than defmacro because they can do everything that defmacro can and more. But it would be a similar sort of mistake to conclude that Lisp is about read macros rather than defmacro.
There might be some connection between arguments for and against avoiding code walking in macro programming and the maintenance of homoiconicity. One extant CL code walker, hu.dwim.walker, works by converting back and forth between conses and CLOS objects (Raskin’s best-effort code walker has a more minimal interface), and hygienic macro systems in Scheme similarly trade away homoiconicity for additional metadata (one Lisp programmer I know says this is an important sense in which Scheme could be considered not a Lisp). Perhaps arguments against involving much code walking in macro programming are equivalent to arguments against Racket’s idea of language-oriented programming. When Racket’s designers say that Racket’s macro system is “more powerful” than CL’s, they would be right in the sense that the system can do all that defmacro can do and more, but wrong if indeed the activity of macro programming is more powerful when kept further away from language design. Anyway, these are some hypotheses I am hoping to develop some more concrete ideas about in my reading.
Consfigurator has long had the combinators OS:TYPECASE and OS:ETYPECASE to conditionalise on a host’s operating system. For example:
(os:etypecase
  (debian-stable (apt:installed-backport "notmuch"))
  (debian-unstable (apt:installed "notmuch")))
You can’t distinguish between stable releases of Debian like this, however, because while that information is known, it’s not represented at the level of types. You can manually conditionalise on Debian suite using something like this:
(defpropspec notmuch-installed :posix ()
  (switch ((os:debian-suite (get-hostattrs-car :os)) :test #'string=)
    ("bullseye" '(apt:installed-backport "notmuch"))
    (t '(apt:installed "notmuch"))))
but that means stepping outside of Consfigurator’s DSL, which has various disadvantages, such as a reduction in readability. So today I’ve added some new combinators, so that you can say
(os:debian-suite-case
  ("bullseye" (apt:installed-backport "notmuch"))
  (t (apt:installed "notmuch")))
For my own use I came up with this additional simple wrapper:
(defmacro for-bullseye (atomic)
  `(os:debian-suite-case
     ("buster")
     ("bullseye" ,atomic)
     ;; Check the property is actually unapplicable.
     ,@(and (get (car atomic) 'punapply) `((t (unapplied ,atomic))))))
So now I can say
(for-bullseye (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))
which is a succinct expression of the following: “on bullseye, pin elpa-org-roam to sid with priority 900, drop the pin when we upgrade the machine to bookworm, and don’t do anything at all if the machine is still on buster”.
As a consequence of my doing Debian development but running Debian stable everywhere, I accumulate a number of tweaks like this one over the course of each Debian stable release. In the past I’ve gone through and deleted them all when it’s time to upgrade to the next release, but then I’ve had to add properties to undo changes made for the last stable release, and write comments saying why those are there and when they can be safely removed, which is tedious and verbose. This new combinator is cleaner.
I am pleased to announce Consfigurator 1.0.0.
Reaching version 1.0.0 signifies that we will try to avoid API breaks. You should be able to use Consfigurator to manage production systems.
You can find the source at https://git.spwhitton.name/consfigurator for browsing online or git cloning.
Releases are made by publishing signed git tags to that repository. The tag for this release is named ‘v1.0.0’, and is signed by me.
On Debian/etc. systems, apt-get install cl-consfigurator
-8<-
Consfigurator is a system for declarative configuration management using Common Lisp. You can use it to configure hosts as root, deploy services as unprivileged users, build and deploy containers, install operating systems, produce disc images, and more. Some key advantages:
Apply configuration by transparently starting up another Lisp image on the machine to be configured, so that you can use the full power of Common Lisp to inspect and control the host.
Also define properties of hosts in a more restricted language, that of :POSIX properties, to configure machines, containers and user accounts where you can’t install Lisp. These properties can be applied using just an SSH or serial connection, but they can also be applied by remote Lisp images, enabling code reuse.
Flexibly chain and nest methods of connecting to hosts. For example, you could have Consfigurator SSH to a host, sudo to root, start up Lisp, use the setns(2) system call to enter a Linux container, and then deploy a service. Secrets, and other prerequisite data, are properly passed along.
Combine declarative semantics for defining hosts and services with a multiparadigmatic general-purpose programming language that won’t get in your way.
Declarative configuration management systems like Consfigurator and Propellor share a number of goals with projects like the GNU Guix System and NixOS. However, tools like Consfigurator and Propellor try to layer the power of declarative and reproducible configuration semantics on top of traditional, battle-tested UNIX system administration infrastructure like distro package managers, package archives and daemon configuration mechanisms, rather than seeking to replace any of those. Let’s get as much as we can out of all that existing distro policy-compliant work!
I like to look at the Emacs subreddit and something I’ve noticed recently is people asking “should I start by writing my own Emacs config, or should I use this or that prepackaged one?” There is also this new config generator published by Philip Kaludercic. I find implicit in these the idea that one’s init.el is a singular product. To start using Emacs, newcomers seem to think, you need to couple it with a completed init.el, and so there is the question of writing your own or using one someone else has written. I think that an appropriate analogy is certain shell scripts. If you want to burn backups to DVDs you might download someone’s DVD burning shell script which tries to make that easy, or you might write your own. In both cases, you are likely to want to tweak the script after you’ve started using it, but there is nevertheless a discrete point at which you go from having part of a script and not being able to burn DVDs, to having a completed script and now being able to burn DVDs. Similarly, the idea that you can’t start using Emacs until you couple it with an init.el is like thinking that there is a process of producing or downloading an init.el, and only after that can you begin using Emacs.
This thinking makes sense if you’re developing one of the large Emacs configuration frameworks like Spacemacs or Doom Emacs. The people behind those projects are seeking to build something quite different from Emacs, using Emacs as a base, and for many people using that new, quite different thing is preferable to using Emacs. Then indeed, until you’ve finished developing your configuration framework’s init.el to a degree that you’re ready to release version 0.1 of your framework, you haven’t got something that’s ready to use. Like the shell script, there’s a discrete point after which you have a product, and there’s lots of labour that must precede it. (I think it’s pretty cool that Emacs is flexible enough to be something on its own and also a base for these projects.)
However, this temporal structure does not make sense to me when it comes to using just Emacs. I find the idea that one’s init.el is a singular product strange. In particular, my init.el is not something which transforms Emacs into something else, like the init.el that’s part of Doom Emacs does. It’s just a long collection of incrementally developed, largely unrelated hacks and tweaks. You could insist that it transforms default Emacs into Sean’s Emacs, but I think that misleadingly implies that there’s an overarching design and cohesion to my init.el, which just isn’t there – and it would be weird if it was there, because then I would be doing something more like the developers behind Doom Emacs. So if you’re not going to use one of the large configuration frameworks, then there is no task of “writing your own init.el” that stands before you. You just start using Emacs, and as part of that you’re going to write functions and rebind keys, etc., and your init.el is the file in which those changes are collected. The choice is not between writing your own init.el or downloading a prepackaged one. It’s between using Emacs, or using another product that has been built out of Emacs. The latter necessarily involves a completed init.el, but that’s an implementation detail.
I am very happy Kaludercic’s configuration generator has been made available, but I would be inclined to rename it. A new user of Emacs is likely to be overwhelmed with unintuitive defaults that have stuck around for mostly historical reasons. There are a lot of them, so it is a lot to ask of new users that they just identify the defaults that don’t suit them and add lines to their init.el to change those. When too many things are unintuitive, it’s hard to know where to start. Kaludercic’s configuration generator is a way to walk newcomers through the most significant defaults in a way that something structured like a reference manual would struggle to do. The result is some Lisp code, but I would prefer not to refer to that result as an Emacs configuration. It’s a series of configuration snippets that you can add to your Emacs configuration to help deal with the newcomer’s problem of too many unintuitive defaults.
I’m not sure it’s important to actually rename Kaludercic’s tool to something which says it’s a generator of configuration snippets rather than a generator of configurations. But I would like to challenge the idea that to start using Emacs you first need to couple it with a completed init.el. If you’re going to use Emacs, rather than Spacemacs or Doom Emacs, you can just start using it. If you find yourself butting up against a lot of unintuitive defaults, then you can use a walkthrough tool like Kaludercic’s to figure out what you need to add to your init.el to deal with those. But that is better understood as just more of the tweaking and customisation that Emacs users are always getting up to, not some prerequisite labour.
Here are a few new features I’ve added to GNU ELPA and upstream GNU Emacs recently. Text is adapted from the in-tree documentation I wrote for the new features. Thanks to everyone who offered feedback on my patches.
New feature to easily bypass Eshell’s own pipelining
Prefixing |, < or > with an asterisk, i.e. *|, *< or *>, will
cause the whole command to be passed to the operating system shell. This is
particularly useful to bypass Eshell’s own pipelining support for pipelines
which will move a lot of data.
This has long been an obstacle when it comes to using Eshell as one’s main shell. The new syntax is easy to use and covers a lot of different use cases.
New Eshell module to help supplying absolute file names to remote commands
After enabling the new eshell-elecslash
module, typing a forward slash as
the first character of a command line argument will automatically insert the
Tramp prefix. The automatic insertion applies only when default-directory
is remote and the command is a Lisp function. This frees you from having to
keep track of whether commands are Lisp function or external when supplying
absolute file name arguments.
This is another attempt to solve an Eshell papercut. Suppose you execute
cd /ssh:root@example.com:
find /etc -name "*gnu*"
and in reviewing the output of the command, you identify a file /etc/gnugnu
that should be moved somewhere else. So you type
mv /etc/gnugnu /tmp
But since mv refers to the local Lisp function eshell/mv, not a remote shell command (unlike find(1)), to say this is to request that the local file /etc/gnugnu be moved into the local /tmp directory. After you enable eshell-elecslash, then when you type the above mv invocation you will get the following input, which is what you intended:
mv /ssh:root@example.com:/etc/gnugnu /ssh:root@example.com:/tmp
imenu is now bound to M-g i globally
This is a useful command but everyone has to come up with their own binding for it. No longer.
New macro-writing macros, cl-with-gensyms and cl-once-only
These two macros are quite interesting. In the history of Common Lisp-style
macros, these are the only two macro-writing macros that have emerged as
essential tools for intermediate and advanced macrology. Most any other
macro-writing macros are either project- or programmer-specific. In his book
on Lisp macros, Doug Hoyte proposes an alternative to defmacro, defmacro!, which is just the same as defmacro except that it builds in facilities equivalent to cl-with-gensyms and cl-once-only.
I’ve long wanted to have these macros available in core Emacs Lisp, too, and
now they are.
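To give the flavour, here are two tiny examples of my own devising; they are illustrative, not taken from the Emacs manual:

(require 'cl-lib)

;; `cl-with-gensyms' binds each listed variable to a fresh uninterned
;; symbol, for use in bindings that the expansion introduces.
(defmacro my-swap (a b)
  "Exchange the values of the generalized variables A and B."
  (cl-with-gensyms (tmp)
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; `cl-once-only' arranges for an argument form to be evaluated exactly
;; once, even though it appears twice in the expansion.
(defmacro my-square (x)
  "Return X multiplied by itself, evaluating X only once."
  (cl-once-only (x)
    `(* ,x ,x)))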
New package on GNU ELPA: transient-cycles
Many commands can be conceptualised as selecting an item from an ordered list or ring. Sometimes after running such a command, you find that the item selected is not the one you would have preferred, but the preferred item is nearby in the list. If the command has been augmented with transient cycling, then it finishes by setting a transient map with keys to move backwards and forwards in the list of items, so you can select a nearby item instead of the one the command selected. From the point of view of commands subsequent to the deactivation of the transient map, it is as though the first command actually selected the nearby item, not the one it really selected.
Protesilaos Stavrou helped me test the package and has written up some usage notes.
This is an idea I came up with in 2020, and refined in my init.el since then. This year I made it into a package.