I’ve been blogging since 2005, but not all old posts have been imported here.
I finally figured out how to have an application launcher with my usual Emacs completion keybindings:
This is with Icomplete. If you use another completion framework it will look different. Crucially, it’s what you are already used to using inside Emacs, with the same completion style (flex vs. orderless vs. …), bindings, etc.
Here is my Sway binding:
bindsym p exec i3-dmenu-desktop \
--dmenu="dmenu_emacsclient 'Application: '", \
mode "default"
(for me this is inside a mode { } block)
The dmenu_emacsclient script is here. It relies on the function spw/sway-completing-read from my init.el.
As usual, this code is available for your reuse under the terms of the GNU GPL. Please see the license and copyright information in the linked files.
You also probably want a for_window directive in your Sway config to enable floating the window, and perhaps to resize it. Enjoy having your Emacs completion bindings for application launching, too!
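For reference, such a directive might look something like the following – a sketch only, since the criteria depend on how dmenu_emacsclient names its frame, so treat the app_id and title matches as assumptions to adjust for your own setup:

for_window [app_id="emacs" title="^Application: "] floating enable, resize set 800 300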
I spent eight years doing teaching and research in Philosophy at the University of Arizona, in Tucson, Arizona, from 2015 to 2023. I now have a love for America and its people, even though I am not sure I could ever live there again. Americans would say that Tucson is an outlier, an odd post-frontier town which is not reflective of the rest of the nation’s cities. And I only really visited New York, the Bay Area, and two towns in Mississippi, so I mostly take them at their word. But I could see something in common between these places that’s distinct from where else I’ve lived. I will not seek to capture that here, but instead focus on how life in Tucson was, and some things I learned.
When I first arrived I was very unsure about whether it would be a good idea to stay. I was ambivalent about reentering academia, and uneasy with the contractual terms under which I would be able to study there without paying any money for it. Once I did decide to stay for at least one semester, I tried to get myself set up with a daily routine that would be suitable for making progress with my classes, while also allowing me time to pursue my other interests. So I went to check out the library, that being where I’d done all my work as an undergraduate. I was appalled to find that there wasn’t a culture of silence. Supposedly the upper floors were designated as quiet, but the only way I could feel confident in not being interrupted was to find one of the small study desks sequestered in far corners, with those moveable shelves of books they have in university libraries between me and everyone else.
This initial problem with finding quiet and concentration somewhat epitomises a lot of my academic experiences in Tucson. I felt that the academic culture in the US was a noisy one: talking loudly to each other was valued a lot more highly than it had been in the UK, and real deep reading and thinking was something that people did on their own, at home, and didn’t talk about much. You talked about all the writing you had been doing, and indeed about what people you’d read had said, but with the latter it was as though the actual reading had happened outside of time, and the things happening within time were on-campus activities, and the hours of writing. You might say, well, it was grad school, of course the focus is more on producing one’s own work. But we did read a lot, in fact, and it’s not as though undergraduate Philosophy at Oxford didn’t involve regularly spending a lot of time writing, even if tute essays are something strange and staccato when compared to what we tried to write in grad school. And this is not to say that I didn’t learn and develop a great deal from many of those loud conversations, both in and out of seminar, but I think a productive campus needs more quiet, too.
We had two kinds of classes, lecture-style with both undergrad and graduate students, though in smaller groups than undergrad, and seminars with almost exclusively graduate students. Many people would take as many seminars as they were allowed to, and we all continued to join seminars once we’d completed coursework. But a few of us, including me, joined as many lectures as we could, even after completing coursework. I just love listening to masters of their domains of study. This was distinctly uncool – you’ve got to practice producing in order to become a philosopher yourself, would go the thought. But it’s not as if I didn’t produce too. And you can’t be disdainful of continuing to pump good philosophy into your head. Perhaps my attraction to the lecture classes was because it was somewhat closer to the deep reading with which I was familiar, that proved elusive on my American campus. You have to do the hard work to make philosophical progress, but you can’t engage with philosophy only by doing what feels like hard graft if you want to succeed, I think. You have to engage with it in other ways too, like just by listening.
A quality of the Americans I knew well which struck me early was their generosity: with their time, in their friendliness, and just materially. I mean to include here peers who were my friends, as well as people who were part of my life for extended periods, but with whom I didn’t have enough in common for friendship. When I first arrived in Tucson I lived in a house in the Sam Hughes neighbourhood, owned by the parents of one of my two roommates, Nick. He was from Phoenix, and was taking a second undergraduate degree after deciding that he didn’t really want to follow in his father’s footsteps and become a doctor, but wanted to be a programmer. Nick and I would drive to the supermarket together every Saturday in his big Ford truck, and we developed a habit of listening to The Eagles’ Take It Easy on the ride back. I never signed a lease for living in that place. At one point I was short of American money after spending a lot on a summer trip, and I asked whether I could pay my rent erratically for a while, as my stipend came in over the following academic year, rather than transferring savings from the UK. It was no problem to do this. One Spring Break and one Thanksgiving, I joined Nick in driving up to Paradise Valley, Phoenix, to stay with his family. His mother had sat in the state House of Representatives as a Republican, and had two very yappy chihuahuas, traumatised as they had been by a previous owner. At one point they had to stay with us down in Tucson for a few days. One of them refused to walk on the tile floor, and we had to create a bridge of doormats between the carpeted room in which it was sleeping and the front door.
Nick introduced me to the American love for pulp cinema, which we don’t really have in the UK. Once Nick graduated and I developed closer friendships in my department, I watched a lot more such films with philosophers.
After living with Nick I lived alone, for nine months, in a small terraced bungalow, for barely any rent. The people around me were mostly economically deprived retirees, and young people working jobs like driving some kind of tractor around the extended grounds of the airport, alone, far away from the planes. At one point a different corporation took over management of the properties, and they tried to make us pay an additional fee for the laundry room that had until then been included. They did this by installing a lock, and telling us we had to come down to the office to pay the new fee and receive a key. My neighbour Wilma and I took the bus down to the office and objected, and eventually got keys for free. Now that I think about it, I don’t know whether other existing tenants ended up paying for it. From this I improved my understanding of how the economically deprived, even in the West, can be casually abused by businesses.
Wilma would sit behind her screen door in the evening, without the lights on, and a disembodied greeting would float out to me, among the crying cicadas, as I biked up to my own place. I had a nine month lease and I left that place right after because I was fed up with the insects infesting the place. But at the same time, living there was when I figured out how to be happy with my life in Tucson, and I maintained that happiness from then until the pandemic, when everything got hard for most everyone. Wilma was generous like Nick.
Before I said goodbye to Nick and moved in next door to Wilma, I tried to live a life involving the kind of variety that my life in Korea had had, before I went to Tucson. I was continually frustrated in this, because it was too distant from the lives that the people around me led for me to be able to figure out how to do it there, and more mundanely, because of how car-centric Tucson is. When I moved into my place on my own I somehow decided that I would try focusing entirely on my university work, and I also expanded that work a bit by registering for a seminar in Japanese literature up at the East Asian Studies department. My future PhD thesis supervisor Julia joined me for that seminar and one more the next semester, and I was able to draw upon some novels we read for my thesis.
I didn’t have Internet access at my little place, and we had finally got some designated-silent shared offices for grad students, in addition to the noisy ones where people held office hours, and talked loudly about philosophy. Suddenly my life got a lot more focused and quieter. I would get up and scramble an egg with some cheese and black pepper, and have it in a pitta bread-like thing which I sliced, froze, and defrosted in the toaster. I’d head to campus, early, and write. I’d do my classes and reading. Then I’d go swim in the big outside pool the university had, in the dark. I’d do one or two lengths at a time and then hold onto the edge and just think hard. I especially did this after my literature classes. They ran until 6pm, I think, and then I’d go to the pool, and do my lengths interspersed with thinking hard about the literature we’d discussed. Then after a long time out I’d go home late, and listen to pre-downloaded tabletop roleplaying podcasts. I slept the best I ever have, in the quiet among the noises of insects – it really was quieter despite all that noise – on this wonderful Japanese floor bed I’d found on Amazon. What I discovered during that time was the power of a simple life, I think. Or perhaps it was more about not trying to live a more complex life than the place you live allows. Or perhaps it wasn’t anything more than about the benefits of giving up fighting against a prevalent culture of workaholism – but at least, it was giving in to that situation in a way which strongly benefitted me. Going with the flow, or something.
I tried to build upon my new focus with the next phase of time in Tucson. I moved into the university’s grad student dorms, living right next to campus, in the middle of a commercial district for students that felt like one had left Tucson and gone somewhere more contemporary. This was a change I appreciated a lot, having, as I said, grown tired with all the bugs. At this time I got to know my now-fiancée Ke. I had finished with class credits but sat in on so many classes and reading groups, while still continuing to write a lot, that my work life didn’t change too much. While most people would start teaching their own classes at this point, I asked if I could continue to be assigned teaching assistant roles instead; I started teaching on my own only during the pandemic. My social life, aside from time with Ke and her roommate, mostly involved cycling East for forty minutes or so, to a house in which three fellow philosophers lived. I loved those evening rides there and nighttime rides back. Tucson is a dark city for the astronomy, and it’s also flat and bike-friendly, so for most of that journey I was on a route where various things had been set up to discourage cars from staying on the same roads as cyclists. The friends I had who lived in that house, Brandon, Tyler and Nathan, and later Nathan’s partner Meg and Tyler’s partner Amanda, were now the humblingly generous Americans in my life. We got two tabletop roleplaying groups going, with me and Nathan running a game each, and playing in each other’s. Later we were a pandemic pod, watching through Terrace House: Opening New Doors together.
I also significantly ramped up my involvement in Debian at around this time. Each Saturday morning I would visit a local coffee roasters, Caffe Lucè, have an excellent bagel and a couple of cups of coffee with half-and-half, and work on my packages.
I’ve described how I built for myself something of a sense of belonging studying Philosophy in Tucson. But ultimately, it did not compare in this regard to the place where I was most content, which was in Balliol, my Oxford college. The Arizona grad students would go out for beer at a nice place called Time Market on some Friday nights, and while it was often a very good time, I would walk home with this heavy feeling of disappointment. I can now identify this as the lack of a sense of camaraderie and belonging which I thought was essential to a productive academic environment. I can now also see that I had an intellectual kinship with Julia, Nathan, Tyler, Ke and others which was just as valuable, but it was still something I had only with individuals, lacking a sense of being part of something not only bigger but also concrete, actually in the world. The pressures of professional academia in the US didn’t seem to leave us enough space to have what I remember us having had at Balliol. Not that the Balliol I inhabited still exists – it was dependent as much on the place as the people I was there with.
The advent of the pandemic, and the remainder of my time in Tucson after the pandemic, eroded this life I’d figured out. Our department’s own erosion was part of that – a lot of people moved away to be with their partners or families when lockdowns began, and faculty retired (and in one case tragically died), and so we lost a critical mass of intellectually energetic individuals. This hit me hard, and I did not have the emotional resources remaining, post-pandemic, to try to kick start things again, as previous versions of myself might have tried to do. I find, though, that most of my memories of life and Philosophy in Tucson are of the good times, and I find it easy, now at least, to write a post like this one.
When I think back to all the classes I took, discussions I had and essays I wrote and revised, I can see significant intellectual development. At the same time, it was as though my development in other senses was put on hold for those eight years, in a way that it had not been at Oxford and in Korea. (I even find myself wanting to say that my whole life was put on hold, but that would be hyperbolic even if it felt that way sometimes, for as I have said, I developed many important friendships.) Postgraduate Philosophy was just too consuming. I don’t know if it could have been any other way, but I knew all along that it had to stop at some point; I knew that I couldn’t put all the other respects in which I wanted to grow on hold forever. Somehow, Oxford got this balance right: it managed to be just as satisfyingly intense and thrilling, without being quite all-consuming. Of course, I probably have rose-tinted glasses. It does seem, though, that European hard work manages to be more balanced, at least for what I seek to achieve, than American hard work.
During my final year, a current postdoc at Oxford happened to visit Tucson to speak at a political philosophy conference. Our quiet (to her), old-fashioned, relatively informal academic life out in the desert as grad students seemed to have a lot of advantages over hers in Oxford, even though she had completed her doctorate and obtained an academic job, while we were still students. Until I met her, I had taken for granted, I think, all the ways that academic life in Tucson was quite like Balliol undergrad had been – she told me how her colleagues were all on Twitter, but none of us were, really. When I first arrived in Tucson I found it distressing how much more of an ivory tower it seemed, with Oxford being such a politically engaged place. In the end I am very glad I did a humanities PhD where I did, and am deeply grateful to America.
Ian suggested I share the highly involved build process for my doctoral dissertation, which I submitted for examination earlier this year. Beyond compiling a PDF from Markdown and LaTeX sources, there are just two simple-seeming goals: produce a PDF that passes PDF/A validation, for long term archival, and replace the second page with a scanned copy of the page after it was signed by the examiners. Achieving these two things reproducibly turned out to require a lot of complexity.
First we build dissertation1.tex out of a number of LaTeX and Markdown files, and a Pandoc metadata.yaml, using Pandoc in a Debian sid chroot. I had to do the latter because I needed a more recent Pandoc than was available in Debian stable at the time, and didn’t dare upgrade anything else. Indeed, after switching to the newer Pandoc, I carefully diff’d dissertation1.tex to ensure nothing other than what I needed had changed.
dissertation1.tex: preamble.tex \
citeproc-preamble.tex \
committee.tex \
acknowledgements.tex \
dedication.tex \
contents.tex \
abbreviations.tex \
abstract.tex \
metadata.yaml \
template.latex \
philos.csl \
philos.bib \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md
schroot -c melete-sid -- pandoc -s -N -C -H preamble.tex \
--template=template.latex -B committee.tex \
-B acknowledgements.tex -B dedication.tex \
-B contents.tex -B abbreviations.tex -B abstract.tex \
ch1.md ch1_appA.md ch2.md ch3.md ch3_appB.md ch4.md ch5.md \
citeproc-preamble.tex metadata.yaml -o $@
With hindsight, I think that I should have eschewed Pandoc in favour of plain LaTeX for a project as large as this was. Pandoc is good for journal submissions, where one is responsible for the content but not really the presentation. However, one typesets one’s own dissertation, without anyone else’s help. I decided to commit dissertation1.tex to git, because Pandoc’s LaTeX generation is not too stable.
We then compile a first PDF. My Makefile comments say that pdfx.sty requires this particular xelatex invocation. pdfx.sty is supposed to make the PDF satisfy the PDF/A-2B long term archival standard … but dissertation1.pdf doesn’t actually pass PDF/A validation. We instead rely on GhostScript to produce a valid PDF/A-2B, at the final step. But we have to include pdfx.sty at this stage to ensure that the hyperlinks in the PDF are PDF/A-compatible – without pdfx.sty, GhostScript rejects hyperref’s links.
dissertation1.pdf: \
dissertation1.tex dissertation1.xmpdata committee_watermark.png
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
xelatex -shell-escape -output-driver="xdvipdfmx -z 0" $<
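The other prerequisite there, dissertation1.xmpdata, is the file from which pdfx.sty reads the metadata it embeds. Its contents are just a few declarations along these lines (illustrative values only, not the actual file):

\Title{Dissertation Title Here}
\Author{Sean Whitton}
\Keywords{virtue ethics\sep virtue\sep happiness\sep eudaimonism}
\Language{en-GB}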
As I said, the second page of the PDF needs to be replaced with a scanned version of the page after it was signed by the examiners. The usual tool to stitch PDFs together is pdftk. But pdftk loses the PDF’s metadata. For the true, static metadata like the title, author and keywords, it would be no problem to add them back. But the metadata that’s lost includes the PDF’s table of contents, which PDF readers display in a sidebar, with clickable links to chapters, and the sections within those. This information is not static because each time any of the source Markdown and LaTeX files change, there is the potential for the table of contents to change. So we have to extract all the metadata from dissertation1.pdf and save it to one side, before we stitch in the scanned page. We also have to hack the metadata to ensure that the second page will have the correct orientation.
SED = /^PageMediaNumber: 2$$/ { n; s/0/90/; n; s/612 792/792 612/ }
KEYWORDS = virtue ethics, virtue, happiness, eudaimonism, good lives, final ends
dissertation1_meta.txt: dissertation1.pdf
printf "InfoBegin\nInfoKey: Keywords\nInfoValue: %s\n%s\n" \
"${KEYWORDS}" "$$(pdftk $^ dump_data)" \
| sed "${SED}" >$@
Now we can stitch in the signed page, and then put the metadata back. You can’t do this in one invocation of pdftk, so far as I could see.
dissertation1_stitched_updated.pdf: \
dissertation1_stitched.pdf dissertation1_meta.txt
pdftk dissertation1_stitched.pdf \
update_info dissertation1_meta.txt output $@
dissertation1_stitched.pdf: dissertation1.pdf
pdftk A=$^ \
B=$$HOME/annex/philos/Dissertation/committee_signed.pdf \
cat A1 B1 A3-end output $@
Finally, we use GhostScript to reprocess the PDF into two valid PDF/A-2Bs, one optimised for the web. This requires supplying a colour profile, a PDFA_def.ps postscript file, a whole sequence of GhostScript options, and some raw postscript on the command line, which gives the PDF reader some display hints.
GS_OPTS1 = -sDEVICE=pdfwrite -dBATCH -dNOPAUSE -dNOSAFER \
-sColorConversionStrategy=UseDeviceIndependentColor \
-dEmbedAllFonts=true -dPrinted=false -dPDFA=2 \
-dPDFACompatibilityPolicy=1 -dDetectDuplicateImages \
-dPDFSETTINGS=/printer -sOutputFile=$@
GS_OPTS2 = PDFA_def.ps dissertation1_stitched_updated.pdf \
-c "[ /PageMode /UseOutlines \
/Page 1 /View [/XYZ null null 1] \
/PageLayout /SinglePage /DOCVIEW pdfmark"
all: Whitton_dissert_web.pdf Whitton_dissert_gradcol.pdf
Whitton_dissert_gradcol.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} ${GS_OPTS2}
Whitton_dissert_web.pdf: \
PDFA_def.ps dissertation1_stitched_updated.pdf srgb.icc
gs ${GS_OPTS1} -dFastWebView=true ${GS_OPTS2}
And here’s PDFA_def.ps, based on a sample in the GhostScript docs:
% Define an ICC profile :
/ICCProfile (srgb.icc) def
[/_objdef {icc_PDFA} /type /stream /OBJ pdfmark
[{icc_PDFA}
<<
/N 3
>> /PUT pdfmark
[{icc_PDFA} ICCProfile (r) file /PUT pdfmark
% Define the output intent dictionary :
[/_objdef {OutputIntent_PDFA} /type /dict /OBJ pdfmark
[{OutputIntent_PDFA} <<
/Type /OutputIntent % Must be so (the standard requires).
/S /GTS_PDFA1 % Must be so (the standard requires).
/DestOutputProfile {icc_PDFA} % Must be so (see above).
/OutputConditionIdentifier (sRGB)
>> /PUT pdfmark
[{Catalog} <</OutputIntents [ {OutputIntent_PDFA} ]>> /PUT pdfmark
Phew!
I’ve just released Consfigurator 1.3.0, with some readtable enhancements. So now instead of writing
(firewalld:has-policy "athenet-allow-fwd"
#>EOF><?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
you can write
(firewalld:has-policy "athenet-allow-fwd" #>>~EOF>>
<?xml version="1.0" encoding="utf-8"?>
<policy priority="-40" target="ACCEPT">
<ingress-zone name="trusted"/>
<egress-zone name="internal"/>
</policy>
EOF)
which is a lot more readable when it appears in a list of other properties. In addition, instead of writing
(multiple-value-bind (match groups)
(re:scan-to-strings "^uid=(\\d+)" (connection-connattr connection 'id))
(and match (parse-integer (elt groups 0))))
you can write just (#1~/^uid=(\d+)/p (connection-connattr connection 'id)).
On top of the Perl-inspired syntax, I’ve invented the new trailing option p to attempt to parse matches as numbers.
Another respect in which Consfigurator’s readtable has become much more useful
in this release is that I’ve finally taught Emacs about these reader macros,
such that unmatched literal parentheses within regexps or heredocs don’t cause
Emacs (and especially Paredit) to think that the code couldn’t be valid Lisp.
Although I was able mostly to reuse propertising algorithms from the built-in perl-mode, I did have to learn a lot more about how parse-partial-sexp really works, which was pretty cool.
The emacsclient(1) program is used to connect to Emacs running as a daemon. emacsclient(1) can go in your EDITOR/VISUAL environment variables so that you can edit things like Git commit messages and sudoers files in your existing Emacs session, rather than starting up a new instance of Emacs. It’s not only that this is usually faster, but also that it means you have all your session state available – for example, you can yank text from other files you were editing into the file you’re now editing.
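For example, in a shell profile (a typical setup, not necessarily exactly mine):

export EDITOR=emacsclient
export VISUAL="$EDITOR"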
Another, somewhat different use of emacsclient(1) is to open new Emacs frames for arbitrary work, not just editing a single, given file. This can be in a terminal or under a graphical display manager. I use emacsclient(1) for this purpose about as often as I invoke it via EDITOR/VISUAL. I use emacsclient -nc to open new graphical frames and emacsclient -t to open new text-mode frames, the latter when SSHing into my work machine from home, or similar. In each case, all my buffers, command history etc. are available. It’s a real productivity boost.
Some people use systemd socket activation to start up the Emacs daemon. That way, they need only ever invoke emacsclient, without any special options, and the daemon will be started if not already running. In my case, instead, emacsclient on PATH is a wrapper script that checks whether a daemon is running and starts one if necessary. The main reason I have this script is that I regularly use both the installed version of Emacs and in-tree builds of Emacs out of emacs.git, and the script knows how to choose what to launch and what to try to connect to. In particular, it ensures that the in-tree emacsclient(1) is not used to try to connect to the installed Emacs, which might fail due to protocol changes. And it won’t use the in-tree Emacs executable if I’m currently recompiling Emacs.
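The basic shape of such a wrapper is simple – the following is a minimal sketch of the pattern, not the real script (which additionally has to choose between the installed and in-tree builds); the paths here are assumptions:

#!/bin/sh
# Start a daemon if nothing is listening on the server socket,
# then hand the request off to the real emacsclient.
socket="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/emacs/server"
if ! [ -e "$socket" ] || [ -z "$(ss -Hplx src "$socket")" ]; then
    emacs --daemon
fi
exec /usr/bin/emacsclient "$@"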
I’ve recently enhanced my wrapper script to make it possible to have the primary Emacs daemon always running under gdb. That way, if there’s a seemingly-random crash, I might be able to learn something about what happened. The tricky thing is that I want gdb to be running inside an instance of Emacs too, because Emacs has a nice interface to gdb. Further, gdb’s Emacs instance – hereafter “gdbmacs” – needs to be the installed, optimised build of Emacs, not the in-tree build, such that it’s less likely to suffer the same crash. And the whole thing must be transparent: I shouldn’t have to do anything special to launch the primary session under gdb. That is, if right after booting up my machine I execute
% emacsclient foo.txt
then gdbmacs should start, it should then start the primary session under gdb, and finally the real emacsclient(1) should connect to the primary session and request editing foo.txt. I’ve got that all working now, and there are some nice additional features. If the primary session hits a breakpoint, for example, then emacsclient requests will be redirected to gdbmacs, so that I can still edit files etc. without losing the information in the gdb session. I’ve given gdbmacs a different background colour, so that if I request a new graphical frame and it pops up with that colour, I know that the main session is wedged and I might like to investigate.
First attempt: remote attaching
My first attempt, which was running for several weeks, had a different architecture. Instead of having gdbmacs start up the primary session, the primary session would start up gdbmacs, send over its own PID, and ask gdbmacs to use gdb’s functionality for attaching to existing processes. In after-init-hook I had code to check whether we are an Emacs that just started up out of my clone of emacs.git, and if so, we invoke
% emacsclient --socket-name=gdbmacs --spw/installed \
--eval '(spw/gdbmacs-attach <the pid>)'
The --spw/installed option asks the wrapper script to start up gdbmacs using the Emacs binary on PATH, not the one in emacs.git/. (We can’t use the server-eval-at function because we need the wrapper script to start up gdbmacs if it’s not already running.)
Over in gdbmacs, the spw/gdbmacs-attach function then did something like this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
  (gdb (format "gdb -i=mi --pid=%d src/emacs" pid))
  (gdb-wait-for-pending (lambda () (gud-basic-call "continue"))))
Having gdbmacs attach to the existing process is more robust than having
gdbmacs start up Emacs under gdb. If anything goes wrong with attaching, or
with gdbmacs more generally, you’ve still got the primary session running
normally; it just won’t be under a debugger. More significantly, the wrapper
script doesn’t need to know anything about the relationship between the two
daemons. It just needs to be able to start up both in-tree and installed
daemons, using the --spw/installed
option to determine which. The
complexity is all in Lisp, not shell script (the wrapper is a shell script
because it needs to start up fast).
The disadvantage of this scheme is that the primary session’s stdout and stderr are not directly accessible to gdbmacs. There is a function redirect-debugging-output to deal with this situation, and I experimented with having the primary session call this and send the new output filename to gdbmacs, but it’s much less smooth than having gdbmacs start up the primary session itself.
I think most people would probably prefer this scheme. It’s definitely cleaner to have the two daemons start up independently, and then have one attach to the other. But I decided that I was willing to complexify my wrapper script in order to have the primary session’s stdout and stderr attached to gdbmacs in the normal way.
Second attempt: daemons starting daemons
In this version, the relevant logic is shifted out of Lisp into the wrapper script. When we execute emacsclient foo.txt, the script first determines whether the primary session is already running, using something like this:
[ -e /run/user/1000/emacs/server \
  -a -n "$(ss -Hplx src /run/user/1000/emacs/server)" ]
The ss(8) tool is used to determine if anything is listening on the socket.
The script also uses flock(1) to have other instances of the wrapper script
wait, in case they are going to cause the daemon to exit, or something. If
the daemon is running, then we can just exec emacs.git/lib-src/emacsclient
to handle the request. If not, we first have to start up gdbmacs:
installed_emacsclient=$(PATH=$(echo "$PATH" \
| sed -e "s#/directory/containing/wrapper/script##") \
command -v emacsclient)
"$installed_emacsclient" -a '' -sgdbmacs --eval '(spw/gdbmacs-attach)'
spw/gdbmacs-attach now does something like this:
(let ((default-directory (expand-file-name "~/src/emacs/")))
  (gdb "gdb -i=mi --args src/emacs --fg-daemon")
  (gdb-wait-for-pending
   (lambda ()
     (gud-basic-call "set cwd ~")
     (gdb-wait-for-pending
      (lambda ()
        (gud-basic-call "run"))))))
"$installed_emacsclient"
exits as soon as spw/gdbmacs-attach returns, which is before the primary session has started listening on the socket, so the wrapper script uses inotifywait(1) to wait until /run/user/1000/emacs/server appears. Then it is finally able to exec ~/src/emacs/lib-src/emacsclient to handle the request.
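That wait is roughly the following – a sketch; the exact inotifywait usage and the polling fallback are assumptions of mine, not the real script:

# Wait for the primary session's server socket to appear, then hand off.
until [ -e /run/user/1000/emacs/server ]; do
    inotifywait -qq -e create /run/user/1000/emacs/ || sleep 1
done
exec "$HOME"/src/emacs/lib-src/emacsclient "$@"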
A particular kind of complexity
The wrapper script must be highly reliable. I use my primary Emacs session for everything, on the same laptop that I do my academic work. The main way I get at it is via a window manager shortcut that executes emacsclient -nc to request a new frame, such that if there is a problem, I won’t see any error output until I open an xterm and tail ~/.swayerr or ~/.xsession-errors. And as starting gdbmacs and only then starting up less optimised, debug in-tree builds of Emacs is not fast, I would have to wait at least ten seconds without any Emacs frame popping up before I could suppose that something was wrong.
This is where the first scheme, where the complexity is all in Lisp, really seems attractive. My emacsclient(1) wrapper script has several other facilities and convenience features, some of which are general and some of which are only for my personal usage patterns, and the code for all those is now interleaved with the special cases for gdbmacs and the primary session that I’ve described in this post. There’s a lot that could go wrong, and it’s all in shell, and its output isn’t readily visible to the user. I’ve done a lot of testing, and I’m pretty confident in the script in its current form, but if I need to change or add features, I’ll have to do a lot of testing again before I can deploy to my usual laptop.
Single-threaded, readily interactively-debuggable Emacs Lisp really shines for
this sort of “do just what I mean, as often as possible” code, and you find
a lot of it in Emacs itself, third party packages, and peoples’ init.el
files. You can add all sorts of special cases to your interactive commands to
make Emacs do just what is most useful, and have confidence that you can
manage the resulting complexity. In this case, though, I’ve got piles of just
this sort of complexity out in an opaque shell script. The ultimate goal,
though, is debugging Emacs, such that one can run yet more DJWIM Emacs Lisp,
which perhaps justifies it.
I’ve come up with a new reprepro wrapper for adding rebuilds of existing Debian packages to a local repository: reprepro-rebuilder. It should make it quicker to update local rebuilds of existing packages, patched or unpatched, working wholly out of git. Here’s how it works:
1. Start with a git branch corresponding to the existing Debian package you want to rebuild. Probably you want dgit clone foo.

2. Say reprepro-rebuilder unstable, and the script will switch you to a branch PREFIX/unstable, where PREFIX is a short name for your reprepro repository, and update debian/changelog for a local rebuild. If the branch already exists, it will be updated with a merge.

3. You can now do any local patching you might require. Then, say reprepro-rebuilder --release. (The command from step (2) will offer to release immediately for the case that no additional patching is required.)

4. At this point, your reprepro will contain a source package corresponding to your local rebuild. You can say reprepro-rebuilder --wanna-build to build any missing binaries for all suites, for localhost’s Debian architecture. (Again, the command from step (3) will offer to do this immediately after adding the source package.)
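Putting the steps together, a typical run looks something like this (a sketch; the package name foo and the comments are illustrative):

dgit clone foo && cd foo
reprepro-rebuilder unstable       # switch to PREFIX/unstable, update debian/changelog
# ... apply any local patches required ...
reprepro-rebuilder --release      # add the source package to your reprepro
reprepro-rebuilder --wanna-build  # build missing binaries for this architecture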
Additionally, if you’re rebuilding for unstable, reprepro-rebuilder will offer to rebuild for backports, too, and there are a few more convenience features, such as offering to build binaries for testing between steps (2) and (3). You can leave the script waiting to release while you do the testing.
I think that the main value of this script is keeping track of the distinct
steps of a relatively fiddly, potentially slow-running workflow for you,
including offering to perform your likely next step immediately. This means
that you can be doing something else while the rebuilds are trundling along:
you just start reprepro-rebuilder unstable
in a shell, and unless additional
patching is required between steps (2) and (3), you just have to answer script
prompts as they show up and everything gets done.
If you need to merge from upstream fairly regularly, and then produce binary
packages for both unstable and backports, that’s quite a lot of manual steps
that reprepro-rebuilder takes care of for you. But the script’s command line
interface is flexible enough for the cases where more intervention is
required, too. For example, for my Emacs snapshot builds, I have another
script to replace steps (1) and (2), which merges from a specific branch that
I know has been manually tested, and generates a special version number. Then
I say reprepro-rebuilder --release
and the script takes care of preparing
packages for unstable and bullseye-backports, and I can have my snapshots on
all of my machines without a lot of work.
The ThinkPad x220 that I had been using as an ssh terminal at home finally developed one too many hardware problems a few weeks ago, and so I ordered a Raspberry Pi 4b to replace it. Debian builds minimal SD card images for these machines already, but I wanted to use the usual ext4-on-LVM-on-LUKS setup for GNU/Linux workstations. So I used Consfigurator to build a custom image.
There are two key advantages to using Consfigurator to do something like this:
As shown below, it doesn’t take a lot of code to define the host, it’s easily customisable without writing shell scripts, and it’s all declarative. (It’s quite a bit less code than Debian’s image-building scripts, though I haven’t carefully compared, and they are doing some additional setup beyond what’s shown below.)
You can do nested block devices, as required for ext4-on-LVM-on-LUKS, without writing an intensely complex shell script to expand the root filesystem to fill the whole SD card on first boot. This is because Consfigurator can just as easily partition and install an actual SD card as it can write out a disk image, using the same host definition.
Consfigurator already had all the capabilities to do this, but as part of this project I did have to come up with the high-level wrapping API, which didn’t exist yet. My first SD card write wouldn’t boot because I had to learn more about kernel command lines; the second wouldn’t boot because of a minor bug in Consfigurator regarding /etc/crypttab; and the third build is the one I’m using, except that the first boot runs into a bug in cryptsetup-initramfs. So as far as Consfigurator is concerned I would like to claim that it worked on my second attempt, and had I not been using LUKS it would have worked on the first :)
The code
(defhost erebus.silentflame.com ()
  "Low powered home workstation in Tucson."
  (os:debian-stable "bullseye" :arm64)
  (timezone:configured "America/Phoenix")
  (user:has-account "spwhitton")
  (user:has-enabled-password "spwhitton")
  (disk:has-volumes
   (physical-disk
    (partitioned-volume
     ((partition
       :partition-typecode #x0700 :partition-bootable t :volume-size 512
       (fat32-filesystem :mount-point #P"/boot/firmware/"))
      (partition
       :volume-size :remaining
       (luks-container
        :volume-label "erebus_crypt"
        :cryptsetup-options '("--cipher" "xchacha20,aes-adiantum-plain64")
        (lvm-physical-volume :volume-group "vg_erebus"))))))
   (lvm-logical-volume
    :volume-group "vg_erebus"
    :volume-label "lv_erebus_root" :volume-size :remaining
    (ext4-filesystem :volume-label "erebus_root" :mount-point #P"/"
                     :mount-options '("noatime" "commit=120"))))
  (apt:installed "linux-image-arm64" "initramfs-tools"
                 "raspi-firmware" "firmware-brcm80211"
                 "cryptsetup" "cryptsetup-initramfs" "lvm2")
  (etc-default:contains "raspi-firmware"
                        "ROOTPART" "/dev/mapper/vg_erebus-lv_erebus_root"
                        "CONSOLES" "ttyS1,115200 tty0"))
and then you just insert the SD card and, at the REPL on your laptop,
CONSFIG> (hostdeploy-these laptop.example.com
(disk:first-disk-installed-for nil erebus.silentflame.com #P"/dev/mmcblk0"))
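You can also write out a disk image file rather than installing directly to a device, using the same host definition. From memory the property is something like the following – treat both the property name and the image path as assumptions to check against the Consfigurator manual:

CONSFIG> (hostdeploy-these laptop.example.com
           (disk:raw-image-built-for nil erebus.silentflame.com
                                     #P"/home/me/tmp/erebus.img"))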
There is more general information in the OS installation tutorial in the Consfigurator user’s manual.
Other niceties
Configuration management that’s just as easily applicable to OS installation as it is to the more usual configuration of hosts over SSH drastically improves the ratio of cost-to-benefit for including small customisations one is used to.
For example, my standard Debian system configuration properties (omitted from the code above) meant that when I was dropped into an initramfs shell during my attempts to make an image that could boot itself, I found myself availed of my custom Space Cadet-inspired keyboard layout, without really having thought at any point “let’s do something to ensure I can have my usual layout while I’m figuring this out.” It was just included along with everything else.
As compared with the ThinkPad x220, it’s nice how the Raspberry Pi 4b is silent and doesn’t have any LEDs lit by default once it’s booted. A quirk of my room is that one plug socket is controlled by a switch right next to the switch for the ceiling light, so I’ve plugged my monitor into that outlet. Then when I’ve finished using the new machine I can flick that switch and the desk becomes completely silent and dark, without actually having to suspend the machine to RAM, thereby stopping cron jobs, preventing remote access from the office to fetch uncommitted files, etc..
I’d like to share some pointers for using Gnus together with notmuch rather than notmuch together with notmuch’s own Emacs interface, notmuch.el. I set about this because I recently realised that I had been poorly reimplementing lots of Gnus features in my init.el, primarily around killing threads and catching up groups, supported by a number of complex shell scripts. I’ve now switched over, and I’ve been able to somewhat simplify what’s in my init.el, and drastically simplify my notmuch configuration outside of Emacs. I’m always more comfortable with less Unix and more Lisp when it’s feasible.
- The basic settings are gnus-search-default-engines and gnus-search-notmuch-remove-prefix, explained in (info "(gnus) Searching"), and an entry for your maildir in gnus-secondary-select-methods, explained in (info "(gnus) Maildir"). Then you will have G G and G g in the group buffer to make and save notmuch searches.

- I think it’s important to have something equivalent to notmuch-saved-searches configured programmatically in your init.el, rather than interactively adding each saved search to the group buffer. This is because, as notmuch users know, these saved searches are more like permanent, virtual inboxes than searches. You can learn how to do this by looking at how gnus-group-make-search-group calls gnus-group-make-group. I have some code running in gnus-started-hook which does something like this for each saved search:

      (if (gnus-group-entry group)
          (gnus-group-set-parameter group 'nnselect-specs ...)
        (gnus-group-make-group ...))

  The idea is that if you update your saved search in your init.el, rerunning this code will update the entries in the group buffer. An alternative would be to just kill every nnselect search in the group buffer each time, and then recreate them. In addition to reading gnus-group-make-search-group, you can look in ~/.newsrc.eld to see the sort of nnselect-specs group parameters you’ll need your code to produce. I have quite complicated generation of my saved searches from some variables, but that’s something I had when I was using notmuch.el, too, so perhaps I’ll describe some of the ideas in there in another post.
- You’ll likely want to globally bind a function which starts up Gnus if it’s not already running and then executes an arbitrary notmuch search. For that you’ll want (unless (gnus-alive-p) (gnus)), and not (unless (gnus-alive-p) (gnus-no-server)). This is because you need Gnus to initialise nnmaildir before doing any notmuch searches. Gnus passes --output=files to notmuch and constructs a summary buffer of results by selecting mail that it already knows about with those filenames.

- When you’re programmatically generating the list of groups, you might also want to programmatically generate a topics topology. This is how you do that:
      (with-current-buffer gnus-group-buffer
        (gnus-topic-mode 0)
        (setq gnus-topic-alist nil
              gnus-topic-topology nil)
        ;; Now push to those two variables.  You can also use
        ;; `gnus-topic-move-matching' to move nnmaildir groups into,
        ;; e.g., "misc".
        (gnus-topic-mode 1)
        (gnus-group-list-groups))
  If you do this in gnus-started-hook, the values for those variables Gnus saves into ~/.newsrc.eld are completely irrelevant and do not need backing up/syncing.

- When you want to use M-g to scan for new mail in a saved search, you’ll need to have Gnus also rescan your nnmaildir inbox, else it won’t know about the filenames returned by notmuch and the messages won’t appear. This is similar to the gnus vs. gnus-no-server issue above. I’m using :before advice to gnus-request-group-scan to scan my nnmaildir inbox each time any nnselect group is to be scanned; there is a sketch of such advice after this list.

- If you are used to linking to mail from Org-mode buffers, the existing support for creating links works fine, and the standard gnus: links already contain the Message-ID. But you’ll probably want opening the link to perform a notmuch search for id:foo rather than trying to use Gnus’s own jump-to-Message-ID code. You can do this using :around or :override advice for org-gnus-follow-link: look at gnus-group-read-ephemeral-search-group to do the search, and then call gnus-summary-goto-article.
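Here is roughly what that :before advice for rescanning might look like – a minimal sketch, assuming the nnmaildir inbox is a group called "nnmaildir+default:inbox"; both that group name and the function name are placeholders for your own setup:

;; Rescan the nnmaildir inbox before scanning any nnselect group,
;; so that the filenames notmuch returns are already known to Gnus.
;; "nnmaildir+default:inbox" is a hypothetical group name.
(defun my/gnus-rescan-inbox-first (group &rest _)
  (when (gnus-nnselect-group-p group)
    (let ((inbox "nnmaildir+default:inbox"))
      (gnus-request-group-scan inbox (gnus-get-info inbox)))))
(advice-add 'gnus-request-group-scan :before #'my/gnus-rescan-inbox-first)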
I don’t think that the above is especially hacky, and don’t expect changes to Gnus to break any of it. Implementing the above for your own notmuch setup should get you something close enough to notmuch.el that you can take advantage of Gnus’ unique features without giving up too much of notmuch’s special features. However, it’s quite a bit of work, and you need to be good at Emacs Lisp. I’d suggest reading lots of the Gnus manual and determining for sure that you’ll benefit from what it can do before considering switching away from notmuch.el.
Reading through the Gnus manual, it’s been amazing to observe the extent to which I’d been trying to recreate Gnus in my init.el, quite oblivious that everything was already implemented for me so close to hand. Moreover, I used Gnus ten years ago when I was new to Emacs, so I should have known! I think that back then I didn’t really understand the idea that Gnus for mail is about reading mail like news, and so I didn’t use any of the features, back then, that more recently I’ve been unknowingly reimplementing.
I recently released Consfigurator 1.0.0 and I’m now returning to my Common Lisp reading. Building Consfigurator involved the ad hoc development of a cross between a Haskell-style functional DSL and a Lisp-style macro DSL. I am hoping that it will be easier to retain lessons about building these DSLs more systematically, and making better use of macros, by finishing my studying of macrology books and papers only after having completed the ad hoc DSL. Here’s my current list:
Finishing off On Lisp and Let Over Lambda.
Richard C. Waters. 1993. “Macroexpand-All: an example of a simple lisp code walker.” In Newsletter ACM SIGPLAN Lisp Pointers 6 (1).
Michael Raskin. 2017. “Writing a best-effort portable code walker in Common Lisp.” In Proceedings of 10th European Lisp Symposium (ELS2017).
Culpepper et al. 2019. “From Macros to DSLs: The Evolution of Racket”. Summit on Advances in Programming Languages (SNAPL).
One thing that I would like to understand better is the place of code walking in macro programming. The Raskin paper explains that it is not possible to write a fully correct code walker in ANSI CL. Consfigurator currently uses Raskin’s best-effort portable code walker. Common Lisp: The Language 2 includes a few additional functions which didn’t make it into the ANSI standard that would make it possible to write a fully correct code walker, and most implementations of CL provide them under one name or another. So one possibility is to write a code walker in terms of ANSI CL + those few additional functions, and then use a portability layer to get access to those functions on different implementations (e.g. trivial-cltl2).
However, On Lisp and Let Over Lambda, the two most substantive texts on CL macrology, both explicitly put code walking out-of-scope. I am led to wonder: does the Zen of Common Lisp-style macrology involve doing without code walking? One key idea with macros is to productively blur the distinction between designing languages and writing code in those languages. If your macros require code walking, have you perhaps ended up too far to the side of designing whole languages? Should you perhaps rework things so as not to require the code walking? Then it would matter less that those parts of CLtL2 didn’t make it into ANSI. Graham notes in ch. 17 of On Lisp that read macros are technically more powerful than defmacro because they can do everything that defmacro can and more. But it would be a similar sort of mistake to conclude that Lisp is about read macros rather than defmacro.
There might be some connection between arguments for and against avoiding code walking in macro programming and the maintenance of homoiconicity. One extant CL code walker, hu.dwim.walker, works by converting back and forth between conses and CLOS objects (Raskin’s best-effort code walker has a more minimal interface), and hygienic macro systems in Scheme similarly trade away homoiconicity for additional metadata (one Lisp programmer I know says this is an important sense in which Scheme could be considered not a Lisp). Perhaps arguments against involving much code walking in macro programming are equivalent to arguments against Racket’s idea of language-oriented programming. When Racket’s designers say that Racket’s macro system is “more powerful” than CL’s, they would be right in the sense that the system can do all that defmacro can do and more, but wrong if indeed the activity of macro programming is more powerful when kept further away from language design. Anyway, these are some hypotheses I am hoping to develop some more concrete ideas about in my reading.
Consfigurator has long had combinators OS:TYPECASE and OS:ETYPECASE to conditionalise on a host’s operating system. For example:
(os:etypecase
  (debian-stable (apt:installed-backport "notmuch"))
  (debian-unstable (apt:installed "notmuch")))
You can’t distinguish between stable releases of Debian like this, however, because while that information is known, it’s not represented at the level of types. You can manually conditionalise on Debian suite using something like this:
(defpropspec notmuch-installed :posix ()
  (switch ((os:debian-suite (get-hostattrs-car :os)) :test #'string=)
    ("bullseye" '(apt:installed-backport "notmuch"))
    (t '(apt:installed "notmuch"))))
but that means stepping outside of Consfigurator’s DSL, which has various disadvantages, such as a reduction in readability. So today I’ve added some new combinators, so that you can say
(os:debian-suite-case
  ("bullseye" (apt:installed-backport "notmuch"))
  (t (apt:installed "notmuch")))
For my own use I came up with this additional simple wrapper:
(defmacro for-bullseye (atomic)
  `(os:debian-suite-case
    ("buster")
    ("bullseye" ,atomic)
    ;; Check the property is actually unapplicable.
    ,@(and (get (car atomic) 'punapply) `((t (unapplied ,atomic))))))
So now I can say
(for-bullseye (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))
which is a succinct expression of the following: “on bullseye, pin elpa-org-roam to sid with priority 900, drop the pin when we upgrade the machine to bookworm, and don’t do anything at all if the machine is still on buster”.
As a consequence of my doing Debian development but running Debian stable everywhere, I accumulate a number of tweaks like this one over the course of each Debian stable release. In the past I’ve gone through and deleted them all when it’s time to upgrade to the next release, but then I’ve had to add properties to undo changes made for the last stable release, and write comments saying why those are there and when they can be safely removed, which is tedious and verbose. This new combinator is cleaner.