
I’ve been playing in a 5e campaign for around two months now. In the past ten days or so I’ve been reading various source books and Internet threads regarding the design of 5th edition. I’d like to draw some comparisons and contrasts between 5th edition and the 3rd edition family of games (DnD 3.5e and Paizo’s Pathfinder, which may be thought of as 3.75e).

The first thing I’d like to discuss is that wizards and clerics are no longer Vancian spellcasters. In rules terms, this is the idea that individual spells are pieces of ammunition: spellcasters have a list of individual spells stored in their heads, and as they cast spells from that list, they cross off each item. Barring special rules, such as clerics spontaneously converting prepared spells to healing spells, the only way to add items back to the list is to take a night’s rest. Contrast this with spending points from a pool of energy in order to cast a fireball: there, the limiting factor on casting spells is having enough points in your mana pool, not having further castings of the spell waiting in memory.

One of the design goals of 5th edition was to reduce the dominance of spellcasters at higher levels of play. The article to which I linked in the previous paragraph argues that this rebalancing requires the removal of Vancian magic. The idea, to the extent that I’ve understood it, is that Vancian magic is not an effective restriction on spellcaster power levels, so it is to be replaced with other restrictions—adding new restrictions while retaining the restrictions inherent in Vancian magic would leave spellcasters crippled.

A further reason for removing Vancian magic was to defeat the so-called “five minute adventuring day”. The combat ability of a party that contains higher-level Vancian spellcasters drops significantly once they’ve fired off their most powerful combat spells. So adventuring groups would find themselves getting into a fight, and then immediately retreating to rest up fully in order to get their spells back. This removes interesting strategic and roleplaying possibilities involving the careful allocation of resources, and continuing to fight as hit points run low.

There are some other related changes. Spell components are no longer used up when casting a spell. So you can use one piece of bat guano for every fireball your character ever casts, instead of each casting requiring a new piece. Correspondingly, you can use a spell focus, such as a cool wand, instead of a pouch full of material components—since the pouch never runs out, there’s no mechanical change if a wizard uses an arcane focus instead. 0th level spells may now be cast at will (although Pathfinder had this too). And there are decent 0th level attack spells, so a spellcaster need not carry a crossbow or shortbow in order to have something to do on rounds when it would not be optimal to fire off one of their precious spells.

I am very much in favour of these design goals. The five minute adventuring day gets old fast, and I want it to be possible for the party to rely on the cool abilities of non-spellcasters to deal with the challenges they face. However, I am concerned about the flavour changes that result from the removal of Vancian magic. These affect wizards and clerics differently, so I’ll take each case in turn.

Firstly, consider wizards. In third edition, a wizard had to prepare and cast Read Magic (the only spell they could prepare without a spellbook), and then set about working through their spellbook. This involved casting the spells they wanted to prepare, up until the last few triggering words or gestures that would cause the effect of the spell to manifest. They would commit these final parts of the spell to memory. When it came to casting the spell, the wizard would say the final few words and make the required gestures, and bring out relevant material components from their component pouch. The completed spell would be ripped out of their mind, to manifest its effect in the world. We see that the casting of a spell is a highly mentally-draining activity—it rips the spell out of the caster’s memory!—not to be undertaken lightly. Thus it is natural that a wizard would learn to use a crossbow for basic damage-dealing. Magic is not something that comes very naturally to the wizard, to be deployed in combat as readily as the fighter swings their sword. They are not a superhero or video game character, “pew pew”ing their way to victory. This is a very cool starting point upon which to roleplay an academic spellcaster, not really available outside of tabletop games. I see it as a distinction between magical abilities and real magic.

Secondly, consider clerics. Most of the remarks in the previous paragraph apply, suitably reworked to be in terms of requesting certain abilities from the deity to whom the cleric is devoted. Additionally, there is the downgrading of the importance of the cleric’s healing magic in 5th edition. Characters can heal themselves by taking short and long rests. Previously, natural healing was very slow, so a cleric would need to convert all their remaining magic to healing spells at the end of the day, and hope that it was enough to bring the party up to fighting shape. Again, this made the party of adventurers seem less like superheroes or video game characters. Magic had a special, important, and unique role that couldn’t be replaced by the abilities of other classes.

There are some rules in the back of the DMG—“Slow Natural Healing”, “Healing Kit Dependency”, “Lingering Wounds”—which can be used to make healing magic more important. I’m not sure how well they would work without changes to the cleric class.

I would like to find ways to restore the feel and flavour of Vancian clerics and wizards to 5th edition, without sacrificing the improvements that have been made that let other party members do cool stuff too. I hope it is possible to keep magic cool and unique without making it dominate the game. It would be easy to forbid the use of arcane foci, and say that material component pouches run out if the party do not visit a suitable marketplace often enough. This would not have a significant mechanical effect, and could enhance roleplaying possibilities. I am not sure how I could deal with the other issues I’ve discussed without breaking the game.

The second thing I would like to discuss is bounded accuracy. Under this design principle, the modifiers to dice rolls grow much more slowly, while the gain of hit points remains unbounded. Under third edition, it was all but mechanically impossible for a low-level monster to land a hit on a higher-level adventurer, rendering such monsters totally useless even in overwhelming numbers. With bounded accuracy, it’s always possible for a low-level monster to hit a PC, even if it does insignificant damage. That means that multiple low-level monsters pose a threat.

This change opens up many roleplaying opportunities by keeping low-level character abilities relevant, and by letting low-level monster types remain involved in stories without needing implausible new abilities just to keep pace with the PCs. However, I’m a little worried that it might make high-level player characters feel a lot less powerful to play. I want to cease to be a fragile adventurer and become a world-changing hero at later levels, rather than forever remain vulnerable to the things that I was vulnerable to at the start of the game. This desire might just be the result of the video games I played growing up: in the JRPGs I played and in Diablo II, enemies in earlier areas of the map were no threat at all once you’d levelled up by conquering higher-level areas. So my concerns about bounded accuracy might just be that it clashes with my own expectations of how fantasy heroes work. A good DM might be able to avoid these worries entirely.

The final thing I’d like to discuss is the various simplifications to the rules of 5th edition, compared with 3rd edition and Pathfinder. Attacks of opportunity are only provoked when leaving a threatened square; you can go ahead and cast a spell when in melee with someone. There is a very short list of skills, and party members are much closer to each other in skills, now that you can’t pump more and more ranks into one or two of them. Feats as a whole are an optional rule.

At first I was worried about these simplifications. I thought that they might make character building and tactics in combat a lot less fun. However, I am now broadly in favour of all of these changes, for two reasons. Firstly, they make the game so much more accessible, and make it far more viable to play without relying on a computer program to fill in the boxes on your character sheet. In my 5th edition group, two of us have played 3rd edition games, and the other four have never played any tabletop games before. But nobody has any problems figuring out their modifiers, because it is always simply your ability bonus or penalty, plus your proficiency bonus if relevant. And advantage and disadvantage are so much more fun than getting an additional plus or minus two. Secondly, these simplifications downplay the importance of the maths, which means it is far less likely to be broken. It is easier to ensure that a smaller core of rules is balanced than it is to keep in check a larger mass of rules, constantly being supplemented by more and more add-on books containing more and more feats and prestige classes. That means that players make their characters cool by roleplaying them in interesting ways, not by coming up with ability combos and synergies in advance of actually sitting down to play. Similarly, DMs can focus on flavouring monsters, rather than writing up lengthy stat blocks.

I think that this last point reflects what I find most worthwhile about tabletop RPGs. I like characters to encounter cool NPCs and cool situations, and then react in cool ways. I don’t care that much about character creation. (I used to care more about this, but I think it was mainly because of interesting options for magic items, which haven’t gone away.) The most important thing is exercising group creativity while actually playing the game, rather than players and DMs having to spend a lot of time preparing the maths in advance of playing. Fifth edition enables this by preventing the rules from getting in the way, as broken or overly complex rules do. I think this is why I love Exalted: stunting is vital, and there is social combat. I hope to be able to work out a way to restore Vancian magic, but even without that, on balance, fifth edition seems like a better way to do group storytelling about fantasy heroes. Hopefully I will have an opportunity to DM a 5th edition campaign. I am considering disallowing all homebrew, and all classes and races from supplemental books: stick to the well-balanced core rules, and do everything else by means of roleplaying and flavour. This is far less gimmicky, if more work for unimaginative players (such as myself!).

Some further interesting reading:

Posted Mon 13 Mar 2017 23:37:02 UTC

On Friday night I attended a talk by Sherry Turkle called “Reclaiming Conversation: The Power of Talk in a Digital Age”. Here are my notes.


Turkle is an anthropologist who interviews people from different generations about their communication habits. She has observed cross-generational changes thanks to (a) the proliferation of instant messaging apps such as WhatsApp and Facebook Messenger; and (b) fast web searching from smartphones.

Her main concern is that conversation is being trivialised. Consider six or seven college students eating a meal together. Turkle’s research has shown that the etiquette among such a group has shifted such that so long as at least three people are engaged in conversation, others at the table feel comfortable turning their attention to their smartphones. But then the topics of verbal conversation will tend away from serious issues – you wouldn’t talk about your mother’s recent death if anyone at the table was texting.

There are also studies that purport to show that the visibility of someone’s smartphone causes them to take a conversation less seriously. The hypothesis is that the smartphone is a reminder of all the other places they could be, instead of with the person they are with.

A related cause of the trivialisation of conversation is that people are far less willing to make themselves emotionally vulnerable by talking about serious matters. People have a high degree of control over the interactions that take place electronically (they can think about their reply for much longer, for example). Texting is not open-ended in the way a face-to-face conversation is. People are unwilling to give up this control, so they choose texting over talking.

What is the upshot of these two respects in which conversation is being trivialised? Firstly, there are psycho-social effects on individuals, because people are missing out on opportunities to build relationships. But secondly, there are political effects. Disagreeing about politics immediately makes a conversation quite serious, and people just aren’t having those conversations. This contributes to polarisation.

Note that this is quite distinct from the problems of fake news and the filter-bubble effects of search engine algorithms, including Facebook’s news feed. It would be much easier to tackle fake news if people talked about it with people around them who would be likely to disagree with them.

Turkle understands connection as a capacity for solitude and also for conversation. The drip feed of information from the Internet prevents us from using our capacity for solitude. But then we fail to develop a sense of self. Then when we finally do meet other people in real life, we can’t hear them because we just use them to try to establish a sense of self.

Turkle wants us to be more aware of the effects that our smartphones can have on conversations. People very rarely take their phone out during a conversation because they want to escape from that conversation. Instead, they think that the phone will contribute to that conversation, by sharing some photos, or looking up some information online. But once the phone has come out, the conversation almost always takes a turn for the worse. If we were more aware of this, we would have access to deeper interactions.

A further respect in which the importance of conversation is being downplayed is in the relationships between teachers and students. Students would prefer to get answers by e-mail rather than build a relationship with their professors, but of course they are expecting far too much of e-mail, which can’t teach them in the way interpersonal contact can.

All the above is, as I said, cross-generational. Something that is unique to millennials and younger generations is that we seek validation for the way that we feel using social media. A millennial is not sure how they feel until they send a text or make a broadcast (this makes them awfully dependent on others). Older generations feel something, and then seek out social interaction (presumably to share, but not in the social media sense of ‘share’).

What does Turkle think we can do about all this? She had one positive suggestion and one negative suggestion. In response to student or colleague e-mails asking for something that ought to be discussed face-to-face, reply “I’m thinking.” And you’ll find they come to you. She doesn’t want anyone to write “empathy apps” in response to her findings. For once, more tech is definitely not the answer.

Turkle also made reference to the study reported here and here and here.

Posted Tue 07 Feb 2017 02:52:57 UTC

The Sorcerer’s Code

This is a well-written biographical piece on Stallman.

Posted Tue 31 Jan 2017 16:37:31 UTC

There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference.

I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things.

When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian’s packaging is just another branch of development.

Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git.
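For concreteness, the cycle looks roughly like this, using gbp pq from git-buildpackage (a rough sketch; the exact options and branch names will depend on the package):

gbp pq import    # turn the folder of .diff files into commits on a patch-queue branch
# ...edit, reorder, drop or rebase the commits on the patch-queue branch...
gbp pq export    # turn the commits back into a folder of .diff files (debian/patches)
git add debian/patches
git commit -m 'Refresh patch series'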

One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.

Posted Mon 09 Jan 2017 20:14:52 UTC

Burkeman: Why time management is ruining our lives

Over the past semester I’ve been trying to convince one graduate student and one professor in my department to use Inbox Zero to get a better handle on their e-mail inboxes. The goal is not to be more productive. The two of them get far more academic work done than I do. However, both of them are far more stressed than I am. And in the case of the graduate student, I have to add items to my own to-do list to chase up e-mails that I’ve sent him, which only spreads this stress and tension around.

The graduate student sent me this essay by Oliver Burkeman about how these techniques can backfire, creating more stress, tension and anxiety. It seems to me that this happens when we think of these techniques as having anything to do with productivity. Often people will say “use this technique and you’ll be less stressed, more productive, and even more productive because you’re less stressed.” Why not just say “use this technique and you’ll be less anxious and stressed”? Putting it that way refuses to treat lower anxiety as merely a means to some further end. People can autonomously set their own ends, and they’ll probably do a better job of this when they’re less anxious. Someone offering a technique to help people with their sense of being overwhelmed need not tell them what to do with their new calm.

It might be argued that this response to Burkeman fails to address the huge sense of obligation that an e-mail inbox can generate. Perhaps the only sane response to this infinite to-do list is to let it pile up. If we follow a technique like Inbox Zero, don’t we invest our inbox with more importance than it has? As in a lot of areas of life, the issue is that the e-mails that will advance truly valuable projects and relationships, both our own projects and those of others, are mixed in with reams of stuff that doesn’t matter. We face this situation whenever we go into a supermarket, or wonder what to do during an upcoming vacation. In all these situations, we have a responsibility to learn how to filter the important stuff out, just as we have a responsibility to avoid reading celebrity gossip columns when we are scanning through a newspaper. Inbox Zero is a technique for doing that filtering in the case of e-mail. Just letting our inbox pile up is an abdication of responsibility, rather than an intelligent response to a piece of technology that most of the world abuses.

Posted Sat 31 Dec 2016 08:34:29 UTC

Programming by poking: why MIT stopped teaching SICP

Perhaps there is a case for CS programs keeping pace with workplace technological changes (in addition to developments in the academic field of CS), but it seems sad to deprive undergrads of deeper knowledge about language design.

Posted Sun 18 Dec 2016 21:34:15 UTC

A new postdoc arrived at our department this semester, and after learning that he uses GNU/Linux for all his computing, I invited him along to TFUG. During some of our meetings people asked “how could I do X on my GNU/Linux desktop?” and, jokingly, the postdoc would respond “the answer to your question is ‘do you really need to do that?’” Sometimes the more experienced GNU/Linux users at the table would respond to questions by suggesting that the user should simply give up on doing X, and the postdoc would slap his thigh and laugh and say “see? I told you that’s the answer!”

The phenomenon here is that people who have at some point made a commitment to at least try to use GNU/Linux for all their computing quickly find that they have come to value using GNU/Linux more than they value engaging in certain activities that only work well/at all under a proprietary operating system. I think that this is because they get used to being treated with respect by their computer. And indeed, one of the reasons I’ve almost entirely given up on computer gaming is that computer games are non-free software. “Are you sure you need to do that?” starts sounding like a genuine question rather than simply a polite way of saying that what someone wants to do can’t be achieved.

I suggest that this is a blessing in disguise. Most of the things that you can only do under a proprietary operating system are things it would be good for you to switch out for other activities. I’m not suggesting that switching to GNU/Linux is a good way to give up on the entertainment industry; rather, it’s a good way of moderating your engagement with it. Rather than logging onto Netflix, you might instead pop in a DVD of a movie. You can still engage with contemporary popular culture, but the technical barriers give you an opportunity to moderate your consumption: once you’ve finished watching the movie, the software won’t try to get you to watch something else by calculating what you’re most likely to assent to watching next based on what you’ve watched before. This behaviour of the Netflix software is just another example of non-free software working against its user’s interests: watching a movie is good for you, but binge-watching a TV series probably isn’t. In cases like this, living in the world of Free Software makes it easier to engage with media healthily.

Posted Wed 28 Sep 2016 15:25:09 UTC

When it rains in Tucson, people are able to take an unusually carefree attitude towards it. Although the storm is dramatic, and the amount of water means that the streets turn to rivers, everyone knows that it will be over in a few hours and the heat will return (and indeed, that’s why drain provision is so paltry).

In other words, despite the arresting thunderclaps, the weather is not threatening. By contrast, when there is a storm in Britain, one feels a faint primordial fear that one won’t be able to find shelter after the storm, in the cold and sodden woods and fields. Here, that threat just isn’t present. I think that’s what makes us feel so free to move around in the rain.

I rode my bike back from the gym in my $5 plastic shoes. The rain hitting my body was cold, but the water splashing up my legs and feet was warm thanks to the surface of the road—except for one area where the road was steep enough that the running water had already taken away all lingering heat.

Posted Wed 17 Aug 2016 03:28:13 UTC

I maintain Debian packages for several projects which are hosted on GitHub. I have a master packaging branch containing both upstream’s code and my debian/ subdirectory of packaging control files. When upstream makes a new release, I simply merge their release tag into master: git merge 1.2.3 (after reviewing the diff!).

Packaging things for Debian turns out to be a great way to find small bugs that need to be fixed, and I end up forwarding a lot of patches upstream. Since the projects are on GitHub, that means forking the repo and submitting pull requests. So I end up with three remotes:

origin: the Debian git server
upstream: upstream’s GitHub repo from which I’m getting the release tags
fork: my GitHub fork of upstream’s repo, where I’m pushing bugfix branches
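
Concretely, the setup looks something like the following sketch (all the URLs are placeholders, and 1.2.3 stands in for whatever upstream’s latest release tag happens to be):

git remote add origin https://debian-git-server.example/my-package.git
git remote add upstream https://github.com/example-upstream/my-package.git
git remote add fork https://github.com/my-github-user/my-package.git

# merging a new upstream release into the packaging branch (after reviewing the diff!)
git fetch upstream
git merge 1.2.3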

I can easily push individual branches to particular remotes. For example, I might say git push -u fork fix-gcc-6. However, it is also useful to have a command that pushes everything to the places it should be: pushes bugfix branches to fork, my master packaging branch to origin, and definitely doesn’t try to push anything to upstream (recently an upstream project gave me push access because I was sending so many patches, and then got a bit annoyed when I pushed a series of Debian release tags to their GitHub repo by mistake).

I spent quite a lot of time reading git-config(1) and git-push(1), and came to the conclusion that there is no combination of git settings and a push command that does the right thing in all cases. Candidates, and why they’re insufficient:

git push --all
I thought about using this with the remote.pushDefault and branch.*.pushRemote configuration options. The problem is that git push --all pushes to only one remote, selecting it by looking at the current branch. If I ran this command for all remotes, it would push everything everywhere.
git push <remote> : for each remote
This is the “matching push strategy”. It will push all branches that already exist on the remote with the same name. So I thought about running this for each remote. The problem is that I typically have different master branches on different remotes: the fork and upstream remotes have upstream’s master branch, and the origin remote has my packaging branch.

I wrote a Perl script implementing git push-all, which does the right thing. As you will see from the description at the top of the script, it uses remote.pushDefault and branch.*.pushRemote to determine where it should push, falling back to pushing to the remote the branch is tracking. It won’t push anything when all three of these are unspecified, and more generally, it won’t create new remote branches except in the case where the branch-specific setting branch.*.pushRemote has been specified. Magit makes it easy to set remote.pushDefault and branch.*.pushRemote.
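
Outside of Magit, the same configuration can be set from the command line; the branch name here is just the bugfix branch from the earlier example:

git config remote.pushDefault origin           # push branches to origin by default
git config branch.fix-gcc-6.pushRemote fork    # but send this bugfix branch to my GitHub fork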

I have this in my ~/.mrconfig:

git_push = git push-all

so that I can just run mr push to ensure that all of my work has been sent where it needs to be (see myrepos).

#!/usr/bin/perl

# git-push-all -- intelligently push most branches

# Copyright (C) 2016 Sean Whitton
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at
# your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# Prerequisites:

# The Git::Wrapper, Config::GitLike, and List::MoreUtils perl
# libraries.  On a Debian system,
#     apt-get install libgit-wrapper-perl libconfig-gitlike-perl \
#         liblist-moreutils-perl

# Description:

# This script will try to push all your branches to the places they
# should be pushed, with --follow-tags.  Specifically, for each branch,
#
# 1. If branch.pushRemote is set, push it there
#
# 2. Otherwise, if remote.pushDefault is set, push it there
#
# 3. Otherwise, if it is tracking a remote branch, push it there
#
# 4. Otherwise, exit non-zero.
#
# If a branch is tracking a remote that you cannot push to, be sure to
# set at least one of branch.pushRemote and remote.pushDefault.

use strict;
use warnings;
no warnings "experimental::smartmatch";

use Git::Wrapper;
use Config::GitLike;
use List::MoreUtils qw{ uniq apply };

my $git = Git::Wrapper->new(".");
my $config = Config::GitLike->new( confname => 'config' );
$config->load_file('.git/config');

my @branches = apply { s/[ \*]//g } $git->branch;
my @allBranches = apply { s/[ \*]//g } $git->branch({ all => 1 });
my $pushDefault = $config->get( key => "remote.pushDefault" );

my %pushes;

foreach my $branch ( @branches ) {
    my $pushRemote = $config->get( key => "branch.$branch.pushRemote" );
    my $tracking = $config->get( key => "branch.$branch.remote" );

    if ( defined $pushRemote ) {
        print "I: pushing $branch to $pushRemote (its pushRemote)\n";
        push @{ $pushes{$pushRemote} }, $branch;
    # don't push unless it already exists on the remote: this script
    # avoids creating branches
    } elsif ( defined $pushDefault
              && "remotes/$pushDefault/$branch" ~~ @allBranches ) {
        print "I: pushing $branch to $pushDefault (the remote.pushDefault)\n";
        push @{ $pushes{$pushDefault} }, $branch;
    } elsif ( !defined $pushDefault && defined $tracking ) {
        print "I: pushing $branch to $tracking (probably to its tracking branch)\n";
        push @{ $pushes{$tracking} }, $branch;
    } else {
        die "E: couldn't find anywhere to push $branch";
    }
}

foreach my $remote ( keys %pushes ) {
    my @branches = @{ $pushes{$remote} };
    system "git push --follow-tags $remote @branches";
    exit 1 if ( $? != 0 );
}
Posted Sat 06 Aug 2016 22:45:16 UTC