Posts from 2005–2010 temporarily unavailable.

Something I’ve found myself doing as the pandemic rolls on is picking out and (re-)reading through sections of the GNU Emacs manual and the GNU Emacs Lisp reference manual. This has got me (too) interested in some of the recent history of Emacs development, and I did some digging into archives of emacs-devel from 2008 (15M mbox) regarding the change to turn Transient Mark mode on by default and set mark-even-if-inactive to true by default in Emacs 23.1.

It’s not always clear which objections to turning on Transient Mark mode by default take into account the mark-even-if-inactive change. I think that turning on Transient Mark mode along with mark-even-if-inactive is a good default. The question that remains is whether the disadvantages of Transient Mark mode are significant enough that experienced Emacs users should consider altering Emacs’ default behaviour to mitigate them. Here’s one popular blog arguing for some mitigations.

How might Transient Mark mode be disadvantageous?

The suggestion is that it makes using the mark for navigation rather than for acting on regions less convenient:

  1. setting a mark just so you can jump back to it (i) is a distinct operation you have to think of separately; and (ii) requires two keypresses, C-SPC C-SPC, rather than just one keypress

  2. using exchange-point-and-mark activates the region, so to use it for navigation you need to use either C-u C-x C-x or C-x C-x C-g, neither of which is convenient to type; otherwise it will be difficult to set a region at the place you’ve just jumped to, because you’ll already have one active.

There are two other disadvantages that people bring up which I am disregarding. The first is that it makes it harder for new users to learn useful ways in which to use the mark when it’s deactivated. This happened to me, but it can be mitigated without making any behavioural changes to Emacs. The second is that the visual highlighting of the region can be distracting. So far as I can tell, this is only a problem with exchange-point-and-mark, and it’s subsumed by the problem of that command actually activating the region. The rest of the time Emacs’ automatic deactivation of the region seems sufficient.

How might disabling Transient Mark mode be disadvantageous?

When Transient Mark mode is on, many commands will do something usefully different when the mark is active. The number of commands in Emacs which work this way is only going to increase now that Transient Mark mode is the default.
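
To illustrate, here is a minimal sketch of the check such commands typically make, using use-region-p: act on the region when it is active, and otherwise fall back to a sensible default. (The command itself is my own invention, purely for illustration.)

(defun my/count-lines-dwim ()
  "Count lines in the region if it is active, else in the whole buffer."
  (interactive)
  (if (use-region-p)
      ;; Transient Mark mode is on and the region is active: act on it.
      (message "%d lines in region"
               (count-lines (region-beginning) (region-end)))
    ;; Otherwise fall back to acting on the whole buffer.
    (message "%d lines in buffer"
             (count-lines (point-min) (point-max)))))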

If you disable Transient Mark mode, then to use those features you need to temporarily activate Transient Mark mode. This can be fiddly and/or require a lot of keypresses, depending on exactly where you want to put the region.

Without being able to see the region, it might be harder to know where it is. Indeed, this is one of the main reasons for wanting Transient Mark mode to be the default: it avoids confusing new users. I don’t think this is likely to affect experienced Emacs users often, however, and on occasions when more precision is really needed, C-u C-x C-x will make the region visible. So I’m not counting this as a disadvantage.

How might we mitigate these two sets of disadvantages?

Here are the two middle grounds I’m considering.

Mitigation #1: Transient Mark mode, but hack C-x C-x behaviour

(defun spw/exchange-point-and-mark (arg)
  "Exchange point and mark, but reactivate mark a bit less often.

Specifically, invert the meaning of ARG in the case where
Transient Mark mode is on but the region is inactive."
  (interactive "P")
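  ;; Inverting ARG here means that, with the region inactive, plain C-x C-x
  ;; no longer reactivates the mark, while C-u C-x C-x does.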
  (exchange-point-and-mark
   (if (and transient-mark-mode (not mark-active))
       (not arg)
     arg)))
(global-set-key [remap exchange-point-and-mark] 'spw/exchange-point-and-mark)

We avoid turning Transient Mark mode off, but mitigate the second of the two disadvantages given above.

I can’t figure out why it was thought to be a good idea to make C-x C-x reactivate the mark and require C-u C-x C-x to exchange point and mark as a means of navigation. There needs to be a binding to reactivate the mark, but in roughly ten years of having Transient Mark mode turned on, I’ve found that the need to reactivate the mark doesn’t come up often, so the shorter and longer bindings seem the wrong way around. Not sure what I’m missing here.

Mitigation #2: disable Transient Mark mode, but enable it temporarily more often

(setq transient-mark-mode nil)
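;; Each remapped command gets a named wrapper defined with fset, so that
;; the new binding has its own docstring and is displayed sensibly by C-h k.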
(defun spw/remap-mark-command (command &optional map)
  "Remap a mark-* command to temporarily activate Transient Mark mode."
  (let* ((cmd (symbol-name command))
         (fun (intern (concat "spw/" cmd)))
         (doc (concat "Call `"
                      cmd
                      "' and temporarily activate Transient Mark mode.")))
    (fset fun `(lambda ()
                 ,doc
                 (interactive)
                 (call-interactively #',command)
                 (activate-mark)))
    (if map
        (define-key map (vector 'remap command) fun)
      (global-set-key (vector 'remap command) fun))))

(dolist (command '(mark-word
                   mark-sexp
                   mark-paragraph
                   mark-defun
                   mark-page
                   mark-whole-buffer))
  (spw/remap-mark-command command))
(with-eval-after-load 'org
  (spw/remap-mark-command 'org-mark-subtree org-mode-map))

;; optional
(global-set-key "\M-=" (lambda () (interactive) (activate-mark)))
;; resettle the previous occupant
(global-set-key "\C-cw" 'count-words-region)

Here we remove both of the disadvantages of Transient Mark mode given above, and mitigate the main disadvantage of disabling Transient Mark mode by making it more convenient to activate it temporarily.

For example, this enables using C-M-SPC C-M-SPC M-( to wrap the following two function arguments in parentheses. And you can hit M-h a few times to mark some blocks of text or code, then operate on them with commands like M-% and C-/ which behave differently when the region is active.[1]

Comparing these mitigations

Both of these mitigations handle the second of the two disadvantages of Transient Mark mode given above. What remains, then, is

  1. under the effects of mitigation #1, how much of a barrier to using marks for navigational purposes is it to have to press C-SPC C-SPC instead of having a single binding, C-SPC, for all manual mark setting?[2]

  2. under the effects of mitigation #2, how much of a barrier to taking advantage of commands which act differently when the region is active is it to have to temporarily enable Transient Mark mode with C-SPC C-SPC, M-= or one of the mark-* commands?

These are unknowns.[3] So I’m going to have to experiment, I think, to determine which mitigation to use, if either. In particular, I don’t know whether it’s really significant that setting a mark for navigational purposes and for region marking purposes are distinct operations under mitigation #1.

My plan is to start with mitigation #2 because that has the additional advantage of allowing me to confirm or disconfirm my belief that not being able to see where the region is will only rarely get in my way.


  1. The idea of making the mark-* commands activate the mark comes from an emacs-devel post by Stefan Monnier in the archives linked above.
  2. One remaining possibility I’m not considering is mitigation #1 plus binding something else to do the same as C-SPC C-SPC. I don’t believe there are any easily rebindable keys which are easier to type than typing C-SPC twice. And this does not deal with the two distinct mark-setting operations problem.
  3. Another way to look at this is the question of which of setting a mark for navigational purposes and activating a mark should get C-SPC and which should get C-SPC C-SPC.
Posted Sat May 30 23:55:25 2020

I realised this week that for some years I have been applying Inbox Zero indiscriminately to all e-mail that I receive, but this does not make sense, and has some downsides.

My version of Inbox Zero works very well when applied to mail addressed directly to me, and for mail to certain mailing lists, where each post to the list might as well be addressed directly to me, in addition to the list. However, I also receive by e-mail the following things to which I now believe Inbox Zero should not be applied:

  • discussion lists like debian-devel, notmuch@notmuchmail.org, etc.

  • mail to aliases like ftpmaster@debian.org (except when that mail is in reply to mail written by me from that address)

  • automated notifications received via Debian team mailing lists, where I’m not solely responsible for the Debian package in question, such as notifications received via the Debian Perl Group’s mailing list

  • RSS feed articles supplied by my (years old but still going strong) rss2email cronjob.

I believe that applying Inbox Zero to these sorts of things is not only incorrect but is actually harming my engagement with these media. Let us distinguish

  1. processing e-mail – this means applying the Inbox Zero decision procedure to incoming messages, at set times during the day (I do it once, around 4pm)

  2. browsing and sometimes catching up RSS articles and list mail – looking through unread items, replying if I think I have something useful to say, leaving things for later, and occasionally marking all as read if I’ve not had time for that group of lists lately.

These should be kept apart. It is easy to see why you shouldn’t apply the browsing/catching-up approach to e-mail which should rather be processed – that’s just the original wisdom of Inbox Zero. And clearly you never want to have to resort to just marking all mail addressed directly to you as read.

What goes wrong, then, if you misapply the processing mentality to e-mail which should, rather, be browsed/caught up? Well, the core of Inbox Zero is deciding whether to read and reply to something right now, add it to your todo list to handle later, or decide the mail cannot be dealt with quickly but is not important enough to go on that todo list. If you apply this to RSS articles and discussion list mail, then the third option is basically ruled out, because almost nothing on mailing lists or on blogs that I follow is important enough to go on my list of Real Tasks. But then you’re faced with either reading the article/post right now, or discarding it. There is no option to leave the item marked as unread and then maybe come back to it, or postpone discarding it until you’ve decided to catch up the group. Applying Inbox Zero to discussion lists and RSS feeds creates a false sense of urgency.

I’ve realised that my implicit response to this has been reluctance to subscribe to new mailing lists and feeds, because I don’t have things set up to allow me to read them in a leisurely way. But then I’m missing out on discussions and writing that might be relevant and beneficial to me if I could only approach them when I’m in a frame of mind other than “time to get my inbox down to zero”.

This also means that subscribing to new mailing lists just in order to post something has an unreasonably high cost: introducing all mail on those lists into my Inbox Zero processing window.

So, what’s needed is to make the virtual folder views which I use to read new mail correspond cleanly to the distinction between mail to be processed and mail to be browsed. I.e. for each virtual folder it should be clear whether mail there is to be processed or to be browsed. Then I can continue to use my processing views once per day, and access the browsing views at leisure. (Indeed, I’ve created a keybinding to cycle through the new browsing views, and another to catch them up.)

As I mentioned, some list mail ought to be processed. For example, I want to process rather than browse the debian-policy mailing list, as one of the Policy Editors. With notmuch, it’s easy to include this mail in the relevant processing views (I’ve long had one processing view for weekdays and another for weekends).

Additionally, I’ve used a bit of Emacs Lisp to create a dynamic “uncategorised unread” view which catches mail to be browsed which (i) doesn’t show up in one of the other virtual folders for mail to be browsed, and (ii) doesn’t show up in one of the processing views. So now subscribing to a mailing list is cheap: mail will end up in the uncategorised view, and I can decide whether to leave it there for a low traffic list; create a new virtual folder for the list (trivial as it’s all just more Emacs Lisp, no actual moving of files required); or unsubscribe.
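
Here is a minimal sketch of how such a view might be defined with notmuch-emacs. The variable names and queries below are invented for illustration; my real configuration is a little more involved.

;; Hypothetical example queries standing in for my real virtual folders.
(defvar my/browsing-queries
  '("to:debian-devel@lists.debian.org" "to:notmuch@notmuchmail.org")
  "notmuch queries for the virtual folders of mail to be browsed.")
(defvar my/processing-queries
  '("tag:personal" "to:debian-policy@lists.debian.org")
  "notmuch queries for the virtual folders of mail to be processed.")

(defun my/uncategorised-unread-query ()
  "Return a notmuch query matching unread mail not in any other view."
  (concat "tag:unread"
          (mapconcat (lambda (q) (format " and not ( %s )" q))
                     (append my/browsing-queries my/processing-queries)
                     "")))

(with-eval-after-load 'notmuch
  (add-to-list 'notmuch-saved-searches
               `(:name "uncategorised unread"
                       :query ,(my/uncategorised-unread-query))))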

I’ve been experiencing nostalgia for my time in secondary school when my friends and I had all sorts of interesting stuff flowing into our mailstores, from RSS feeds to each other’s blogs where we’d post both our own content and links elsewhere, RSS feeds to strangers’ blogs, and e-mails sent to each other with links. We didn’t have to worry about processing anything back then, as our e-mail accounts were used for intellectual enrichment but not the completion of tasks. I think I can recapture something of that with my new virtual folders for browsing.

Posted Sun May 3 21:09:28 2020

Before uploading stuff to Debian, I build in a clean chroot, and then run piuparts, autopkgtest and lintian. For some of my packages this can take around an hour on my laptop, which is fairly old. Normally I don’t mind waiting, but sometimes I want to put my laptop away, and then it would be good for things to be faster. It occurred to me that I could make use of my builds.sr.ht account to run these tests on more powerful hardware.

This build manifest seems to work:

# BEGIN CONFIGURABLE
sources:
  - https://salsa.debian.org/perl-team/modules/packages/libgit-annex-perl.git
environment:
  source: libgit-annex-perl
  quilt:  auto
# END CONFIGURABLE

image: debian/unstable
packages:
  - autopkgtest
  - devscripts
  - dgit
  - lintian
  - piuparts
  - sbuild
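# The tasks below create an eatmydata-enabled sbuild chroot, build the
# package with dgit and sbuild, and then run lintian, piuparts and
# autopkgtest against the resulting .changes file.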
tasks:
  - setup: |
      cd $source
      source_version=$(dpkg-parsechangelog -SVersion)
      echo "source_version=$source_version" >>~/.buildenv
      git deborig || origtargz
      sudo sbuild-createchroot --command-prefix=eatmydata --include=eatmydata unstable /srv/chroot/unstable-amd64-sbuild
      sudo sbuild-adduser $USER
  - build: |
      cd $source
      dgit --quilt=$quilt sbuild -d unstable --no-run-lintian
  - lintian: |
      lintian ${source}_${source_version}_multi.changes
  - piuparts: |
      sudo piuparts --no-eatmydata --schroot unstable-amd64-sbuild ${source}_${source_version}_multi.changes
  - autopkgtest: |
      autopkgtest ${source}_${source_version}_multi.changes -- schroot unstable-amd64-sbuild

And here’s my script.

Posted Fri Apr 3 18:47:07 2020

Last summer I read chromatic’s Modern Perl, and was recommended to default to using Moo or Moose to define classes, rather than writing code to bless things into objecthood myself. At the time the project I was working on needed to avoid any dependencies outside of the Perl core, so I made a mental note of the advice, but didn’t learn how to use Moo or Moose. I do remember feeling like I was typing out a lot of boilerplate, and wishing I could use Moo or Moose to reduce that.

In recent weeks I’ve been working on a Perl distribution which can freely use non-core dependencies from CPAN, and so right from the start I used Moo to define my classes. It seemed like a no-brainer because it’s more declarative; it didn’t seem like there could be any disadvantages.

At one point, when writing a new class, I got stuck. I needed to call one of the object’s methods immediately after instantiation of the object. BUILDARGS is, roughly, the constructor for Moo/Moose classes, so I started there, but you don’t have access to the new object during BUILDARGS, so you can’t simply call its methods on it. So what I needed to do was change my design around so as to be more conformant to the Moo/Moose view of the world, such that the work of the method call could get done at the right time. I mustn’t have been in a frame of mind for that sort of thinking at the time because what I ended up doing was dropping Moo from the package and writing a constructor which called the method on the new object, after blessing the hash, but before returning a hashref to the caller.

This was my first experience of having the call to bless() not be the last line of my constructor, and I believe that this simple dislocation significantly improved my grip on core Perl 5 classes and objects: the point is that they’re not declarative—they’re collections of functionality to operate on encapsulated data, where the instantiation of that data, too, is a piece of functionality. I had been thinking about classes too declaratively, and this is why writing out constructors and accessors felt like boilerplate. Now writing those out feels like carefully setting down precisely what functionality for operating on the encapsulated data I want to expose. I also find core Perl 5 OO quite elegant (in fact I find pretty much everything about Perl 5 highly elegant, except of course for its dereferencing syntax; not sure why this opinion is so unpopular).

I then came across the Cor proposal and followed a link to this recent talk criticising Moo/Moose. The speaker, Tadeusz Sośnierz, argues that Moo/Moose implicitly encourages you to have an accessor for each and every piece of the encapsulated data in your class, which is bad OO. Sośnierz pointed out that if you take care to avoid generating all these accessors, while still having Moo/Moose store the arguments to the constructor provided by the user in the right places, you end up back with a new kind of boilerplate, which is Moo/Moose-specific, and arguably worse than what’s involved in defining core Perl 5 classes. So, he asks, if we are going to take care to avoid generating too many accessors, and thereby end up with boilerplate, what are we getting out of using Moo/Moose over just core Perl 5 OO? There is some functionality for typechecking and method signatures, and we have the ability to use roles instead of multiple-inheritance.

After watching Sośnierz’s talk, I have been rethinking whether I should follow Modern Perl’s advice to default to using Moo/Moose to define new classes, because I want to avoid the problem of too many accessors. Considering the advantages of Moo/Moose that Sośnierz ends up with at the end of his talk: I find the way that Perl provides parameters to subroutines and methods intuitive and flexible, and don’t see the need to build typechecking into that process—just throw some exceptions with croak() if the types aren’t right, before getting on with the business logic of the subroutine or method. Roles are a different matter. They are certainly an improvement on multiple inheritance. But there is Role::Tiny, which you can use instead of Moo/Moose.

So for the time being it seems I should go back to blessing hashes, and that I should also get to grips with Role::Tiny. I don’t have a lot of experience with OO design, so can certainly imagine changing my mind about things like Perlish typechecking and subroutine signatures (I also don’t understand, yet, why some people find the convention of prefixing private methods and attributes with an underscore not to be sufficient—Cor wants to add attribute and method privacy to Perl). However, it seems sensible to avoid using things like Moo/Moose until I can be very clear in my own mind about what advantages using them is getting me. Bad OO with Moo/Moose seems worse than occasionally simplistic, occasionally tedious, but correct OO with the Perl 5 core.

Posted Tue Feb 11 16:23:38 2020

There hasn’t been much activity lately, but no shortage of interesting and hopefully-accessible Debian Policy work. Do write to debian-policy@lists.debian.org if you’d like to participate but are struggling to figure out how.

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Posted Mon Sep 2 23:04:36 2019

Debian Policy started off the Debian 11 “bullseye” release cycle with the release of Debian Policy 4.4.0.0. Please consider helping us fix more bugs and prepare more releases (whether or not you’re at DebCamp19!).

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Posted Sat Jul 20 13:37:39 2019

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring,[1] produces a .dsc and .changes, signs these, and uploads the result to ftp-master.[2]

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.[3]

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (you need version 9.1 of the first three of these packages, in Debian testing, unstable and buster-backports at the time of writing).

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23
    

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.


  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.
Posted Tue Jul 9 21:49:41 2019

There has been very little activity in recent weeks (preparing the Debian buster release is more urgent than the Policy Manual for most contributors), so the list of bugs I posted in February is still valid.

Posted Thu May 30 00:35:20 2019

Here are some of the bugs against the Debian Policy Manual. Please consider getting involved.

Consensus has been reached and help is needed to write a patch

#542288 Versions for native packages, NMU’s, and binary only uploads

#556015 Clarify requirements for linked doc directories

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#770440 policy should mention systemd timers

#902612 Packages should not touch users’ home directories

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs

#786470 [copyright-format] Add an optional “License-Grant” field

#910548 base-files - please consider adding /usr/share/common-licenses/Unic…

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

Merged for the next release

#897217 Vcs-Hg should support -b too

#917431 virtual packages: logind, default-logind

#920355 Permit branch specifications (“-b”) in Mercurial Vcs-Hg headers

#921963 add link to https://jenkins.debian.net/userContent/debian-policy in…

Posted Tue Feb 26 23:46:17 2019

Here are some of the bugs against the Debian Policy Manual. Please consider getting involved.

Consensus has been reached and help is needed to write a patch

#770440 policy should mention systemd timers

#773557 Avoid unsafe RPATH/RUNPATH

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#874019 Note that the ‘-e’ argument to x-terminal-emulator works like ‘--’

#874206 allow a trailing comma in package relationship fields

#902612 Packages should not touch users’ home directories

#905453 Policy does not include a section on NEWS.Debian files

#906286 repository-format sub-policy

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs

#645696 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#910548 base-files - please consider adding /usr/share/common-licenses/Unic…

#917431 virtual packages: logind, default-logind

#920355 Permit branch specifications (“-b”) in Mercurial Vcs-Hg headers

Posted Sun Jan 27 04:51:07 2019