

June 27, 2022

Russ Allbery

Review: Light from Uncommon Stars

Review: Light from Uncommon Stars, by Ryka Aoki

Publisher: Tor
Copyright: 2021
ISBN: 1-250-78907-9
Format: Kindle
Pages: 371

Katrina Nguyen is a young, abused transgender woman. As the story opens, she's preparing to run away from home. Her escape bag is packed with meds, clothes, her papers, and her violin. The note she is leaving for her parents says that she's going to San Francisco, a plausible lie. Her actual destination is Los Angeles, specifically the San Gabriel Valley, where a man she met at a queer youth conference said he'd give her a place to sleep.

Shizuka Satomi is the Queen of Hell, the legendary uncompromising violin teacher responsible for six previous superstars, at least within the limited world of classical music. She's wealthy, abrasive, demanding, and intimidating, and unbeknownst to the rest of the world she has made a literal bargain with Hell. She has to deliver seven souls, seven violin players who want something badly enough that they'll bargain with Hell to get it. Six have already been delivered in spectacular fashion, but she's running out of time to deliver the seventh before her own soul is forfeit. Tamiko Grohl, an up-and-coming violinist from her native Los Angeles, will hopefully be the seventh.

Lan Tran is a refugee and matriarch of a family who runs Starrgate Donut. She and her family didn't flee another unstable or inhospitable country. They fled the collapsing Galactic Empire, securing their travel authorization by promising to set up a tourism attraction. Meanwhile, she's careful to give cops free donuts and to keep their advanced technology carefully concealed.

The opening of this book is unlikely to be a surprise in general shape. Most readers would expect Katrina to end up as Satomi's student rather than Tamiko, and indeed she does, although not before Katrina has a very difficult time. Near the start of the novel, I thought "oh, this is going to be hurt/comfort without a romantic relationship," and it is. But it then goes beyond that start into a multifaceted story about complexity, resilience, and how people support each other.

It is also a fantastic look at the nuance and intricacies of being or supporting a transgender person, vividly illustrated by a story full of characters the reader cares about and without the academic abstruseness that often gets in the way. The problems with gender-blindness, the limitations of honoring someone's gender without understanding how other people do not, the trickiness of privilege, gender policing as a distraction and alienation from the rest of one's life, the complications of real human bodies and dysmorphia, the importance of listening to another person rather than one's assumptions about how that person feels — it's all in here, flowing naturally from the story, specific to the characters involved, and never belabored. I cannot express how well-handled this is. It was a delight to read.

The other wonderful thing Aoki does is set Satomi up as the almost supernaturally competent teacher who in a sense "rescues" Katrina, and then invert the trope, showing the limits of Satomi's expertise, the places where she desperately needs human connection for herself, and her struggle to understand Katrina well enough to teach her at the level Satomi expects of herself. Teaching is not one thing to everyone; it's about listening, and Katrina is nothing like Satomi's other students. This novel is full of people thinking they finally understand each other and realizing there is still more depth that they had missed, and then talking through the gap like adults.

As you can tell from any summary of this book, it's an odd genre mash-up. The fantasy part is a classic "magician sells her soul to Hell" story; there are a few twists, but it largely follows genre expectations. The science fiction part involving Lan is unfortunately weaker and feels more like a random assortment of borrowed Star Trek tropes than coherent world-building. Genre readers should not come to this story expecting a well-thought-out science fiction universe or a serious attempt to reconcile metaphysics between the fantasy and science fiction backgrounds. It's a quirky assortment of parts that don't normally go together, defy easy classification, and are often unexplained. I suspect this was intentional on Aoki's part given how deeply this book is about the experience of being transgender.

Of the three primary viewpoint characters, I thought Lan's perspective was the weakest, and not just because of her somewhat generic SF background. Aoki uses her as a way to talk about the refugee experience, describing her as a woman who brings her family out of danger to build a new life. This mostly works, but Lan has vastly more power and capabilities than a refugee would normally have. Rather than the typical Asian refugee experience in the San Gabriel valley, Lan is more akin to a US multimillionaire who for some reason fled to Vietnam (relative to those around her, Lan is arguably even more wealthy than that). This is also a refugee experience, but it is an incredibly privileged one in a way that partly undermines the role that she plays in the story.

Another false note bothered me more: I thought Tamiko was treated horribly in this story. She plays a quite minor role, sidelined early in the novel and appearing only briefly near the climax, and she's portrayed quite negatively, but she's clearly hurting as deeply as the protagonists of this novel. Aoki gives her a moment of redemption, but Tamiko gets nothing from it. Unlike every other injured and abused character in this story, no one is there for Tamiko and no one ever attempts to understand her. I found that profoundly sad. She's not an admirable character, but neither is Satomi at the start of the book. At least a gesture at a future for Tamiko would have been appreciated.

Those two complaints aside, though, I could not put this book down. I was able to predict the broad outline of the plot, but the specifics were so good and so true to the characters. Both the primary and supporting cast are unique, unpredictable, and memorable.

Light from Uncommon Stars has a complex relationship with genre. It is squarely in the speculative fiction genre; the plot would not work without the fantasy and (more arguably) the science fiction elements. Music is magical in a way that goes beyond what can be attributed to metaphor and subjectivity. But it's also primarily a character story deeply rooted in the specific location of the San Gabriel Valley east of Los Angeles, full of vivid descriptions (particularly of food) and day-to-day life. As with the fantasy and science fiction elements, Aoki does not try to meld the genre elements into a coherent whole. She lets them sit side by side and be awkward and charming and uneven and chaotic. If you're the sort of SF reader who likes building a coherent theory of world-building rules, you may have to turn that desire off to fully enjoy this book.

I thought this book was great. It's not flawless, but like its characters it's not trying to be flawless. In places it is deeply insightful and heartbreakingly emotional; in others, it's a glorious mess. It's full of cooking and food, YouTube fame, the disappointments of replicators, video game music, meet-cutes over donuts, found family, and classical music drama. I wish we'd gotten way more about the violin repair shop and a bit less warmed-over Star Trek, but I also loved it exactly the way it was. Definitely the best of the 2022 Hugo nominees that I've read so far.

Content warning for child abuse, rape, self-harm, and somewhat explicit sex work. The start of the book is rather heavy and horrific, although the author advertises fairly clearly (and accurately) that things will get better.

Rating: 9 out of 10

27 June, 2022 03:11AM

June 26, 2022

Review: Feet of Clay

Review: Feet of Clay, by Terry Pratchett

Series: Discworld #19
Publisher: Harper
Copyright: October 1996
Printing: February 2014
ISBN: 0-06-227551-8
Format: Mass market
Pages: 392

Feet of Clay is the 19th Discworld novel, the third Watch novel, and probably not the best place to start. You could read only Guards! Guards! and Men at Arms before this one, though, if you wanted.

This story opens with a golem selling another golem to a factory owner, obviously not caring about the price. This is followed by two murders: an elderly priest, and the curator of a dwarven bread museum. (Dwarf bread is a much-feared weapon of war.) Meanwhile, assassins are still trying to kill Watch Commander Vimes, who has an appointment to get a coat of arms. A dwarf named Cheery Littlebottom is joining the Watch. And Lord Vetinari, the ruler of Ankh-Morpork, has been poisoned.

There's a lot going on in this book, and while it's all in some sense related, it's more interwoven than part of a single story. The result felt to me like a day-in-the-life episode of a cop show: a lot of character development, a few largely separate plot lines so that the characters have something to do, and the development of a few long-running themes that are neither started nor concluded in this book. We check in on all the individual Watch members we've met to date, add new ones, and at the end of the book everyone is roughly back to where they were when the book started.

This is, to be clear, not a bad thing for a book to do. It relies on the reader already caring about the characters and being invested in the long arc of the series, but both of those are true of me, so it worked. Cheery is a good addition, giving Pratchett an opportunity to explore gender nonconformity with a twist (all dwarfs are expected to act the same way regardless of gender, which doesn't work for Cheery) and, even better, giving Angua more scenes. Angua is among my favorite Watch characters, although I wish she'd gotten more of a resolution for her relationship anxiety in this book.

The primary plot is about golems, which on Discworld are used in factories because they work nonstop, have no other needs, and do whatever they're told. Nearly everyone in Ankh-Morpork considers them machinery. If you've read any Discworld books before, you will find it unsurprising that Pratchett calls that belief into question, but the ways he gets there, and the links between the golem plot and the other plot threads, have a few good twists and turns.

Reading this, I was reminded vividly of Orwell's discussion of Charles Dickens:

It seems that in every attack Dickens makes upon society he is always pointing to a change of spirit rather than a change of structure. It is hopeless to try and pin him down to any definite remedy, still more to any political doctrine. His approach is always along the moral plane, and his attitude is sufficiently summed up in that remark about Strong's school being as different from Creakle's "as good is from evil." Two things can be very much alike and yet abysmally different. Heaven and Hell are in the same place. Useless to change institutions without a "change of heart" — that, essentially, is what he is always saying.

If that were all, he might be no more than a cheer-up writer, a reactionary humbug. A "change of heart" is in fact the alibi of people who do not wish to endanger the status quo. But Dickens is not a humbug, except in minor matters, and the strongest single impression one carries away from his books is that of a hatred of tyranny.

and later:

His radicalism is of the vaguest kind, and yet one always knows that it is there. That is the difference between being a moralist and a politician. He has no constructive suggestions, not even a clear grasp of the nature of the society he is attacking, only an emotional perception that something is wrong, all he can finally say is, "Behave decently," which, as I suggested earlier, is not necessarily so shallow as it sounds. Most revolutionaries are potential Tories, because they imagine that everything can be put right by altering the shape of society; once that change is effected, as it sometimes is, they see no need for any other. Dickens has not this kind of mental coarseness. The vagueness of his discontent is the mark of its permanence. What he is out against is not this or that institution, but, as Chesterton put it, "an expression on the human face."

I think Pratchett is, in that sense, a Dickensian writer, and it shows all through Discworld. He does write political crises (there is one in this book), but the crises are moral or personal, not ideological or structural. The Watch novels are often concerned with systems of government, but focus primarily on the popular appeal of kings, the skill of the Patrician, and the greed of those who would maneuver for power. Pratchett does not write (at least so far) about the proper role of government, the impact of Vetinari's policies (or even what those policies may be), or political theory in any deep sense. What he does write about, at great length, is morality, fairness, and a deeply generous humanism, all of which are central to the golem plot.

Vimes is a great protagonist for this type of story. He's grumpy, cynical, stubborn, and prejudiced, and we learn in this book that he's a descendant of the Discworld version of Oliver Cromwell. He can be reflexively self-centered, and he has no clear idea how to use his newfound resources. But he behaves decently towards people, in both big and small things, for reasons that the reader feels he could never adequately explain, but which are rooted in empathy and an instinctual sense of fairness. It's fun to watch him grumble his way through the plot while making snide comments about mysteries and detectives.

I do have to complain a bit about one of those mysteries, though. I would have enjoyed the plot around Vetinari's poisoning more if Pratchett hadn't mercilessly teased readers who know a bit about French history. An allusion or two would have been fun, but he kept dropping references while having Vimes ignore them, and I found the overall effect both frustrating and irritating. That and a few other bits, like Angua's uncommunicative angst, fell flat for me. Thankfully, several other excellent scenes made up for them, such as Nobby's high society party and everything about the College of Heralds. Also, Vimes's impish PDA (smartphone without the phone, for those younger than I am) remains absurdly good commentary on the annoyances of portable digital devices despite an original publication date of 1996.

Feet of Clay is less focused than the previous Watch novels and more of a series book than most Discworld novels. You're reading about characters introduced in previous books with problems that will continue into subsequent books. The plot and the mysteries are there to drive the story but seem relatively incidental to the characterization. This isn't a complaint; at this point in the series, I'm in it for the long haul, and I liked the variation. As usual, Pratchett is stronger for me when he's not overly focused on parody. His own characters are as good as the material he's been parodying, and I'm happy to see them get a book that's not overshadowed by other material.

If you've read this far in the series, or even in just the Watch novels, recommended.

Followed by Hogfather in publication order and, thematically, by Jingo.

Rating: 8 out of 10

26 June, 2022 03:56AM

June 25, 2022

Jamie McClelland

Deleting your period app won't bring back Roe v Wade

In some ways it feels like 2016 all over again.

I’m seeing panic-stricken calls for everyone to delete their period apps, close their Facebook accounts, de-Google their cell phones and, generally speaking, turn their entire online lives upside down to avoid the techno-surveillance dragnet unleashed by the overturning of Roe v. Wade.

I’m sympathetic and generally agree that many of us should do most of those things on any given day. But, there is a serious problem with this cycle of repression and panic: it’s very bad for organizing.

In our rush to give people concrete steps they can take to feel safer, we’re fueling a frenzy of panic and fear, which seriously inhibits activism.

Now is the time to remind people that, over the last 20 years, a growing movement of organizers and technologists has been building user-driven, privacy-respecting, consentful technology platforms as well as organizations and communities to develop them.

We have an entire ecosystem of:

All of these projects need our love and support over the long haul. Please help spread the word - rather than just deleting an app, let’s encourage people to join an organization or try out a new kind of technology that will serve us down the road when we may need it even more than today.

25 June, 2022 10:27PM

Ryan Kavanagh

Routable network addresses with OpenIKED and systemd-networkd

I’ve been using OpenIKED for some time now to configure my VPN. One of its features is that it can dynamically assign addresses on the internal network to clients, and clients can assign these addresses and routes to interfaces. However, these interfaces must exist before iked can start. Some months ago I switched my Debian laptop’s configuration from the traditional ifupdown to systemd-networkd. It took me some time to figure out how to have systemd-networkd create dummy interfaces on which iked can install addresses, but also not interfere with iked by trying to manage these interfaces. Here is my working configuration.

First, I have systemd create the interface dummy1 by creating a systemd.netdev(5) configuration file at /etc/systemd/network/20-dummy1.netdev:
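The file contents are missing in this copy of the post; a minimal reconstruction from the systemd.netdev(5) man page looks like this:

```
[NetDev]
Name=dummy1
Kind=dummy
```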


Then I tell systemd not to manage this interface by creating a systemd.network(5) configuration file at /etc/systemd/network/20-dummy1.network:
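This file's contents are also missing here; one sketch that is consistent with the networkctl status output below (note the "Activation Policy: up" line) is to bring the link up while leaving out any [Network] address configuration, so that networkd adds no addresses of its own:

```
[Match]
Name=dummy1

[Link]
ActivationPolicy=up
```

If you instead want networkd to ignore the link entirely, Unmanaged=yes in the [Link] section is the stricter option, though the interface would then show as unmanaged rather than configured in networkctl.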


Restarting systemd-networkd causes these interfaces to get created, and we can then check their status using networkctl(8):

$ systemctl restart systemd-networkd.service
$ networkctl
  1 lo       loopback carrier     unmanaged
  2 enp2s0f0 ether    off         unmanaged
  3 enp5s0   ether    off         unmanaged
  4 dummy1   ether    degraded    configuring
  5 dummy3   ether    degraded    configuring
  6 sit0     sit      off         unmanaged
  8 wlp3s0   wlan     routable    configured
  9 he-ipv6  sit      routable    configured

8 links listed.

Finally, I configure my flows in /etc/iked.conf, making sure to assign the received address to the interface dummy1.

ikev2 'hades' active esp \
        from dynamic to \
        peer hades.rak.ac \
        srcid '/CN=asteria.rak.ac' \
        dstid '/CN=hades.rak.ac' \
        request address \
        iface dummy1

Restarting openiked and checking the status of the interface reveals that it has been assigned an address on the internal network and that it is routable:

$ systemctl restart openiked.service
$ networkctl status dummy1
● 4: dummy1
                     Link File: /usr/lib/systemd/network/99-default.link
                  Network File: /etc/systemd/network/20-dummy1.network
                          Type: ether
                          Kind: dummy
                         State: routable (configured)
                  Online state: online
                        Driver: dummy
              Hardware Address: 22:50:5f:98:a1:a9
                           MTU: 1500
                         QDisc: noqueue
  IPv6 Address Generation Mode: eui64
          Queue Length (Tx/Rx): 1/1
                 Route Domains: .
             Activation Policy: up
           Required For Online: yes
             DHCP6 Client DUID: DUID-EN/Vendor:0000ab11aafa4f02d6ac68d40000

I’d be happy to hear if there are simpler or more idiomatic ways to configure this under systemd.

25 June, 2022 11:41AM

June 24, 2022

hackergotchi for Kees Cook

Kees Cook

finding binary differences

As part of the continuing work to replace 1-element arrays in the Linux kernel, it’s very handy to show that a source change has had no executable code difference. For example, if you started with this:

struct foo {
    unsigned long flags;
    u32 length;
    u32 data[1];
};

void foo_init(int count)
{
    struct foo *instance;
    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
    instance = kmalloc(bytes, GFP_KERNEL);
}

And you changed only the struct definition:

-    u32 data[1];
+    u32 data[];

The bytes calculation is going to be incorrect, since it is still subtracting 1 element’s worth of space from the desired count. (And let’s ignore for the moment the open-coded calculation that may end up with an arithmetic over/underflow here; that can be solved separately by using the struct_size() helper or the size_mul(), size_add(), etc family of helpers.)

The missed adjustment to the size calculation is relatively easy to find in this example, but sometimes it’s much less obvious how structure sizes might be woven into the code. I’ve been checking for issues by using the fantastic diffoscope tool. It can produce a LOT of noise if you try to compare builds without keeping in mind the issues solved by reproducible builds, with some additional notes. I prepare my build with the “known to disrupt code layout” options disabled, but with debug info enabled:

$ OUT=gcc
$ make $KBF O=$OUT allmodconfig
$ ./scripts/config --file $OUT/.config \
$ make $KBF O=$OUT olddefconfig

Then I build a stock target, saving the output in “before”. In this case, I’m examining drivers/scsi/megaraid/:

$ make -jN $KBF O=$OUT drivers/scsi/megaraid/
$ mkdir -p $OUT/before
$ cp $OUT/drivers/scsi/megaraid/*.o $OUT/before/

Then I patch and build a modified target, saving the output in “after”:

$ vi the/source/code.c
$ make -jN $KBF O=$OUT drivers/scsi/megaraid/
$ mkdir -p $OUT/after
$ cp $OUT/drivers/scsi/megaraid/*.o $OUT/after/

And then run diffoscope:

$ diffoscope $OUT/before/ $OUT/after/

If diffoscope output reports nothing, then we’re done. 🥳

Usually, though, when source lines move around, other stuff will shift too (e.g. WARN macros rely on line numbers, so the bug table may change contents a bit, etc.), and the diffoscope output will look noisy. To examine just the executable code, we can take the command diffoscope reports in its output and run it directly ourselves, so that possibly shifted line numbers are not reported; i.e. running objdump without --line-numbers:

$ ARGS="--disassemble --demangle --reloc --no-show-raw-insn --section=.text"
$ for i in $(cd $OUT/before && echo *.o); do
        echo $i
        diff -u <(objdump $ARGS $OUT/before/$i | sed "0,/^Disassembly/d") \
                <(objdump $ARGS $OUT/after/$i  | sed "0,/^Disassembly/d")
  done
If I see an unexpected difference, for example:

-    c120:      movq   $0x0,0x800(%rbx)
+    c120:      movq   $0x0,0x7f8(%rbx)

Then I'll search for the pattern with line numbers added to the objdump output:

$ vi <(objdump --line-numbers $ARGS $OUT/after/megaraid_sas_fp.o)

I'd search for "0x0,0x7f8", find the source file and line number above it, open that source file at that position, and look to see where something was being miscalculated:

$ vi drivers/scsi/megaraid/megaraid_sas_fp.c +329

Once tracked down, I'd start over at the "patch and build a modified target" step above, repeating until there were no differences. For example, in the starting example, I'd also need to make this change:

-    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
+    size_t bytes = sizeof(*instance) + sizeof(u32) * count;

Though, as hinted earlier, better yet would be:

-    size_t bytes = sizeof(*instance) + sizeof(u32) * (count - 1);
+    size_t bytes = struct_size(instance, data, count);

But sometimes adding the helper usage will add binary output differences since they're performing overflow checking that might saturate at SIZE_MAX. To help with patch clarity, those changes can be done separately from fixing the array declaration.
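The saturation behaviour those helpers provide can be modelled in plain C with the compiler's overflow builtins. This is a rough user-space sketch, not the kernel's implementation (the _sat names here are made up):

```c
#include <stddef.h>
#include <stdint.h>

/* Rough user-space model of the kernel's size_mul()/size_add():
 * on overflow, saturate to SIZE_MAX instead of wrapping, so a later
 * allocation of the result fails cleanly rather than being too small. */
static inline size_t size_mul_sat(size_t a, size_t b)
{
        size_t r;

        if (__builtin_mul_overflow(a, b, &r))
                return SIZE_MAX;
        return r;
}

static inline size_t size_add_sat(size_t a, size_t b)
{
        size_t r;

        if (__builtin_add_overflow(a, b, &r))
                return SIZE_MAX;
        return r;
}
```

Because an allocation of SIZE_MAX always fails, a saturated size turns a potential heap overflow into a clean allocation failure; that extra check is also why switching to these helpers can itself change the generated code.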

© 2022, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

24 June, 2022 08:11PM by kees

Reproducible Builds

Supporter spotlight: Hans-Christoph Steiner of the F-Droid project

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the fifth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as recent ones about ARDC, the Google Open Source Security Team (GOSST) and Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix.

Today, however, we will be talking with Hans-Christoph Steiner from the F-Droid project.

Chris: Welcome, Hans! Could you briefly tell me about yourself?

Hans: Sure. I spend most of my time trying to make private communications software usable by everyone, designing interactive software with a focus on human perceptual capabilities, and building networks with free software. I’ve been involved in Debian since approximately 2008 and became an official Debian developer in 2011. In the little time left over from that, I sometimes compose music with computers from my home in Austria.

Chris: For those who have not heard of it before, what is the F-Droid project?

Hans: F-Droid is a community-run app store that provides free software applications for Android phones. First and foremost, our goal is to represent our users. In particular, we review all of the apps that we distribute, and these reviews not only check whether something is definitely free software, they also look for ‘ethical’ problems with applications — issues that we call ‘Anti-Features’. Since the project began in 2010, F-Droid has grown to offer almost 4,000 free-software applications.

F-Droid is also an ‘app store kit’, providing all the tools needed to operate a free app store of your own. This includes complete build and release tools for managing the process of turning app source code into published builds.

Chris: How exactly does F-Droid differ from the Google Play store? Why might someone use F-Droid over Google Play?

Hans: One key difference from the Google Play Store is that F-Droid does not ship proprietary software by default. All apps shipped from f-droid.org are built from source on our own builders. This is partly because F-Droid is backed by the free software community; that is, people who have engaged in the free software community since long before Android was conceived, and who, in particular, share many — if not all — of its values. Using F-Droid will therefore feel very familiar to anyone who has used a modern Linux distribution.

Chris: How do you describe reproducibility from the F-Droid point of view?

Hans: All centralised software repositories are extremely tempting targets for exploitation by malicious third parties, and the kinds of personal or otherwise sensitive data on our phones only increase this risk. In F-Droid’s case, not only could the software we distribute be theoretically replaced with nefarious copies, the build infrastructure that generates that software could be compromised as well.

F-Droid having reproducible builds is extremely important, as it allows us to verify that our build systems have not been compromised and are not distributing malware to our users. In particular, if an independent build infrastructure can produce precisely the same results from a build, then we can be reasonably sure that our systems haven’t been compromised. Technically-minded users can also validate their builds on their own systems, further increasing trust in our build infrastructure. (This can be performed using the fdroid verify command.)

Our signature & trust scheme means that F-Droid can verify that an app is 100% free software whilst still using the developer’s original .APK file. More details about this may be found in our reproducibility documentation and on the page about our Verification Server.

Chris: How do you see F-Droid fitting into the rest of the modern security ecosystem?

Hans: Whilst F-Droid inherits all of the social benefits of free software, F-Droid takes special care to respect your privacy as well — we don’t attempt to track your device in any way. In particular, you don’t need an account to use the F-Droid client, and F-Droid doesn’t send any device-identifying information to our servers… other than its own version number.

What is more, we mark all apps in our repository that track you, and users can choose to hide any apps that have been tagged with a specific Anti-Feature in the F-Droid settings. Furthermore, any personal data you decide to give us (such as your email address when registering for a forum account) goes no further than us as well, and we pledge that it will never be used for anything beyond merely maintaining your account.

Chris: What would ‘fully reproducible’ mean to F-Droid? What it would look like if reproducibility was a ‘solved problem’? Or, rather, what would be your ‘ultimate’ reproducibility goals?

Hans: In an ideal world, every application submitted to F-Droid would not only build reproducibly, but would come with a cryptographic signature from the developer as well. Then we would only distribute a compiled application after a build had received a number of matching signatures from multiple, independent third parties. This would mean that our users were not placing their trust solely in software developers’ machines, and wouldn’t be trusting our centralised build servers either.

Chris: What are the biggest blockers to reaching this state? Are there any key steps or milestones to get there?

Hans: Time is probably the main constraint to reaching this goal. Not only do we need system administrators on an ongoing basis, but we also need to incorporate reproducibility checks into our Continuous Integration (CI) system. We are always looking for new developers to join our effort, as well as learning how to better bring them up to speed.

Separate to this, we often experience blockers with reproducibility-related bugs in the Android build tooling. Luckily, upstreams do ultimately address these issues, but in some cases this has taken three or four years to reach end-users and developers. Unfortunately, upstream is not chiefly concerned with the security aspects of reproducible builds; they care more about how reproducibility can minimise and optimise download size and speed.

Chris: Are you tracking any statistics about reproducibility in F-Droid over time? If so, can you share them? Does F-Droid track statistics about its own usage?

Hans: To underline a topic touched on above, F-Droid is dedicated to preserving the privacy of its users; we therefore do not record usage statistics. This is, of course, in contrast to other application stores.

However, we are in a position to track whether packages in F-Droid are reproducible or not. As in: what proportion of APKs in F-Droid have been independently verified over time? Unfortunately, due to time constraints, we do not yet automatically publish charts for this.

We do publish some related raw data, though, and we naturally welcome contributions of visualizations based on any and all of our data. The “All our APIs” page on our wiki is a good place to start for someone wanting to contribute: everything about reproducible F-Droid apps is available in JSON format; what’s missing are apps or dashboards making use of the available raw data.

Chris: Many thanks for this interview, Hans, and for all of your work on F-Droid and elsewhere. If someone wanted to get in touch or learn more about the project, where might someone go?

Hans: The best way to find out about F-Droid is, of course, the main F-Droid homepage, but we are also on Twitter @fdroidorg. We have many avenues to participate and to learn more! We have an About page on our website and a thriving forum. We also have part of our documentation specifically dedicated to reproducible builds.

For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

24 June, 2022 10:00AM

Russ Allbery

Review: A Dead Djinn in Cairo

Review: A Dead Djinn in Cairo, by P. Djèlí Clark

Publisher: Tordotcom
Copyright: May 2016
Format: Kindle
Pages: 47

Fatma el-Sha'arawi is a special investigator with the Egyptian Ministry of Alchemy, Enchantments, and Supernatural Entities in an alternate 1912 Egypt. In Fatma's world, the mystic al-Jahiz broke through to the realm of the djinn in the late 1800s, giving Egypt access to magic and the supernatural and the djinn access to Egypt. It is now one of the great powers of the world, able to push off the Europeans and control its own politics.

This is a Tor.com original novelette, so you can read it on-line for free or drop $2 on a Kindle version for convenience. It's the first story in the "Dead Djinn" universe, in which Clark has also written a novella and a novel (the latter of which won the Nebula Award for best novel in 2022).

There are three things here I liked. Fatma is a memorable character, both for her grumpy demeanor as a rare female investigator having to put up with a sexist pig of a local police liaison, and for her full British attire (including a bowler hat) and its explanation. (The dynamics felt a bit modern for a story set in 1912, but not enough to bother me.) The setting is Arabian-inspired fantasy, which is a nice break from the normal European or Celtic stuff. And there are interesting angels (Fatma: "They're not really angels"), which I think have still-underused potential, particularly when they can create interesting conflicts with Coptic Christianity and Islam. Clark's versions are energy creatures of some sort inside semi-mechanical bodies, with visuals that reminded me strongly of Diablo III (which in this context is a compliment). I'm interested to learn more about them, although I hope there's more going on than the disappointing explanation we get at the end of this story.

Other than those elements, there's not much here. As hinted by the title, the story is structured as a police investigation and Fatma plays the misfit detective. But there's no real mystery; the protagonists follow obvious clue to obvious clue to obvious ending. The plot structure is strictly linear and never surprised me. Aasim is an ass, which gives Fatma something to react to but never becomes real characterization. The world-building is the point, but most of it is delivered in infodumps, and the climax is a kind-of-boring fight where the metaphysics are explained rather than discovered.

I'm possibly being too harsh. There's space for novelettes that tell straightforward stories without the need for a twist or a sting. But I admit I found this boring. I think it's because it's not tight enough to be carried by the momentum of a simple plot, and it's also not long enough for either the characters or the setting to breathe and develop. The metaphysics felt rushed and the characterization cramped. I liked Siti and the dynamic between Siti and Fatma at the end of the story, but there wasn't enough of it.

As a world introduction, it does its job, and the non-European fantasy background is interesting enough that I'd be willing to read more, even without the incentive of reading all the award-winning novels. But "A Dead Djinn in Cairo" doesn't do more than its job. It might be worth skipping (I'll have to read the subsequent works to know for certain), but it won't take long to read and the price is right.

Followed by The Haunting of Tram Car 015.

Rating: 6 out of 10

24 June, 2022 04:11AM

June 22, 2022

John Goerzen

I Finally Found a Solid Debian Tablet: The Surface Go 2

I have been looking for a good tablet for Debian for… well, years. I want thin, light, portable, excellent battery life, and a serviceable keyboard.

For a while, I tried a Lenovo Chromebook Duet. It meets the hardware requirements, sort of. The problem is performance and the OS. I can run Debian inside the ChromeOS Linux environment, and that actually works pretty well. But it is slow. Terribly, terribly, terribly slow. Emacs takes minutes to launch. So do apt-get runs. It has barely enough RAM to keep its Chrome foundation happy, let alone a Linux environment as well. Basically, it is too slow to be serviceable. On top of that, I ran into assorted issues with having it tied to a Google account – particularly being unable to log in unless I had Internet access after an update. That and my growing concern over Google’s privacy practices led me to more or less write it off.

I have a wonderful System76 Lemur Pro that I’m very happy with. Plenty of RAM, a good compromise size between portability and screen size at 14.1″, and so forth. But a 10″ goes-anywhere it’s not.

I spent quite a lot of time looking at thin-and-light convertible laptops of various configurations. Many of them were quite expensive, not as small as I wanted, or had dubious Linux support. To my surprise, I wound up buying a Surface Go 2 from the Microsoft store, along with the Type Cover. They had a pretty good deal on it since the Surface Go 3 is out; the highest-processor model of the Go 2 is roughly similar to the Go 3 in terms of performance.

There is an excellent linux-surface project out there that provides very good support for most Surface devices, including the Go 2 and 3.

I put Debian on it. I had a fair bit of hassle with EFI, and wound up putting rEFInd on it, which mostly solved those problems. (I did keep a Windows partition, and if it comes up for some reason, the easiest way to get it back to Debian is to use the Windows settings tool to reboot into advanced mode, and then select the appropriate EFI entry to boot from there.)

Researching on-screen keyboards, it seemed like Gnome had the most mature one. So I wound up with Gnome (my other systems use KDE with tiling, but I figured I’d try Gnome on it). Almost everything worked without additional tweaking, the one exception being the cameras. The cameras on the Surfaces are a known trouble spot, and I didn’t bother to go to all the effort to get them working.

With 8GB of RAM, I didn’t put ZFS on it like I do on other systems. Performance is quite satisfactory, including for Rust development. Battery life runs about 10 hours with light use; less when running a lot of cargo builds, of course.

The 1920×1280 screen is nice at 10.5″. Gnome with Wayland does a decent job of adjusting to this hi-res configuration.

I took this as my only computer for a trip from the USA to Germany. It was a little small at times, though that was to be expected. It let me take a nicely small bag as a carry-on, and being light, it was pleasant to carry around in airports. It served its purpose quite well.

One downside is that it can’t be powered by a phone charger like my Chromebook Duet can. However, I found a nice slim 65W Anker charger that could charge it and phones simultaneously that did the job well enough (I left the Microsoft charger with the proprietary connector at home).

The Surface Go 2 maxes out at a 128GB SSD. That feels a bit constraining, especially since I kept Windows around. However, it also has a micro SD slot, so you can put LUKS and ext4 on that and use it as another filesystem. I popped a micro SD I had lying around into there and that felt a lot better storage-wise. I could also completely zap Windows, but that would leave no way to get firmware updates and I didn’t really want to do that. Still, I don’t use Windows and that could be an option also.

All in all, I’m pretty pleased with it. Around $600 for a fully functional Debian tablet, with a keyboard, is pretty nice.

I had been hoping for months that the Pinetab would come back into stock, because I’d much rather support a Linux hardware vendor, but for now I think the Surface Go series is the most solid option for a Linux tablet.

22 June, 2022 11:46PM by John Goerzen

Daniel Pocock

Using the Debian trademark for good

At the Software Freedom Institute, we were a little bit shocked to receive the trademark so quickly; it almost caught us off guard.

Nonetheless, after a few days’ careful contemplation, it became clear in my mind how I should use the powers that come with the trademark.

Using whatever legal authority this trademark gives me within the jurisdiction of Switzerland, I'm making the following executive orders:

  1. To avoid confusion with the outcome of the recent Debian elections, I will not be using the title Debian Project Leader and I will permit Jonathan Carter to use this title when he visits Switzerland. Nonetheless, I am reserving Director of Debianism for myself.
  2. The Debian CoC is violating my trademark. It is a Mickey Mouse instrument that merely purports to impersonate the codes of genuine professional bodies. It is hereby declared to be null and void.
  3. With the CoC out of the way, I can now get down to serious business. Linus Torvalds and Dr Richard Stallman are granted the title of Honorary Debian Developer.
  4. The following phrase in the Debian Diversity Statement is revised: The Debian Project welcomes and encourages participation by everyone will now become The Debian Project welcomes and encourages participation by everyone, including Linus Torvalds, Dr Richard Stallman and all the other people we gagged, banned, censored, defamed and ostracized over the years.
  5. The definition of Debian Developer includes everybody who has a copyright interest in any Debian release, past or present.
  6. Anybody who meets the above definition of Debian Developer has a right to use the name Debian in domain names.

Now that is out of the way I have some regular work to do fixing bugs.

22 June, 2022 03:00PM

June 21, 2022

Free Software Fellowship

Daniel Pocock

Your authorization to use the Debian trademark in domain names

Personally, I've been doing things with Debian and free software for almost thirty years. I was really shocked when I heard that Debian funds were being used to try and shut down independent, volunteer-run web sites publishing news about Debian itself.

I had a closer look at the situation myself and realized that nobody has registered a Debian trademark in Switzerland. Therefore, the Software Freedom Institute submitted an application for the mark.

The application was submitted on 14 May 2022 and granted on 8 June 2022.

Software Freedom Institute SA immediately published a statement authorizing legitimate use of the trademark in domain names.

It appears really bizarre that some rogue members of Debian have collaborated for months with an expensive lawyer and yet none of them bothered to ensure they were holding a registration in Switzerland before filing their attacks at WIPO. The Swiss Institute for Intellectual Property charges a fee of just CHF 550 to register a trademark. That is less than what Debian pays for two hours with their lawyer. Einstein himself used to work there but you don't need to be Einstein to realize who got better value for their money in this case.

Everybody who has contributed to Debian and free software has at least some legitimate interest in using the name of the projects they have worked on. For a lawyer to steal a domain name from a volunteer, under article 4(b)(ix)(2) of the UDRP, the lawyer has to assert that the volunteer has absolutely no rights or legitimate interests in the name of their work product. The assertion that we, the real developers, have no legitimate interest in the name of a project whatsoever is an incredible act of disrespect from some incredibly arrogant toffs who look down their noses at us. I sincerely believe that any complaints filed on that basis are a fraud and will be rejected by the WIPO panel.

Software Freedom Institute will not be using the mark to distribute any substitute products. We use Debian and we extend it for our users, as we are authorized and encouraged to do by the Debian Social Contract. It is the same Debian, better.

The registration of this mark is not a random act of domain squatting. Free Software has been my entire life and I have as much legitimacy as any other Debian Developer to register a trademark.

Please print or save a copy of the authorization and then go straight to your preferred domain registrar and order your own debian.something domain today.

21 June, 2022 06:15PM

Louis-Philippe Véronneau

Montreal's Debian & Stuff - June 2022

As planned, we held our second local Debian meeting of the year last Sunday. We met at the lovely Eastern Bloc (an artists' hacklab) to work on Debian (and other stuff!), chat and socialise.

Although there were fewer people than at our last meeting[1], we still did a lot of work!

I worked on fixing a bunch of bugs in Clojure packages[2], LeLutin worked on podman and packaged libinfluxdb-http-perl and anarcat worked on internetarchive, trocla and moneta. Olivier also came by and worked on debugging his Kali install.

We are planning to have our next meeting at the end of August. If you are interested, the best way to stay in touch is either to subscribe to our mailing list or to join our IRC channel (#debian-quebec on OFTC). Events are also posted on Quebec's Agenda du libre.

Many thanks to Debian for providing us a budget to rent the venue for the day and for the pizza! Here is a nice picture anarcat took of (one of) the glasses of porter we had afterwards, at the next door brewery:

A glass of English Porter from Silo Brewery

  1. Summer meetings are always less populous and it also happened to be Father's Day... 

  2. #1012824, #1011856, #1011837, #1011844, #1011864 and #1011967

21 June, 2022 05:37PM by Louis-Philippe Véronneau

Steve Kemp

Writing a simple TCL interpreter in golang

Recently I was reading Antirez's piece TCL the Misunderstood again, which is a nice defense of the utility and value of the TCL language.

TCL is one of those scripting languages which used to be used a hell of a lot, for scripting routers, creating GUIs, and more. These days it quietly lives on, but doesn't get much love. That said, it's a remarkably simple language to learn and experiment with.

Using TCL always reminds me of FORTH, in the sense that the syntax consists of "words" with "arguments", and everything is a string (well, not really, but almost. Some things are lists too of course).

A simple overview of TCL would probably begin by saying that everything is a command, and that the syntax is very free. There are just a couple of clever rules which are applied consistently to give you a remarkably flexible environment.

To get started we'll set a string value to a variable:

  set name "Steve Kemp"
  => "Steve Kemp"

Now you can output that variable:

  puts "Hello, my name is $name"
  => "Hello, my name is Steve Kemp"

OK, it looks a little verbose due to the use of set, and puts is less pleasant than print or echo, but it works. It is readable.

Next up? Interpolation. We saw how $name expanded to "Steve Kemp" within the string. That's true more generally, so we can do this:

 set print pu
 set me    ts

 $print$me "Hello, World"
 => "Hello, World"

Here, "$print" and "$me" expanded to "pu" and "ts" respectively, resulting in:

 puts "Hello, World"

That expansion happened before the input was executed, and works as you'd expect. There's another form of expansion too, which involves the [ and ] characters. Anything within the square-brackets is replaced with the contents of evaluating that body. So we can do this:

 puts "1 + 1 = [expr 1 + 1]"
 => "1 + 1 = 2"

Perhaps enough detail there, except to say that we can use { and } to enclose things that are NOT expanded, or executed, at parse time. This facility lets us evaluate those blocks later, so you can write a while-loop like so:

 set cur 1
 set max 10

 while { expr $cur <= $max } {
       puts "Loop $cur of $max"
       incr cur
 }
Anyway that's enough detail. Much like writing a FORTH interpreter the key to implementing something like this is to provide the bare minimum of primitives, then write the rest of the language in itself.
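To make that core idea concrete, here is a deliberately tiny sketch in Python (not the author's Go implementation, and with no support for quoting, {braces}, or [bracket] substitution): split a line into words, expand $variables, then dispatch on the resulting, possibly computed, command name. This is what makes the $print$me trick from earlier work.

```python
# Toy TCL-like evaluation loop: expansion happens before dispatch.
# Real TCL also handles quoting, {braces} and [brackets]; this does not.

def expand(env, word):
    """Replace $name sequences in word with values from env."""
    out, i = "", 0
    while i < len(word):
        if word[i] == "$":
            j = i + 1
            while j < len(word) and word[j].isalnum():
                j += 1
            out += env.get(word[i + 1:j], "")
            i = j
        else:
            out += word[i]
            i += 1
    return out

def eval_line(env, line, output):
    """Evaluate one command: expand every word, then dispatch on words[0]."""
    words = [expand(env, w) for w in line.split()]
    if words[0] == "set":
        env[words[1]] = words[2]
        return words[2]
    if words[0] == "puts":
        output.append(" ".join(words[1:]))
        return output[-1]
    raise ValueError("unknown command: " + words[0])

env, printed = {}, []
eval_line(env, "set print pu", printed)
eval_line(env, "set me ts", printed)
# Because interpolation runs before dispatch, the command name itself
# can be assembled from variables, exactly as in the $print$me example:
eval_line(env, "$print$me Hello, World", printed)
print(printed[0])  # Hello, World
```

Everything beyond these two primitives (while, incr, expr, procs) can then be layered on as further cases in the dispatch, or written in the language itself.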

You can get a usable scripting language with only a small number of the primitives, and then evolve the rest yourself. Antirez also did this; he put together a small TCL interpreter in C named picol.

Other people have done similar things; recently I saw a writeup which follows the same approach.

So of course I had to do the same thing, in golang.

My code runs the original code from Antirez with only minor changes, and was a fair bit of fun to put together.

Because the syntax is so fluid there's no complicated parsing involved, and the core interpreter was written in only a few hours, then improved step by step.

Of course, to make a language more useful you need I/O beyond just writing to the console, and being able to run the list operations would make it much more useful to TCL users. That said, I had fun writing it, it seems to work, and once again I added fuzz-testers to the lexer and parser to satisfy myself that it is at least somewhat robust.

Feedback welcome, but even in quiet isolation it's fun to look back at these "legacy" languages and recognize that their simplicity led to a lot of flexibility.

21 June, 2022 09:52AM

John Goerzen

Lessons of Social Media from BBSs

In the recent article The Internet Origin Story You Know Is Wrong, I was somewhat surprised to see the argument that BBSs are a part of the Internet origin story that is often omitted. Surprised because I was there for BBSs, and even ran one, and didn’t really consider them part of the Internet story myself. I even recently enjoyed a great BBS documentary and still didn’t think of the connection in this way.

But I think the argument is a compelling one.

In truth, the histories of Arpanet and BBS networks were interwoven—socially and materially—as ideas, technologies, and people flowed between them. The history of the internet could be a thrilling tale inclusive of many thousands of networks, big and small, urban and rural, commercial and voluntary. Instead, it is repeatedly reduced to the story of the singular Arpanet.

Kevin Driscoll goes on to highlight the social aspects of the “modem world”, how BBSs and online services like AOL and CompuServe were ways for people to connect. And yet, AOL members couldn’t easily converse with CompuServe members, and vice-versa. Sound familiar?

Today’s social media ecosystem functions more like the modem world of the late 1980s and early 1990s than like the open social web of the early 21st century. It is an archipelago of proprietary platforms, imperfectly connected at their borders. Any gateways that do exist are subject to change at a moment’s notice. Worse, users have little recourse, the platforms shirk accountability, and states are hesitant to intervene.

Yes, it does. As he adds, “People aren’t the problem. The problem is the platforms.”

A thought-provoking article, and I think I’ll need to buy the book it’s excerpted from!

21 June, 2022 01:52AM by John Goerzen

Debian Community News

Amnesty International & Debian Day suicides comparison

When Gaëtan Mootoo, an extraordinarily gifted Amnesty International employee, decided to take his own life, he did it in his workplace, the Amnesty office in Paris.

Debian doesn't have a workplace. Nonetheless, when Frans Pop chose to take his own life, he sent a formal email announcing his resignation from Debian at 9:41pm on the evening before the Debian anniversary, Debian Day, 16 August 2010. For a virtual organization, for all intents and purposes, this was the closest you can get to taking your life in the office.

Amnesty and Debian have both had a number of tragic deaths that could have been avoided. Out of respect for these victims and all the workers, paid and unpaid, we feel it is imperative to compare these deaths.

In Amnesty, seven senior executives resigned. No Debian Project Leader ever resigned. Amnesty called independent, external experts to examine each death. Debian established a community team, in other words, a bunch of yes men and the leader's girlfriend.

Amnesty published the expert reports on their web site for the whole world to see. Debian has hidden the Frans Pop suicide in the debian-private (leaked) gossip network. Now they are trying to shut down the Debian Community News web site to hide the evidence about Lucy Wayland and other inconvenient disclosures. This is the complete opposite of Amnesty's transparency in these matters.

The Debian Social Contract, point no. 3 tells us We will not hide problems. Can you imagine any problem more serious than a volunteer planning a suicide for Debian Day?

Gaëtan Mootoo (Amnesty) and Frans Pop (Debian)

The Amnesty report on Gaëtan Mootoo gives us a few words from his suicide note:

I would have wanted to write you a longer letter, but I no longer have the energy, I’m very tired...

For a number of years but mostly since the end of 2014, I haven’t been feeling very good, though have spoken about it to nobody. In addition, there has been so much work and though I made a request for help, that wasn’t possible. I love what I do and I want to do it properly. I feel that I could no longer go on in this way, hence this decision...

James Laddie QC, author of the report, writes It is clear from the above that Gaëtan’s work played a major part in his ultimate decision.

Here are some comments from Frans Pop in 2007:

So, what has made me decide to leave the project. It's a combination of just plain emotional stress over the whole Sven Luther issue, frustration with the inability of the project to deal with that and with some other issues, and frustration with the fact that a fair number of members of the project seem to feel that as long as you don't upload packages with trojans, pretty much anything is OK.

and these are the last words he sent on debian-private before his death:

It's time to say goodbye. I don't want to say too much about it, except that I've been planning this for a long time.

A few days later, his parents told us, in comments echoing the words of James Laddie QC,

Yesterday morning our son Frans Pop has died. He took his own life, in a well-considered, courageous, and considerate manner. During the last years his main concern was his work for Debian.

The sick bastards on debian-private trivialized the significance of those comments. In dozens of emails about the suicide, not one person mentioned that Frans resigned the night before Debian Day.

Rosalind (Roz) McGregor (Amnesty) and Lucy Wayland (Debian)

These cases are far more complicated.

One in four suicide victims leaves a note. We can thank Gaëtan Mootoo and Frans Pop for telling us what was on their minds. Neither McGregor nor Wayland left any written communication revealing what was on their mind.

Notes from the inquest suggest that McGregor may have experienced psychosis. Nonetheless, the British health professionals had not attempted to formally diagnose her condition. Funding for mental health is notoriously neglected in the UK. The expert report published by Amnesty notes that she had been in a stressful period due to meetings of the UN's Human Rights Council.

In the case of Wayland, the coroner ruled the death was an accident, not suicide. We wish to emphasize this point: Wayland called for help. On the other hand, Wayland's alcohol dependence is a form of self-harm, a distant cousin of the kitchen knife that Rosalind McGregor used to end her life.

McGregor had experienced stress in the workplace and Wayland, being part of the Debian "Community", had been exposed to stress because of Molly de Blanc and the Debian Christmas lynchings in December 2018.

Meetings of the Human Rights Council are scheduled three times per year. The workers can anticipate those meetings and plan their time around them. The Christmas period is a time of rest and Molly de Blanc deliberately intruded on it. This created an extra burden for Debian volunteers who started 2019 feeling stressed so the Mollies can "feel safe" in their bubble.

21 June, 2022 12:00AM

June 20, 2022

Iustin Pop

Experiment: A week of running

My sports friends know that I wasn’t able to really run in many, many years, due to a recurring injury that was never fully diagnosed and which, after many sessions with the doctor, ended up in an OK-ish state for day-to-day life but also with these words: “Maybe running is just not for you?”

The year 2012 was my “running year”. I went to a number of races and wrote blog posts, then slowly started running only rarely; a few years later I was really only running once in a while, and coupled with a number of bad ideas of the type “let’s run today after a long break, but a lot”, I started injuring my foot.

Add a few more years, some more kilograms on my body, one event of jumping with a kid on my shoulders and landing on my bad foot, and the setup was complete.

Doctor visits, therapy, slow improvements, but not really solving the problem. 6-month breaks, small attempts at running, pain again, repeat, pain again, etc. It ended up with me acknowledging that yes, maybe running is not for me, and I should really give it up.

Incidentally, in 2021, as part of me trying to improve my health/diet, I tried something that is not important for this post, and for the first time in a long time, I was fully, 100%, pain-free in my leg during day-to-day activities. Huh, maybe this is not purely related to running? From that point on, my foot became, very slowly, better. I started doing short runs (2-3km), especially on holidays where I can’t bike, and if I was careful, it didn’t go too badly. But I knew I can’t run, so these were rare events.

In April this year, on vacation, I ran a couple of times - 20km distance. In May, 12km. Then, there was a Garmin badge I really wanted, so against my better judgement, I did a run/walk (2:1 ratio) the previous weekend, and to my surprise, no unwanted side-effects. And I got an idea: what if I do short run/walks an entire week? When does my foot “break”?

I mean, by now I knew that a short (3-4, maybe 5km) run that has pauses doesn’t negatively impact my foot. What about the 2nd one? Or the 3rd one? When does it break? Is it distance, or something else?

The other problem was: when to run? I mean, on top of a hybrid work model. When working from home, all good, but when working from the office? So the other, somewhat more impossible, task for me was to wake up early and run before 8 AM. Clearly destined to fail!

But, the following day (Monday), I did wake up and run 3km. Then Tuesday again, 3.3km (and later, one hour of biking). Wed - 3.3km. Thu - 4.40km, at 4:1 ratio (2m:30s). Friday, 3.7km (4:1), plus a very long for me (112km) bike ride.

By this time, I was physically dead. Not my foot, just my entire body. On Saturday morning, Training Peaks said my form was -52, and it starts warning below -15. I woke up late and groggy, and I had to give myself extra motivation to go for the last run, 5.3km, to round out the week.

On Friday and Saturday, my problem leg did start to… how to say, remind me it is problematic? But not like previously, no waking in the morning with a stiff tendon. No, just… not fully happy. And, to my surprise, correlated again with my consumption of problematic food (I was getting hungrier and hungrier, and eating too much of things I should keep an eye on).

At this point, with the week behind me:

  • am ultra-surprised that my foot is not in pieces (yet?)
  • am still pretty tired (form: -48), but I did manage to run again after a day of pause from running (and my foot is still OK-ish).
  • am confused as to what are really my problems…
  • am convinced that I have some way of running a bit, if I take it careful (which is hard!)
  • am really, really hungry; well, not anymore, I ate like a pig for the last two days.
  • beat my all-time Garmin record for “weekly intensity minutes” (1174; damn, 1 more minute and it would have been a rounder number)…

Did my experiment make me wiser? Not really. Happier? Yes, 100%. I plan to buy some new running clothes, my current ones are really old.

But did I really understand how my body functions? A loud no. Sigh.

The next challenge will be, how to manage my time across multiple sports (and work, and family, and other hobbies). Still, knowing that I can anytime go for 25-35 minutes of running, without preparation, is very reassuring.

Freedom, health and injury-free sports to everyone!

20 June, 2022 08:17PM

Antoine Beaupré

Matrix notes

I have some concerns about Matrix (the protocol, not the movie that came out recently, although I do have concerns about that as well). I've been watching the project for a long time, and it seems like a promising alternative to many protocols like IRC, XMPP, and Signal.

This review may sound a bit negative, because it focuses on those concerns. I am the operator of an IRC network and people keep asking me to bridge it with Matrix. I have myself considered just giving up on IRC and converting to Matrix. This space is a living document exploring my research of that problem space. The TL;DR: is that no, I'm not setting up a bridge just yet, and I'm still on IRC.

This article was written over the course of the last three months, but I have been watching the Matrix project for years (my logs seem to say 2016 at least). The article is rather long. It will likely take you half an hour to read, so copy this over to your ebook reader, your tablet, or dead trees, and lean back and relax as I show you around the Matrix. Or, alternatively, just jump to a section that interest you, most likely the conclusion.

Introduction to Matrix

Matrix is an "open standard for interoperable, decentralised, real-time communication over IP. It can be used to power Instant Messaging, VoIP/WebRTC signalling, Internet of Things communication - or anywhere you need a standard HTTP API for publishing and subscribing to data whilst tracking the conversation history".

It's also (when compared with XMPP) "an eventually consistent global JSON database with an HTTP API and pubsub semantics - whilst XMPP can be thought of as a message passing protocol."

According to their FAQ, the project started in 2014 and has about 20,000 servers and millions of users. Matrix works over HTTPS, but on a special port: 8448.

Security and privacy

I have some concerns about the security promises of Matrix. It's advertised as "secure" with "E2E [end-to-end] encryption", but how does it actually work?

Data retention defaults

One of my main concerns with Matrix is data retention, which is a key part of security in a threat model where (for example) a hostile state actor wants to surveil your communications and can seize your devices.

On IRC, servers don't actually keep messages all that long: they pass them along to other servers and clients as fast as they can, only keep them in memory, and move on to the next message. There are no concerns about data retention on messages (and their metadata) other than at the network layer. (I'm ignoring the issues with user registration, which is a separate, if valid, concern.) Obviously, a hostile server could log everything passing through it, but IRC federations are normally tightly controlled. So, if you trust your IRC operators, you should be fairly safe. Obviously, clients can (and often do, even if OTR is configured!) log all messages, but this is generally not the default. Irssi, for example, does not log by default. IRC bouncers are more likely to log to disk, of course, to be able to do what they do.

Compare this to Matrix: when you send a message to a Matrix homeserver, that server first stores it in its internal SQL database. Then it will transmit that message to all clients connected to that server and room, and to all other servers that have clients connected to that room. Those remote servers, in turn, will keep a copy of that message and all its metadata in their own database, by default forever. On encrypted rooms those messages are encrypted, but not their metadata.

There is a mechanism to expire entries in Synapse, but it is not enabled by default. So one should generally assume that a message sent on Matrix is never expired.
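
For illustration, here is roughly what enabling expiry looks like in Synapse's homeserver.yaml; the option names come from the Synapse documentation, but the lifetimes are arbitrary example values, not recommendations:

```yaml
# Sketch of a homeserver.yaml fragment: enable the (off by default)
# message retention policy and purge events older than 90 days.
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d
    max_lifetime: 90d
```

Even then, this only controls your own server's database; copies held by federated servers are subject to their configuration.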

GDPR in the federation

But even if that setting was enabled by default, how do you control it? This is a fundamental problem of the federation: if any user is allowed to join a room (which is the default), those users' servers will log all content and metadata from that room. That includes private, one-on-one conversations, since those are essentially rooms as well.

In the context of the GDPR, this is really tricky: who is the responsible party (known as the "data controller") here? It's basically any yahoo who fires up a home server and joins a room.

In a federated network, one has to wonder whether GDPR enforcement is even possible at all. But in Matrix in particular, if you want to enforce your right to be forgotten in a given room, you would have to:

  1. enumerate all the users that ever joined the room while you were there
  2. discover all their home servers
  3. start a GDPR procedure against all those servers

I recognize this is a hard problem to solve while still keeping an open ecosystem. But I believe that Matrix should have much stricter data retention defaults than it has right now. Message expiry should be enforced by default, for example. (Note that there are also redaction policies that could be used to implement part of the GDPR automatically; see the privacy policy discussion below on that.)

Also keep in mind that, in the brave new peer-to-peer world that Matrix is heading towards, the boundary between server and client is likely to be fuzzier, which would make applying the GDPR even more difficult.

In fact, maybe Synapse should be designed so that there's no configurable flag to turn off data retention. A bit like how most system loggers in UNIX (e.g. syslog) come with a log retention system that typically rotates logs after a few weeks or months. Historically, this was designed to keep hard drives from filling up, but it also has the added benefit of limiting the amount of personal information kept on disk in this modern day. (Arguably, syslog doesn't rotate logs on its own, but, say, Debian GNU/Linux, as an installed system, does have well-defined log retention policies for installed packages, and those can be discussed.) And "no expiry" is definitely a bug.

Matrix.org privacy policy

When I first looked at Matrix, five years ago, Element.io was called Riot.im and had a rather dubious privacy policy:

We currently use cookies to support our use of Google Analytics on the Website and Service. Google Analytics collects information about how you use the Website and Service.


This helps us to provide you with a good experience when you browse our Website and use our Service and also allows us to improve our Website and our Service.

When I asked Matrix people about why they were using Google Analytics, they explained this was for development purposes and they were aiming for velocity at the time, not privacy (paraphrasing here).

They also included a "free to snitch" clause:

If we are or believe that we are under a duty to disclose or share your personal data, we will do so in order to comply with any legal obligation, the instructions or requests of a governmental authority or regulator, including those outside of the UK.

Those are really broad terms, above and beyond what is typically expected legally.

Like the current retention policies, such user tracking and ... "liberal" collaboration practices with the state set a bad precedent for other home servers.

Thankfully, since the above policy was published (2017), the GDPR was "implemented" (2018) and it seems like both the Element.io privacy policy and the Matrix.org privacy policy have been somewhat improved since.

Notable points of the new privacy policies:

  • the "federation" section actually outlines that "Federated homeservers and Matrix clients which respect the Matrix protocol are expected to honour these controls and redaction/erasure requests, but other federated homeservers are outside of the span of control of Element, and we cannot guarantee how this data will be processed"
  • 2.6: users under the age of 16 should not use the matrix.org service
  • 2.10: Upcloud, Mythic Beast, Amazon, and CloudFlare possibly have access to your data (it's nice to at least mention this in the privacy policy: many providers don't even bother admitting to this kind of delegation)
  • Element 2.2.1: mentions many more third parties (Twilio, Stripe, Quaderno, LinkedIn, Twitter, Google, Outplay, PipeDrive, HubSpot, Posthog, Sentry, and Matomo; phew!) used when you are paying Matrix.org for hosting

I'm not super happy with all the trackers they have on the Element platform, but then again you don't have to use that service. Your favorite homeserver (assuming you are not on Matrix.org) probably has their own Element deployment, hopefully without all that garbage.

Overall, this is all a huge improvement over the previous privacy policy, so hats off to the Matrix people for figuring out a reasonable policy in such a tricky context. I particularly like this bit:

We will forget your copy of your data upon your request. We will also forward your request to be forgotten onto federated homeservers. However - these homeservers are outside our span of control, so we cannot guarantee they will forget your data.

It's great they implemented those mechanisms and, after all, if there's a hostile party in there, nothing can prevent them from using screenshots to just exfiltrate your data away from the client side anyway, even with services typically seen as more secure, like Signal.

As an aside, I also appreciate that Matrix.org has a fairly decent code of conduct, based on the TODO CoC which checks all the boxes in the geekfeminism wiki.

Metadata handling

Overall, privacy protections in Matrix mostly concern message contents, not metadata. In other words, who's talking with whom, when, and from where is not well protected. Compared to a tool like Signal, which goes to great lengths to anonymize that data with features like private contact discovery, disappearing messages, sealed senders, and private groups, Matrix is definitely behind. (Note: there is an issue open about message lifetimes in Element since 2020, but it's not even at the MSC stage yet.)

This is a known issue (opened in 2019) in Synapse, but it is not just an implementation issue: it's a flaw in the protocol itself. Home servers keep join/leave records for all rooms, which gives clear-text information about who is talking to whom. Synapse logs may also contain personally identifiable information that home server admins might not be aware of in the first place. Those log rotation policies are separate from the server-level retention policy, which may be confusing for a novice sysadmin.

Combine this with the federation: even if you trust your home server to do the right thing, the second you join a public room with third-party home servers, those ideas kind of get thrown out because those servers can do whatever they want with that information. Again, a problem that is hard to solve in any federation.

To be fair, IRC doesn't have a great story here either: any client knows not only who's talking to who in a room, but also typically their client IP address. Servers can (and often do) obfuscate this, but often that obfuscation is trivial to reverse. Some servers do provide "cloaks" (sometimes automatically), but that's kind of a "slap-on" solution that actually moves the problem elsewhere: now the server knows a little more about the user.

Overall, I would worry much more about a Matrix home server seizure than an IRC or Signal server seizure. Signal does get subpoenas, and can only give out a tiny bit of information about its users: their phone number, registration date, and last connection date. Matrix carries a lot more information in its database.

Amplification attacks on URL previews

I (still!) run an Icecast server and sometimes share links to it on IRC, which, obviously, also end up on (more than one!) Matrix home server because some people connect to IRC using Matrix. This, in turn, means that Matrix will connect to that URL to generate a link preview.

I feel this points to a security issue, especially because those sockets would be kept open seemingly forever. I tried to warn the Matrix security team, but somehow I don't think this issue was taken very seriously. Here's the disclosure timeline:

  • January 18: contacted Matrix security
  • January 19: response: already reported as a bug
  • January 20: response: can't reproduce
  • January 31: timeout added, considered solved
  • January 31: I respond that I believe the security issue is underestimated, ask for clearance to disclose
  • February 1: response: asking for two weeks delay after the next release (1.53.0) including another patch, presumably in two weeks' time
  • February 22: Matrix 1.53.0 released
  • April 14: I notice the release, ask for clearance again
  • April 14: response: referred to the public disclosure

There are a couple of problems here:

  1. the bug was publicly disclosed in September 2020, and not considered a security issue until I notified them, and even then, I had to insist

  2. no clear disclosure policy timeline was proposed or seems established in the project (there is a security disclosure policy but it doesn't include any predefined timeline)

  3. I wasn't informed of the disclosure

  4. the actual solution is a size limit (10MB, already implemented), a time limit (30 seconds, implemented in PR 11784), and a content type allow list (HTML, "media" or JSON, implemented in PR 11936), and I'm not sure it's adequate

  5. (pure vanity:) I did not make it to their Hall of fame

I'm not sure those solutions are adequate because they all seem to assume a single home server will pull that one URL for a little while, then stop. But in a federated network, many (possibly thousands of) home servers may be connected to a single room at once. If an attacker drops a link into such a room, all those servers will connect to that link at once. This is an amplification attack: a small amount of traffic generates a lot more traffic towards a single target. It doesn't matter that there are size or time limits: the amplification is what matters here.
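
To make the amplification concrete, here's a back-of-the-envelope sketch; the room size and message size are invented numbers, and only the 10MB cap comes from the mitigation above:

```python
# Back-of-the-envelope amplification estimate. All numbers are
# hypothetical except the 10MB per-server preview size cap.
message_bytes = 500            # size of the chat message carrying the link
servers_in_room = 1000         # federated home servers joined to the room
preview_cap = 10 * 2**20       # per-server preview size limit (10 MiB)

worst_case = servers_in_room * preview_cap  # traffic hitting the target
amplification = worst_case // message_bytes
print(f"{worst_case / 2**30:.1f} GiB of fetches, ~{amplification:,}x amplification")
```

One 500-byte message can thus trigger gigabytes of fetches against the target, regardless of any per-server limit.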

It should also be noted that clients that generate link previews add even more amplification, because clients are more numerous than servers. And of course, the default Matrix client (Element) does generate link previews as well.

That said, this is possibly not a problem specific to Matrix: any federated service that generates link previews may suffer from this.

I'm honestly not sure what the solution is here. Maybe moderation? Maybe link previews are just evil? All I know is there was this weird bug in my Icecast server and I tried to ring the bell about it, and it feels it was swept under the rug. Somehow I feel this is bound to blow up again in the future, even with the current mitigation.


Moderation

In Matrix, as elsewhere, moderation is a hard problem. There is a detailed moderation guide and much of this problem space is actively being worked on in Matrix right now. A fundamental problem with moderating a federated space is that a user banned from a room can rejoin the room from another server. This is why spam is such a problem in email, and why IRC networks stopped federating with each other ages ago (see the IRC history for that fascinating story).

The mjolnir bot

The mjolnir moderation bot is designed to help with some of those things. It can kick and ban users and redact all of a user's messages (as opposed to one by one), all of this across multiple rooms. It can also subscribe to a federated block list published by matrix.org to block known abusers (users or servers). Bans are pretty flexible and can operate at the user, room, or server level.

Matrix people suggest making the bot an admin of your channels, because admin rights, once given, cannot be taken back from a user.

The command-line tool

There's also a new command line tool designed to do things like:

  • notify users system-wide (all users, users from a list, or a specific user)
  • delete sessions/devices not seen for X days
  • purge the remote media cache
  • select rooms matching various criteria (external/local/empty/created by/encrypted/cleartext)
  • purge the history of those rooms
  • shut down rooms

This tool and Mjolnir are based on the admin API built into Synapse.

Rate limiting

Synapse has pretty good built-in rate-limiting which blocks repeated login, registration, joining, or messaging attempts. It may also end up throttling servers on the federation based on those settings.
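
As an illustration, those limits are tuned in homeserver.yaml; the option names exist in Synapse, but the numbers below are just example values:

```yaml
# Hypothetical homeserver.yaml fragment tuning Synapse's rate limits
# (example values, not necessarily Synapse's shipped defaults).
rc_message:
  per_second: 0.2      # sustained messages per second, per user
  burst_count: 10      # short bursts allowed before throttling
rc_login:
  address:
    per_second: 0.003  # login attempts per second, per IP address
    burst_count: 5
```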

Fundamental federation problems

Because users joining a room may come from another server, room moderators are at the mercy of the registration and moderation policies of those servers. Matrix is like IRC's +R mode ("only registered users can join") by default, except that anyone can register their own home server, which makes this protection limited.

Server admins can block IP addresses and home servers, but those tools are not easily available to room admins. There is an API (m.room.server_acl in /devtools), but it is not reliable (thanks to Austin Huang for the clarification).

Matrix has the concept of guest accounts, but it is not used very much, and virtually no client or homeserver supports it. This contrasts with the way IRC works: by default, anyone can join an IRC network even without authentication. Some channels require registration, but in general you are free to join and look around (until you get blocked, of course).

I have heard anecdotal evidence that "moderating bridges is hell", and I can imagine why. Moderation is already hard enough in one federation; when you bridge a room with another network, you inherit all the problems of that network, but without the abuse control tools from the original network's API...

Room admins

Matrix, in particular, has the problem that room administrators (who have the power to redact messages, ban users, and promote other users) are bound to their Matrix ID, which is, in turn, bound to their home server. This implies that a home server administrator could (1) impersonate a given user and (2) use that to hijack the room. So in practice, the home server is the trust anchor for rooms, not the users themselves.

That said, if server B's administrator hijacks user joe on server B, they will hijack that room on that specific server. This will not (necessarily) affect users on the other servers, as servers could refuse parts of the updates or ban the compromised account (or server).

It does seem like a major flaw that room credentials are bound to Matrix identifiers, as opposed to the E2E encryption credentials. In an encrypted room even with fully verified members, a compromised or hostile home server can still take over the room by impersonating an admin. That admin (or even a newly minted user) can then send events or listen on the conversations.

This is even more frustrating when you consider that Matrix events are actually signed and therefore have some authentication attached to them, acting like some sort of Merkle tree (as it contains a link to previous events). That signature, however, is made from the homeserver PKI keys, not the client's E2E keys, which makes E2E feel like it has been "bolted on" later.


Availability

While Matrix has a strong advantage over Signal in that it's decentralized (so anyone can run their own home server), I couldn't find an easy way to run a "multi-primary" setup, or even a "redundant" setup (even with a single primary backend), short of going full-on "replicate PostgreSQL and Redis data", which is typically not for the faint of heart.

How this works in IRC

On IRC, it's quite easy to setup redundant nodes. All you need is:

  1. a new machine (with its own public address and an open port)

  2. a shared secret (or certificate) between that machine and an existing one on the network

  3. a connect {} block on both servers

That's it: the node will join the network, and people can connect to it as usual and share the same user/namespace as the rest of the network. The servers take care of synchronizing state: you do not need to worry about replicating a database server.
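
For flavor, step 3 looks roughly like this in a charybdis/solanum-style ircd.conf; the server name, address, and passwords are all made up:

```
connect "hub.example.net" {
    host = "203.0.113.7";            # the peer's public address (step 1)
    port = 6900;                     # the open server-to-server port
    send_password = "sharedsecret";  # shared secret from step 2
    accept_password = "sharedsecret";
    class = "server";
};
```

The exact syntax varies between ircd implementations, but the shape (a peer address plus a shared secret on both sides) is the same everywhere.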

(Now, experienced IRC people will know there's a catch here: IRC doesn't have authentication built in, and relies on "services", which are basically bots that authenticate users (I'm simplifying, don't nitpick). If that service goes down, the network still works, but people can't authenticate, and others can start doing nasty things like stealing people's identities if they get knocked offline. But basic functionality still works: you can talk in rooms and with users that are on the reachable network.)

User identities

Matrix is more complicated. Each "home server" has its own identity namespace: a specific user (say @anarcat:matrix.org) is bound to that specific home server. If that server goes down, that user is completely disconnected. They could register a new account elsewhere and reconnect, but then they basically lose all their configuration: contacts and joined channels are all lost.

(Also notice how Matrix IDs don't look like a typical user address, such as an email address in XMPP. They at least did their homework and got the allocation for the scheme.)
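
The ID format itself makes this binding explicit; a trivial sketch (the user ID is just an example, and real validation is stricter):

```python
# A Matrix user ID is "@localpart:servername": the identity is
# inseparable from its home server. Simplified parsing for illustration.
def parse_matrix_id(mxid: str) -> tuple[str, str]:
    if not mxid.startswith("@") or ":" not in mxid:
        raise ValueError(f"not a Matrix user ID: {mxid}")
    localpart, server = mxid[1:].split(":", 1)
    return localpart, server

print(parse_matrix_id("@anarcat:matrix.org"))  # ('anarcat', 'matrix.org')
```

If matrix.org disappears, nothing in the protocol lets the "anarcat" localpart migrate to another server name.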


Rooms

Users talk to each other in "rooms", even in one-to-one communications. (Rooms are also used for other things like "spaces"; they're basically used for everything, think "everything is a file" kind of tool.) For rooms, home servers act more like IRC nodes in that they keep a local state of the chat room and synchronize it with other servers. Users can keep talking inside a room if the server that originally hosted the room goes down. Rooms can have a local, server-specific "alias" so that, say, #room:matrix.org is also visible as #room:example.com on the example.com home server. Both addresses refer to the same underlying room.

(Finding this in the Element settings is not obvious, though, because those "aliases" are actually called "local addresses" there. So to create such an alias in Element, you need to go to the room settings' "General" section, click "Show more" under "Local address", then add the alias name (e.g. foo), and then that room will be available on your example.com home server as #foo:example.com.)

So a room doesn't belong to a server; it belongs to the federation, and anyone can join the room from any server (if the room is public, or if invited otherwise). You can create a room on server A and, when a user from server B joins, the room will be replicated on server B as well. If server A fails, server B will keep relaying traffic to connected users and servers.

A room is therefore not fundamentally addressed with the above alias; instead, it has an internal Matrix ID, which is basically a random string. It has a server name attached to it, but that was done just to avoid collisions. This can get a little confusing. For example, the #fractal:gnome.org room is an alias on the gnome.org server, but the room ID is !hwiGbsdSTZIwSRfybq:matrix.org. That's because the room was created on matrix.org, but the preferred branding is gnome.org now.
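
The leading sigil is what tells these references apart; a minimal sketch, simplified from the identifier grammar in the Matrix spec:

```python
# Matrix identifiers start with a sigil: '@' for users, '!' for
# internal room IDs, '#' for room aliases. Simplified: real parsing
# also validates the localpart and server name.
SIGILS = {"@": "user id", "!": "room id", "#": "room alias"}

def classify(ref: str) -> str:
    return SIGILS.get(ref[:1], "unknown")

print(classify("#fractal:gnome.org"))              # room alias
print(classify("!hwiGbsdSTZIwSRfybq:matrix.org"))  # room id
```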

As an aside, rooms, by default, live forever, even after the last user quits. There's an admin API to delete rooms and a tombstone event to redirect to another one, but neither have a GUI yet. The latter is part of MSC1501 ("Room version upgrades") which allows a room admin to close a room, with a message and a pointer to another room.


Room directory

Discovering rooms can be tricky: there is a per-server room directory, but Matrix.org people are trying to deprecate it in favor of "Spaces". Room directories were ripe for abuse: anyone can create a room, so anyone can show up in there. It's possible to restrict who can add aliases, but in any case directories were seen as too limited.

In contrast, a "Space" is basically a room that's an index of other rooms (including other spaces), so existing moderation and administration mechanisms that work in rooms can (somewhat) work in spaces as well. This enables a room directory that works across the federation, regardless of which server the rooms were originally created on.

New users can be added to a space or room automatically in Synapse. (Existing users can be told about the space with a server notice.) This gives admins a way to pre-populate a list of rooms on a server, which is useful to build clusters of related home servers, providing some sort of redundancy, at the room -- not user -- level.

Home servers

So while you can work around a home server going down at the room level, there's no such thing at the home server level, for user identities. So if you want those identities to be stable in the long term, you need to think about high availability. One limitation is that the domain name (e.g. matrix.example.com) must never change in the future, as renaming home servers is not supported.

The documentation used to say you could "run a hot spare" but that has been removed. Last I heard, it was not possible to run a high-availability setup where multiple, separate locations could replace each other automatically. You can have high performance setups where the load gets distributed among workers, but those are based on a shared database (Redis and PostgreSQL) backend.

So my guess is that it would be possible to create a "warm" spare of a Matrix home server with regular PostgreSQL replication, but that is not documented in the Synapse manual. This sort of setup would also not help with networking issues or denial of service attacks, as you would not be able to spread the load over multiple network locations easily. Redis and PostgreSQL heroes are welcome to provide their multi-primary solution in the comments. In the meantime, I'll just point out that this is handled somewhat more gracefully in IRC, thanks to the possibility of delegating the authentication layer.


Delegation

If you do not want to run a Matrix server yourself, it's possible to delegate the entire thing to another server. There's a server discovery API which uses the .well-known pattern (or SRV records, but that's "not recommended" and a bit confusing) to delegate that service to another server. Be warned that the server still needs to be explicitly configured for your domain. You can't just put:

{ "m.server": "matrix.org:443" }

... on https://example.com/.well-known/matrix/server and start using @you:example.com as a Matrix ID. That's because Matrix doesn't support "virtual hosting" and you'd still be connecting to rooms and people with your matrix.org identity, not example.com as you would normally expect. This is also why you cannot rename your home server.

The server discovery API is what allows servers to find each other. Clients, on the other hand, use the client-server discovery API: this is what allows a given client to find your home server when you type your Matrix ID on login.
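
For completeness, the client-side equivalent is a file at https://example.com/.well-known/matrix/client; a hypothetical example, with a made-up base_url:

```json
{
  "m.homeserver": {
    "base_url": "https://matrix.example.com"
  }
}
```

This tells clients where to send client-server API requests for accounts on example.com, while the server file shown above handles server-to-server routing.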


Performance

The high availability discussion brushed over the performance of Matrix itself, so let's now dig into that.

Horizontal scalability

There were serious scalability issues with the main Matrix server, Synapse, in the past, so the Matrix team has been working hard to improve its design. Since Synapse 1.22, the home server can scale horizontally to multiple workers (see this blog post for details), which can make it easier to scale large servers.

Other implementations

There are other home server implementations that are promising from a performance standpoint (dendrite, Golang, entered beta in late 2020; conduit, Rust, beta; others), but none of those are feature-complete, so there's a trade-off to be made there. Synapse is also adding a lot of features fast, so it's an open question whether the others will ever catch up. (I have heard that Dendrite might actually surpass Synapse in features within a few years, which would put Synapse in a more "LTS" situation.)


Latency

Matrix can feel slow sometimes. For example, joining the "Matrix HQ" room in Element (from matrix.debian.social) takes a few minutes and then fails. That is because the home server has to sync the entire room state when you join the room. There was promising work on this announced in the lengthy 2021 retrospective, and some of that work landed (partial sync) in the 1.53 release already. Other improvements coming include sliding sync, lazy loading over federation, and fast room joins. So that's actually something that could be fixed in the fairly short term.

But in general, communication in Matrix doesn't feel as "snappy" as on IRC or even Signal. It's hard to quantify this without instrumenting a full latency test bed (for example with the tools I used in the terminal emulator latency tests), but even just typing in a web browser feels slower to me than typing in an xterm or Emacs.

Even in conversations, I "feel" like people don't respond as fast. In fact, this could be an interesting double-blind experiment: have people guess whether they are talking to a person on Matrix, XMPP, or IRC, for example. My theory would be that people could notice that Matrix users are slower, if only because of the extra round-trips each message has to make.


Some courageous person actually ran tests of various messaging platforms on a congested network. His evaluation was basically:

  • Briar: uses Tor, so unusable except locally
  • Matrix: "struggled to send and receive messages", joining a room takes forever as it has to sync all history, "took 20-30 seconds for my messages to be sent and another 20 seconds for further responses"
  • XMPP: "worked in real-time, full encryption, with nearly zero lag"

So that was interesting. I suspect IRC would have also fared better, but that's just a feeling.

Other improvements to the transport layer include support for websocket and the CoAP proxy work from 2019 (targeting 100bps links), but both seem stalled at the time of writing. The Matrix people have also announced the pinecone p2p overlay network which aims at solving large, internet-scale routing problems. See also this talk at FOSDEM 2022.


Onboarding and workflow

The workflow for joining a room, when you use Element web, is not great:

  1. click on a link in a web browser
  2. land on (say) https://matrix.to/#/#matrix-dev:matrix.org
  3. it offers "Element"; yeah, that sounds great, let's click "Continue"
  4. land on https://app.element.io/#/room%2F%23matrix-dev%3Amatrix.org and then you need to register, aaargh

As you might have guessed by now, there is a specification to solve this, but web browsers need to adopt it as well, so it's far from actually solved. At least browsers generally know about the matrix: scheme; it's just not exactly clear what they should do with it, especially when the handler is just another web page (e.g. Element web).

In general, when compared with tools like Signal or WhatsApp, Matrix doesn't fare so well in terms of user discovery. I probably have some of my normal contacts that have a Matrix account as well, but there's really no way to know. It's kind of creepy when Signal tells you "this person is on Signal!" but it's also pretty cool that it works, and they actually implemented it pretty well.

Registration is also less obvious: in Signal, the app confirms your phone number automatically. It's frictionless and quick. In Matrix, you need to learn about home servers, pick one, register (with a password! aargh!), and then set up encryption keys (not the default), etc. It's a lot more friction.

And look, I understand: giving away your phone number is a huge trade-off. I don't like it either. But it solves a real problem and makes encryption accessible to a ton more people. Matrix does have "identity servers" that can serve that purpose, but I don't feel confident sharing my phone number there. It doesn't help that the identity servers don't have private contact discovery: giving them your phone number is a more serious security compromise than with Signal.

There's a catch-22 here too: because no one feels like giving away their phone number, no one does, and everyone assumes that stuff doesn't work anyway. Like it or not, Signal forcing people to divulge their phone number gives them the critical mass that means a lot of my relatives are on Signal, and I don't have to install crap like WhatsApp to talk with them.

5 minute clients evaluation

Throughout all my tests I evaluated a handful of Matrix clients, mostly from Flathub because almost none of them are packaged in Debian.

Right now I'm using Element, the flagship client from Matrix.org, in a web browser window, with the PopUp Window extension. This makes it look almost like a native app, and opens links in my main browser window (instead of a new tab in that separate window), which is nice. But I'm tired of buying memory to feed my web browser, so this indirection has to stop. Furthermore, I'm often getting completely logged off from Element, which means re-logging in, recovering my security keys, and reconfiguring my settings. That is extremely annoying.

Coming from Irssi, Element is really "GUI-y" (pronounced "gooey"). Lots of clickety happening. To mark conversations as read, in particular, I need to click-click-click on all the tabs that have some activity. There's no "jump to latest message" or "mark all as read" functionality as far as I could tell. In Irssi the former is built-in (alt-a) and I made a custom /READ command for the latter:

/ALIAS READ script exec \$_->activity(0) for Irssi::windows

And yes, that's a Perl script in my IRC client. I am not aware of any Matrix client that does stuff like that, except maybe Weechat, if we can call it a Matrix client, or Irssi itself, now that it has a Matrix plugin (!).

As for other clients, I have looked through the Matrix Client Matrix (confusing right?) to try to figure out which one to try, and, even after selecting Linux as a filter, the chart is just too wide to figure out anything. So I tried those, kind of randomly:

  • Fractal
  • Mirage
  • Nheko
  • Quaternion

Unfortunately, I lost my notes on those, so I don't actually remember which one did what. I still have a session open with Mirage, so I guess that means it's the one I preferred, but I remember they were all also very GUI-y.

Maybe I need to look at weechat-matrix or gomuks. At least Weechat is scriptable so I could continue playing the power-user. Right now my strategy with messaging (and that includes microblogging like Twitter or Mastodon) is that everything goes through my IRC client, so Weechat could actually fit well in there. Going with gomuks, on the other hand, would mean running it in parallel with Irssi or ... ditching IRC, which is a leap I'm not quite ready to take just yet.

Oh, and basically none of those clients (except Nheko and Element) support VoIP, which is still kind of a second-class citizen in Matrix. It does not support large multimedia rooms, for example: Jitsi was used for FOSDEM instead of Matrix's native videoconferencing system.


Bots

This falls a little outside the "usability" section, but I didn't know where else to put it... There are a few Matrix bots out there, and you are likely going to be able to replace your existing bots with Matrix bots. It's true that IRC has a long and impressive history with lots of bots doing various things, but given how young Matrix is, there's still a good variety:

  • maubot: generic bot with tons of usual plugins like sed, dice, karma, xkcd, echo, rss, reminder, translate, react, exec, gitlab/github webhook receivers, weather, etc
  • opsdroid: framework to implement "chat ops" in Matrix, connects with Matrix, GitHub, GitLab, Shell commands, Slack, etc
  • matrix-nio: another framework, used to build lots more bots like:
    • hemppa: generic bot with various functionality like weather, RSS feeds, calendars, cron jobs, OpenStreetmaps lookups, URL title snarfing, wolfram alpha, astronomy pic of the day, Mastodon bridge, room bridging, oh dear
    • devops: ping, curl, etc
    • podbot: play podcast episodes from AntennaPod
    • cody: Python, Ruby, Javascript REPL
    • eno: generic bot, "personal assistant"
  • mjolnir: moderation bot
  • hookshot: bridge with GitLab/GitHub
  • matrix-monitor-bot: latency monitor

One thing I haven't found an equivalent for is Debian's MeetBot. There's an archive bot but it doesn't have topics or a meeting chair, or HTML logs.

Working on Matrix

As a developer, I find Matrix kind of intimidating. The specification is huge. The official specification itself looks somewhat digestible: it's only 6 APIs, so that looks, at first, kind of reasonable. But whenever you start asking complicated questions about Matrix, you quickly fall into the Matrix Spec Change specification (which, yes, is a separate specification). And there are literally hundreds of MSCs flying around. It's hard to tell what's been adopted and what hasn't, and even harder to figure out whether your specific client has implemented it.

(One trendy answer to this problem is to "rewrite it in rust": Matrix are working on implementing a lot of those specifications in a matrix-rust-sdk that's designed to take the implementation details away from users.)

Just taking the latest weekly Matrix report, you find that three new MSCs were proposed just last week! There's even a graph showing that the number of MSCs is growing steadily, at 600+ proposals total, with the majority (300+) "new". I would guess the "merged" ones are at about 150.

That's a lot of text, and it includes stuff like 3D worlds which, frankly, I don't think you should be working on when you have such important security and usability problems. (The internet as a whole, arguably, doesn't fare much better. RFC 600 is a really obscure discussion about "INTERFACING AN ILLINOIS PLASMA TERMINAL TO THE ARPANET". Maybe that's how many MSCs will end up as well, left forgotten in the pits of history.)

And that's the thing: maybe the Matrix people have a different objective than I have. They want to connect everything to everything, and make Matrix a generic transport for all sorts of applications, including virtual reality, collaborative editors, and so on.

I just want secure, simple messaging. Possibly with good file transfers, and video calls. That it works with existing stuff is good, and it should be federated to remove the "Signal point of failure". So I'm a bit worried with the direction all those MSCs are taking, especially when you consider that clients other than Element are still struggling to keep up with basic features like end-to-end encryption or room discovery, never mind voice or spaces...


Overall, Matrix is roughly where XMPP was a few years ago. It has a ton of features, pretty good clients, and a large community. It seems to have gained some of the momentum that XMPP has lost. It may have the most potential to replace Signal if something bad were to happen to it (like, I don't know, getting banned or going nuts with cryptocurrency)...

But it's really not there yet, and I don't see Matrix trying to get there either, which is a bit worrisome.

Looking back at history

I'm also worried that we are repeating the errors of the past. The history of federated services is really fascinating: IRC, FTP, HTTP, and SMTP were all created in the early days of the internet, and all of them are still around (except, arguably, FTP, which was recently removed from major browsers). All of them had to face serious challenges in growing their federation.

IRC had numerous conflicts and forks, at both the technical and the political level. The history of IRC is really something anyone working on a federated system should study in detail, because they are bound to repeat the same mistakes if they are not familiar with it. The "short" version is:

  • 1988: Finnish researcher publishes first IRC source code
  • 1989: 40 servers worldwide, mostly universities
  • 1990: EFnet ("eris-free network") fork, which blocked the open relay server Eris - followers of Eris formed A-net, which promptly dissolved itself, leaving only EFnet
  • 1992: Undernet fork, which offered authentication ("services"), routing improvements and timestamp-based channel synchronisation
  • 1994: DALnet fork, from Undernet, again on a technical disagreement
  • 1995: Freenode founded
  • 1996: IRCnet forks from EFnet, following a flame war of historical proportion, splitting the network between Europe and the Americas
  • 1997: Quakenet founded
  • 1999: (XMPP founded)
  • 2001: 6 million users, OFTC founded
  • 2002: DALnet peaks at 136,000 users
  • 2003: IRC as a whole peaks at 10 million users, EFnet peaks at 141,000 users
  • 2004: (Facebook founded), Undernet peaks at 159,000 users
  • 2005: Quakenet peaks at 242,000 users, IRCnet peaks at 136,000 (Youtube founded)
  • 2006: (Twitter founded)
  • 2009: (WhatsApp, Pinterest founded)
  • 2010: (TextSecure AKA Signal, Instagram founded)
  • 2011: (Snapchat founded)
  • ~2013: Freenode peaks at ~100,000 users
  • 2016: IRCv3 standardisation effort started (TikTok founded)
  • 2021: Freenode self-destructs, Libera chat founded
  • 2022: Libera peaks at 50,000 users, OFTC peaks at 30,000 users

(The numbers were taken from the Wikipedia page and Netsplit.de. Note that I also include the launches of other networks and services in parentheses for context.)

Pretty dramatic, don't you think? Eventually, somehow, IRC became irrelevant for most people: few people are even aware of it now. With fewer than a million active users, it's smaller than Mastodon, XMPP, or Matrix at this point.1 If I were to venture a guess, I'd say that infighting, the lack of a standardization body, and a somewhat annoying protocol meant the network could not grow. It's also possible that the decentralised yet centralised structure of IRC networks limited their reliability and growth.

But large social media companies have also taken over the space: observe how IRC numbers peaked around the time the wave of large social media companies emerged, especially Facebook (2.9B users!!) and Twitter (400M users).

Where the federated services are in history

Right now, Matrix and Mastodon (and email!) are at the "pre-EFnet" stage: anyone can join the federation. Mastodon has started working on a global block list of fascist servers, which is interesting, but it's still an open federation. Matrix is also totally open, but matrix.org publishes a (federated) block list of hostile servers (#matrix-org-coc-bl:matrix.org, yes, of course it's a room).

Interestingly, email is also at that stage: there are block lists of spammers, and it's an arms race between blockers and spammers. The large email providers, obviously, are getting closer to the EFnet stage: you could argue they only accept email from themselves or between themselves. It's getting increasingly hard to deliver mail to Outlook and Gmail, for example, partly because of bias against small providers, but also because they rely more and more on machine-learning tools to sort email, and those systems are, fundamentally, unknowable. It's not quite the same as splitting the federation the way EFnet did, but the effect is similar.

HTTP has somehow managed to live in a parallel universe, as it's technically still completely federated: anyone can start a web server if they have a public IP address and anyone can connect to it. The catch, of course, is how you find the darn thing. Which is how Google became one of the most powerful corporations on earth, and how they became the gatekeepers of human knowledge online.

I have only briefly mentioned XMPP here, and my XMPP fans will undoubtedly comment on that, but I think it's somewhere in the middle of all of this. It was co-opted by Facebook and Google, and both corporations have abandoned it to its fate. I remember fondly the days where I could do instant messaging with my contacts who had a Gmail account. Those days are gone, and I don't talk to anyone over Jabber anymore, unfortunately. And this is a threat that Matrix still has to face.

It's also the threat email is currently facing. On the one hand, corporations like Facebook want to completely destroy it, and they have mostly succeeded: many people keep an email account only to register for things, and talk to their friends over Instagram or (lately) TikTok (which, I know, is not Facebook, but they started that fire).

On the other hand, you have corporations like Microsoft and Google who still use and provide email services — because, frankly, you still do need email for stuff, just like fax is still around — but they are more and more isolated in their own silos. At this point, it's only a matter of time before they reach critical mass and just decide that the risk of allowing external mail in is not worth the cost. They'll simply flip the switch and work on an allow-list principle. Then we'll have closed the loop and email will be dead, just like IRC is "dead" now.

I wonder which path Matrix will take. Could it liberate us from these vicious cycles?

  1. According to Wikipedia, there are currently about 500 distinct IRC networks operating, on about 1,000 servers, serving over 250,000 users. In contrast, Mastodon seems to be around 5 million users, Matrix.org claimed at FOSDEM 2021 to have about 28 million globally visible accounts, and Signal lays claim to over 40 million souls. XMPP claims to have "millions" of users on the xmpp.org homepage but the FAQ says they don't actually know. On the proprietary silo side of the fence, this page says

    • Facebook: 2.9 billion users
    • WhatsApp: 2B
    • Instagram: 1.4B
    • TikTok: 1B
    • Snapchat: 500M
    • Pinterest: 480M
    • Twitter: 397M

    Notable omission from that list: Youtube, with its mind-boggling 2.6 billion users...

    Those are not the kind of numbers you just "need to convince a brother or sister" to grow the network...

20 June, 2022 08:00PM

Niels Thykier

wrap-and-sort with experimental support for comments in devscripts/2.22.2

In the devscripts package currently in Debian testing (2.22.2), wrap-and-sort has experimental support for preserving comments in deb822 control files such as debian/control and debian/tests/control. For now, this is an opt-in feature, to provide some exposure without breaking anything.

To use the feature, add --experimental-rts-parser to the command line. A concrete example being (adjust to your relevant style):

wrap-and-sort --experimental-rts-parser -tabk

Please provide relevant feedback to #820625 if you have any. If you experience issues, please remember to provide the original control file along with the concrete command line used.

As hinted above, the option is a temporary measure and will be removed again once the testing phase is over, so please do not put it into scripts or packages. For the same reason, wrap-and-sort will emit a slightly annoying warning when using the option.

Enjoy. 🙂

20 June, 2022 08:00PM by Niels Thykier

John Goerzen

Pipe Issue Likely a Kernel Bug

Saturday, I wrote in Pipes, deadlocks, and strace annoyingly fixing them about an issue where a certain pipeline seems to have a deadlock. I described tracing it into kernel code. Indeed, it appears to be kernel bug 212295, which has had a patch for over a year that has never been merged.

After continuing to dig into the issue, I eventually reported it as a bug in ZFS. One of the ZFS people connected this to an older issue my searching hadn’t uncovered.

rincebrain summarized:

I believe, if I understand the bug correctly, it only triggers if you F_SETPIPE_SZ when the writer has put nonzero but not a full unit’s worth in yet, which is why the world isn’t on fire screaming about this – you need to either have a very slow but nonzero or otherwise very strange write pattern to hit it, which is why it doesn’t come up in, say, the CI or most of my testbeds, but my poor little SPARC (440 MHz, 1c1t) and Raspberry Pis were not so fortunate.
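For reference, F_SETPIPE_SZ is the Linux fcntl command that resizes a pipe's buffer, and the trigger described above is calling it while the pipe already holds a partial write. A minimal sketch of that sequence (Linux-only; the numeric fallbacks are the constants from linux/fcntl.h, which Python's fcntl module exposes by name on 3.10+):

```python
import fcntl
import os
import sys

# Linux-specific fcntl commands; fall back to the raw values from
# linux/fcntl.h on Python versions that don't define them.
F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

if sys.platform == "linux":
    r, w = os.pipe()
    os.write(w, b"x" * 100)              # nonzero, but far less than a page
    old_size = fcntl.fcntl(w, F_GETPIPE_SZ)       # typically 65536
    # Resize the pipe while it holds that partial write -- the state
    # rincebrain describes as the trigger for the kernel bug.
    new_size = fcntl.fcntl(w, F_SETPIPE_SZ, 1 << 20)
    assert os.read(r, 100) == b"x" * 100          # data survives the resize
```

On a healthy kernel the pending data survives the resize, which is what makes the bug so hard to notice: the failure only shows up under the slow-writer timing described in the quote.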

You might recall in Saturday’s post that I explained that Filespooler reads a few bytes from the gpg/zstdcat pipeline before spawning and connecting it to zfs receive. I think this is the critical piece of the puzzle; it makes it much more likely to encounter the kernel bug. zfs receive calls F_SETPIPE_SZ when it starts. Let’s look at how this could be triggered:

In the pre-Filespooler days, the gpg|zstdcat|zfs pipeline was all being set up at once. There would be no data sent to zfs receive until gpg had initialized and begun to decrypt the data, and then zstdcat had begun to decompress it. Those things almost certainly took longer than zfs receive’s initialization, meaning that usually F_SETPIPE_SZ would have been invoked before any data entered the pipe.

After switching to Filespooler, the particular situation here has Filespooler reading somewhere around 100 bytes from the gpg|zstdcat part of the pipeline before ever invoking zfs receive. zstdcat generally emits more than 100 bytes at a time. Therefore, when Filespooler invokes zfs receive and hooks the pipeline up to it, it has a very high chance of there already being data in the pipeline when zfs receive uses F_SETPIPE_SZ. This means that the chances of encountering the conditions that trigger the particular kernel bug are also elevated.
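The Filespooler pattern described above (spawn the producer, read a bit from it, then hand the same pipe to the consumer) can be sketched like this; the commands here are harmless stand-ins I made up for the real gpg|zstdcat producer and zfs receive consumer:

```python
import subprocess
import sys

# Stand-in producer writing 4096 bytes in a single atomic write(2)
# (the real producer is bash -c 'gpg -q -d | zstdcat -T0').
producer = subprocess.Popen(
    [sys.executable, "-c", "import os; os.write(1, b'x' * 4096)"],
    stdout=subprocess.PIPE, bufsize=0)

# Peek at the stream before the consumer even exists, as Filespooler does.
header = producer.stdout.read(100)

# Now hook the *same* pipe up as the consumer's stdin (stand-in for
# zfs receive). By this point the producer may already have data sitting
# in the pipe, which is exactly the state in which zfs receive's
# F_SETPIPE_SZ call can hit the kernel bug.
consumer = subprocess.Popen(["wc", "-c"], stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()      # so the consumer sees EOF when the producer exits
out, _ = consumer.communicate()
print(len(header), int(out))
```

The consumer only ever sees the bytes that were left in the pipe after the peek, which is why the stream handed to zfs receive is already "in flight" when it starts up.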

ZFS is integrating a patch to no longer use F_SETPIPE_SZ in zfs receive. I have applied that on my local end to see what happens, and hopefully in a day or two will know for sure if it resolves things.

In the meantime, I hope you enjoyed this little exploration. It resulted in a new bug report to Rust as well as digging up an existing kernel bug. And, interestingly, no bugs in filespooler. Sometimes the thing that changed isn’t the source of the bug!

20 June, 2022 04:31PM by John Goerzen

Jamie McClelland

A very liberal spam assassin rule

I just sent myself a test message via Powerbase (a hosted CiviCRM project for community organizers) and it didn’t arrive. Wait, nope, there it is in my junk folder with a spam score of 6!

X-Spam-Status: Yes, score=6.093 tagged_above=-999 required=5
	tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1,
	T_SCC_BODY_TEXT_LINE=-0.01] autolearn=no autolearn_force=no

What just happened?

A careful look at the scores suggests that the KAM_WEBINAR and KAM_WEBINAR2 rules killed me. I’d never heard of them (this email came through a system I’m not administering). So, I did some searching and found a page with the rules:

header   __KAM_WEBINAR1 From =~ /education|career|manage|learning|webinar|project|efolder/i
header   __KAM_WEBINAR2 Subject =~ /last chance|increase productivity|workplace morale|payroll dept|trauma.training|case.study|issues|follow.up|service.desk|vip.(lunch|breakfast)|manage.your|private.business|professional.checklist|customers.safer|great.timesaver|prep.course|crash.course|hunger.to.learn|(keys|tips).(to|for).smarter/i
header   __KAM_WEBINAR3 Subject =~ /webinar|strateg|seminar|owners.meeting|webcast|our.\d.new|sales.video/i
body     __KAM_WEBINAR4 /executive.education|contactid|register now|\d+.minute webinar|management.position|supervising.skills|discover.tips|register.early|take.control|marketing.capabilit|drive.more.sales|leveraging.cloud|solution.provider|have.a.handle|plan.to.divest|being.informed|upcoming.webinar|spearfishing.email|increase.revenue|industry.podcast|\d+.in.depth.tips|early.bird.offer|pmp.certified|lunch.briefing/i

meta     KAM_WEBINAR (__KAM_WEBINAR1 + __KAM_WEBINAR2 + __KAM_WEBINAR3 + __KAM_WEBINAR4 >= 3)
describe KAM_WEBINAR Spam for webinars
score    KAM_WEBINAR 3.5

meta     KAM_WEBINAR2 (__KAM_WEBINAR1 + __KAM_WEBINAR2 + __KAM_WEBINAR3 + __KAM_WEBINAR4 >= 4)
describe KAM_WEBINAR2 Spam for webinars
score    KAM_WEBINAR2 3.5

For those of you who don’t care to parse those regular expressions, here’s a summary:

  • There are four tests. If you fail 3 or more, you get 3.5 points, if you fail 4 you get another 3.5 points (my email failed all 4).
  • Here is how I failed them:
    • The from address can’t have a bunch of words, including “project.” My from address includes my organization’s name: The Progressive Technology Project.
    • The subject line cannot include a number of strings, including “last chance.” My subject line was “Last chance to register for our webinar.”
    • The subject line cannot include a number of other strings, including “webinar” (and also webcast and even strategy). My subject line was “Last chance to register for our webinar.”
    • The body of the message cannot include a bunch of strings, including “register now.” Well, you won’t be surprised to know that my email contained the string “Register now.”
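To see those failures concretely, the sub-tests can be checked directly with Python's re module. The patterns below are abbreviated to the alternatives that fire here (the full versions are in the rule listing above), and the From address is a hypothetical stand-in:

```python
import re

subject = "Last chance to register for our webinar"
body = "Register now for this event."                      # stand-in body text
from_addr = "The Progressive Technology Project <info@example.org>"  # hypothetical

# The four sub-tests, case-insensitive like the originals.
webinar1 = re.search(r"education|career|manage|learning|webinar|project|efolder",
                     from_addr, re.I)                      # "Project" matches
webinar2 = re.search(r"last chance|increase productivity|workplace morale",
                     subject, re.I)                        # "Last chance" matches
webinar3 = re.search(r"webinar|strateg|seminar|webcast",
                     subject, re.I)                        # "webinar" matches
webinar4 = re.search(r"executive.education|contactid|register now",
                     body, re.I)                           # "Register now" matches

hits = sum(bool(m) for m in (webinar1, webinar2, webinar3, webinar4))
# 3+ hits scores 3.5, and 4 hits scores another 3.5, per the summary above.
score = (3.5 if hits >= 3 else 0.0) + (3.5 if hits >= 4 else 0.0)
print(hits, score)
```

All four sub-tests match, for 7 extra spam points: exactly enough to push an otherwise innocuous message over the usual threshold of 5.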

Hm. I’m glad I can now fix our email, but this doesn’t work so well for people with a name that includes “project” that like to organize webinars for which you have to register.

20 June, 2022 12:27PM

June 19, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#38: Faster Feedback Systems

Engineers build systems. Good engineers always stress and focus on the efficiency of these systems.

Two recent examples of such engineering thinking follow. The first was in a video / podcast interview I came across recently with Martin Thompson, a noted high-performance code expert. The overall focus of the hour-long interview is on ‘managing software complexity’. Around minute twenty-two, the conversation turns to feedback loops and systems, and a strong preference for simple and fast systems that give more immediate feedback. An important topic indeed.

The second example connects to this and permeates many tweets and other writings by Erik Bernhardsson. He had an earlier 2017 post on ‘Optimizing for iteration speed’, as well as a 17 May 2022 tweet on minimizing feedback loop size, a 28 Mar 2022 tweet reply on shorter feedback loops, a 14 Feb 2022 post on problems with slow feedback loops, a 13 Jan 2022 post on prioritizing tighter feedback loops, and lastly a 23 Jul 2021 post on fast feedback cycles. You get the idea: Erik really digs faster feedback loops. Nobody likes to wait: immediacy wins each time.

A few years ago, I had touched on this topic with two posts on how to make (R) package compilation (and hence installation) faster. One idea (which I still use whenever I must compile) was in post #11 on caching compilation. Another was in post #13: make it faster by not doing it, in this case via binary installation, which skips the need for compilation (and which is what I aim for with, say, CI dependencies). Several subsequent posts can be found by scrolling down the r^4 blog section: we stressed the use of the amazing Rutter PPA ‘c2d4u’ for CRAN binaries (often via Rocker containers), the (post #28) promise of RSPM, and the (post #29) awesomeness of bspm. And then in the more recent post #34 from last December we got back to a topic which ties all these things together: dependencies. We quoted Mies van der Rohe: Less is more. Especially when it comes to dependencies, as these elongate the feedback loop and thereby delay feedback.

Our most recent post #37 on r2u connects these dots. Access to a complete set of CRAN binaries with full dependency resolution accelerates use and installation. This of course also covers testing and continuous integration. Why wait minutes to recompile the same packages over and over when you can install the full Tidyverse in 18 seconds, or the brms package and all it needs in 13 seconds, as shown in the two gifs on the r2u documentation site?

You can even power up the example setup of the second gif via this gitpod link, giving you a full Ubuntu 22.04 session in your browser to try this: so go forth and install something from CRAN with ease! The benefit of a system such as our r2u CRAN binaries is clear: faster feedback loops. This holds whether you work with few or many dependencies, tiny or tidy. Faster matters, and feedback can be had sooner.

And with the title of this post we now get a rallying cry to advocate for faster feedback systems: “FFS”.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 June, 2022 03:46PM

Debian Community News

Thiemo Seufer & Debian deaths: examining accidents and suicides

Thiemo Seufer, Debian

There have been two headline suicides in Debian: the suicide of the founder, Ian Murdock and the Frans Pop suicide that appears to have been planned for the anniversary of Debian's founding, Debian Day.

The significance of Frans' actions cannot be overstated. His life and death were intertwined with the project itself. This raises the most serious questions: have there been other suicides or incidents of self-harm? What impact does a suicide like that have on an organization's culture? Even though it was hidden on debian-private, everybody knew something about it.

Therefore, there is a strong case for having a fresh look at other deaths even though doing so may cause some pain for their friends and family.

Not long before the suicide of Frans Pop, we heard that Thiemo Seufer died in a car accident. We were told it was a collision with another vehicle.

Subject: Thiemo Seufer
Date: Fri, 26 Dec 2008 19:20:03 +0100
From: Martin Michlmayr <tbm@cyrius.com>
To: debian-private@lists.debian.org
CC: aurelien@aurel32.net

I'm sorry to inform you that Thiemo Seufer died in a car accident
this morning.  I was told that a big, fast moving car collided with
his car, forcing his car from the high way.

Thiemo was the lead maintainer of our MIPS ports and a great
person.  His death is a great loss to us all.

In sadness,
Martin Michlmayr

Moreover, this death occurred in the period spanning the Sven Luther lynching and the Debian Day suicide. Various people wrote messages on the debian-private (leaked) gossip network revealing that they had problems with emotions and sleep.

Stress and related mental health issues are a definite factor in accidents, just as they are factors in suicides.

Seufer's accident was in Germany and therefore any facts about the accident and cause of death are considered private. In many countries, the coroner publishes a public report but in Germany, the coroner gives the details to the family and the family can decide which facts they are comfortable sharing.

It would be very offensive to speculate that Seufer's death was another Debian suicide. Nonetheless, in the interests of a healthy community, it would be useful to rule it out and understand the accident fully by asking somebody to review and maybe publish the coroner's report.

Suicide is not the only situation we need to contemplate in any Debian-related deaths. Some accidents are entirely spontaneous but many accidents are associated with environmental factors. For example, people who work a night shift are three hundred percent more likely to have a car crash. There is anecdotal evidence that many Debian Developers are up at night answering long email chains.

We read some social media posts suggesting that Seufer was in his car and his car was hit from behind by another vehicle. This type of accident is not uncommon. From an insurance perspective, the other driver is entirely at fault unless they can prove otherwise. Nonetheless, we found that Seufer had been working on Christmas Day, and we began to consider scenarios where he may have fallen asleep at an intersection. As the accident occurred on one of the shortest days of the year in cold weather, if it happened in the very early hours of the morning or any time before sunrise, it is quite possible the other driver saw a green light and didn't see Seufer's stationary car.

It may be impossible to confirm if Seufer had fallen asleep, on the other hand, the coroner's report may tell us if the other driver had seen a green light approaching the intersection.

We don't wish to speculate about Seufer's competence as a driver, we only seek to understand if there is a possibility that this accident was intertwined with Debian workloads.

In the weeks before his death, Debian had been discussing and voting on two general resolutions. These discussions eroded time for normal rest and increased the risk of accidents for many developers.

19 June, 2022 11:00AM

John Goerzen

Pipes, deadlocks, and strace annoyingly fixing them

This is a complex tale I will attempt to make simple(ish). I’ve (re)learned more than I cared to about the details of pipes, signals, and certain system calls – and the solution is still elusive.

For some time now, I have been using NNCP to back up my files. These backups are sent to my backup system, which effectively does this to process them (each ZFS send is piped to a shell script that winds up running this):

gpg -q -d | zstdcat -T0 | zfs receive -u -o readonly=on "$STORE/$DEST"

This processes tens of thousands of zfs sends per week. Recently, having written Filespooler, I switched to sending the backups using Filespooler over NNCP. Now fspl (the Filespooler executable) opens the file for each stream and then connects it to what amounts to this pipeline:

bash -c 'gpg -q -d 2>/dev/null | zstdcat -T0' | zfs receive -u -o readonly=on "$STORE/$DEST"

Actually, to be more precise, it spins up the bash part of it, reads a few bytes from it, and then connects it to the zfs receive.

And this works well — almost always. In something like 1/1000 of the cases, it deadlocks, and I still don’t know why. But I can talk about the journey of trying to figure it out (and maybe some of you will have some ideas).

Filespooler is written in Rust, and uses Rust’s Command system. Effectively what happens is this:

  1. The fspl process has a File handle, which after forking but before invoking bash, it dup2’s to stdin.
  2. The connection between bash and zfs receive is a standard Unix pipe.

I cannot get the problem to duplicate when I run the entire thing under strace -f. So I am left trying to peek at it from the outside. What happens if I try to attach to each component with strace -p?

  • bash is blocking in wait4(), which is expected.
  • gpg is blocking in write().
  • If I attach to zstdcat with strace -p, then all of a sudden the deadlock is cleared and everything resumes and completes normally.
  • Attaching to zfs receive with strace -p causes no output at all from strace for a few seconds, then zfs just writes “cannot receive incremental stream: incomplete stream” and exits with error code 1.

So the plot thickens! Why would connecting to zstdcat and zfs receive cause them to actually change behavior? strace works by using the ptrace system call, and ptrace in a number of cases requires sending SIGSTOP to a process. In a complicated set of circumstances, a system call may return EINTR when a SIGSTOP is received, with the idea that the system call should be retried. I can’t see, from either zstdcat or zfs, if this is happening, though.

So I thought, “how about having Filespooler manually copy data from bash to zfs receive in a read/write loop instead of having them connected directly via a pipe?” That is, there would be two pipes going there: one where Filespooler reads from the bash command, and one where it writes to zfs. If nothing else, I could instrument it with debugging.

And so I did, and I found that when it deadlocked, it was deadlocking on write — but with no discernible pattern as to where or when. So I went back to directly connected.

In analyzing straces, I found a Rust bug which I reported in which it is failing to close the read end of a pipe in the parent post-fork. However, having implemented a workaround for this, it doesn’t prevent the deadlock so this is orthogonal to the issue at hand.
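The Rust bug mentioned (the parent keeping a pipe end open post-fork) matters because a reader only sees EOF once every copy of the write end is closed. A quick Python illustration of why that close is essential:

```python
import os
import subprocess
import sys

r, w = os.pipe()
# Child inherits w as its stdout, writes four bytes, and exits.
child = subprocess.Popen(
    [sys.executable, "-c", "import os; os.write(1, b'done')"], stdout=w)
# The Rust bug was the moral equivalent of forgetting this close: with a
# write end still open in the parent, the second os.read() below would
# block forever instead of returning b'' (EOF).
os.close(w)
child.wait()
data = os.read(r, 100)       # the four bytes the child wrote
eof = os.read(r, 100)        # b'': EOF, only because w was closed above
print(data, eof)
```

As the post notes, working around this in Filespooler didn't cure the deadlock, so the leaked descriptor is a real bug but orthogonal to the kernel issue.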

Among the two strange things here are things returning to normal when I attach strace to zstdcat, and things crashing when I attach strace to zfs. I decided to investigate the latter.

It turns out that the ZFS code that is reading from stdin during zfs receive is in the kernel module, not userland. Here is the part that is triggering the “incomplete stream” error:

                int err = zfs_file_read(fp, (char *)buf + done,
                    len - done, &resid);
                if (resid == len - done) {
                        /*
                         * Note: ECKSUM or ZFS_ERR_STREAM_TRUNCATED indicates
                         * that the receive was interrupted and can
                         * potentially be resumed.
                         */
                        err = SET_ERROR(ZFS_ERR_STREAM_TRUNCATED);
                }

resid is an output parameter holding the number of bytes remaining after a short read, so in this case, if the read produced zero bytes, the code sets that error. What’s zfs_file_read, then?

It boils down to a thin wrapper around kernel_read(). This winds up calling __kernel_read(), which calls read_iter on the pipe, which is pipe_read(). That’s where I don’t have the knowledge to get into the weeds right now.
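In user-space terms, the resid check amounts to: if a read in the middle of a stream returns zero bytes, the stream was truncated. A minimal Python analog of that loop (read_full is a made-up helper name, not anything from ZFS):

```python
import os

def read_full(fd, n):
    """Read exactly n bytes from fd. A zero-byte read before n bytes
    arrive (resid == len - done, in the kernel code's terms) means the
    stream was cut short."""
    done = b""
    while len(done) < n:
        chunk = os.read(fd, n - len(done))
        if not chunk:                    # EOF mid-stream: truncated
            raise EOFError("incomplete stream")
        done += chunk
    return done

r, w = os.pipe()
os.write(w, b"hello")
os.close(w)                              # writer goes away mid-stream
assert read_full(r, 5) == b"hello"       # full read succeeds
try:
    read_full(r, 1)                      # next read hits EOF immediately
except EOFError as e:
    print(e)
```

This is also why attaching strace to zfs receive produced the "incomplete stream" error: the interrupted kernel read came back empty, which this path treats as a truncated stream.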

So it seems likely to me that the problem has something to do with zfs receive. But, what, and why does it only not work in this one very specific situation, and only so rarely? And why does attaching strace to zstdcat make it all work again? I’m indeed puzzled!

Update 2022-06-20: See the followup post which identifies this as likely a kernel bug and explains why this particular use of Filespooler made it easier to trigger.

19 June, 2022 03:46AM by John Goerzen

Free Software Fellowship

Wikileaks, Dickileaks & Ubuntu Underage girl

The UK has taken the next step towards extraditing Julian Assange to the US. This is a sad day for journalism.

With all the sex crimes revealed in Australia during 2021, we can't help feeling Assange has been a scapegoat.

Then again, we couldn't help thinking there is nothing new here.

How quickly people forget the St Kilda Schoolgirl.

Revealed: St Kilda Schoolgirl is NOT from St Kilda

As the girl was only 16, the press got into the habit of calling her the St Kilda Schoolgirl. In fact, she is not from St Kilda. She only got that name because of her relationships with the St Kilda Football Club.

The news we have to share today is that Julian Assange (Wikileaks) and the St Kilda Schoolgirl (Dickileaks) are both from the same place: FRANKSTON.

Identity exposed

After the girl turned 18, the media in both Australia and the UK started using her real name, Kim (Kimberley) Duthie. She has been married and some media now use her married name, Kimberley Ametoglou.

She was releasing a tell-all book in 2021. It is like the debian-private (leaked) gossip network, but for football.

We feel the use of her real name, after she turned 18, is an important point. The same thing happened in the case of Megan Stammers, the 15 year old who ran away with her maths teacher and Anja Xhakani, the 16 year old who was subsequently employed by her Ubuntu boyfriend in Albania.

As these women are all over 18 and their stories have a public interest dimension, there is some controversy over whether it is appropriate to identify them but nonetheless, the world has chosen to do so.

Dickileaks recap

The St Kilda Schoolgirl met players from Australia's top football league at a footy clinic. She was 16, he was 24, a lot like the girl who met Elio Qoshi when he was a Fedora Ambassador.

Anyway, the relationship didn't go to plan so she hacked nude photos from his computer and shared them online. These are nude photos of male footballers. It begs the question: do we call her the underage girl or the underage feminist? Probably both. For better or worse, she has done revenge porn in reverse.

Football management became involved. In other words, the player's manager slept with the girl too, they shared a hotel room on Valentine's Day.

Police became involved. In other words, the senior constable investigating these affairs also slept with the 16 year old girl.

Another eighty police officers were investigated for looking up her internal files on the super secure police computer network.

Anybody who doubts Australia has a rape problem should be over it by now.

The St Kilda Schoolgirl posted a single beach photo on social media and blew them all away.

While the US has taken great interest in Julian Assange, they would prefer that the St Kilda Schoolgirl remains down under so that no FBI agents will accidentally be seduced.

Here is the photo that has already been widely shared by the Australian and British media. If you wear a uniform, please keep it fastened.

Kim Duthie, Kimberley Ametoglou, beach photo

19 June, 2022 12:30AM

June 17, 2022

Dima Kogan

Ricoh GR IIIx 802.11 reverse engineering

I just got a fancy new camera: a Ricoh GR IIIx. It's pretty great, and I strongly recommend it to anyone who wants a truly pocketable camera with fantastic image quality and full manual controls. One annoyance is the connectivity. It does have both Bluetooth and 802.11, but the only official method of using them is some dinky closed phone app. This is silly. I just did some reverse-engineering, and I now have a functional shell script to download the last few images via 802.11. This is more convenient than plugging in a wire or pulling out the memory card. Fortunately, Ricoh didn't bend over backwards to make the reversing difficult, so to figure it out I didn't even need to download the phone app or sniff the traffic.

When you turn on the 802.11 on the camera, it says stuff about essid and password, so clearly the camera runs its own access point. Not ideal, but it's good enough. I connected and ran nmap to find hosts and open ports: only port 80 is open. Pointing curl at it yields some error, so I need to figure out the valid endpoints. I downloaded the firmware binary, and tried to figure out what's in it:

dima@shorty:/tmp$ binwalk fwdc243b.bin

3036150       0x2E53F6        Cisco IOS microcode, for "8"
3164652       0x3049EC        Certificate in DER format (x509 v3), header length: 4, sequence length: 5412
5472143       0x537F8F        Copyright string: "Copyright ("
6128763       0x5D847B        PARity archive data - file number 90
10711634      0xA37252        gzip compressed data, maximum compression, from Unix, last modified: 2022-02-15 05:47:23
13959724      0xD5022C        MySQL ISAM compressed data file Version 11
24829873      0x17ADFB1       MySQL MISAM compressed data file Version 4
24917663      0x17C369F       MySQL MISAM compressed data file Version 4
24918526      0x17C39FE       MySQL MISAM compressed data file Version 4
24921612      0x17C460C       MySQL MISAM compressed data file Version 4
24948153      0x17CADB9       MySQL MISAM compressed data file Version 4
25221672      0x180DA28       MySQL MISAM compressed data file Version 4
25784158      0x1896F5E       Cisco IOS microcode, for "\"
26173589      0x18F6095       MySQL MISAM compressed data file Version 4
28297588      0x1AFC974       MySQL ISAM compressed data file Version 6
28988307      0x1BA5393       MySQL ISAM compressed data file Version 3
28990184      0x1BA5AE8       MySQL MISAM index file Version 3
29118867      0x1BC5193       MySQL MISAM index file Version 3
29449193      0x1C15BE9       JPEG image data, JFIF standard 1.01
29522133      0x1C278D5       JPEG image data, JFIF standard 1.08
29522412      0x1C279EC       Copyright string: "Copyright ("
29632931      0x1C429A3       JPEG image data, JFIF standard 1.01
29724094      0x1C58DBE       JPEG image data, JFIF standard 1.01

The gzip chunk looks like what I want:

dima@shorty:/tmp$ tail -c+10711635 fwdc243b.bin > /tmp/tst.gz

dima@shorty:/tmp$ < /tmp/tst.gz gunzip | file -

/dev/stdin: ASCII cpio archive (SVR4 with no CRC)

dima@shorty:/tmp$ < /tmp/tst.gz gunzip > tst.cpio
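
Note the offset arithmetic: binwalk reported the gzip data at decimal offset 10711634, while tail -c+N starts at byte N counting from 1, hence the +1. A quick demonstration of that convention (my own example, not from the original post):

```shell
# tail -c+N emits the input starting at byte N, 1-indexed;
# binwalk offsets are 0-indexed, so the extraction uses offset+1.
printf 'abcdef' | tail -c+3   # prints "cdef"
```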

OK, we have some .cpio thing. It's plain-text. I grep around in it, looking for GET and POST and such, and I see various URI-looking things at /v1/..... Grepping for that, I see:

dima@shorty:/tmp$ strings tst.cpio | grep /v1/

GET /v1/debug/revisions
GET /v1/ping
GET /v1/photos
GET /v1/props
PUT /v1/params/device
PUT /v1/params/lens
PUT /v1/params/camera
GET /v1/liveview
GET /v1/transfers
POST /v1/device/finish
POST /v1/device/wlan/finish
POST /v1/lens/focus
POST /v1/camera/shoot
POST /v1/camera/shoot/compose
POST /v1/camera/shoot/cancel
GET /v1/photos/{}/{}
GET /v1/photos/{}/{}/info
PUT /v1/photos/{}/{}/transfer
/v1/changes message received.
/v1/changes issue event.
/v1/changes new websocket connection.
/v1/changes websocket connection closed. reason({})
/v1/transfers, transferState({}), afterIndex({}), limit({})

Jackpot. I pointed curl at most of these, and they do interesting things. Generally they all spit out JSON. /v1/liveview sends out a sequence of JPEG images. The thing I care about is /v1/photos/DIRECTORY/FILE and /v1/photos/DIRECTORY/FILE/info. The result is a script I just wrote to connect to the camera, download N images, and connect back to the original access point:


Kinda crude, but works for now. I'll improve it with time.
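
For reference, a downloader along these lines could be built from the endpoints above. This is a hypothetical sketch, not the actual script: the camera's address, the JSON layout, and the jq filter are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of a photo downloader for the camera's HTTP API.
# The endpoint paths come from the firmware strings dump; everything
# else (address, JSON field names, jq filter) is guessed.
CAM="http://192.168.0.1"

# Compose the download URL for one photo (directory + filename)
photo_url() {
    printf '%s/v1/photos/%s/%s' "$CAM" "$1" "$2"
}

# Download the latest N photos listed by /v1/photos
fetch_latest() {
    n="${1:-5}"
    curl -s "$CAM/v1/photos" |
        jq -r '.dirs[] | .name as $d | .files[] | "\($d) \(.)"' |
        tail -n "$n" |
        while read -r dir file; do
            echo "downloading $dir/$file"
            curl -s -o "$file" "$(photo_url "$dir" "$file")"
        done
}
```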

After I did this I found an old thread from 2015 where somebody was using an apparently-compatible camera, and wrote a fancier tool:


17 June, 2022 05:04AM by Dima Kogan

June 16, 2022

Debian Community News

Frans Pop & Debian suicide denial

Trigger warning: this page talks about suicide and nothing else

Debian announced the death of Frans Pop in 2010.

There were numerous comments on the debian-private (leaked) gossip network.

We want to focus on one feature of the death. Was it Debian-connected?

One unique feature of the suicide is that Pop sent a resignation note to debian-private on 15 August 2010.

It's time to say goodbye. I don't want to say too much about it, except that I've been planning this for a long time. ... So long, FJP

These are private emails but Chris Lamb, Enrico Zini, Jonathan Wiltshire and Joerg Jaspert declared that privacy means nothing in September 2018. They decided that attacking a volunteer at a time of grief is perfectly acceptable. Sam Hartman and Jonathan Carter have both done their part to force the Frans Pop case into the open where it belongs.

Pop took his life on 20 August 2010. Steve McIntyre received an email from Pop's parents the next day and shared the following with 1,000 developers on debian-private:

Yesterday morning our son Frans Pop has died. He took his own life, in a well-considered, courageous, and considerate manner. During the last years his main concern was his work for Debian. I would like to ask you to inform those members of the Debian community who knew him well.

A sub-thread emerged, discussing the phrase his main concern was his work for Debian. We share one example:

Subject: Re: Death of Frans Pop
Date: Sat, 21 Aug 2010 13:39:21 +0100
From: Colin Watson <cjwatson@debian.org>
To: debian-private@lists.debian.org

On Sat, Aug 21, 2010 at 01:52:33PM +0200, Ludovic Brenta wrote:
> Steve McIntyre <steve@einval.com> writes:
> > "Yesterday morning our son Frans Pop has died. He took his own life,
> > in a well-considered, courageous, and considerate manner. During the
> > last years his main concern was his work for Debian. I would like to
> > ask you to inform those members of the Debian community who knew him
> > well."
> Does that imply he took his own life *because* of Debian, which was "his
> main concern"?

This is probably the wrong thread for linguistics, but that phrase would
normally just indicate that Debian was his main interest.  In
http://oxforddictionaries.com/view/entry/m_en_gb0169810 under "noun",
this would be sense 2 rather than sense 1.

Colin Watson                                       [cjwatson@debian.org]

A similar reply came from Tim Retout a few minutes later. Both Colin Watson and Tim Retout tried to play down the connection of the death with Debian.

Let's consider the facts:

  • Pop had been involved in the intensely disturbing Debian failures connected to Sven Luther's resignation, 2006/2007
  • Luther and Pop had both been Debian Installer collaborators with Thiemo Seufer, who died in what was reported as a high-speed collision with another vehicle on the highway. Because of German privacy laws, nothing more is known about the cause of the collision or the official cause of death.
  • Pop's resignation note was written on 15 August, the night before Debian's anniversary. He was clearly thinking about taking his life on the anniversary of Debian's founding, Debian Day.

Debian is now publishing defamation on their bug tracker, their wiki, the keyring Git repository, changelogs, mailing lists and even the main page of the Debian web site. It is clear that Debian wants to force other volunteers through psychological anguish, just like the anguish that went through Frans Pop's mind from 16 to 20 August as he decided whether to go through with it or not.

Therefore, we have no choice other than releasing this stuff. Debian Community News will not stand by while more volunteers are pushed over the edge by debianism.

For more details and full copies of the emails, please see the documents submitted to WIPO (revised version now available).

If Debian was sincere in their sympathy for Frans Pop, they would never impose psychological tortures on any other developer. Their refusal to admit or even acknowledge the possibility of a connection with these deaths is the ultimate insult to Frans Pop, Lucy Wayland, other victims and their families.

We extracted a photo of Frans from the old FOSDEM videos, you can watch the full video here.

Frans Pop, Debian, suicide

16 June, 2022 11:30PM

Antoine Beaupré

building Debian packages under qemu with sbuild

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.


I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt.

I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots...

I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu.

The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.


Basically, you need this:

sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian

Then to make this used by default, add this to ~/.sbuildrc:

# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';

Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like this:

# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @qemu_options, '/srv/sbuild/qemu/%r-%a.img' ];

This configuration will:

  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be the default)
  4. tell autopkgtest to use qemu for builds and for tests

Note that the VM created by sbuild-qemu-create has an unlocked root account with an empty password.

Other useful tasks

Note that some of the commands below (namely the ones depending on sbuild-qemu-boot) assume you are running Debian 12 (bookworm) or later.

  • enter the VM to make test, changes will be discarded (thanks Nick Brown for the sbuild-qemu-boot tip!):

     sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img

    That program is shipped only with bookworm and later; an equivalent command is:

     qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img

    The key argument here is -snapshot.

  • enter the VM to make permanent changes, which will not be discarded:

     sudo sbuild-qemu-boot --read-write /srv/sbuild/qemu/unstable-amd64.img

    Equivalent command:

     sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
  • update the VM (thanks lavamind):

     sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
  • build in a specific VM regardless of the suite specified in the changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc):

     sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"

    Note that you'd also need to pass --autopkgtest-opts if you want autopkgtest to run in the correct VM as well:

     sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"

    You might also need parameters like --ram-size if you customized it above.

And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there.

Maybe the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test

When autopkgtest starts a VM, it uses this funky qemu commandline:

qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp: -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm

... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):

  • the shared/ directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
  • the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)

In other words, it's possible to access the VM with:

nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS0

The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.

Unification with libvirt

Those images created by autopkgtest can actually be used by libvirt to boot real, fully operational battle stations, sorry, virtual machines. But it needs some tweaking.

First, we need a snapshot image to work with, because we don't want libvirt to work directly on the pristine images created by autopkgtest:

  sudo qemu-img create -f qcow2 -o backing_file=/srv/sbuild/qemu/unstable-autopkgtest-amd64.img,backing_fmt=qcow2  /var/lib/libvirt/images/unstable-autopkgtest-amd64.img 10G
  sudo chown qemu-libvirt '/var/lib/libvirt/images/unstable-autopkgtest-amd64.img'

Then this VM can be adopted fairly normally in virt-manager. Note that it's possible that you can set that up through the libvirt XML as well, but I haven't quite figured it out.

One twist I found is that the "normal" networking doesn't seem to work anymore, possibly because I messed it up with vagrant. Using the bridge doesn't work either out of the box, but that can be fixed with the following sysctl changes:

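
A sketch of the kind of sysctl settings involved, stopping netfilter from processing bridged traffic (an assumption on my part, not necessarily the exact values used):

```
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```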

That trick was found in this good libvirt networking guide.

Finally, networking should work transparently inside the VM now. To share files, autopkgtest expects a 9p filesystem called sbuild-qemu. It might be difficult to get it just right in virt-manager, so here's the XML:

<filesystem type="mount" accessmode="passthrough">
  <source dir="/home/anarcat/dist"/>
  <target dir="sbuild-qemu"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</filesystem>

The above shares the /home/anarcat/dist folder with the VM. Inside the VM, it will be mounted because there's this /etc/fstab line:

sbuild-qemu /shared 9p trans=virtio,version=9p2000.L,auto,nofail 0 0

By hand, that would be:

mount -t 9p -o trans=virtio,version=9p2000.L sbuild-qemu /shared

I probably forgot something else important here, but surely I will remember to put it back here when I do.

Note that this at least partially overlaps with hosting.

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup

I'm having a hard time making heads or tails of this, but please bear with me.

In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyway. This behavior seems to be handled by the internal "Session Purged" parameter.

At least in lib/Sbuild/Build.pm, we can see this:

my $is_cloned_session = (defined ($session->get('Session Purged')) &&
             $session->get('Session Purged') == 1) ? 1 : 0;

if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}
The schroot builder defines that parameter as:

    $self->set('Session Purged', $info->{'Session Purged'});

... which is ... a little confusing to me. $info is:

my $info = $self->get('Chroots')->get_info($schroot_session);

... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...

ChrootUnshare.pm is way more explicit:

$self->set('Session Purged', 1);

I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least it's certainly used with things that clean up after themselves. Right?

For some reason, before I added this line to my configuration:

$purge_build_deps = 'never';

... the "Cleanup" step would just completely hang. It was quite bizarre.

Digression on the diversity of VM-like things

There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or Virtualbox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems.

There are particularly a lot of different tools which use Docker, Virtual machines or some sort of isolation stronger than chroot to build packages. Here are some of the alternatives I am aware of:

Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security, and that pbuilder and sbuild do build under a different user, which limits the security issues with building untrusted packages.

On the upside, some of these things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has added the feature of passing custom arguments to dpkg-buildpackage.

None of those solutions (except the autopkgtest/qemu backend) are implemented as a sbuild plugin, which would greatly reduce their complexity.

I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt.

The great thing now is that autopkgtest has good support for qemu and sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:

  • #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
  • #911979: sbuild: fails on chown in autopkgtest-qemu backend
  • #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
  • #911981: autopkgtest: qemu server warns about missing CPU features

So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement.

Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild

I was previously using pbuilder and switched to sbuild in 2017. AskUbuntu.com has a good comparison of pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds, and it's written in Perl instead of shell.

My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (can't just plug cowbuilder there, need to configure overlayfs or aufs, which was non-trivial in Debian jessie).

Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays.

I was ultimately convinced by stapelberg's post on the topic which shows how much simpler sbuild really is...


Thanks lavamind for the introduction to the sbuild-qemu package.

16 June, 2022 10:38PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo on CRAN: New Upstream

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 991 other packages on CRAN, downloaded over 25 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 476 times according to Google Scholar.

This release brings a second upstream fix by Conrad in the 11.* release series. We once again tested this very rigorously via a complete reverse-dependency check (for which results are always logged here). It so happens that CRAN then had a spurious error when re-checking on upload, and it took a few days to square this as everybody remains busy – but the release prepared on June 10 is now on CRAN.

The full set of changes (since the last CRAN release) follows.

Changes in RcppArmadillo version (2022-06-10)

  • Upgraded to Armadillo release 11.2 (Classic Roast)

    • faster handling of sparse submatrix column views by norm(), accu(), nonzeros()

    • extended randu() and randn() to allow specification of distribution parameters

    • internal refactoring, leading to faster compilation times

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 June, 2022 12:11AM

June 15, 2022

AsioHeaders 1.22.1-1 on CRAN

An updated version of the AsioHeaders package arrived at CRAN yesterday (in one of those pleasant fully-automated uploads and transitions). Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This release brings a new upstream version, following a two-year period without updates. This was tickled by OpenSSL 3.0 header changes, as seen in a package using both AsioHeaders and OpenSSL.

Changes in version 1.22.1-1 (2022-06-14)

  • Upgraded to Asio 1.22.1 (Dirk in #7 fixing #6).

Thanks to my CRANberries, there is also a diffstat report relative to the previous release.

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 June, 2022 11:45PM

Debian Community News

PsyOps 007: Paul Tagliamonte wanted Debian Press Team to have license to kill

We publish a fresh email from Paul Tagliamonte, the White House staff member who encouraged fellow Debianists to defame Dr Appelbaum. This email is the strongest yet: while most volunteers wanted to remain neutral, Tagliamonte was calling for the Press team to make these hits without consulting the volunteers at all.

Please see the WIPO dossier for more details on the PsyOps exercise

Subject: 	Re: Expulsion of Jacob Appelbaum 
Date: 	Tue, 21 Jun 2016 07:37:43 -0400
From: 	Paul R. Tagliamonte 
To: 	Russell Coker 
CC: 	Debian Private List 

Seems like a thing the Press team could help coordinate.

On Jun 21, 2016 7:27 AM, "Russell Coker" wrote:


Well the DPL has decided to go public about it after all.

In future I think it would be better to have a plan for these
things. To have an argument on this list about whether information
should be released while the DPL is sending it all to a journalist
isn't the way things should go.

On 21 June 2016 8:57:56 PM AEST, Jonathan Dowland wrote:
>Please do not CC me, I am subscribed to the list.
>On Tue, Jun 21, 2016 at 08:04:01PM +1000, Russell Coker wrote:
>> Mattia suggests that it's already public that he was expelled for
>anyone who
>> knows how to interpret it.
>> Why can't we just state it outright instead of leaving clues in
>> The social contract says that we won't hide problems. Why are we
>trying to
>> hide this problem?
>I don't think Debian has taken ownership of whatever is written on
>but we certainly aren't hiding it: It's there on the NM page in plain
>There's a big difference between hiding something and making an active
>to draw attention to it.
>The action that has been taken has been for the sake of the safety and
>being of other Debian members, NOT as some kind of white-knight
>stunt. There's no *need* to shout it from the rooftops as that does not
>further the former goal.

Sent from my Samsung Galaxy Note 3 with K-9 Mail.
Lisa Disbrow, David L. Goldfein, Chris Lynch, Paul Tagliamonte, Debian, USDS, Rebellion

How to wreck your organization with PsyOps techniques, from the OSS Simple Sabotage Field Manual

This email was sent on public mailing lists and responses are available there. What was Tagliamonte's real aim in circulating this? Was it an example of boasting about the second expulsion of Dr Norbert Preining?

Subject: History doesn't repeat itself, but it often rhymes
Date: Tue, 22 Feb 2022 12:29:53 -0500
From: Paul Tagliamonte 
To: Debian Devel , debian-project@lists.debian.org

Hello, Debianites,

Allow me, if you will, to talk a bit about something that's been on my mind
a bit over the last handful of years in Debian. It's something that's pretty
widely circulated in particular circles, but I don't think I've seen it on
a Debian list before, so here's some words that I've decided to put together.

I've intentionally not drawn lines to the 'discussions' going on (or the
'discussions' in the past I could point to) to avoid getting dragged into more
thrash, so if you reply, please do try to keep this clear of any specific
argument that you feel this may or may not apply to. This is a more general
note that I think could use some thought from anyone who's interested.

During World War II, the OSS (Office of Strategic Services)[1] distributed a
manual[2] (the Simple Sabotage Field Manual), which was used to train
"citizen-saboteur" resistance fighters, some of whom were told, not to pick up
arms, but to confound the bureaucracy by tying it up with an unmanageable
tangle of "innocent" behavior.

While no one working within the Debian community is a member sent from the
shady conglomerate of nonfree operating systems attempting to subvert us by
following this playbook, this playbook is an outstanding illustration of how
some innocent behavior can destroy the effectiveness of an organization.  It's
effective, precisely *because* it's not overly malicious, and these behaviors
-- while harmful -- are explainable or innocent. Section (3) covers this in

Most of the OSS Simple Sabotage Field Manual covers things like breaking
equipment or destroying tanks, but section (11) is "General Interference with
Organizations and Production". I'm just going to focus on that here.

Let's take a look at section (11):

> (1) Insist on doing everything through "channels." Never permit short-cuts
>     to be taken in order to expedite decisions.
> (2) Make "speeches." Talk as frequently as possible and at great length.
>     Illustrate your "points" by long anecdotes and accounts of personal
>     experiences. Never hesitate to make a few appropriate "patriotic" comments.
> (3) When possible, refer all matters to committees for "further study and
>     consideration." Attempt to make committees as large as possible -- never
>     less than five.
> (4) Bring up irrelevant issues as frequently as possible.
> (5) Haggle over precise wordings of communications, minutes, resolutions.
> (6) Refer back to matters decided upon at the last meeting and attempt to
>     re-open the advisability of that decision.
> (7) Advocate "caution." Be "reasonable" and urge your fellow co-conferees to
>     be "reasonable" and avoid haste which might result in embarrassments or
>     difficulties later on.
> (8) Be worried about the propriety of any decision - raise the question of
>     whether such action as is contemplated lies within the jurisdiction of
>     the group or whether it might conflict with the policy of some higher
>     echelon.

I won't go through each of these point-by-point since everyone reading this is
likely sharp enough to see how this relates to Debian (although I will point
out that I find it particularly interesting to replace "patriotic" here with a
Debian-specific patriotism -- Debianism? -- and re-read some of the more
heated threads).

I have a theory of large organizations I've been thinking a lot about that came
from conversations with a colleague, which is to think about an organization's
"metabolic overhead" -- i.e., the amount of energy that an organization
devotes to intra-organization communication. If you think about a car
manufacturing plant, the "metabolic overhead" is all the time spent on things
like paperwork, communication, and planning. It's not possible (or desirable!)
for an organization to have 0% overhead, nor is it desirable (although this one
*is* possible) to spend 100% of your time on overhead. I think it *may* even be
possible to get above 100% overhead, if workplace contention spills out into
drinks after work.

All of the points in the OSS Simple Sabotage Manual are things designed to
increase the metabolic overhead of an organization, and to force organization
members to spend time *not* doing their core function (like making cars,
running trash pickup or ensuring the city has electricity), but rather, spend
their time litigating amongst themselves as the core function begins to
become harder and harder to maintain. This has the effect of degrading the
output/core function of an organization, without any specific cause
(like a power loss, etc).

I'd ask those who are reading this to consider how this relates to their time
spent in Debian. Is what you find something you're happy with, given this is a
hobby project you're choosing to spend your free time on? Are you taking actions
to be a good participant?

To do a bit of grandstanding myself, do remember that it's not just your time
here -- when we spend significant resources litigating and playing bureaucracy
games, we spend others' time as well. People on the committees you refer matters
to, all project members in the case of a GR, all the Mailing List readers --
and that's all time that is taken from building and maintaining an operating
system. The output becomes degraded. There's no specific acute cause like
a buildd failure.

When I think about how Simple Sabotage works, I find myself unable to shake the
feeling that the best way to combat the organizational dysfunction outlined in
the OSS's Simple Sabotage Manual is to avoid "taking the bait", and to ensure
small, highly empowered teams of do-ers are able to execute. We need to avoid
being dragged into development by consensus -- while understanding that
communication and collaboration are good. We need to ensure that individuals
that continue to exhibit the behaviors contained within the Simple Sabotage
Manual understand the harm that can come from a system of individuals taking
actions like these -- even if their intent is sincere and comes from a
constructive, helpful place. In some cases, ignoring the "sabotage"[3] outright
will work; in other cases, a gentle and respectful private note letting them
know that their suggestion is actively harmful, and asking them to consider not
doing it again, may be enough. Engaging publicly makes things worse, since it
will continue to suck people's time into litigating the "sabotage" (which,
itself, becomes more sabotage).

Taking an expensive action (like referring to a committee, re-opening an
old decision, arguing about precise wording and associated pedantry,
or questioning the authority of those doing the work) should only be done
if the benefit outweighs the cost.

We don't need to be hostile or expel people for doing things outlined in the
OSS Simple Sabotage Manual, since a lot of that behavior is -- at times --
desirable, but I think we do need a *LOT* of self-reflection (from *everyone*
who actively engages with Debian politics) to consider our actions, and
determine how (if at all) we feel that we (as individuals) should change.

Please don't beat each other up with this, calling each other saboteurs and
claiming that everyone's emails are "sabotage", but please do consider
using this mental framework when looking at our discussions from time to time.

With love!

[1]: Of its many notable members, Julia Child was the first one I think
    of -- yes, that Julia Child!
[2]: https://www.gutenberg.org/files/26184/page-images/26184-images.pdf
[3]: //maybe// not the best word, but I'm using it here for internal consistency

15 June, 2022 11:00PM

June 14, 2022

Free Software Fellowship

Elio Qoshi & Redon Skikuli missing from OSCAL agenda

The Albanian free software conference, OSCAL, takes place in Tirana this weekend. We notice that the controversial founders of the Open Labs hackerspace, Elio Qoshi and Redon Skikuli, are completely absent from the schedule.

Elio Qoshi was named in the Ubuntu underage girl scandal, grooming women for Outreachy, in November 2021. We previously reported that his talk at FOSDEM vanished from the schedule.

Redon Skikuli has been named by both Arjen Kamphuis and Anisa Kuci as one of the sources of harassment. Both of their emails are published in the debian.community WIPO dossier.

Redon Skikuli, Open Labs, Tirana, Albania, OSCAL
Elio Qoshi, Mozilla Tech Speaker

14 June, 2022 09:45PM

Debian Community News

Dashamir Hoxha & Debian harassment

Thanks to the WIPO legal dossier, we now have more evidence of the source of harassment in Debian.

We previously reported on a Google Summer of Code (GSoC) intern from Bhopal, India, who was not paid the full stipend.

It raises numerous questions: the intern who failed, Deepanshu Gajbhiye, had done more technical work than the Albanian woman who received $6,000 for Outreachy in 2019.

Today we release another fact: Deepanshu's mentor was an Albanian, Dashamir Hoxha. Deepanshu had sent a written complaint about the mentor. The complaint was escalated to the Debian anti-harassment team and they did nothing. We feel the Debian anti-harassment team has protected the Albanians because the Albanians bring pretty young female interns to conferences.

he called following your instructions a garbage. This is not okay.

If the intern follows the instructions from the mentors, why doesn't Google pay the intern?

Subject: 	Getting annoyed by mentor's behaviour
Date: 	Sat, 4 Aug 2018 21:05:49 +0530
From: 	Deepanshu 
To: 	Backstabee


I followed your instruction on [1] to make a final upload to google.
Created script to build a tarball, extract commits and info.txt. Then I
shared the tar file [2] with the mentor. To this, he replied in [3]. By
calling it a garbage. This was his exact reply - `don't write garbage on
it but write something understandable`

I followed your instructions exactly and was going to create a merge
request in intern-work-products. But he called following your
instructions a garbage. This is not okay. He could have told me about
the improvements he would like to see but he chooses to call it garbage.
This has been his behavior for all 3 months. He is not helpful either. I
see other students getting a lot of help from their mentors. Example -
the first merge request in intern-work-products is made by the mentor,
not student which was very nice of her mentor. I followed it as it was
perfect my mentor said its garbage. 

Also, somewhere else mentor is saying -
    ` If some test case fails the student does not get the third payment
      (he has already received the payments for the first and second

He could help me identify the issues But no. He expects me to find them
on my own or get failed. This is definitely not okay. Gsoc is not all
about the money. And passing 1 and 2 evaluation and failing in 3 would
be really bad. 

I am not sure what should I do here. I have been working hard every day
committing 10-12 hours daily trying to the project on my own. I could
have said this on the mailing list but it would only make him angry and
it could another reason to fail me. So I choose to personally email you
to address this issue.

Please suggest what should I do here.


[1] : https://lists.debian.org/debian-outreach/2018/07/msg00060.html
[2] : https://drive.google.com/file/d/1nvbaIObX9wjELwpR4nEpM_bzfQXfiE_Q/view?usp=sharing
[3] : https://lists.debian.org/debian-outreach/2018/08/msg00007.html
Dashamir Hoxha, Debian, Albania, harassment

14 June, 2022 02:00PM

John Goerzen

Really Enjoyed Jason Scott’s BBS Documentary

Like many young programmers of my age, before I could use the Internet, there were BBSs. I eventually ran one, though in my small town there were few callers.

Some time back, I downloaded a copy of Jason Scott’s BBS Documentary. You might know Jason Scott from textfiles.com and his work at the Internet Archive.

The documentary was released in 2005 and spans 8 episodes on 3 DVDs. I’d watched parts of it before, but recently watched the whole series.

It’s really well done, and it’s not just about the technology. Yes, that figures in, but it’s about the people. At times, it was nostalgic to see people talking about things I clearly remembered. Often, I saw long-forgotten pioneers interviewed. And sometimes, such as with the ANSI art scene, I learned a lot about something I was aware of but never really got into back then.

BBSs and the ARPANet (predecessor to the Internet) grew up alongside each other. One was funded by governments and universities; the other, by hobbyists working with inexpensive equipment, sometimes of their own design.

You can download the DVD images (with tons of extras) or watch just the episodes on Youtube following the links on the author’s website.

The thing about BBSs is that they never actually died. Now I’m looking forward to watching the Back to the BBS documentary series about modern BBSs as well.

14 June, 2022 12:13AM by John Goerzen

June 13, 2022

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, May 2022

In May I was assigned 11 hours of work by Freexian's Debian LTS initiative and carried over 13 hours from April. I worked 8 hours, and will carry over the remaining time to June.

I spent some time triaging security issues for Linux, working out which of them were fixed upstream and which actually applied to the versions provided in Debian 9 "stretch". I rebased the Linux 4.9 (linux) package on the latest stable update, but did not make an upload this month. I started backporting several security fixes to 4.9, but those still have to be tested and reviewed.

13 June, 2022 06:30PM

June 12, 2022

Iustin Pop

Somewhat committing to a new sport

Quite a few years ago - 4, to be precise, so in 2018 - I did a couple of SUP trainings, organised by a colleague. That was enjoyable, but not really a match for me (asymmetric paddling, ugh!), so I also learned some kayaking, which I really love, but that has a way higher overhead - no sea around in Switzerland, and the lakes are generally too small. So I basically postponed any more water sports 😞, until sometime in the future when I’ll finally decide what I want to do (and in what setup).

I did a couple of one-off SUP rides in various places (2019, 2021), but I really was out of practice, so it wasn’t really enjoyable. But with family, SUP offers a much easier way to carry a passenger (than a kayak), so slowly I started thinking more about doing it more seriously.

So last week, after much deliberation, I bought an inflatable board, a paddle and various other accessories, and on Saturday went to try it out, in excellent weather (completely flat water) and hot but not overly so. Choosing the board was in itself something I like to do (researching options), so for a bit I was concerned whether I’m more interested in the gear or in the actual paddling itself…

To my surprise, it went way better than I feared - last time I tried it, I paddled 30 minutes on my knees (knee-paddling?!), since I didn’t dare stand up. But this time, I launched and then did stand up, and while very shaky, I didn’t fall in. Neither by myself, nor with an extra passenger 😉

An hour later, my initial shakiness had gone away, with the trainings slowly coming back to mind. Another half hour, and - for completely flat water - I felt quite confident. The view was awesome, the weather nice, the water cold enough to be refreshing… and the only question on my mind was - why didn’t I do this 2, 3 years ago? Well, Corona aside.

I forgot how much I love just being on the water. It definitely pays off the cost of going somewhere, unpacking the stuff, pumping up the board (that’s a bit of a sport in itself 😃), because the blue-green-light-blue colour palette is just how things should be:

Small lake, but beautiful view
Small lake, but beautiful view

Well, approximately blue. This being a small lake, it’s more blue-green than proper blue. That’s next level, since bigger lakes mean waves, and more traffic.

Of course, this could also turn out like many other things I tried (a device in a corner that’s not used anymore), but at least for yesterday, I was a happy paddler!

12 June, 2022 09:00PM

Russ Allbery

Review: The Shattered Sphere

Review: The Shattered Sphere, by Roger MacBride Allen

Series: Hunted Earth #2
Publisher: Tor
Copyright: July 1994
Printing: September 1995
ISBN: 0-8125-3016-0
Format: Mass market
Pages: 491

The Shattered Sphere is a direct sequel to The Ring of Charon and spoils everything about the plot of the first book. You don't want to start here. Also be aware that essentially everything you can read about this book will spoil the major plot driver of The Ring of Charon in the first sentence. I'm going to review the book without doing that, but it's unlikely anyone else will try.

The end of the previous book stabilized matters, but in no way resolved the plot. The Shattered Sphere opens five years later. Most of the characters from the first novel are joined by some new additions, and all of them are trying to make sense of a drastically changed and far more dangerous understanding of the universe. Humanity has a new enemy, one that's largely unaware of humanity's existence and able to operate on a scale that dwarfs human endeavors. The good news is that humans aren't being actively attacked. The bad news is that they may be little more than raw resources, stashed in a safe spot for future use.

That is reason enough to worry. Worse are the hints of a far greater danger, one that may be capable of destruction on a scale nearly beyond human comprehension. Humanity may be trapped between a sophisticated enemy to whom human activity is barely more noticeable than ants, and a mysterious power that sends that enemy into an anxious panic.

This series is an easily-recognized example of an in-between style of science fiction. It shares the conceptual bones of an earlier era of short engineer-with-a-wrench stories that are full of set pieces and giant constructs, but Allen attempts to add the characterization that those books lacked. But the technique isn't there; he's trying, and the basics of characterization are present, but with none of the emotional and descriptive sophistication of more recent SF. The result isn't bad, exactly, but it's bloated and belabored. Most of the characterization comes through repetition and ham-handed attempts at inner dialogue.

Slow plotting doesn't help. Allen spends half of a nearly 500 page novel on setup in two primary threads. One is mostly people explaining detailed scientific theories to each other, mixed with an attempt at creating reader empathy that's more forceful than effective. The other is a sort of big dumb object exploration that failed to hold my attention and that turned out to be mostly irrelevant. Key revelations from that thread are revealed less by the actions of the characters than by dumping them on the reader in an extended monologue. The reading goes quickly, but only because the writing is predictable and light on interesting information, not because the plot is pulling the reader through the book. I found myself wishing for an earlier era that would have cut about 300 pages out of this book without losing any of the major events.

Once things finally start happening, the book improves considerably. I grew up reading large-scale scientific puzzle stories, and I still have a soft spot for a last-minute scientific fix and dramatic set piece even if the descriptive detail leaves something to be desired. The last fifty pages are fast-moving and satisfying, only marred by their failure to convince me that the humans were required for the plot. The process of understanding alien technology well enough to use it the right way kept me entertained, but I don't understand why the aliens didn't use it themselves.

I think this book falls between two stools. The scientific mysteries and set pieces would have filled a tight, fast-moving 200 page book with a minimum of characterization. It would have been a throwback to an earlier era of science fiction, but not a bad one. Allen instead wanted to provide a large cast of sympathetic and complex characters, and while I appreciate the continued lack of villains, the writing quality is not sufficient to the task.

This isn't an awful book, but the quality bar in the genre is so much higher now. There are better investments of your reading time available today.

Like The Ring of Charon, The Shattered Sphere reaches a satisfying conclusion but does not resolve the series plot. No sequel has been published, and at this point one seems unlikely to materialize.

Rating: 5 out of 10

12 June, 2022 03:48AM

June 11, 2022

hackergotchi for Junichi Uekawa

Junichi Uekawa

What's in sid?

What's in sid? I wanted to check what version of libc was used in current Debian sid. I am lazy and I used podman to check. However I noticed that apt update is very slow for me, 20kB/s. Wondering if something is wrong.
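
A sketch of the kind of check I mean, as a tiny helper (the function name is mine; it assumes podman is installed and can pull debian:sid from the registry):

```shell
# Hypothetical helper: ask a throwaway sid container for its glibc version.
# Assumes podman is installed and can pull debian:sid.
check_sid_libc() {
    podman run --rm debian:sid dpkg-query -W -f='${Version}\n' libc6
}
# Usage: check_sid_libc   (prints the libc6 version; it changes over time)
```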

11 June, 2022 05:27AM by Junichi Uekawa

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Updating a rooted Pixel 3a

A short while after getting a Pixel 3a, I decided to root it, mostly to have more control over the charging procedure. In order to preserve battery life, I like my phone to stop charging at around 75% of full battery capacity and to shut down automatically at around 12%. Some Android ROMs have extra settings to manage this, but LineageOS unfortunately does not.

Android already comes with a fairly complex mechanism to handle the charge cycle, but it is mostly controlled by the kernel and cannot be easily configured by end-users. acc is a higher-level "systemless" interface for the Android kernel battery management, but one needs root to do anything interesting with it. Once rooted, you can use the AccA app instead of playing on the command line to fine tune your battery settings.

Sadly, having a rooted phone also means I need to re-root it each time there is an OS update (typically each week).

Somehow, I keep forgetting the exact procedure to do this! Hopefully, I will be able to use this post as a reference in the future :)

Note that these instructions might not apply to your exact phone model, proceed with caution!

Extract the boot.img file

This procedure mostly comes from the LineageOS documentation on extracting proprietary blobs from the payload.

  1. Download the latest LineageOS image for your phone.

  2. Unzip the image to get the payload.bin file inside it.

  3. Clone the LineageOS scripts git repository:

    $ git clone https://github.com/LineageOS/scripts

  4. Extract the boot image (requires python3-protobuf):

    $ mkdir extracted-payload
    $ python3 scripts/update-payload-extractor/extract.py payload.bin --output_dir extracted-payload

You should now have a boot.img file.
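
The steps above can be collected into one small helper; a sketch, assuming the LineageOS zip and the scripts checkout sit in the current directory (the function name is mine):

```shell
# Hypothetical helper wrapping the extraction steps above.
# Assumes: unzip and python3-protobuf installed, and ./scripts is the
# LineageOS scripts repository cloned earlier.
extract_boot_img() {
    zipfile="$1"
    # pull payload.bin out of the LineageOS zip
    unzip -o "$zipfile" payload.bin
    mkdir -p extracted-payload
    # extract all partition images, including boot.img
    python3 scripts/update-payload-extractor/extract.py payload.bin \
        --output_dir extracted-payload
    ls extracted-payload/boot.img
}
# Usage: extract_boot_img lineage-<version>-<device>.zip
```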

Patch the boot image file using Magisk

  1. Upload the boot.img file you previously extracted to your device.

  2. Open Magisk and patch the boot.img file.

  3. Download the patched file back on your computer.

Flash the patched boot image

  1. Enable ADB debug mode on your phone.

  2. Reboot into fastboot mode.

    $ adb reboot fastboot

  3. Flash the patched boot image file:

    $ fastboot flash boot magisk_patched-foo.img

  4. Disable ADB debug mode on your phone.
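
The flashing steps above can be sketched as one helper (the function name is mine; magisk_patched-foo.img remains a placeholder for whatever file Magisk produced):

```shell
# Hypothetical helper wrapping the flashing steps above.
# Assumes adb and fastboot are installed and ADB debugging is enabled.
flash_patched_boot() {
    img="$1"
    adb reboot fastboot            # step 2: reboot into fastboot mode
    fastboot flash boot "$img"     # step 3: flash the patched boot image
    fastboot reboot                # assumption: boot back into the OS
}
# Usage: flash_patched_boot magisk_patched-foo.img
```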


In an ideal world, you would do this entire process each time you upgrade to a new LineageOS version. Sadly, this creates friction and makes updating much more troublesome.

To simplify things, you can try to flash an old patched boot.img file after upgrading, instead of generating it each time.

In my experience, it usually works. When it does not, the device behaves weirdly after a reboot and things that require proprietary blobs (like WiFi) will stop working.

If that happens:

  1. Download the latest LineageOS version for your phone.

  2. Reboot into recovery (Power + Volume Down).

  3. Click on "Apply Updates"

  4. Sideload the ROM:

    $ adb sideload lineageos-foo.zip

11 June, 2022 04:00AM by Louis-Philippe Véronneau

June 10, 2022

Iustin Pop

Still alive, 2022 version

Still alive, despite the blog being silent for more than a year.

Nothing bad happened, but there was always something more important (or interesting) to do than write a post. And I did say many, many times - “Oh, I should write a post about this thing I just did or learned about”, but I never followed up.

And I was close to forgetting entirely about blogging (ahem, it’s a bit much calling it “blogging”), until someone I follow posted something along the lines of “I have this half-written post for many months that I can’t finish, here’s some pictures instead”. And from that followed an interesting discussion, and the similarities between our “why I didn’t blog recently” stories were very interesting, despite different countries and continents.

So yes, I don’t know what happened - besides the chaos that even the end of Covid caused in our lives, and the psychological impact of the Ukraine invasion (but all this is relatively recent) - that made me unable to muster the energy to write posts again.

I even had a half-written post in late June last year, never finished. Sigh. I won’t even bring up open-source work, since I haven’t done that either.

Life. Sometimes things just happen. But yes, I did get many Garmin badges in the last 12 months 🙂 Oh, and “Top Gun: Maverick” is awesome. A movie, but an awesome movie.

See you!

10 June, 2022 09:22PM

hackergotchi for Daniel Pocock

Daniel Pocock

Debian Privacy Hypocrisy

Many leaks have appeared from debian-private recently and nobody will be surprised if there are more. It is interesting to note that many of these emails have a line like this at the bottom:

PS This message shall never be disclosed

In 2016, when people made potentially false accusations against Jacob Appelbaum, he was residing in Germany, where the law clearly says that suspects have a right to privacy that is equivalent to the rights of the victim.

Nonetheless, people were quick to distribute his name on debian-private and shortly afterwards, the Debian Project Leader circumvented German privacy law and took the accusations to an Australian IT journalist.

What people appear to be saying is that when talking about a woman, privacy comes first but when talking about a man, privacy laws don't matter. Yet when the law is applied subjectively like that, it is not the law any more, it is vigilantism.

Groups like the Ku Klux Klan are particularly effective at twisting and turning the law to justify their behavior, just as some people twist and turn the Code of Conduct to indulge in bullying.

Ku Klux Klan

In 2018, I raised concerns about a volunteer bequeathing EUR 150,000 to the FSFE. I have never stated the name of the volunteer. On the other hand, after receiving the cash, FSFE removed its elections, dramatically changing the nature of the organization that would spend that money. With no more community representatives to look out for that money, the relationship between the cash, the paternity leave and the elections was critical. In an organization that boasts about transparency in its mission statement, those things can be discussed without the name of the volunteer.

On the theme of privacy, the recent case of Australia's Attorney-General opens up a similar conundrum. The Federal Court has published the dossier of the woman who died. We can see that the Attorney-General was 17 years old at the time of the alleged crime. Therefore, I presume that Australia's system of privacy for juvenile offenders would retrospectively give him the same protections as the victim. It is a bizarre thought, an Attorney-General of Australia being smuggled into the children's court to be tried for something he did as a teenager and sentenced under the laws applicable to somebody of that age.

During his time as Minister for Corrective Services, he gave a media interview about juvenile justice:

I would like to see effective and efficient programs delivered to young offenders that are relevant to their reintegration back into society.

After resigning from Parliament, Mr Porter is being reintegrated into society as a defence counsel, his first client having recently robbed a gun shop.

10 June, 2022 11:00AM


June 09, 2022

Enrico Zini

Updating cbqt for bullseye

Back in 2017 I did work to setup a cross-building toolchain for QT Creator, that takes advantage of Debian's packaging for all the dependency ecosystem.

It ended with cbqt, a little script that sets up a chroot to hold cross-build dependencies, to avoid conflicting with packages in the host system, and sets up a qmake alternative to make use of them.

Today I'm dusting off that work, to ensure it works on Debian bullseye.

Resetting QT Creator

To make things reproducible, I wanted to reset QT Creator's configuration.

Besides purging and reinstalling the package, one needs to manually remove:

  • ~/.config/QtProject
  • ~/.cache/QtProject/
  • /usr/share/qtcreator/QtProject, which is where configuration is stored if you used sdktool to programmatically configure Qt Creator (see for example this post, and Debian bug #1012561).
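
The reset steps above, as a single throwaway helper (a sketch; the function name is mine, and it is destructive since it deletes your Qt Creator configuration):

```shell
# Hypothetical helper: purge Qt Creator, wipe its configuration
# (the three paths listed above), and reinstall it.
reset_qtcreator() {
    sudo apt purge --autoremove -y qtcreator
    rm -rf ~/.config/QtProject ~/.cache/QtProject/
    sudo rm -rf /usr/share/qtcreator/QtProject   # sdktool-written config
    sudo apt install -y qtcreator
}
```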

Updating cbqt

Easy start, change the distribution for the chroot:

-DIST_CODENAME = "stretch"
+DIST_CODENAME = "bullseye"


Something else does not work:

Test$ qmake-armhf -makefile
Info: creating stash file …/Test/.qmake.stash
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o   …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread
/usr/lib/gcc-cross/arm-linux-gnueabihf/10/../../../../arm-linux-gnueabihf/bin/ld: cannot find -lGLESv2
collect2: error: ld returned 1 exit status
make: *** [Makefile:146: Test] Error 1

I figured that now I also need to set QMAKE_LIBDIR and not just QMAKE_RPATHLINKDIR:

--- a/cbqt
+++ b/cbqt
@@ -241,18 +241,21 @@ include(../common/linux.conf)

+QMAKE_LIBDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
+QMAKE_LIBDIR += {chroot.abspath}/usr/lib/
 QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/

Now it links again:

Test$ qmake-armhf -makefile
Test$ make
/usr/bin/arm-linux-gnueabihf-g++ -Wl,-O1 -Wl,-rpath-link,…/armhf/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/arm-linux-gnueabihf -Wl,-rpath-link,…/armhf/usr/lib/ -o Test main.o mainwindow.o moc_mainwindow.o   -L…/armhf/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/arm-linux-gnueabihf -L…/armhf/usr/lib/ …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Widgets.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Gui.so …/armhf/usr/lib/arm-linux-gnueabihf/libQt5Core.so -lGLESv2 -lpthread

Making it work in Qt Creator

Time to try it in Qt Creator, and sadly it fails:

/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.


I traced it to this bit in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf (nonrelevant bits deleted):

isEmpty($${target_prefix}.COMPILER_MACROS) {
    msvc {
        # …
    } else: gcc|ghs {
        vars = $$qtVariablesFromGCC($$QMAKE_CXX)
    }
    for (v, vars) {
        # …
        $${target_prefix}.COMPILER_MACROS += $$v
    }
    cache($${target_prefix}.COMPILER_MACROS, set stash)
} else {
    # …
}
It turns out that qmake is not able to realise that the compiler is gcc, so vars does not get set, nothing is set in COMPILER_MACROS, and qmake fails.

Reproducing it on the command line

When run manually, however, qmake-armhf worked, so it would be good to know how Qt Creator is actually running qmake. Since it frustratingly does not show what commands it runs, I'll have to strace it:

strace -e trace=execve --string-limit=123456 -o qtcreator.trace -f qtcreator

And there it is:

$ grep qmake- qtcreator.trace
1015841 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "-query"], 0x56096e923040 /* 54 vars */) = 0
1015865 execve("/usr/local/bin/qmake-armhf", ["/usr/local/bin/qmake-armhf", "…/Test/Test.pro", "-spec", "arm-linux-gnueabihf", "CONFIG+=debug", "CONFIG+=qml_debug"], 0x7f5cb4023e20 /* 55 vars */) = 0

I run the command manually and indeed I reproduce the problem:

$ /usr/local/bin/qmake-armhf Test.pro -spec arm-linux-gnueabihf CONFIG+=debug CONFIG+=qml_debug
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.

I try removing options until I find the one that breaks it and... now it's always broken! Even manually running qmake-armhf, like I did earlier, stopped working:

$ rm .qmake.stash
$ qmake-armhf -makefile
…/armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/features/toolchain.prf:76: Variable QMAKE_CXX.COMPILER_MACROS is not defined.

Debugging toolchain.prf

I tried purging and reinstalling qtcreator, and recreating the chroot, but qmake-armhf is staying broken. I'll let that be, and try to debug toolchain.prf.

By grepping for gcc in the mkspecs directory, I managed to figure out that:

  • The } else: gcc|ghs { test is matching the value(s) of QMAKE_COMPILER
  • QMAKE_COMPILER can have multiple values, separated by space
  • If in armhf/usr/lib/arm-linux-gnueabihf/qt5/mkspecs/arm-linux-gnueabihf/qmake.conf I set QMAKE_COMPILER = gcc arm-linux-gnueabihf-gcc, then things work again.
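Since the } else: gcc|ghs { scope matches against the values of QMAKE_COMPILER, a scratch project file can show directly whether a given qmake wrapper resolves the compiler family (probe.pro is a hypothetical file name; run it as `qmake-armhf probe.pro` and watch the Project MESSAGE lines):

```qmake
# probe.pro — hypothetical scratch file for inspecting what qmake sees.
message(QMAKE_COMPILER: $$QMAKE_COMPILER)
gcc: message(the gcc scope matches)
else: message(the gcc scope does NOT match)
```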

Sadly, I failed to find reference documentation for QMAKE_COMPILER's syntax and behaviour. I also failed to find out why qmake-armhf worked earlier, and I am failing to restore the system to a state where it works again. Maybe I dreamt that it worked, or had some manual change lying around from previous fiddling with things?

Anyway, at least now I have the fix:

--- a/cbqt
+++ b/cbqt
@@ -248,7 +248,7 @@ QMAKE_RPATHLINKDIR += {chroot.abspath}/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/arm-linux-gnueabihf
 QMAKE_RPATHLINKDIR += {chroot.abspath}/usr/lib/

-QMAKE_COMPILER          = {chroot.arch_triplet}-gcc
+QMAKE_COMPILER          = gcc {chroot.arch_triplet}-gcc

 QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc

Fixing a compiler mismatch warning

In setting up the kit, Qt Creator also complained that the compiler from qmake did not match the one configured in the kit. That was easy to fix, by pointing at the host system cross-compiler in qmake.conf:

 QMAKE_COMPILER          = {chroot.arch_triplet}-gcc

-QMAKE_CC                = {chroot.arch_triplet}-gcc
+QMAKE_CC                = /usr/bin/{chroot.arch_triplet}-gcc

 QMAKE_LINK_C            = $$QMAKE_CC

-QMAKE_CXX               = {chroot.arch_triplet}-g++
+QMAKE_CXX               = /usr/bin/{chroot.arch_triplet}-g++

 QMAKE_LINK              = $$QMAKE_CXX

Updated setup instructions

Create an armhf environment:

sudo ./cbqt ./armhf --create --verbose

Create a qmake wrapper that builds with this environment:

sudo ./cbqt ./armhf --qmake -o /usr/local/bin/qmake-armhf

Install the build-dependencies that you need:

# Note: :arch is added automatically to package names if no arch is explicitly specified
sudo ./cbqt ./armhf --install libqt5svg5-dev libmosquittopp-dev qtwebengine5-dev

Build with qmake

Use qmake-armhf instead of qmake and it works perfectly:

qmake-armhf -makefile

Set up Qt Creator

Configure a new Kit in Qt Creator:

  1. Tools/Options, then Kits, then Add
  2. Name: armhf (or anything you like)
  3. In the Qt Versions tab, click Add then set the path of the new Qt to /usr/local/bin/qmake-armhf. Click Apply.
  4. Back in the Kits tab, select the Qt version you just created in the Qt version field
  5. In Compilers, select the ARM versions of GCC. If they do not appear, install crossbuild-essential-armhf, then in the Compilers tab click Re-detect and then Apply to make them available for selection
  6. Dismiss the dialog with "OK": the new kit is ready

Now you can choose the default kit to build and run locally, and the armhf kit for remote cross-development.

I tried looking at sdktool to automate this step, and it requires a nontrivial amount of work to do it reliably, so these manual instructions will have to do.


This has been done as part of my work with Truelite.

09 June, 2022 10:15AM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Fight Club OST

the record packaging

I often listen to soundtracks when I'm concentrating. The Fight Club soundtrack, by the Dust Brothers, is not one I turn to very often. I do love the way it was packaged for vinyl. The cover design references IKEA, but the clever thing is it has a mailer-style pull-cord to open it up. You can't open the packaging using it without literally tearing the package in half. There is a secret, alternative way to get in with less damage, but if you try it the packaging has a surprise for you. This Image album summarizes most of the packaging secrets.

The records themselves are a pleasant mottled pink colour, reminiscent of the soap bars in the movie. They're labelled "Paper Street Soap Company".

close up of the pink record

09 June, 2022 09:40AM

June 08, 2022

hackergotchi for Daniel Pocock

Daniel Pocock

Ubuntu Underage Girl & Debian Mass Resignations

There are quite a few web sites today with debian in the name. Only one of them is being targeted by the expensive lawsuit through WIPO. This tells us something: there is something on the debian.community web site that is inconvenient for somebody important. But what is it?

They told us that Outreachy money and other diversity grants were being awarded to improve female participation. Yet what we've seen in practice, and I saw this during my time as a volunteer administrator for Google Summer of Code (GSoC), is that the sums of money being paid were disproportionate for some countries like Albania, and at the same time this money arrived, at least one vulnerable person arrived too. I've probably spent more time helping free software communities in the Balkan countries than anybody else who is going to the DebConf in Kosovo this year, so I've met all the people and I generally know what is true and false in this scandal.

When that person first appeared in 2017, she presented herself as the youngest woman in the group. She presented herself as a high school student. She presented herself as a 16 year old. For me, having heard it directly in one of the events in the Balkans, I take it that was all fact.

The second fact is that the Albanian Fedora Ambassador resigned from Fedora and Red Hat activities very soon after people became aware of the girl(friend).

The third fact is that as soon as I got a hint of wrongdoing, I strongly distanced myself from these people but I didn't identify anybody. 15 people distanced themselves from Debian this week.

The fourth fact is that I resigned from the mentoring programs, once again, without disclosing any private information about the problems.

The fifth fact is that various planet aggregation sites like Planet Fedora mysteriously stopped syndicating independent blogs at exactly the same time.

The final fact is that after leaving Red Hat / Fedora, the person concerned started working with Ubuntu.

In hindsight, this looks so much like what happened with the scandal in Australia's parliament. To avoid public awareness of the case, the investigation was canceled and somebody moved out of his job on a minor technicality and went up the road to take a new job at a tobacco company.

Around exactly the same time as the issues in Albania were swept under the carpet, Google staff walked out due to the large termination payments made to men accused of harassment. One of the staff, Andy Rubin, received a $90m payoff when he departed Google.

Double standards

In March 2021, there were intense attacks on Dr Richard Stallman, formerly at MIT, simply because of comments he made. Yet what we see here is far worse than Dr Stallman's alleged mistakes. It is not just one or two people talking about a hypothetical victim, it is organizations turning a blind eye and sending large sums of cash into the region.

Unprecedented: Debian Mass Resignation

Here is a screenshot from the Debian people tracker. Norbert Tretkowski resigned when negative publicity appeared about Dr Norbert Preining. Joachim Breitner resigned in the same week that Jonathan Carter sent a vindictive but flawed dossier to WIPO. A new blog post appeared last week about Chris Lamb's proximity to the Albanian problems. Only a few days after that post, fifteen more people resigned on Monday, 7 June.

These appear to be independent developers, unconnected with Ubuntu and Google in particular.

Debian Mass Resignations

08 June, 2022 06:30PM

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

Moving to a faster but smaller disk, encrypted setup

My work computer runs Debian 11 bullseye (the current stable release) on a mechanical 500GB disk, and I was provided with a new SSD, but its size was only 480 GB. So I had to shrink my partitions before copying the data to the new disk. It turned out to be a bit difficult because my main partition was encrypted.

I write here how I did it; maybe there are simpler ways, but I couldn’t find them.


I had three partitions in my old 500GB disk: /dev/sda1 is the EFI partition, /dev/sda2 the boot partition and /dev/sda3 the root partition (encrypted, with LVM, the standard way the Debian installer proposes when you choose a simple encrypted setup).

First of all, I made a disk image with Clonezilla to an external USB disk, just in case I messed things up, so I would be able to return to a safe point and start again.

Then I started my computer with a Debian 11 live USB with KDE Plasma desktop and Spanish localisation environment.

I opened the KDE Partition manager and copied the non encrypted partitions (sda1, EFI and sda2, /boot) to the new disk.

I shrank the encrypted partition from the terminal with the following commands (I had enough free space, so I reduced my partition to a total of 300GB):

Removed the swap partition and re-created it:

sudo lvremove /dev/larjona-pc-vg/swap_1
sudo pvresize --setphysicalvolumesize 380G /dev/mapper/cryptdisk
sudo pvchange -x y /dev/mapper/cryptdisk
sudo lvcreate -L 4G -n swap_1 larjona-pc-vg
sudo mkswap -L swap_1 /dev/larjona-pc-vg/swap_1

Displayed information about the physical volume, shrank the LUKS container, and deactivated the volume group:

sudo pvs -v --segments --units s /dev/mapper/cryptdisk
sudo cryptsetup -b 838860800 resize cryptdisk
sudo cryptsetup status cryptdisk
sudo vgchange -a n vgroup
sudo vgchange -an
sudo cryptsetup luksClose cryptdisk
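The -b argument to cryptsetup resize counts 512-byte sectors, so the 838860800 above corresponds to 400 GiB; the arithmetic is worth double-checking before shrinking anything:

```shell
# Convert a target size in GiB into the 512-byte sector count that
# `cryptsetup resize -b` expects.
size_gib=400
sectors=$(( size_gib * 1024 * 1024 * 1024 / 512 ))
echo "$sectors"   # 838860800
```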

Then I reduced the sda3 partition with the KDE Partition Manager (it took a while) and copied it to the new disk.

Turned off the computer and unplugged the old disk. Started the computer with the Debian 11 Live USB again, UEFI boot.

Now, to make my system boot:

sudo cryptsetup luksOpen /dev/sda3 cryptdisk
sudo vgscan --mknodes
sudo vgchange -ay
sudo mount /dev/mapper/larjona--pc--vg-root /mnt
sudo mount /dev/sda2 /mnt/boot
sudo mount /dev/sda1 /mnt/boot/efi
sudo mount --rbind /sys /mnt/sys
sudo mount -t efivarfs none /mnt/sys/firmware/efi/efivars
for i in /dev /dev/pts /proc /run; do sudo mount -B $i /mnt$i; done
sudo chroot /mnt

Then I edited /mnt/etc/crypttab to reflect the name of the new encrypted partition, and edited /mnt/etc/fstab to paste in the UUIDs of the new partitions.
Then I ran grub-install and reinstalled the kernels as noted in the reference, rebooted, and logged in to my Plasma desktop 🙂
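For reference, a /etc/crypttab entry for this kind of setup looks like the following (the UUID here is a hypothetical placeholder; take the real one from blkid on the LUKS partition, not from the filesystem inside it):

```text
# <target name>  <source device>                             <key file>  <options>
cryptdisk        UUID=0a1b2c3d-0000-0000-0000-000000000000   none        luks
```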

(Well, the actual process was not so smooth but after several tries and errors and searching for help I managed to get the needed commands to make my system boot from the new disk).

08 June, 2022 12:05PM by larjona

June 07, 2022

Software Freedom Institute

Using the Debian trademark

At the Institute, we are disappointed by the recent events in Debian. Seventeen Debian Developers have resigned in a very short space of time.

We encourage people to go ahead and use the name Debian in domain names and web sites that are genuinely connected with your Debian activities. The Debian Social Contract, point no. 3, tells us We will not hide problems.

The Debian Constitution tells us that we are independent and autonomous. Therefore, it is completely normal for Debian Developers to create autonomous web sites including the name Debian in the domain. With over a thousand active Debian Developers, it is unlikely that every Developer will always agree with every other Developer. This makes it even more vital to have multiple, autonomous, censor-resistant Debian web sites.

Everybody who ever contributed to Debian in the entire history of the project has legitimate interests in using the trademark in a domain name. These legitimate interests are honored in the Uniform Domain Name Dispute Resolution Policy (UDRP) clause 4(a)(ii). Given the nature of free, open source software projects, it is completely impossible to say that the volunteers have no rights whatsoever. Therefore, any claim made under the UDRP cannot honestly satisfy clause 4(a)(ii) and will be struck out.

Software Freedom Institute SA is a leader in the development of voice, video and business messaging solutions.

We believe the combination of free, open source software and open standards are the best way to empower you, our customers, in the long term.

07 June, 2022 10:00PM

June 06, 2022

hackergotchi for Norbert Preining

Norbert Preining

Modern world

Just got reminded of this great short movie!

How befitting.

06 June, 2022 11:42PM by Norbert Preining

Reproducible Builds

Reproducible Builds in May 2022

Welcome to the May 2022 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Repfix paper

Zhilei Ren, Shiwei Sun, Jifeng Xuan, Xiaochen Li, Zhide Zhou and He Jiang have published an academic paper titled Automated Patching for Unreproducible Builds:

[..] fixing unreproducible build issues poses a set of challenges [..], among which we consider the localization granularity and the historical knowledge utilization as the most significant ones. To tackle these challenges, we propose a novel approach [called] RepFix that combines tracing-based fine-grained localization with history-based patch generation mechanisms.

The paper (PDF, 3.5MB) uses the Debian mylvmbackup package as an example to show how RepFix can automatically generate patches to make software build reproducibly. As it happens, Reiner Herrmann submitted a patch for the mylvmbackup package which has remained unapplied by the Debian package maintainer for over seven years, thus this paper inadvertently underscores that achieving reproducible builds will require both technical and social solutions.

Python variables

Johannes Schauer discovered a fascinating bug where simply naming your Python variable _m led to unreproducible .pyc files. In particular, the types module in Python 3.10 requires the following patch to make it reproducible:

--- a/Lib/types.py
+++ b/Lib/types.py
@@ -37,8 +37,8 @@ _ag = _ag()
 AsyncGeneratorType = type(_ag)
 class _C:
-    def _m(self): pass
-MethodType = type(_C()._m)
+    def _b(self): pass
+MethodType = type(_C()._b)

Simply renaming the dummy method from _m to _b was enough to workaround the problem. Johannes’ bug report first led to a number of improvements in diffoscope to aid in dissecting .pyc files, but upstream identified this as caused by an issue surrounding interned strings and is being tracked in CPython bug #78274.
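A quick way to probe this class of issue is to compile the same source twice under deterministic settings and compare the resulting bytecode. This is a generic sketch of the check, not a reproduction of the CPython bug itself (which only shows up in the larger interning context of types.py):

```shell
# Compile the same tiny module twice and compare the .pyc bytes.
# SOURCE_DATE_EPOCH makes CPython emit hash-based .pyc files, so two
# compiles of identical source should be byte-identical.
set -e
cd "$(mktemp -d)"
printf 'class _C:\n    def _m(self): pass\nMethodType = type(_C()._m)\n' > mod_repro.py
export SOURCE_DATE_EPOCH=1
python3 -m py_compile mod_repro.py
h1=$(sha256sum __pycache__/*.pyc)
rm -r __pycache__
python3 -m py_compile mod_repro.py
h2=$(sha256sum __pycache__/*.pyc)
if [ "$h1" = "$h2" ]; then echo "bytecode reproducible for this input"; else echo "bytecode differs"; fi
```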

New SPDX team to incorporate build metadata in Software Bill of Materials

SPDX, the open standard for Software Bill of Materials (SBOM), is continuously developed by a number of teams and committees. However, SPDX has welcomed a new addition: a team dedicated to enhancing metadata about software builds, complementing reproducible builds in creating a more secure software supply chain. The “SPDX Builds Team” has been working throughout May to define the universal primitives shared by all build systems, including the “who, what, where and how” of builds:

  • Who: the identity of the person or organisation that controls the build infrastructure.

  • What: the inputs and outputs of a given build, combining metadata about the build’s configuration with an SBOM describing source code and dependencies.

  • Where: the software packages making up the build system, from build orchestration tools such as Woodpecker CI and Tekton to language-specific tools.

  • How: the invocation of a build, linking metadata of a build to the identity of the person or automation tool that initiated it.

The SPDX Builds Team expects to have a usable data model by September, ready for inclusion in the SPDX 3.0 standard. The team welcomes new contributors, inviting those interested in joining to introduce themselves on the SPDX-Tech mailing list.

Talks at Debian Reunion Hamburg

Some of the Reproducible Builds team (Holger Levsen, Mattia Rizzolo, Roland Clobus, Philip Rinn, etc.) met in real life at the Debian Reunion Hamburg (official homepage). There were several informal discussions amongst them, as well as two talks related to reproducible builds.

First, Holger Levsen gave a talk on the status of Reproducible Builds for bullseye and bookworm and beyond (WebM, 210MB):

Secondly, Roland Clobus gave a talk called Reproducible builds as applied to non-compiler output (WebM, 115MB):

Supply-chain security attacks

This was another bumper month for supply-chain attacks in package repositories. Early in the month, Lance R. Vick noticed that the maintainer of the NPM foreach package let their personal email domain expire, so they bought it and now “controls foreach on NPM and the 36,826 projects that depend on it”. Shortly afterwards, Drew DeVault published a related blog post titled When will we learn? that offers a brief timeline of major incidents in this area and, not uncontroversially, suggests that the “correct way to ship packages is with your distribution’s package manager”.


“Bootstrapping” is a process for building software tools progressively from a primitive compiler tool and source language up to a full Linux development environment with GCC, etc. This is important given the amount of trust we put in existing compiler binaries. This month, a bootstrappable mini-kernel was announced. Called boot2now, it comprises a series of compilers in the form of bootable machine images.

Google’s new Assured Open Source Software service

Google Cloud (the division responsible for the Google Compute Engine) announced a new Assured Open Source Software service. Noting the considerable 650% year-over-year increase in cyberattacks aimed at open source suppliers, the new service claims to enable “enterprise and public sector users of open source software to easily incorporate the same OSS packages that Google uses into their own developer workflows”. The announcement goes on to enumerate that packages curated by the new service would be:

  • Regularly scanned, analyzed, and fuzz-tested for vulnerabilities.

  • Have corresponding enriched metadata incorporating Container/Artifact Analysis data.

  • Are built with Cloud Build including evidence of verifiable SLSA-compliance

  • Are verifiably signed by Google.

  • Are distributed from an Artifact Registry secured and protected by Google.

(Full announcement)

A retrospective on the Rust programming language

Andrew “bunnie” Huang published a long blog post this month promising a “critical retrospective” on the Rust programming language. Amongst many acute observations about the evolution of the language’s syntax (etc.), the post begins to critique the language’s approach to supply chain security (“Rust Has A Limited View of Supply Chain Security”) and reproducibility (“You Can’t Reproduce Someone Else’s Rust Build”):

There’s some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix to this would be to re-configure our build system to run in some sort of a chroot environment or a virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say “almost anyone else” because this fix would be OS-dependent, so we’d be able to get reproducible builds under, for example, Linux, but it would not help Windows users where chroot environments are not a thing.

(Full post)

Reproducible Builds IRC meeting

The minutes and logs from our May 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on Tuesday 28th June at 15:00 UTC on #reproducible-builds on the OFTC network.

A new tool to improve supply-chain security in Arch Linux

kpcyrd published yet another interesting tool related to reproducibility. Writing about the tool in a recent blog post, kpcyrd mentions that although many PKGBUILDs provide authentication in the context of signed Git tags (i.e. the ability to “verify the Git tag was signed by one of the two trusted keys”), they do not support pinning, i.e. that “upstream could create a new signed Git tag with an identical name, and arbitrarily change the source code without the [maintainer] noticing”. Conversely, other PKGBUILDs support pinning but not authentication. The new tool, auth-tarball-from-git, fixes both problems, as outlined in kpcyrd’s original blog post.
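The distinction can be sketched with plain git (the repository and tag name below are hypothetical): authentication checks who signed a tag, while pinning records which exact object the tag pointed to, so a re-created tag with the same name is detected:

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m init
git tag v1.0   # a real PKGBUILD would also run `git verify-tag` on a signed tag
pinned=$(git rev-parse 'v1.0^{commit}')   # pinning: record the exact commit
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m change
git tag -f v1.0   # upstream silently re-creates the tag name
if [ "$(git rev-parse 'v1.0^{commit}')" = "$pinned" ]; then
    echo "tag unchanged"
else
    echo "pin mismatch detected"
fi
```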


diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 212, 213 and 214 to Debian unstable.

Chris also made the following changes:

  • New features:

    • Add support for extracting vmlinuz Linux kernel images. []
    • Support both python-argcomplete 1.x and 2.x. []
    • Strip sticky etc. from x.deb: sticky Debian binary package […]. []
    • Integrate test coverage with GitLab’s concept of artifacts. [][][]
  • Bug fixes:

    • Don’t mask differences in .zip or .jar central directory extra fields. []
    • Don’t show a binary comparison of .zip or .jar files if we have observed at least one nested difference. []
  • Codebase improvements:

    • Substantially update comment for our calls to zipinfo and zipinfo -v. []
    • Use assert_diff in test_zip over calling get_data with a separate assert. []
    • Don’t call re.compile and then call .sub on the result; just call re.sub directly. []
    • Clarify the comment around the difference between --usage and --help. []
  • Testsuite improvements:

    • Test --help and --usage. []
    • Test that --help includes the file formats. []

Vagrant Cascadian added an external tool reference xb-tool for GNU Guix  [] as well as updated the diffoscope package in GNU Guix itself [][][].

Distribution work

In Debian, 41 reviews of Debian packages were added, 85 were updated and 13 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated, including adding a new nondeterministic_ordering_in_deprecated_items_collected_by_doxygen toolchain issue [] as well as ones for mono_mastersummary_xml_files_inherit_filesystem_ordering [], extended_attributes_in_jar_file_created_without_manifest [] and apxs_captures_build_path [].

Vagrant Cascadian performed a rough check of the reproducibility of core package sets in GNU Guix, and in openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Reproducible builds website

Chris Lamb updated the main Reproducible Builds website and documentation in a number of small ways, but also prepared and published an interview with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix. [][][][]

In addition, Tim Jones added a link to the Talos Linux project [] and billchenchina fixed a dead link [].

Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Add support for detecting running kernels that require attention. []
    • Temporarily configure a host to support performing Debian builds for packages that lack .buildinfo files. []
    • Update generated webpages to clarify wishes for feedback. []
    • Update copyright years on various scripts. []
  • Mattia Rizzolo:

    • Provide a facility so that Debian Live image generation can copy a file remotely. [][][][]
  • Roland Clobus:

    • Add initial support for testing generated images with OpenQA. []

And finally, as usual, node maintenance was also performed by Holger Levsen [][].

Misc news

On our mailing list this month:


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

06 June, 2022 12:23PM

Thorsten Alteholz

My Debian Activities in May 2022

FTP master

This month I accepted 288 and rejected 45 packages. The overall number of packages that got accepted was 290.

Debian LTS

This was my ninety-fifth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

  • [DLA 3029-1] cups security update for one embargoed CVE
  • [DLA 3028-1] atftp security update for one CVE
  • [DLA 3030-1] zipios++ security update for one CVE
  • [DSA-5149-1] cups security update in Buster and Bullseye
  • [#1008577] bullseye-pu: golang-github-russellhaering-goxmldsig/1.1.0-1+deb11u1 debdiff was approved and package uploaded
  • [#1009077] bullseye-pu: minidlna/1.3.0+dfsg-2+deb11u1 debdiff was approved and package uploaded
  • [#1009250] bullseye-pu: fribidi/1.0.8-2+deb11u1 debdiff was approved and package uploaded

Further I continued working on libvirt and started to work on blender and ncurses.

I also continued to work on security support for golang packages.

Last but not least I did some days of frontdesk duties and took care of issues on security-master.

Debian ELTS

This month was the forty-seventh ELTS month.

During my allocated time I uploaded:

  • ELS-618-1 for openldap

I also moved/refactored the current ELTS documentation to a new repository.

Further I started to work on blender and ncurses in ELTS as well as in LTS.

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

The reason for the new upstream version of ipp-usb was a strange bug. Some HP printers claim to have fax support but fail to respond to the corresponding IPP queries. I understand that nowadays sending a fax is no longer a main focus of quality assurance. But if one tries to advertise as many features as possible, all these features should basically work and not get in the way of the things a printer should normally do.

The reason for the new upstream version of cups was a security issue. You now should have the latest version of cups installed (there have been updates in other Debian releases as well).

Debian Astro

This month I uploaded new upstream versions or improved packaging of:

Other stuff

This month I uploaded new packages:

06 June, 2022 10:51AM by alteholz

June 05, 2022

hackergotchi for Norbert Preining

Norbert Preining

Key validity extension of OBS repository

Yesterday, the signing key of my OBS repositories expired (I didn’t know they do that). I have now extended its validity.

To fix errors with access to the repos, please download the updated key from OBS or direct from my website.

05 June, 2022 11:14PM by Norbert Preining

June 04, 2022

hackergotchi for Junichi Uekawa

Junichi Uekawa

June came.

June came. I am still playing with rust these days. Learning more things every day.

04 June, 2022 08:42AM by Junichi Uekawa

June 03, 2022

François Marier

Using Gandi DNS for Let's Encrypt certbot verification

I had some problems getting the Gandi certbot plugin to work in Debian bullseye since the documentation appears to be outdated.

When running certbot renew --dry-run, I saw the following error message:

Plugin legacy name certbot-plugin-gandi:dns may be removed in a future version. Please use dns instead.

Thanks to an issue in another DNS plugin, I was able to easily update my configuration to the new naming convention.


Get an API key from Gandi and then put it in /etc/letsencrypt/gandi.ini:

# live dns v5 api key
dns_api_key = YOUR_API_KEY

before making it readable only by root:

chown root:root /etc/letsencrypt/gandi.ini
chmod 600 /etc/letsencrypt/gandi.ini

Then install the required package:

apt install python3-certbot-dns-gandi

Getting an initial certificate

To get an initial certificate using the Gandi plugin, simply use the following command:

certbot certonly -a dns --dns-credentials /etc/letsencrypt/gandi.ini -d example.fmarier.org

Setting up automatic renewal

If you have automatic renewals enabled, you'll want to ensure your /etc/letsencrypt/renewal/example.fmarier.org.conf file looks like this:

# renew_before_expiry = 30 days
version = 1.12.0
archive_dir = /etc/letsencrypt/archive/example.fmarier.org
cert = /etc/letsencrypt/live/example.fmarier.org/cert.pem
privkey = /etc/letsencrypt/live/example.fmarier.org/privkey.pem
chain = /etc/letsencrypt/live/example.fmarier.org/chain.pem
fullchain = /etc/letsencrypt/live/example.fmarier.org/fullchain.pem

account = abcdef
authenticator = dns
server = https://acme-v02.api.letsencrypt.org/directory
dns_credentials = /etc/letsencrypt/gandi.ini

03 June, 2022 04:40AM

June 01, 2022

Russ Allbery

Review: The Seeress of Kell

Review: The Seeress of Kell, by David Eddings

Series: The Malloreon #5
Publisher: Del Rey
Copyright: May 1991
Printing: May 1992
ISBN: 0-345-37759-1
Format: Mass market
Pages: 374

The Seeress of Kell is the conclusion of the five-book Malloreon series and a direct sequel to Sorceress of Darshiva. You do not want to begin the series here (or, to be honest, at all).

We have finally finished the relaxed tour of Mallorea, the second continent of Eddings's remarkably small two-continent world. The heroes have gathered all of their required companions and are headed for Kell, where the seeress Cyradis awaits. From there, they and the new Child of Dark must find their way to the Place Which Is No More for the final confrontation.

By "find," I mean please remain seated with your hands, arms, feet, and legs inside the vehicle. The protagonists have about as much to do with the conclusion of this series as the passengers of a roller coaster have control over its steering.

I am laughing at my younger self, who quite enjoyed this series (although as I recall found it a bit repetitive) and compared it favorably to the earlier Belgariad series. My memory kept telling me that the conclusion of the series was lots of fun. Reader, it was not. It was hilariously bad.

Both of Eddings's first two series, but particularly this one, take place in a fantasy world full of true prophecy. The conceit of the Malloreon in particular (this is a minor spoiler for the early books, but not one that I think interferes with enjoyment) is that there are two competing prophecies that agree on most events but are in conflict over a critical outcome. True prophecy creates an agency problem: why have protagonists if everything they do is fixed in prophecy? The normal way to avoid that is to make the prophecy sufficiently confusing and the mechanism by which it comes true sufficiently subtle that everyone has to act as if there is no prophecy, thus reducing the role of the prophecy to foreshadowing and a game the author plays with the reader.

What makes the Malloreon interesting (and I mean this sincerely) is that Eddings instead leans into the idea of a prophecy as an active agent leading the protagonists around by the nose. As a meta-story commentary on fantasy stories, this can be quite entertaining, and it helps that the prophecy appears as a likable character of sorts in the book. The trap that Eddings had mostly avoided before now is that this structure can make the choices of the protagonists entirely pointless. In The Seeress of Kell, he dives head-first into the trap and then pulls it shut behind him.

The worst part is Ce'Nedra, who once again spends an entire book either carping at Garion in ways that are supposed to be endearing (but aren't) or being actively useless. The low point is when she is manipulated into betraying the heroes, costing them a significant advantage. We're then told that, rather than being a horrific disaster, this is her important and vital role in the story, and indeed the whole reason why she was in the story at all. The heroes were too far ahead of the villains and were in danger of causing the prophecy to fail. At that point, one might reasonably ask why one is bothering reading a novel instead of a summary of the invented history that Eddings is going to tell whether his characters cooperate or not.

The whole middle section of the book is like this: nothing any of the characters do matters because everything is explicitly destined. That includes an extended series of interludes following the other main characters from the Belgariad, who are racing to catch up with the main party but who will turn out to have no role of significance whatsoever.

I wouldn't mind this as much if the prophecy were more active in the story, given that it's the actual protagonist. But it mostly disappears. Instead, the characters blunder around doing whatever seems like a good idea at the time, while Cyradis acts like a bizarre sort of referee with a Calvinball rule set and every random action turns out to be the fulfillment of prophecy in the most ham-handed possible way. Zandramas, meanwhile, is trying to break the prophecy, which would have been a moderately interesting story hook if anyone (Eddings included) thought she were potentially capable of doing so. Since no one truly believes there's any peril, this turns into a series of pointless battles the reader has no reason to care about.

All of this sets up what has been advertised since the start of the series as a decision between good and evil. Now, at the last minute, Eddings (through various character mouthpieces) tries to claim that the decision is not actually between good and evil, but is somehow beyond morality. No one believes this, including the narrator and the reader, making all of the philosophizing a tedious exercise in page-turning. To pull off a contention like that, the author has to lay some sort of foundation to allow the reader to see the supposed villain in multiple lights. Eddings does none of that, instead emphasizing how evil she is at every opportunity.

On top of that, this supposed free choice on which the entire universe rests and for which all of history was pointed depends on someone with astonishing conflicts of interest. While the book is going on about how carefully the prophecy is ensuring that everyone is in the right place at the right time so that no side has an advantage, one side is accruing an absurdly powerful advantage. And the characters don't even seem to realize it!

The less said about the climax, the better. Unsurprisingly, it was completely predictable.

Also, while I am complaining, I could never get past how this entire series starts off with and revolves around an incredibly traumatic and ongoing event that has no impact whatsoever on the person to whom the trauma happens. Other people are intermittently upset or sad, but not only is that person not harmed, they act, at the end of this book, as if the entire series had never happened.

There is one bright spot in this book, and ironically it's the one plot element that Eddings didn't make blatantly obvious in advance and therefore I don't want to spoil it. All I'll say is that one of the companions the heroes pick up along the way turns out to be my favorite character of the series, plays a significant role in the interpersonal dynamics between the heroes, and steals every scene that she's in by being more sensible than any of the other characters in the story. Her story, and backstory, is emotional and moving and is the best part of this book.

Otherwise, not only is the plot a mess and the story structure a failure, but this is also Eddings at his most sexist and socially conservative. There is an extended epilogue after the plot resolution that serves primarily as a showcase of stereotypes: baffled men having their habits and preferences rewritten by their wives, cast-iron gender roles inside marriage, cringeworthy jokes, and of course loads and loads of children because that obviously should be everyone's happily ever after. All of this happens to the characters rather than being planned or actively desired, continuing the theme of prophecy and lack of agency, although of course they're all happy about it (shown mostly via grumbling). One could write an entire academic paper on the tension between this series and the concept of consent.

There were bits of the Malloreon that I enjoyed, but they were generally in spite of the plot rather than because of it. I do like several of Eddings's characters, and in places I liked the lack of urgency and the sense of safety. But I think endings still have to deliver some twist or punch or, at the very least, some clear need for the protagonists to take an action other than stand in the right room at the right time. Eddings probably tried to supply that (I can make a few guesses about where), but it failed miserably for me, making this the worst book of the series.

Unless like me you're revisiting this out of curiosity for your teenage reading habits (and even then, consider not), avoid.

Rating: 3 out of 10

01 June, 2022 04:53AM

May 31, 2022

Paul Wise

FLOSS Activities May 2022


This month I didn't have any particular focus. I just worked on issues in my info bubble.




  • Spam: reported 1 Debian bug report and 41 Debian mailing list posts
  • Patches: reviewed gt patches
  • Debian packages: sponsored psi-notify
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved cppcheck-gui eta flpsed fluxbox p7zip-full pampi pyqso xboard
    • rejected p7zip (help output), openshot (photo of a physical library), clamav-daemon (movie cartoon character), aptitude (screenshot of random launchpad project), laditools (screenshot of tracker.d.o for src:hello), weboob-qt/chromium-browser/supercollider-vim ((NSFW) selfies), node-split (screenshot of screenshots site), libc6 (Chinese characters alongside a photo of man and bottle)


  • Debian servers: investigate etckeeper cron mail
  • Debian wiki: investigate account existence, approve accounts


  • Respond to queries from Debian users and contributors on the mailing lists and IRC


The gensim and libpst work was sponsored. All other work was done on a volunteer basis.

31 May, 2022 11:49PM

Russell Coker

May 30, 2022

Bits from Debian

Debian welcomes its new Outreachy interns

Outreachy logo

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy May 2022 - August 2022 round.

Israel Galadima and Michael Ikwuegbu will work on Improve yarn package manager integration with Debian, mentored by Akshay S Dinesh and Pirate Praveen.

Congratulations and welcome to Israel Galadima and Michael Ikwuegbu!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentoring interns and to outreach tasks, the administrative support of the Software Freedom Conservancy, and the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns by reading their blogs (they are syndicated in Planet Debian), and chat with us on the #debian-outreach IRC channel and mailing list.

30 May, 2022 10:00AM by Abhijith Pa

Gunnar Wolf

On to the next journey

Last Wednesday my father, Kurt Bernardo Wolf Bogner, took the steps towards his next journey, the last that would start in this life. I cannot put words to this… so just sharing this with the world will have to suffice. Goodbye to my teacher, my friend, the person I have always looked up to.

Some of his friends were able to put in words more than what I can come up with. If you can read Spanish, you can read the eulogy from the Science Academy of Morelos.

His last project, enjoyable by anybody who reads Spanish, is the book with his account of his youth travels — Asia, Africa and Europe, between 1966 and 1970. You can read it online. And I have many printed copies, in case you want one as well.

We will always remember you with love.

30 May, 2022 03:46AM

May 28, 2022

Steinar H. Gunderson

Speeding up Samba AD

One Weird Trick(TM) for speeding up a slow Samba Active Directory domain controller is seemingly to leave and rejoin the domain. (If you don't have another domain controller, you'll need to join one temporarily.) Not only can you switch to LMDB (which needs two fsyncs on commit instead of eight, which matters a lot, especially on non-SSDs, as the Kerberos authentication path has a blocking write to update account statistics), but you also get to regenerate the database, picking up the advantage of any new indexes since your last upgrade.
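
In concrete terms, the leave-and-rejoin dance might look roughly like the sketch below (hypothetical realm and user; the commands are printed rather than executed so they can be reviewed first). Note that --backend-store=mdb selects the LMDB backend and needs Samba 4.9 or later; check samba-tool(8) for your version before running anything:

```shell
# Sketch only: printed, not executed. Demote the slow DC, then rejoin the
# domain with the LMDB backend so the database is regenerated from scratch.
cat <<'EOF'
# On the DC being rebuilt (make sure another DC is up and serving the domain):
samba-tool domain demote -Uadministrator
# Rejoin as a domain controller, choosing LMDB ("mdb") as the backend store:
samba-tool domain join example.com DC -Uadministrator --backend-store=mdb
EOF
```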

Oh, and SSDs probably help a bunch, too.

28 May, 2022 07:14PM

May 25, 2022

Dirk Eddelbuettel

RcppAPT 0.0.9: Minor Update

A new version of the RcppAPT package with the R interface to the C++ library behind the awesome apt, apt-get, apt-cache, … commands and their cache powering Debian, Ubuntu and the like arrived on CRAN earlier today.

RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.

This release updates the code for the Apt 2.5.0 release, which makes a cleaner distinction between the public and private components of the API. We adjusted one access point to a pattern we already used and, while at it, simplified some of the transition away from the pre-Apt 2.0.0 interface. There are no new features. The NEWS entries follow.

Changes in version 0.0.9 (2022-05-25)

  • Simplified and standardized to only use public API

  • No longer tests and accommodates pre-Apt 2.0 API

Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 May, 2022 09:50PM

Emmanuel Kasper

One of the strangest bugs I have ever seen on Linux

Networking starts when you log in as root and stops when you log off!

The SELinux messages can be ignored, I guess, but you can clearly see the devices being activated (it's a Linux bridge).

If you have any explanation, I am curious to hear it.

25 May, 2022 08:58PM by Emmanuel Kasper (noreply@blogger.com)