May 18, 2022

Gunnar Wolf

I do have a full face

I have been a bearded subject since I was 18, back in 1994. Yes, during 1999-2000, I shaved for my military service, and I briefly tried the goatee look in 2008… Few people nowadays can imagine my face without a forest of hair.

But sometimes, life happens. And, unlike my good friend Bdale, I didn’t get Linus to do the honors… But, all in all, here I am:

Turns out, I have been suffering from quite bad skin infections for a couple of years already. Last Friday, I checked in to the hospital, with an ugly, swollen face (I won’t put you through that), and the hospital staff decided it was in my best interests to trim my beard. And then some more. And then shave me. I sat in the hospital for four days, getting soaked (medical term) with antibiotics and other stuff, got my prescriptions for the next few days, and… well, I really hope that’s the end of the infections. We shall see!

So, this is the result of the loving and caring work of three different nurses. Yes, not clean-shaven (I should not trim it further, as shaving blades are a risk of reinfection).

Anyway… I guess the bits of hair you see all over the place will not take too long to become a beard again, and even get somewhat respectable. But I thought some of you would like to see the real me™ 😉

PS- Thanks to all who have reached out with good wishes. All is fine!

18 May, 2022 02:52PM

Reproducible Builds

Supporter spotlight: Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project.

We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as recent ones about ARDC and the Google Open Source Security Team (GOSST). Today, however, we will be talking with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix.


Chris Lamb: Hi Jan, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself?

Jan: Thanks for the chat; it’s been a while! Well, I’ve always been trying to find something new and interesting that is just asking to be created but is mostly being overlooked. That’s how I came to work on GNU Guix and create GNU Mes to address the bootstrapping problem that we have in free software. It’s also why I have been working on releasing Dezyne, a programming language and set of tools to specify and formally verify concurrent software systems as free software.

Briefly summarised, compilers are often written in the language they are compiling. This creates a chicken-and-egg problem which leads users and distributors to rely on opaque, pre-built binaries of those compilers that they use to build newer versions of the compiler. To gain trust in our computing platforms, we need to be able to tell how each part was produced from source, and opaque binaries are a threat to user security and user freedom since they are not auditable. The goal of bootstrappability (and the bootstrappable.org project in particular) is to minimise the amount of these “bootstrap” binaries.

Anyway, after studying Physics at Eindhoven University of Technology (TU/e), I worked for digicash.com, a startup trying to create a digital and anonymous payment system – sadly, however, a traditional account-based system won. Separate to this, as there was no software (either free or proprietary) to automatically create beautiful music notation, together with Han-Wen Nienhuys, I created GNU LilyPond. Ten years ago, I took the initiative to co-found a democratic school in Eindhoven based on the principles of sociocracy. And last Christmas I finally went vegan, after being mostly vegetarian for about 20 years!


Chris: For those who have not heard of it before, what is GNU Guix? What are the key differences between Guix and other Linux distributions?

Jan: GNU Guix is both a package manager and a full-fledged GNU/Linux distribution. In both forms, it provides state-of-the-art package management features such as transactional upgrades and package roll-backs, hermetically-sealed build environments, unprivileged package management as well as per-user profiles. One obvious difference is that Guix forgoes the usual Filesystem Hierarchy Standard (ie. /usr, /lib, etc.), but there are other significant differences too, such as Guix being scriptable using Guile/Scheme, as well as Guix’s dedication and focus on free software.


Chris: How does GNU Guix relate to GNU Mes? Or, rather, what problem is Mes attempting to solve?

Jan: GNU Mes was created to address the security concerns that arise from bootstrapping an operating system such as Guix. Even if this process entirely involves free software (i.e. the source code is, at least, available), this commonly uses large and unauditable binary blobs.

Mes is a Scheme interpreter written in a simple subset of C and a C compiler written in Scheme, and it comes with a small, bootstrappable C library. Twice, the Mes bootstrap has halved the size of opaque binaries that were needed to bootstrap GNU Guix. These reductions were achieved by first replacing GNU Binutils, GNU GCC and the GNU C Library with Mes, and then replacing Unix utilities such as awk, bash, coreutils, grep, sed, etc., by Gash and Gash-Utils. The final goal of Mes is to help create a full-source bootstrap for any interested UNIX-like operating system.


Chris: What is the current status of Mes?

Jan: Mes supports all that is needed from ‘R5RS’ and GNU Guile to run MesCC with Nyacc, the C parser written for Guile, for 32-bit x86 and ARM. The next step for Mes would be to become more compatible with Guile, e.g., to have guile-module support and to support running Gash and Gash-Utils.

In working to create a full-source bootstrap, I have disregarded the kernel and Guix build system for now, but otherwise, all packages should be built from source, and obviously, no binary blobs should go in. We still need a Guile binary to execute some scripts, and it will take at least another one to two years to remove that binary. I’m using the 80/20 approach, cutting corners initially to get something working and useful early.

Another metric would be how many architectures we have. We have come quite a way with ARM: tinycc now works, but there are still problems with GCC and Glibc. RISC-V is coming, too, which could be another metric. Someone has looked into picking up NixOS this summer. “How many distros do anything about reproducibility or bootstrappability?” The bootstrappability community is so small that we don’t ‘need’ metrics, sadly. The number of bytes of binary seed is a nice metric, but running the whole thing on a full-fledged Linux system is tough to put into a metric. Also, it is worth noting that I’m developing on a modern Intel machine (ie. a platform with a management engine), which is another key component that doesn’t have metrics.


Chris: From your perspective as a Mes/Guix user and developer, what does ‘reproducibility’ mean to you? Are there any related projects?

Jan: From my perspective, I’m more into the problem of bootstrapping, and reproducibility is a prerequisite for bootstrappability. Reproducibility clearly takes a lot of effort to achieve, however. It’s relatively easy to install some Linux distribution and be happy, but if you look at communities that really care about security, they are investing in reproducibility and other ways of improving the security of their supply chain. Projects I believe are complementary to Guix and Mes include NixOS, Debian and — on the hardware side — the RISC-V platform shares many of our core principles and goals.


Chris: Well, what are these principles and goals?

Jan: Reproducibility and bootstrappability often feel like the “next step” in the frontier of free software. If you have all the sources and you can’t reproduce a binary, that just doesn’t “feel right” anymore. We should start to desire (and demand) transparent, elegant and auditable software stacks. To a certain extent, that’s always been a low-level intent since the beginning of free software, but something clearly got lost along the way.

On the other hand, if you look at the NPM or Rust ecosystems, we see a world where people directly install binaries. As they are not as supportive of copyleft as the rest of the free software community, you can see people in our area responding by doing more, so that what we have continues to remain free, and so that we don’t fall asleep and wake up in a couple of years to find, for example, Rust in the Linux kernel and (more importantly) big binary blobs required to use our systems. It’s an excellent time to advance right now, so we should get a foothold in and ensure we don’t lose any more.


Chris: What would be your ultimate reproducibility goal? And what would the key steps or milestones be to reach that?

Jan: The “ultimate” goal would be to have a system built with open hardware, with all software on it fully bootstrapped from its source. This bootstrap path should be easy to replicate and straightforward to inspect and audit. All fully reproducible, of course! In essence, we would have solved the supply chain security problem.

Our biggest challenge is ignorance. There is much unawareness about the importance of what we are doing. As it is rather technical and doesn’t really affect everyday computer use, that is not surprising. This unawareness can be a great force driving us in the opposite direction. Think of Rust being allowed in the Linux kernel, or Python being required to build a recent GNU C library (glibc). Also, the fact that companies like Google/Apple still want to play “us” vs “them”, not willing to to support GPL software. Not ready yet to truly support user freedom.

Take the infamous log4j bug — everyone is using “open source” these days, but nobody wants to take responsibility and help develop or nurture the community. And I say “community”, not “ecosystem”, because “ecosystem” is how it’s being approached right now: live and let live (or die), see what happens without taking any responsibility. We are growing and we are strong and we can do a lot… but if we have to work against those powers, it can become problematic. So, let’s spread our great message and get more people involved!


Chris: What has been your biggest win?

Jan: From a technical point of view, the “full-source” bootstrap has been our biggest win. A talk by Carl Dong at the 2019 Breaking Bitcoin conference stated that connecting Jeremiah Orians’ Stage0 project to Mes would be the “holy grail” of bootstrapping, and we recently managed to achieve just that: in other words, starting from hex0, a 357-byte binary, we can now build the entire Guix system.

This past year we have not made significant visible progress, however, as our funding was unfortunately not there. The Stage0 project has advanced in RISC-V. A month ago, though, I secured NLnet funding for another year, and thanks to NLnet, Ekaitz Zarraga and Timothy Sample will work on GNU Mes and the Guix bootstrap as well. Separate to this, the bootstrappable community has grown a lot from the two people it was six years ago: there are currently over 100 people in the #bootstrappable IRC channel, for example. The enlarged community is possibly an even more important win going forward.


Chris: How realistic is a 100% bootstrappable toolchain? And from someone who has been working in this area for a while, is “solving Trusting Trust” actually feasible in reality?

Jan: Two answers: Yes and no, it really depends on your definition. One great thing is that the whole Stage0 project can also run on the Knight virtual machine, a hardware platform that was designed, I think, in the 1970s. I believe we can and must do better than we are doing today, and that there’s a lot of value in it.

Jan: The core issue is not the trust; we can probably all trust each other. On the other hand, we don’t want to have to trust each other or even ourselves. I am not, personally, going to inspect my RISC-V laptop, and the people who create the hardware probably do not want to inspect the software. The answer comes back to being conscientious and doing what is right. Inserting GCC as a binary blob is not right. I think we can do better, and that’s what I’d like to do. The security angle is interesting, but I don’t like the paranoid part of that; I like the beauty of what we are creating together and stepwise improving on that.


Chris: Thanks for taking the time to talk to us today. If someone wanted to get in touch or learn more about GNU Guix or Mes, where might someone go?

Jan: Sure! First, check out:

I’m also on Twitter (@janneke_gnu) and on octodon.social (@janneke@octodon.social).


Chris: Thanks for taking the time to talk to us today.

Jan: No problem. :)




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

18 May, 2022 10:00AM

May 17, 2022

Louis-Philippe Véronneau

Clojure Team 2022 Sprint Report

This is the report for the Debian Clojure Team remote sprint that took place on May 13-14th.

Looking at my previous blog entries, this was my first Debian sprint since July 2020! Crazy how fast time flies...

Many thanks to those who participated, namely:

  • Rob Browning (rlb)
  • Elana Hashman (ehashman)
  • Jérôme Charaoui (lavamind)
  • Leandro Doctors (allentiak)
  • Louis-Philippe Véronneau (pollo)

Sadly, Utkarsh Gupta — although having planned on participating — ended up not being able to and worked on DebConf Bursary paperwork instead.

rlb

Rob mostly worked on creating a dh-clojure tool to help make packaging Clojure libraries easier.

At the moment, most of the packaging is done manually, by invoking build tools by hand. Having a tool to automate many of the steps required to build Clojure packages would go a long way in making them more uniform.

His work (although still very much a WIP) can be found here: https://salsa.debian.org/rlb/dh-clojure/

ehashman

Elana:

  • Finished the Java Team VCS migration to the Clojure Team namespace.
  • Worked on updating Leiningen to 2.9.8.
  • Proposed an upstream dependency update in Leiningen to match Debian's most recent version.
  • Gave pollo Owner access on the Clojure Team namespace and added lavamind as a Developer.
  • Uploaded Clojure 1.10.3-1.
  • Updated sjacket-clojure to version 0.1.1.1 and uploaded it to experimental.
  • Added build tests to spec-alpha-clojure.
  • Filed bug #1010995 for missing test dependency for Clojure.
  • Closed bugs #976151, #992735 and #992736.

lavamind

It was Jérôme's first time working on Clojure packages, and things went great! During the sprint, he:

  • Joined the Clojure Team on salsa.
  • Identified missing dependencies to update puppetdb to the 7.x release.
  • Learned how to package Clojure libraries in Debian.
  • Packaged murphy-clojure, truss-clojure and encore-clojure and uploaded them to NEW.
  • Began to package nippy-clojure.

allentiak

Leandro joined us on Saturday, since he couldn't get off work on Friday. He mostly continued working on replacing our in-house scripts for /usr/bin/clojure with upstream's, a task he had already started during GSoC 2021.

Sadly, none of us were familiar with Debian's mechanism for alternatives. If you (yes you, dear reader) are familiar with it, I'm sure he would warmly welcome feedback on his development branch.

pollo

As for me, I:

  • Fixed a classpath bug in core-async-clojure that was breaking other libraries.
  • Added meaningful autopkgtests to core-async-clojure.
  • Uploaded new versions of tools-analyzer-clojure and trapperkeeper-clojure with autopkgtests.
  • Updated pomegranate-clojure and nrepl-clojure to the latest upstream version and revamped the way they were packaged.
  • Assisted lavamind with Clojure packaging.

Overall, it was quite a productive sprint!

Thanks to Debian for sponsoring our food during the sprint. It was nice to be able to concentrate on fixing things instead of making food :)

Here's a bonus picture of the nice sushi platter I ended up getting for dinner on Saturday night:

Picture of a sushi platter

17 May, 2022 07:48PM by Louis-Philippe Véronneau

May 16, 2022

Matthew Garrett

Can we fix bearer tokens?

Last month I wrote about how bearer tokens are just awful, and a week later Github announced that someone had managed to exfiltrate bearer tokens from Heroku that gave them access to, well, a lot of Github repositories. This has inevitably resulted in a whole bunch of discussion about a number of things, but people seem to be largely ignoring the fundamental issue that maybe we just shouldn't have magical blobs that grant you access to basically everything even if you've copied them from a legitimate holder to Honest John's Totally Legitimate API Consumer.

To make it clearer what the problem is here, let's use an analogy. You have a safety deposit box. To gain access to it, you simply need to be able to open it with a key you were given. Anyone who turns up with the key can open the box and do whatever they want with the contents. Unfortunately, the key is extremely easy to copy - anyone who is able to get hold of your keyring for a moment is in a position to duplicate it, and then they have access to the box. Wouldn't it be better if something could be done to ensure that whoever showed up with a working key was someone who was actually authorised to have that key?

To achieve that we need some way to verify the identity of the person holding the key. In the physical world we have a range of ways to achieve this, from simply checking whether someone has a piece of ID that associates them with the safety deposit box all the way up to invasive biometric measurements that supposedly verify that they're definitely the same person. But computers don't have passports or fingerprints, so we need another way to identify them.

When you open a browser and try to connect to your bank, the bank's website provides a TLS certificate that lets your browser know that you're talking to your bank instead of someone pretending to be your bank. The spec allows this to be a bi-directional transaction - you can also prove your identity to the remote website. This is referred to as "mutual TLS", or mTLS, and a successful mTLS transaction ends up with both ends knowing who they're talking to, as long as they have a reason to trust the certificate they were presented with.

That's actually a pretty big constraint! We have a reasonable model for the server - it's something that's issued by a trusted third party and it's tied to the DNS name for the server in question. Clients don't tend to have stable DNS identity, and that makes the entire thing sort of awkward. But, thankfully, maybe we don't need to? We don't need the client to be able to prove its identity to arbitrary third party sites here - we just need the client to be able to prove it's a legitimate holder of whichever bearer token it's presenting to that site. And that's a much easier problem.

Here's the simple solution - clients generate a TLS cert. This can be self-signed, because all we want to do here is be able to verify whether the machine talking to us is the same one that had a token issued to it. The client contacts a service that's going to give it a bearer token. The service requests mTLS auth without being picky about the certificate that's presented. The service embeds a hash of that certificate in the token before handing it back to the client. Whenever the client presents that token to any other service, the service ensures that the mTLS cert the client presented matches the hash in the bearer token. Copy the token without copying the mTLS certificate and the token gets rejected. Hurrah hurrah hats for everyone.
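
To make that more concrete, here is a rough Python sketch of the binding. The helper names are mine and purely illustrative, and the token encoding is a toy; a real deployment would use a proper JWT library, take the peer certificate from the TLS layer, and follow RFC 8705's cnf / x5t#S256 convention rather than this encoding:

# Toy sketch of certificate-bound bearer tokens (hypothetical helpers, not any real API).
# The issuer hashes the client's mTLS certificate and embeds that hash in the token;
# every service later checks that the certificate presented over mTLS matches it.
import base64
import hashlib
import hmac
import json

ISSUER_SECRET = b"issuer-signing-key"  # stand-in for a real signing key

def cert_thumbprint(cert_der: bytes) -> str:
    # SHA-256 thumbprint of the client certificate (DER bytes).
    return base64.urlsafe_b64encode(hashlib.sha256(cert_der).digest()).decode()

def issue_token(subject: str, client_cert_der: bytes) -> str:
    # Issue a bearer token bound to the certificate the client presented over mTLS.
    payload = {"sub": subject, "cnf": {"x5t#S256": cert_thumbprint(client_cert_der)}}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(ISSUER_SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_request(token: str, presented_cert_der: bytes) -> bool:
    # Accept the token only if its signature checks out *and* the mTLS certificate
    # presented on this connection matches the thumbprint baked into the token.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(base64.urlsafe_b64decode(body))
    return payload["cnf"]["x5t#S256"] == cert_thumbprint(presented_cert_der)

Copy the token to another machine and verify_request fails, because that machine can't complete the mTLS handshake with the original certificate's private key.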

Well except for the obvious problem that if you're in a position to exfiltrate the bearer tokens you can probably just steal the client certificates and keys as well, and now you can pretend to be the original client and this is not adding much additional security. Fortunately pretty much everything we care about has the ability to store the private half of an asymmetric key in hardware (TPMs on Linux and Windows systems, the Secure Enclave on Macs and iPhones, either a piece of magical hardware or Trustzone on Android) in a way that avoids anyone being able to just steal the key.

How do we know that the key is actually in hardware? Here's the fun bit - it doesn't matter. If you're issuing a bearer token to a system then you're already asserting that the system is trusted. If the system is lying to you about whether or not the key it's presenting is hardware-backed then you've already lost. If it lied and the system is later compromised then sure all your apes get stolen, but maybe don't run systems that lie and avoid that situation as a result?

Anyway. This is covered in RFC 8705 so why aren't we all doing this already? From the client side, the largest generic issue is that TPMs are astonishingly slow in comparison to doing a TLS handshake on the CPU. RSA signing operations on TPMs can take around half a second, which doesn't sound too bad, except your browser is probably establishing multiple TLS connections to subdomains on the site it's connecting to and performance is going to tank. Fixing this involves doing whatever's necessary to convince the browser to pipe everything over a single TLS connection, and that's just not really where the web is right at the moment. Using EC keys instead helps a lot (~0.1 seconds per signature on modern TPMs), but it's still going to be a bottleneck.

The other problem, of course, is that ecosystem support for hardware-backed certificates is just awful. Windows lets you stick them into the standard platform certificate store, but the docs for this are hidden in a random PDF in a Github repo. Macs require you to do some weird bridging between the Secure Enclave API and the keychain API. Linux? Well, the standard answer is to do PKCS#11, and I have literally never met anybody who likes PKCS#11 and I have spent a bunch of time in standards meetings with the sort of people you might expect to like PKCS#11 and even they don't like it. It turns out that loading a bunch of random C bullshit that has strong feelings about function pointers into your security critical process is not necessarily something that is going to improve your quality of life, so instead you should use something like this and just have enough C to bridge to a language that isn't secretly plotting to kill your pets the moment you turn your back.

And, uh, obviously none of this matters at all unless people actually support it. Github has no support at all for validating the identity of whoever holds a bearer token. Most issuers of bearer tokens have no support for embedding holder identity into the token. This is not good! As of last week, all three of the big cloud providers support virtualised TPMs in their VMs - we should be running CI on systems that can do that, and tying any issued tokens to the VMs that are supposed to be making use of them.

So sure this isn't trivial. But it's also not impossible, and making this stuff work would improve the security of, well, everything. We literally have the technology to prevent attacks like Github suffered. What do we have to do to get people to actually start working on implementing that?

16 May, 2022 07:48AM

May 15, 2022

Dirk Eddelbuettel

RcppArmadillo 0.11.1.1.0 on CRAN: Updates

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has syntax deliberately close to Matlab and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 978 other packages on CRAN, downloaded over 24 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 469 times according to Google Scholar.

This release brings a first new upstream fix in the new release series 11.*. In particular, treatment of ill-conditioned matrices is further strengthened. We once again tested this very rigorously via three different RC releases, each of which got a full reverse-dependencies run (for which results are always logged here). A minor issue with old g++ compilers was found once 11.1.0 was tagged, so this upstream release is now 11.1.1. Also fixed is an OpenMP setup issue where Justin Silverman noticed that we did not propagate the -fopenmp setting correctly.

The full set of changes (since the last CRAN release 0.11.0.0.0) follows.

Changes in RcppArmadillo version 0.11.1.1.0 (2022-05-15)

  • Upgraded to Armadillo release 11.1.1 (Angry Kitchen Appliance)

    • added inv_opts::no_ugly option to inv() and inv_sympd() to disallow inverses of poorly conditioned matrices

    • more efficient handling of rank-deficient matrices via inv_opts::allow_approx option in inv() and inv_sympd()

    • better detection of rank deficient matrices by solve()

    • faster handling of symmetric and diagonal matrices by cond()

  • The configure script now propagates the 'found' case again, thanks to Justin Silverman for the heads-up and suggested fix (Dirk and Justin in #376 and #377, fixing #375).

Changes in RcppArmadillo version 0.11.0.1.0 (2022-04-14)

  • Upgraded to Armadillo release 11.0.1 (Creme Brulee)

    • fix miscompilation of inv() and inv_sympd() functions when using inv_opts::allow_approx and inv_opts::tiny options

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 May, 2022 09:21PM

Debian Community News

Overpaid lawyer & Debian miss WIPO deadline

Debian gangsters promise to continue their vendetta despite missing WIPO's deadline.

We have all seen those horrendously long discussion threads that take place in Debian from time to time. We wonder if the Debian trademark team, Brian Gupta and Taowa Munene-Tardif (Rosetwig), have involved the lawyer in one of those endless threads. The lawyer's phone may well be chiming with the sound of a cash register each time a new Debian email comes in.

Nonetheless, apart from the budget blowout, the other problem with these endless email discussions from indecisive volunteers is that it is impossible to meet deadlines.

Subject: Re: (LARC) D2022-1524 Request for amendment and/or clarification
Date: Sun, 15 May 2022 13:37:55 +0000
From: Jonathan Cohen <jonathancohen@charlesfussell.com>
To: Disputes, Domain <domain.disputes@wipo.int>

Dear Lucie A.

Our thanks for your 9 May email which we acknowledge receipt of. In light of this morning's deemed withdrawal of it without prejudice to the Complainants' right to submit a different Complaint UDRP Rule 4 (d), we will shortly be filing that new Complaint including recently received examples of bad faith in the operation of the confirmed Respondent's disputed domain. In addition we request that the Center uses the already submitted fee for the purposes of re-submitting the Complaint under Rule 5 (c) of the WIPO Supplemental Rules given this notification of intention to re-submit.

Yours sincerely,

Jonathan Cohen.
Partner
CHARLES FUSSELL & Co LLP
8 Buckingham Street
Strand
London
WC2N 6BX
DDI +442078399718
Office +44 (0) 20 7839 9710
Mobile +44 (0)7903 061 182
Fax +44 (0)2078 399 711
www.charlesfussell.com

This email is intended solely for the addressee. It is confidential and may be subject to legal professional privilege. If you have received this email in error please delete it immediately and do not copy it or any information contained within it. Please notify the sender by email or telephone +44 (0) 20 7839 9710.

Charles Fussell & Co LLP is a limited liability partnership registered in England & Wales (with registered number OC353144) and is authorised and regulated by the Solicitors Regulation Authority. A list of members' names is open to inspection at our registered office, 8 Buckingham Street, Strand, London WC2N 6BX.

15 May, 2022 06:00PM

May 14, 2022

Daniel Pocock

Subscribing to iCalendar feeds with QR codes

Every day I receive emails containing invitations to events. Whether they are local community activities or international conferences, they share a common problem that is easily avoidable. Every recipient of the invitation has to manually copy and paste the event to their calendar.

If it is a recurring event then it can be even more challenging. If the participant sets up a recurring event in their own software and stops reading the emails then it will not always be obvious to them when one instance of the meeting is skipped or in an unusual venue.

iCalendar feeds provide a simple solution to all these problems. We've tried to use this solution in a few groups with mixed results.

In one Toastmasters group, we put the webcal:// URL of the feed into a QR code on the form for new members and guests. Any time a guest came to the meeting we would suggest they scan the code to include future meetings in their calendar. For an event that operates around real-world meetings, having somebody follow the calendar appears to be even more critical than having somebody follow the email list.

We had mixed results with this. Everybody liked the idea and everybody was willing to scan the QR code. I think that almost everybody who attended one of the meetings already had an app for QR codes on their phone. However, after scanning the code, different phones behaved differently.

Some phones would try to send the URL to the web browser instead of the calendar. The browser would then make a one-off download of the calendar file and import the events to the calendar once. There would be no subscription to poll the URL and detect new events or changes to existing events.

Based on these experiences, I felt the best thing to do was to create a landing page (download here) whose URL can be embedded in the QR code. The landing page, served by a short PHP script, can look at the contents of the HTTP Accept header and determine if the page is being accessed from a browser or a real calendar app. In the former case, it will show people instructions and in the latter case, it serves the calendar feed contents.

Here are some examples of the Accept header, the script will serve the feed contents whenever it sees text/calendar:

Accept: text/calendar,text/plain;q=0.8,*/*;q=0.5
User-Agent: Mozilla/5.0 (X11; Linux) Gecko/Thunderbird Lightning

Accept: text/calendar, */*;q=0.9
User-Agent: ICSx5/2.0.2 (ical4j/ okhttp/ Android/)

Browser:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
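
The PHP source isn't reproduced here, but the dispatch it performs is small enough to sketch. Here is a rough Python/WSGI equivalent (the feed path and the instructions page are placeholders of mine, not taken from the actual script):

# Sketch of the landing-page dispatch described above (the real script is PHP;
# this WSGI stand-in only shows the Accept-header logic).
FEED_PATH = "/var/lib/calendars/club.ics"  # hypothetical location of the .ics feed
INSTRUCTIONS = b"<html><body>Please subscribe with a calendar app...</body></html>"

def app(environ, start_response):
    accept = environ.get("HTTP_ACCEPT", "")
    if "text/calendar" in accept:
        # A real calendar client (Thunderbird, ICSx5, ...): serve the feed itself,
        # so the subscription keeps polling this URL for new or changed events.
        with open(FEED_PATH, "rb") as feed:
            body = feed.read()
        start_response("200 OK", [("Content-Type", "text/calendar")])
        return [body]
    # A browser: show human-readable instructions and the webcals:// link instead.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [INSTRUCTIONS]

The PHP original presumably does the same thing by looking at $_SERVER['HTTP_ACCEPT'].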

I had to contemplate the question of whether the QR code should include the webcals:// prefix or the https:// prefix. I decided to use the latter as it is more reliable. Nonetheless, a webcals:// URL is in the HTML page, if somebody clicks it and if they have suitable calendar software installed then there is a good chance they will be subscribed. Even though this takes two steps (scan the code, click the webcals:// URL), a higher percentage of users end up subscribed to the calendar.
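
As an aside, generating the QR code itself is the easy part; for example, with the third-party Python qrcode package (the URL below is a placeholder, not the club's real landing page):

# Sketch: encode the landing page URL in a QR code image.
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

img = qrcode.make("https://example.org/calendar/landing.php")  # placeholder URL
img.save("subscribe-qr.png")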

The topic of landing pages is now available for discussion in the ICSx5 repository.

Here is a QR code for subscribing to the EPFL-UNIL Toastmasters club calendar, please test it and share feedback about any issues with the landing page:

EPFL-UNIL Toastmasters Club calendar feed

Download the landing page PHP source code

The source code is available on Gitlab; it is easily customizable for any iCalendar URL.

14 May, 2022 07:30AM

May 13, 2022

Antoine Beaupré

NVMe/SSD disk failure

Yesterday, my workstation (curie) was hung when I came in the office. After a "skinny elephant", the box rebooted, but it couldn't find the primary disk (in the BIOS). Instead, it booted on the secondary HDD drive, still running an old Fedora 27 install which somehow survived to this day, possibly because BTRFS is incomprehensible.

Somehow, I blindly accepted the Fedora prompt asking me to upgrade to Fedora 28, not realizing that:

  1. Fedora is now at release 36, not 28
  2. major upgrades take about an hour...
  3. ... and happen at boot time, blocking the entire machine (I'll remember this next time I laugh at Windows and Mac OS users stuck on updates on boot)
  4. you can't skip more than one major upgrade

Which means that upgrading to the latest release would take over 4 hours. Thankfully, it's mostly automated and seems to work pretty well (which is not exactly the case for Debian). It still seems like a lot of wasted time -- it would probably be better to just reinstall the machine at this point -- and not what I had planned to do that morning at all.

In any case, after waiting all that time, the machine booted (in Fedora) again, and now it could detect the SSD disk. The BIOS could find the disk too, so after I reinstalled grub (from Fedora) and fixed the boot order, it rebooted, but secureboot failed, so I turned that off (!?), and I was back in Debian.

I did an emergency backup with ddrescue, from the running system, which probably doesn't really work as a backup (because the filesystem is likely to be corrupt), but it was fast enough (20 minutes) and gave me some peace of mind. My offsite backups have been down for a while and since I treat my workstations as "cattle" (not "pets"), I don't have a solid recovery scenario for those situations other than "just reinstall and run Puppet", which takes a while.

Now I'm wondering what the next step is: probably replace the disk anyways (the new one is bigger: 1TB instead of 500GB), or keep the new one as a hot backup somehow. Too bad I don't have a snapshotting filesystem on there... (Technically, I have LVM, but LVM snapshots are heavy and slow, and can't atomically cover the entire machine.)

It's kind of scary how this thing failed: totally dropped off the bus, just not in the BIOS at all. I prefer the way spinning rust fails: clickety sounds, tons of warnings beforehand, partial recovery possible. With this new flashy junk, you just lose everything all at once. Not fun.

13 May, 2022 08:19PM

BTRFS notes

I'm not a fan of BTRFS. This page serves as a reminder of why, but also a cheat sheet to figure out basic tasks in a BTRFS environment because those are not obvious to me, even after repeatedly having to deal with them.

Content warning: there might be mentions of ZFS.

Stability concerns

I'm worried about BTRFS stability, which has been historically ... changing. RAID-5 and RAID-6 are still marked unstable, for example. It's kind of a lucky guess whether your current kernel will behave properly with your planned workload. For example, in Linux 4.9, RAID-1 and RAID-10 were marked as "mostly OK" with a note that says:

Needs to be able to create two copies always. Can get stuck in irreversible read-only mode if only one copy can be made.

Even as of now, RAID-1 and RAID-10 has this note:

The simple redundancy RAID levels utilize different mirrors in a way that does not achieve the maximum performance. The logic can be improved so the reads will spread over the mirrors evenly or based on device congestion.

Granted, that's not a stability concern anymore, just performance. A reviewer of a draft of this article actually claimed that BTRFS only reads from one of the drives, which hopefully is inaccurate, but goes to show how confusing all this is.

There are other warnings in the Debian wiki that are quite scary. Even the legendary Arch wiki has a warning on top of their BTRFS page, still.

Even if those issues are now fixed, it can be hard to tell when they were fixed. There is a changelog by feature but it explicitly warns that it doesn't know "which kernel version it is considered mature enough for production use", so it's also useless for this.

It would have been much better if BTRFS was released into the world only when those bugs were being completely fixed. Or that, at least, features were announced when they were stable, not just "we merged to mainline, good luck". Even now, we get mixed messages even in the official BTRFS documentation which says "The Btrfs code base is stable" (main page) while at the same time clearly stating unstable parts in the status page (currently RAID56).

There are much harsher BTRFS critics than me out there so I will stop here, but let's just say that I feel a little uncomfortable trusting server data with full RAID arrays to BTRFS. But surely, for a workstation, things should just work smoothly... Right? Well, let's see the snags I hit.

My BTRFS test setup

Before I go any further, I should probably clarify how I am testing BTRFS in the first place.

The reason I tried BTRFS is that I was ... let's just say "strongly encouraged" by the LWN editors to install Fedora for the terminal emulators series. That, in turn, meant the setup was done with BTRFS, because that was somewhat the default in Fedora 27 (or did I want to experiment? I don't remember, it's been too long already).

So Fedora was setup on my 1TB HDD and, with encryption, the partition table looks like this:

NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  /boot/efi
├─sda2                   8:2    0     1G  0 part  /boot
├─sda3                   8:3    0   7,8G  0 part  
│ └─fedora_swap        253:5    0   7.8G  0 crypt [SWAP]
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /

(This might not entirely be accurate: I rebuilt this from the Debian side of things.)

This is pretty straightforward, except for the swap partition: normally, I just treat swap like any other logical volume and create it in a logical volume. This is now just speculation, but I bet it was setup this way because "swap" support was only added in BTRFS 5.0.

I fully expect BTRFS experts to yell at me now because this is an old setup and BTRFS is so much better now, but that's exactly the point here. That setup is not that old (2018? old? really?), and migrating to a new partition scheme isn't exactly practical right now. But let's move on to more practical considerations.

No builtin encryption

BTRFS aims at replacing the entire mdadm, LVM, and ext4 stack with a single entity, and adding new features like deduplication, checksums and so on.

Yet there is one feature it is critically missing: encryption. See, my typical stack is actually mdadm, LUKS, and then LVM and ext4. This is convenient because I have only a single volume to decrypt.

If I were to use BTRFS on servers, I'd need to have one LUKS volume per-disk. For a simple RAID-1 array, that's not too bad: one extra key. But for large RAID-10 arrays, this gets really unwieldy.

The obvious BTRFS alternative, ZFS, supports encryption out of the box and mixes it above the disks so you only have one passphrase to enter. The main downside of ZFS encryption is that it happens above the "pool" level so you can typically see filesystem names (and possibly snapshots, depending on how it is built), which is not the case with a more traditional stack.

Subvolumes, filesystems, and devices

I find BTRFS's architecture to be utterly confusing. In the traditional LVM stack (which is itself kind of confusing if you're new to that stuff), you have those layers:

  • disks: let's say /dev/nvme0n1 and nvme1n1
  • RAID arrays with mdadm: let's say the above disks are joined in a RAID-1 array in /dev/md1
  • volume groups or VG with LVM: the above RAID device (technically a "physical volume" or PV) is assigned into a VG, let's call it vg_tbbuild05 (multiple PVs can be added to a single VG which is why there is that abstraction)
  • LVM logical volumes: out of that volume group actually "virtual partitions" or "logical volumes" are created, that is where your filesystem lives
  • filesystem, typically with ext4: that's your normal filesystem, which treats the logical volume as just another block device

A typical server setup would look like this:

NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
nvme0n1                   259:0    0   1.7T  0 disk  
├─nvme0n1p1               259:1    0     8M  0 part  
├─nvme0n1p2               259:2    0   512M  0 part  
│ └─md0                     9:0    0   511M  0 raid1 /boot
├─nvme0n1p3               259:3    0   1.7T  0 part  
│ └─md1                     9:1    0   1.7T  0 raid1 
│   └─crypt_dev_md1       253:0    0   1.7T  0 crypt 
│     ├─vg_tbbuild05-root 253:1    0    30G  0 lvm   /
│     ├─vg_tbbuild05-swap 253:2    0 125.7G  0 lvm   [SWAP]
│     └─vg_tbbuild05-srv  253:3    0   1.5T  0 lvm   /srv
└─nvme0n1p4               259:4    0     1M  0 part

I stripped the other nvme1n1 disk because it's basically the same.

Now, if we look at my BTRFS-enabled workstation, which doesn't even have RAID, we have the following:

  • disk: /dev/sda with, again, /dev/sda4 being where BTRFS lives
  • filesystem: fedora_crypt, which is, confusingly, kind of like a volume group. it's where everything lives. i think.
  • subvolumes: home, root, /, etc. those are actually the things that get mounted. you'd think you'd mount a filesystem, but no, you mount a subvolume. that is backwards.

It looks something like this to lsblk:

NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  /boot/efi
├─sda2                   8:2    0     1G  0 part  /boot
├─sda3                   8:3    0   7,8G  0 part  [SWAP]
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /srv

Notice how we don't see all the BTRFS volumes here? Maybe it's because I'm mounting this from the Debian side, but lsblk definitely gets confused here. I frankly don't quite understand what's going on, even after repeatedly looking around the rather dismal documentation. But that's what I gather from the following commands:

root@curie:/home/anarcat# btrfs filesystem show
Label: 'fedora'  uuid: 5abb9def-c725-44ef-a45e-d72657803f37
    Total devices 1 FS bytes used 883.29GiB
    devid    1 size 922.47GiB used 916.47GiB path /dev/mapper/fedora_crypt

root@curie:/home/anarcat# btrfs subvolume list /srv
ID 257 gen 108092 top level 5 path home
ID 258 gen 108094 top level 5 path root
ID 263 gen 108020 top level 258 path root/var/lib/machines

I only got to that point through trial and error. Notice how I use an existing mountpoint to list the related subvolumes. If I try to use the filesystem path, the one that's listed in filesystem show, I fail:

root@curie:/home/anarcat# btrfs subvolume list /dev/mapper/fedora_crypt 
ERROR: not a btrfs filesystem: /dev/mapper/fedora_crypt
ERROR: can't access '/dev/mapper/fedora_crypt'

Maybe I just need to use the label? Nope:

root@curie:/home/anarcat# btrfs subvolume list fedora
ERROR: cannot access 'fedora': No such file or directory
ERROR: can't access 'fedora'

This is really confusing. I don't even know if I understand this right, and I've been staring at this all afternoon. Hopefully, the lazyweb will correct me eventually.

(As an aside, why are they called "subvolumes"? If something is a "sub" of "something else", that "something else" must exist, right? But no, BTRFS doesn't have "volumes", it only has "subvolumes". Go figure. Presumably the filesystem still holds "files", though; at least empirically it doesn't seem like it lost anything so far.)

In any case, at least I can refer to this section in the future, the next time I fumble around the btrfs commandline, as I surely will. I will possibly even update this section as I get better at it, or based on my reader's judicious feedback.

Mounting BTRFS subvolumes

So how did I even get to that point? I have this in my /etc/fstab, on the Debian side of things:

UUID=5abb9def-c725-44ef-a45e-d72657803f37   /srv    btrfs  defaults 0   2

This thankfully ignores all the subvolume nonsense because it relies on the UUID. mount tells me that's actually the "root" (? /?) subvolume:

root@curie:/home/anarcat# mount | grep /srv
/dev/mapper/fedora_crypt on /srv type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)

Let's see if I can mount the other volumes I have on there. Remember that subvolume list showed I had home, root, and var/lib/machines. Let's try root:

mount -o subvol=root /dev/mapper/fedora_crypt /mnt

Interestingly, root is not the same as /, it's a different subvolume! It seems to be the Fedora root (/, really) filesystem. No idea what is happening here. I also have a home subvolume, let's mount it too, for good measure:

mount -o subvol=home /dev/mapper/fedora_crypt /mnt/home

Note that lsblk doesn't notice those two new mountpoints, and that's normal: it only lists block devices and subvolumes (rather inconveniently, I'd say) do not show up as devices:

root@curie:/home/anarcat# lsblk 
NAME                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                      8:0    0 931,5G  0 disk  
├─sda1                   8:1    0   200M  0 part  
├─sda2                   8:2    0     1G  0 part  
├─sda3                   8:3    0   7,8G  0 part  
└─sda4                   8:4    0 922,5G  0 part  
  └─fedora_crypt       253:4    0 922,5G  0 crypt /srv

This is really, really confusing. Maybe I did something wrong in the setup. Maybe it's because I'm mounting it from outside Fedora. Either way, it just doesn't feel right.
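
(For the record, those subvolume mounts can also be made persistent with the subvol= option in /etc/fstab. A sketch based on the same UUID as above, with mountpoints of my own choosing:

UUID=5abb9def-c725-44ef-a45e-d72657803f37   /mnt        btrfs  subvol=root 0   2
UUID=5abb9def-c725-44ef-a45e-d72657803f37   /mnt/home   btrfs  subvol=home 0   2

That at least avoids retyping the mount -o subvol=... incantations after a reboot.)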

No disk usage per volume

If you want to see what's taking up space in one of those subvolumes, tough luck:

root@curie:/home/anarcat# df -h  /srv /mnt /mnt/home
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/fedora_crypt  923G  886G   31G  97% /srv
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt
/dev/mapper/fedora_crypt  923G  886G   31G  97% /mnt/home

(Notice, in passing, that it looks like the same filesystem is mounted in different places. In that sense, you'd expect /srv and /mnt (and /mnt/home?!) to be exactly the same, but no: they are entirely different directory structures, which I will not call "filesystems" here because everyone's head will explode in sparks of confusion.)

Yes, disk space is shared (that's the Size and Avail columns, makes sense). But nope, no cookie for you: they all have the same Used columns, so you need to actually walk the entire filesystem to figure out what each disk takes.

(For future reference, that's basically:

root@curie:/home/anarcat# time du -schx /mnt/home /mnt /srv
124M    /mnt/home
7.5G    /mnt
875G    /srv
883G    total

real    2m49.080s
user    0m3.664s
sys 0m19.013s

And yes, that was painfully slow.)

ZFS actually has some oddities in that regard, but at least it tells me how much disk each volume (and snapshot) takes:

root@tubman:~# time df -t zfs -h
Filesystem         Size  Used Avail Use% Mounted on
rpool/ROOT/debian  3.5T  1.4G  3.5T   1% /
rpool/var/tmp      3.5T  384K  3.5T   1% /var/tmp
rpool/var/spool    3.5T  256K  3.5T   1% /var/spool
rpool/var/log      3.5T  2.0G  3.5T   1% /var/log
rpool/home/root    3.5T  2.2G  3.5T   1% /root
rpool/home         3.5T  256K  3.5T   1% /home
rpool/srv          3.5T   80G  3.5T   3% /srv
rpool/var/cache    3.5T  114M  3.5T   1% /var/cache
bpool/BOOT/debian  571M   90M  481M  16% /boot

real    0m0.003s
user    0m0.002s
sys 0m0.000s

That's 56360 times faster, by the way.

But yes, that's not fair: those in the know will know there's a different command to do what df does with BTRFS filesystems, the btrfs filesystem usage command:

root@curie:/home/anarcat# time btrfs filesystem usage /srv
Overall:
    Device size:         922.47GiB
    Device allocated:        916.47GiB
    Device unallocated:        6.00GiB
    Device missing:          0.00B
    Used:            884.97GiB
    Free (estimated):         30.84GiB  (min: 27.84GiB)
    Free (statfs, df):        30.84GiB
    Data ratio:               1.00
    Metadata ratio:           2.00
    Global reserve:      512.00MiB  (used: 0.00B)
    Multiple profiles:              no

Data,single: Size:906.45GiB, Used:881.61GiB (97.26%)
   /dev/mapper/fedora_crypt  906.45GiB

Metadata,DUP: Size:5.00GiB, Used:1.68GiB (33.58%)
   /dev/mapper/fedora_crypt   10.00GiB

System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/mapper/fedora_crypt   16.00MiB

Unallocated:
   /dev/mapper/fedora_crypt    6.00GiB

real    0m0,004s
user    0m0,000s
sys 0m0,004s

Almost as fast as ZFS's df! Good job. But wait. That doesn't actually tell me usage per subvolume. Notice it's filesystem usage, not subvolume usage, which unhelpfully refuses to exist. That command only shows that one "filesystem's" internal statistics, which are pretty opaque. You can also appreciate that it's wasting 6GB of "unallocated" disk space there: I probably did something Very Wrong and should be punished by Hacker News. I also wonder why it has 1.68GB of "metadata" used...

At this point, I just really want to throw that thing out of the window and restart from scratch. I don't really feel like learning the BTRFS internals, as they seem oblique and completely bizarre to me. It feels a little like the state of PHP now: it's actually pretty solid, but built upon so many layers of cruft that I still feel it corrupts my brain every time I have to deal with it (needle or haystack first? anyone?)...

Conclusion

I find BTRFS utterly confusing and I'm worried about its reliability. I think a lot of work is needed on usability and coherence before I even consider running this anywhere else than a lab, and that's really too bad, because there are really nice features in BTRFS that would greatly help my workflow. (I want to use filesystem snapshots as high-performance, high frequency backups.)

So now I'm experimenting with OpenZFS. It's so much simpler, just works, and it's rock solid. After this 8 minute read, I had a good understanding of how ZFS worked. Here's the 30 seconds overview:

  • vdev: a RAID array
  • zpool: a volume group of vdevs
  • datasets: normal filesystems (or block device, if you want to use another filesystem on top of ZFS)

There's also other special volumes like caches and logs that you can (really easily, compared to LVM caching) use to tweak your setup. You might also want to look at recordsize or ashift to tweak the filesystem to better fit your workload (or to deal with drives lying about their sector size, I'm looking at you Samsung), but that's it.

Running ZFS on Linux currently involves building kernel modules from scratch on every host, which I think is pretty bad. But I was able to setup a ZFS-only server using this excellent documentation without too much problem.

I'm hoping some day the copyright issues are resolved and we can at least ship binary packages, but the politics (e.g. convincing Debian that it is the right thing to do) and the logistics (e.g. DKMS auto-builders? is that even a thing? how about signed DKMS packages? fun-fun-fun!) seem really impractical. Who knows, maybe hell will freeze over (again) and Oracle will fix the CDDL. I personally think that we should just completely ignore this problem (which wasn't even supposed to be a problem) and ship binary packages directly, but I'm a pragmatist and do not always fit well with the free software fundamentalists.

All of this to say that, short term, we don't have a reliable, advanced filesystem/logical disk manager in Linux. And that's really too bad.

13 May, 2022 08:04PM

Bits from Debian

New Debian Developers and Maintainers (March and April 2022)

The following contributors got their Debian Developer accounts in the last two months:

  • Henry-Nicolas Tourneur (hntourne)
  • Nick Black (dank)

The following contributors were added as Debian Maintainers in the last two months:

  • Jan Mojžíš
  • Philip Wyett
  • Thomas Ward
  • Fabio Fantoni
  • Mohammed Bilal
  • Guilherme de Paula Xavier Segundo

Congratulations!

13 May, 2022 02:57PM by Jean-Pierre Giraud

Arturo Borrero González

Toolforge GridEngine Debian 10 Buster migration

Toolforge logo, a circle with an anvil in the middle

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

In accordance with our operating system upgrade policy, we should migrate our servers to Debian Buster.

As discussed in the previous post, one of the most important and successful services provided by the Wikimedia Cloud Services team at the Wikimedia Foundation is Toolforge. Toolforge is a platform that allows users and developers to run and use a variety of applications with the ultimate goal of helping the Wikimedia mission from the technical side.

As you may know already, all Wikimedia Foundation servers are powered by Debian, and this includes Toolforge and Cloud VPS. The Debian Project mostly follows a two year cadence for releases, and Toolforge has been using Debian Stretch for some years now, which nowadays is considered “old-old-stable”. In accordance with our operating system upgrade policy, we should migrate our servers to Debian Buster.

Toolforge’s two different backend engines, Kubernetes and Grid Engine, are impacted by this upgrade policy. Grid Engine is notably tied to the underlying Debian release, and the execution environment offered to tools running in the grid is limited to what the Debian archive contains for a given release. This is unlike in Kubernetes, where tool developers can leverage container images and decouple the runtime environment selection from the base operating system.

Since the Toolforge grid's original conception, we have been doing the same operation over and over again:

  • Prepare a parallel grid deployment with the new operating system.
  • Ask our users (tool developers) to evaluate a newer version of their runtime and programming languages.
  • Introduce a migration window and coordinate a quick migration.
  • Finally, drop the old operating system from grid servers.

We’ve done this type of migration several times before. The last few ones were Ubuntu Precise to Ubuntu Trusty and Ubuntu Trusty to Debian Stretch. But this time around we had some special angles to consider.

So, you are upgrading the Debian release

  • You are migrating to Debian 11 Bullseye, no?
  • No, we’re migrating to Debian 10 Buster
  • Wait, but Debian 11 Bullseye exists!
  • Yes, we know! Let me explain…

We’re migrating the grid from Debian 9 Stretch to Debian 10 Buster, but perhaps we should be migrating from Debian 9 Stretch to Debian 11 Bullseye directly. This is a legitimate concern, and we discussed it in September 2021.

A timeline showing Debian versions since 2014

Back then, our reasoning was that skipping to Debian 11 Bullseye would be more difficult for our users, especially because of the greater jump in version numbers for the underlying runtimes. Additionally, all the migration work started before Debian 11 Bullseye was released. Our original intention was for the migration to be completed before the release. For a couple of reasons the project was delayed, and when it was time to restart the project we decided to continue with the original idea.

We had some work done to get Debian 10 Buster working correctly with the grid, and supporting Debian 11 Bullseye would require an additional effort. We didn’t even check if Grid Engine could be installed in the latest Debian release. For the grid, in general, the engineering effort to do a N+1 upgrade is lower than doing a N+2 upgrade. If we had tried a N+2 upgrade directly, things would have been much slower and difficult for us, and for our users.

In that sense, our conclusion was to not skip Debian 10 Buster.

We no longer want to run Grid Engine

In a previous blog post we shared information about our desired future for Grid Engine in Toolforge. Our intention is to discontinue our usage of this technology.

No grid? What about my tools?

Toolforge logo, a circle with an anvil in the middle

Traditionally there have been two main workflows or use cases that were supported in the grid, but not in our Kubernetes backend:

  • Running jobs, long-running bots and other scheduled tasks.
  • Mixing runtime environments (for example, a nodejs app that runs some python code).

The good news is that work to handle the continuity of such use cases has already started. This takes the form of two main efforts:

  • The Toolforge buildpacks project — to support arbitrary runtime environments.
  • The Toolforge Jobs Framework — to support jobs, scheduled tasks, etc.

In particular, the Toolforge Jobs Framework has been available for a while in an open beta phase. We did some initial design and implementation, then deployed it in Toolforge for some users to try it and report bugs, report missing features, etc.

These are complex, feature-rich projects, and each deserves a dedicated blog post. More information on each will be shared in the future. For now, it is worth noting that both initiatives are already under some degree of development.

The conclusion

Knowing all the moving parts, we were faced with a few hard questions when deciding how to approach the Debian 9 Stretch deprecation:

  • Should we not upgrade the grid, and focus on Kubernetes instead? Let Debian 9 Stretch be the last supported version on the grid?
  • What is the impact of these decisions on the technical community? What is best for our users?

The choices we made are already known in the community. A couple of weeks ago we announced the Debian 9 Stretch Grid Engine deprecation. In parallel to this migration, we decided to promote the new Toolforge Jobs Framework, even if it’s still in beta phase. This new option should help users future-proof their tools and reduce maintenance effort. An early migration to Kubernetes now will also help them avoid future grid problems.

We truly hope that Debian 10 Buster is the last version we have for the grid, but as they say, hope is not a good strategy when it comes to engineering. What we will do is work really hard to bring Toolforge to the service level we want, and that means continuing to develop and enable more Kubernetes-based functionality.

Stay tuned for more upcoming blog posts with additional information about Toolforge.

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

13 May, 2022 09:42AM

May 12, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

Scalable Computing seminar

title slide

Last week I delivered a seminar for the research group I belong to, Scalable Computing. This was a slightly-expanded version of the presentation I gave at uksystems21. The most substantial change is the addition of a fourth example to describe recent work on optimising for a second non-functional requirement: Bandwidth.

12 May, 2022 10:19AM

May 11, 2022

Debian Community News

Sexism processing travel reimbursement

According to the DebConf travel funding rules, volunteers need to buy their own tickets and then wait for a reimbursement to come later.

For DebConf18 (Taiwan) and DebConf19 (Brazil), some of the Albanian women asked to have the tickets purchased in advance. Debian changed the rules for these women but not for anybody else.

Subject: Re: [rt.debian.org #7328] DebConf travel pre-payment requests
From: Martin Michlmayr
Time: Fri Jun 29 08:56:42 2018

* Hector Oron [2018-06-28 10:55]:
> I added Martin to the list, he'll be taking care of flight ticket
> purchase if you send him flight details.

This has been taken care of.

--
Martin Michlmayr
https://www.cyrius.com/

They are the same women seen in the photo with the Israeli.

DebConf18, Enkelena Haxhiu, Diellza Shabani, Elena Gjevukaj, Lior Kaplan, Kristi Progri

Every year male students complain that their funds have still not been reimbursed months after DebConf. DebConf15 was in August but payments were still outstanding at the end of November:

Subject: Re: [Soc-coordination] DebConf travel / GSoC student payments?
Date: Wed, 25 Nov 2015 00:25:18 +0530
From: Komal Sukhani <komaldsukhani@gmail.com>
To: Michael Schultheiss <schultmc@spi-inc.org>
CC: treasurer@spi-inc.org, soc-coordination@lists.alioth.debian.org

Hi Michael,

I still don't got the DebConf travel reimbursement. Have you made the payment?

Sorry for trouble.

On Mon, Nov 2, 2015 at 9:54 AM, Michael Schultheiss <mailto:schultmc@spi-inc.org> wrote:

Apologies for the delays in payments. I should have the payments processed this week and payments shoud be received in approximately 1-2 weeks.

11 May, 2022 09:15PM

Free Software Fellowship

VMWare GPL case: Linux Foundation reneged on Conservancy funding

Bounced cheque, Linux foundation, conservancy, GPL, vmware

The leaked email below may be a bit old but it reveals the consequences suffered by Conservancy when they began the VMware GPL lawsuit.

The key point is that a sponsor canceled a donation after signing documentation. It is believed to be Linux Foundation.

In one case, a major donation was outright canceled after we'd already received commitment documentation from the funder. The funder told us they specifically reneged due to the VMware lawsuit.

The Register already reported on a leaked copy of the email but they didn't mention that a sponsor had reneged on a signed agreement.

Karen Sandler nominated in the Linux Foundation elections and Linux Foundation responded by removing the independent membership class, effectively expelling Sandler and all other private members in one go.

That expulsion looks a lot like the membership frauds committed by Matthias Kirschner in the FSFE.

Date: Wed, 25 Nov 2015 09:24:22 -0800
From: "Bradley M. Kuhn" <bkuhn@sfconservancy.org>
Subject: Our fundraiser & the potential for "hibernation" of GPL enforcement.

You may have seen that Conservancy launched its annual fundraiser yesterday: https://sfconservancy.org/news/2015/nov/23/2015fundraiser/ The details of our appeal are here: https://sfconservancy.org/supporter/

You may be wondering what it means to "hibernate" GPL enforcement. I explain below what is happening.

As most of you know, we announced funding for the ground-breaking VMware lawsuit in March. That lawsuit is the first in history specifically considering the combined/derived work issue under copyright law and its interaction with strong copyleft. While there are no guarantees in litigation, we remain confident that the Courts in Germany will adjudicate the GPL positively. However, it *will* be a while before the suit is decided. We've set aside the funding we raised earlier in 2015 specifically for that lawsuit. Both organizationally -- and personally for me and Karen -- we plan to stick with Christoph to the end and to fund and guide the lawsuit to its conclusion.

However, the VMware lawsuit has not been without consequences for us. Companies have withdrawn funding from Conservancy, and a few industry insiders are actively working to discredit Conservancy and GPL enforcement generally; they're encouraging larger donors not to contribute. In one case, a major donation was outright canceled after we'd already received commitment documentation from the funder. The funder told us they specifically reneged due to the VMware lawsuit.

In short, looking at our 2016 budget, we cannot continue other GPL enforcement without substantially more funding. Karen and I pored over the numbers; we built the fundraiser around the needs in that budget.

I know that many of you have a specific GPL violation that you want us to do something about and we want to be responsive. But we need resources to proceed. Indeed, Karen and I have a large, strategic plan of how to resolve the proprietary LKM issues and other threats to strong copyleft, but it won't be easy and it will take time and resources.

Thus, we need a mandate from the community in two directions. First, we need resilient funding that's only available from lots of small, individual donors. Large donations, particularly from for-profit companies, are just too fickle and capricious. Second, with a Supporter base of 2,500 individuals, we have a public mandate that allows us to show violators that not only all of you -- as an enforcement coalition of copyright holders -- support what we do, but a large constituency who don't necessarily hold copyright want to see us enforce your copyrights to benefit the public good and software freedom.

Finally, we ask for your assistance in two directions. First, we ask each of you to consider becoming a Supporter at https://sfconservancy.org/supporters/. Second, and more importantly, we ask that you publicly talk on social media over the next two months (and beyond, if you like) about why you're part of an license enforcement coalition at Conservancy and ask your friends and colleagues to become Supporters so we can effectively do the work to defend copyleft via your copyrights. We ask in particular that those of you who have hitherto been anonymous members of these coalitions consider whether you're ready to come forward publicly now.

Thank you again for sticking with us, and Karen and I are both happy to discuss any questions here on these lists or privately with you.

--
Bradley M. Kuhn
President & Distinguished Technologist of Software Freedom Conservancy

11 May, 2022 04:30PM

Debian Community News

Brian Gupta & Debian: WIPO claim botched, suspended

WIPO has finally responded to the Debian vendetta, informing the overpaid lawyers at Charles Fussell (parody site) that their claim didn't even reach first base.

We believe but can't prove that the lawyers have now gone back to the Debian leader, Jonathan Carter, asking him to advance more money so they can spend more time revising their original and inadequate documents.

Who do we blame for this expensive mess?

Debian Community News is calling for the resignation of Jonathan Carter and Brian Gupta

Felix Lechner already resigned in disgust from the Debian trademark team while another member, Taowa Munene-Tardif / Rosetwig appears to be little more than a groupie or ring-in.

Brian Gupta of Brandorr Group, New York, appears to be the more experienced member of the Debian trademark team. There is a recording of him talking about it at DebConf14. It is unlikely that he can explain or justify this mess.

Brian Gupta, Debian, Brandorr, Trademark, DebConf

WIPO has given a deadline of 14 May for the errors in the claim to be corrected. This gives Debian cabal members very little time to decide whether they want to advance more money to the lawyers or abandon the claim, forefeiting the extravagent fees already paid to WIPO.

Subject: (LARC) D2022-1524 <debian.community> Request for amendment and/or clarification
Date: Mon, 9 May 2022 14:28:01 +0000
From: Disputes, Domain <domain.disputes@wipo.int>
To: jonathancohen@charlesfussell.com <jonathancohen@charlesfussell.com>

Dear Complainant,

Further to our Acknowledgement of Receipt of Complaint, as required by Paragraph 4(c) of the Rules for Uniform Domain Name Dispute Resolution Policy (the “Rules”) and Paragraph 5 of the WIPO Arbitration and Mediation Center (the “Center”) Supplemental Rules for Uniform Domain Name Dispute Resolution Policy (the “Supplemental Rules”), we have reviewed your Complaint to verify whether it satisfies the formal requirements of the Uniform Domain Name Dispute Resolution Policy (the “Policy”), the Rules and Supplemental Rules.

[snip]

Please submit your amendment(s) and clarification(s) by May 14, 2022. You may do so by electronically submitting a simple amendment or an amended Complaint. You may also wish to include further facts or arguments in light of the above.

Any submission by you to the Center that is required as a result of this notification, must be copied to the Respondent in accordance with Rules, Paragraph 2(b) and 2(h).

Sincerely,

Lucie A.

Legal Case Manager / Administratrice du litige
_______________________________________________________________________________
WIPO Arbitration and Mediation Center - Le Centre d’arbitrage et de
médiation de l’OMPI
34, chemin des Colombettes, 1211 Geneva 20, Switzerland
T +41 22 338 71 21 F +41 22 740 37 00 E domain.disputes@wipo.int
W www.wipo.int/amc

Jonathan Cohen, Charles Fussell, Debian, embezzlement, WIPO, UDRP

11 May, 2022 10:00AM

May 10, 2022

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, April 2022

In April I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 8 hours from March. I worked 11 hours, and will carry over the remaining time to May.

I spent most of my time triaging security issues for Linux, working out which of them were fixed upstream and which actually applied to the versions provided in Debian 9 "stretch". I also rebased the Linux 4.9 (linux) package on the latest stable update, but did not make an upload this month.

10 May, 2022 09:41PM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

2022 Digital Rights Job Fair

I'm lucky enough to work at the intersection between information communications technology and civil rights/civil liberties. I get to combine technical interests and social/political interests.

I've talked with many folks over the years who are interested in doing similar work. Some come from a technical background, and some from an activist background (and some from both). Are you one of them? Are you someone who works as an activist or in a technical field who wants to look into different ways of merging these interests?

Some great organizers maintain a job board for Digital Rights. Next month they'll host a Digital Rights Job Fair, which offers an opportunity to talk with good people at organizations that fight in different ways for a better world. You need to RSVP to attend.

Digital Rights Job Fair

10 May, 2022 08:39PM by Daniel Kahn Gillmor

Free Software Fellowship

Daniele Scasciafratte & Mozilla, OSCAL, Albania dating

Debian Community News has raised the issue of dating and romance with women from Albania and Kosovo.

OSCAL 2022 is back, 18-19 June 2022, register here if you dare.

Debian Community News has also published evidence that many of the women who received travel bursaries are not publishing code.

It is obvious that many of the women in red t-shirts are being paid to attend the Albanian conferences and pretend to be volunteers. They come and sit in the workshops to make the room look full. They ask questions and then at the end of the day they all disappear. There is no follow up on any of the topics presented. As soon as they receive their cash payment at the end of the day, all these women vanish.

If this is so obvious, why do the leaders of free software organizations continue to give their developers travel funds to go to events in Albania?

If it sounds like a rort, if it looks like a rort and if it smells like a rort (or pussy) then it probably is a rort.

Many free software organizations are registered as tax-privileged charities, for example, using the US 501(c)(3) status, which can lead to serious punishments if the funds pay for holidays and dating.

Looking through photos published by Andis Rado, we found pictures of the the Mozilla Rep Daniele Scasciafratte.

Daniele Scasciafratte, Blerta, OSCAL, dating, Tirana, Albania Daniele Scasciafratte, OSCAL, dating, Tirana, Albania

Scasciafratte looks comfortable, a lot like the photos of Justin Flory (UNICEF, Red Hat) with a pimp at his feet:

10 May, 2022 08:00PM

Russell Coker

Elon and Free Speech

Elon Musk has made the news for spending billions to buy a share of Twitter for the alleged purpose of providing free speech. The problem with this claim is that having any company controlling a large portion of the world’s communication is inherently bad for free speech. The same applies for Facebook, but that’s not a hot news item at the moment.

If Elon wanted to provide free speech he would want to have decentralised messaging systems so that someone who breaks rules on one platform could find another with different rules. Among other things free speech ideally permits people to debate issues with residents of another country on issues related to different laws. If advocates for the Russian government get kicked off Twitter as part of the American sanctions against Russia then American citizens can’t debate the issue with Russian citizens via Twitter. Mastodon is one example of a federated competitor to Twitter [1]. With a federated messaging system each host could make independent decisions about interpretation of sanctions. Someone who used a Mastodon instance based in the US could get a second account in another country if they wanted to communicate with people in countries that are sanctioned by the US.

The problem with Mastodon at the moment is lack of use. It’s got a good set of features and support for different platforms, there are apps for Android and iPhone as well as lots of other software using the API. But if the people you want to communicate with aren’t on it then it’s less useful. Elon could solve that problem by creating a Tesla Mastodon server and give a free account to everyone who buys a new Tesla, which is the sort of thing that a lot of Tesla buyers would like. It’s quite likely that other companies selling prestige products would follow that example. Everyone has seen evidence of people sharing photos on social media with someone else’s expensive car, a Mastodon account on ferrari.com or mercedes.com would be proof of buying the cars in question. The number of people who buy expensive cars new is a very small portion of the world population, but it’s a group of people who are more influential than average and others would join Mastodon servers to follow them.

The next thing that Elon could do to kill Twitter would be to have all his companies (which have something more than a dozen verified Twitter accounts) use Mastodon accounts for their primary PR releases and then send the same content to Twitter with a 48 hour delay. That would force journalists and people who want to discuss those companies on social media to follow the Mastodon accounts. Again this wouldn't be a significant number of people, but they would be influential people. Getting journalists to use a communications system increases its importance.

The question is whether Elon is lacking the vision necessary to plan a Mastodon deployment or whether he just wants to allow horrible people to run wild on Twitter.

The Verge has an interesting article from 2019 about Gab using Mastodon [2]. The fact that over the last 2.5 years I didn’t even hear of Gab using Mastodon suggests that the fears of some people significantly exceeded the problem. I’m sure that some Gab users managed to harass some Mastodon users, but generally they were apparently banned quickly. As an aside the Mastodon server I use doesn’t appear to ban Gab, a search for Gab on it gave me a user posting about being “pureblood” at the top of the list.

Gab claims to have 4 million accounts and has an estimated 100,000 active users. If 5.5% of Tesla owners became active users on a hypothetical Tesla server that would be the largest Mastodon server. Elon could demonstrate his commitment to free speech by refusing to ban Gab in any way. The Wikipedia page about Gab [3] has a long list of horrible people and activities associated with it. Is that the “free speech” to associate with Tesla? Polestar makes some nice electric cars that appear quite luxurious [4] and doesn't get negative PR from the behaviour of its owner; that's something Elon might want to consider.

Is this really about bragging rights? Buying a controlling interest in a company that has a partial monopoly on Internet communication is something to boast about. Could users of commercial social media be considered serfs who serve their billionaire overlord?

10 May, 2022 11:53AM by etbe

Debian Community News

Girlfriends, Sex, Prostitution & Debian at DebConf22, Prizren, Kosovo

When the Formula One grand prix went to Hungary, they faced a big problem: prostitution was illegal. This is a challenge for events with a predominantly male audience. Hungary changed their laws and even constructed Erotik Camping zones for diversity purposes.

We already exposed the segregated accommodation at DebConf22 and now we will penetrate deeper into the issue.

The last DebConf in Europe was DebConf16 in Heidelberg, Germany. It was the biggest DebConf ever with over 700 participants. Participants attend between one and two weeks, up to 17 or 18 days for some people. Participation is over 98 percent male.

DebConf Kosovo originally planned to be in the capital city, Prishtina. The organizers subsequently decided to move it to a much smaller city, Prizren. The main reason for this move is to be inaccessible.

Debian has banned people like Linus Torvalds. They want to have DebConf concealed within the walls and gates of a former military base just in case Linus or any other censored speaker decides to show up.

This plan has various problems. Kosovo is a small and conservative country and Prizren is a very small city. Everybody knows each other. Women in Prizren don't do one-night-stands in their home town.

The hotels are not very big and everybody knows each other in these villages. Any local woman who is seen coming out of the bedroom of a Google or Ubuntu employee will be noticed. It is likely she will be recognized by at least one of the staff. She may be captured on CCTV recordings.

Prostitution is illegal in Kosovo. There is no massage parlour or red light district in a place like Prizren.

Debian pre-committed to pay for meals and accommodation for women on diversity bursaries but the women are not coming. Kosovo is a developing country and Debian can't even give the meals away.

There was some hope that women would come from neighboring countries like Macedonia and Albania. However, the women in those countries are not fools.

The women have all seen the pictures from Brazil, where Kosovan women were photographed with Chris Lamb and used like trophies.

Here is the trailer for the movie Wild Orchid, a reminder of DebConf19:

DebConf19, Wild Orchid 1989, Curitiba, Brazil

Reasons why women may not go to DebConf

Any young woman in Kosovo with any technical skill has already found a job. Employers will not give these women a week off work to go to a tech conference.

Three women from the region already received Outreachy internships. Renata Gegaj received $6,000 for 3 months work. Kristi Progri received $6,000 for 3 months work. Then Anisa Kuci received $6,000 for 3 months work. That is approximately $20 per hour in a country where most women earn less than $20 per day.

Therefore, why should other women go and work for free as volunteers? Balkan women have dignity and confidence and some are now asking for similar payments to do work at DebConf.

Outreachy

Albanian pimps to rescue sex-starved Debian

Debian is going to great lengths to cover up news about the Albanian Ubuntu employee who gave a job to his underage girlfriend.

Given all the recent publicity about a Red Hat associate pictured with the pimp, in scenes that resemble Epstein's search for credibility at MIT, why are the Debian and Ubuntu people willing to continue working with the pimp?

The pimp has a reputation as a fixer who can organize women to show up.

The Albanian conference OSCAL has been held in May each year but for 2022, they are holding it in June, just 3 weeks before DebConf in neighboring Kosovo. Some people are hoping that OSCAL will be an opportunity to scout for Albanian women and give them any leftover bursaries. Will the women be told about the accommodation and the history of misogyny in Debian before they get on the bus?

Here is that picture of the FSFE president Matthias Kirschner with the paid volunteers at OSCAL before the pandemic:

Matthias Kirschner, Chris Lamb, Oxford Street, London, Albanian women, people trafficking, modern slavery Matthias Kirschner, Tirana, Albania, women, girls, Outreachy Matthias Kirschner, Tirana, Albania, women, girls, Outreachy

Albanian pimp Elio Qoshi with all the girls paid to pose as volunteers at OSCAL 2016:

Elio Qoshi, Ubuntu, pimp

10 May, 2022 07:45AM

May 09, 2022

Taowa Munene-Tardif (Rosetwig) & Debian trademark, libel, wasting money

Taowa Munene-Tardif, Taowa Rosetwig, NeuroPoly, Debian

Debian appears to be wasting money on a trademark case that is doomed to legal failure. One member of the trademark team, Felix Lechner, has resigned in disgust. Therefore, who is advancing this vendetta?

The remaining two members of the trademark team are Taowa Munene-Tardif and Brian Gupta. Today we look at Taowa.

Taowa's contributor profile shows she is little more than a groupie who joined around DebConf17 in Montreal. She was only added to the Debian keyring in 2021.

As far as we can tell, Taowa has never met the volunteer she is attacking. She is not his employer, she is not one of his clients and she has no right to impose herself on him with silly demands. This is the stuff of a toxic woman.

In her Debian communications, Taowa hides her last name. She also uses the aliases Taowa Rosetwig and Taowo. Nonetheless, we didn't have much trouble finding her full name. This says a lot more about Debian than it says about Taowa. When women come to Debian, they never quite feel comfortable. There is a culture issue. There are various other women who hide their full name, like Cryptie of the FSFE, who is really Amandine Jambert at CNIL, an agency of the French Government.

A Debian Developer with over 20 years experience resigned from some of his voluntary duties at a time when he lost two family members.

The Debian trademark gang, Taowa Munene-Tardif (Rosetwig) and Brian Gupta, are now trying to use the WIPO arbitration service to publish defamation and libel of the volunteer, in full knowledge that their case will fail.

If that isn't a toxic woman, what is?

Github profile. Taowa claims to be a research associate of NeuroPoly, Ecole Polytechnique, Université de Montréal. Taowa previously worked with Santropol Roulant. Did she gain her place at NeuroPoly by claiming association with Debian and DebConf17?

09 May, 2022 09:30PM

hackergotchi for Robert McQueen

Robert McQueen

Evolving a strategy for 2022 and beyond

As a board, we have been working on several initiatives to make the Foundation a better asset for the GNOME Project. We’re working on a number of threads in parallel, so I wanted to explain the “big picture” a bit more to try and connect together things like the new ED search and the bylaw changes.

We’re all here to see free and open source software succeed and thrive, so that people can be truly empowered with agency over their technology, rather than being passive consumers. We want to bring GNOME to as many people as possible so that they have computing devices that they can inspect, trust, share and learn from.

In previous years we’ve tried to boost the relevance of GNOME (or technologies such as GTK) or solicit donations from businesses and individuals with existing engagement in FOSS ideology and technology. The problem with this approach is that we’re mostly addressing people and organisations who are already supporting or contributing FOSS in some way. To truly scale our impact, we need to look to the outside world, build better awareness of GNOME outside of our current user base, and find opportunities to secure funding to invest back into the GNOME project.

The Foundation supports the GNOME project with infrastructure, arranging conferences, sponsoring hackfests and travel, design work, legal support, managing sponsorships, advisory board, being the fiscal sponsor of GNOME, GTK, Flathub… and we will keep doing all of these things. What we’re talking about here are additional ways for the Foundation to support the GNOME project – we want to go beyond these activities, and invest into GNOME to grow its adoption amongst people who need it. This has a cost, and that means in parallel with these initiatives, we need to find partners to fund this work.

Neil has previously talked about themes such as education, advocacy, privacy, but we’ve not previously translated these into clear specific initiatives that we would establish in addition to the Foundation’s existing work. This is all a work in progress and we welcome any feedback from the community about refining these ideas, but here are the current strategic initiatives the board is working on. We’ve been thinking about growing our community by encouraging and retaining diverse contributors, and addressing evolving computing needs which aren’t currently well served on the desktop.

Initiative 1. Welcoming newcomers. The community is already spending a lot of time welcoming newcomers and teaching them the best practices. Those activities are as time consuming as they are important, but currently a handful of individuals are running initiatives such as GSoC, Outreachy and outreach to Universities. These activities help bring diverse individuals and perspectives into the community, and helps them develop skills and experience of collaborating to create Open Source projects. We want to make those efforts more sustainable by finding sponsors for these activities. With funding, we can hire people to dedicate their time to operating these programs, including paid mentors and creating materials to support newcomers in future, such as developer documentation, examples and tutorials. This is the initiative that needs to be refined the most before we can turn it into something real.

Initiative 2: Diverse and sustainable Linux app ecosystem. I spoke at the Linux App Summit about the work that GNOME and Endless has been supporting in Flathub, but this is an example of something which has a great overlap between commercial, technical and mission-based advantages. The key goal here is to improve the financial sustainability of participating in our community, which in turn has an impact on the diversity of who we can expect to afford to enter and remain in our community. We believe the existence of this is critically important for individual developers and contributors to unlock earning potential from our ecosystem, through donations or app sales. In turn, a healthy app ecosystem also improves the usefulness of the Linux desktop as a whole for potential users. We believe that we can build a case for commercial vendors in the space to join an advisory board alongside with GNOME, KDE, etc to input into the governance and contribute to the costs of growing Flathub.

Initiative 3: Local-first applications for the GNOME desktop. This is what Thib has been starting to discuss on Discourse, in this thread. There are many different threats to free access to computing and information in today’s world. The GNOME desktop and apps need to give users convenient and reliable access to technology which works similarly to the tools they already use everyday, but keeps them and their data safe from surveillance, censorship, filtering or just being completely cut off from the Internet. We believe that we can seek both philanthropic and grant funding for this work. It will make GNOME a more appealing and comprehensive offering for the many people who want to protect their privacy.

The idea is that these initiatives all sit on the boundary between the GNOME community and the outside world. If the Foundation can grow and deliver these kinds of projects, we are reaching to new people, new contributors and new funding. These contributions and investments back into GNOME represent a true “win-win” for the newcomers and our existing community.

(Originally posted to GNOME Discourse, please feel free to join the discussion there.)

09 May, 2022 02:01PM by ramcq

Russ Allbery

remctl 3.18

remctl is a simple RPC mechanism using Kerberos GSS-API authentication (or SSH authentication).

The primary change in this release, and the reason for the release, is to add support for PCRE2, the latest version of the Perl-Compatible Regular Expression library, since PCRE1 is now deprecated.
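
For readers who haven't followed the PCRE2 transition: the new library has a rather different calling convention from PCRE1, which is what makes this kind of port non-trivial. Below is only a minimal, self-contained sketch of the generic 8-bit PCRE2 API; it is not remctl's actual code, and the pattern and variable names are invented purely for illustration.

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        PCRE2_SPTR pattern = (PCRE2_SPTR) "^ab+c$";
        PCRE2_SPTR subject = (PCRE2_SPTR) "abbbc";
        int errcode;
        PCRE2_SIZE erroffset;

        /* PCRE2 reports compile errors as a numeric code plus an offset,
           rather than PCRE1's error-string pointer. */
        pcre2_code *re = pcre2_compile(pattern, PCRE2_ZERO_TERMINATED, 0,
                                       &errcode, &erroffset, NULL);
        if (re == NULL)
            return 1;

        /* A match data object replaces PCRE1's caller-supplied ovector. */
        pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
        int rc = pcre2_match(re, subject, strlen((const char *) subject),
                             0, 0, md, NULL);
        printf("matched: %s\n", rc > 0 ? "yes" : "no");

        pcre2_match_data_free(md);
        pcre2_code_free(re);
        return 0;
    }

Compiling with something like gcc test.c $(pcre2-config --libs8) should work on systems where the pcre2-config script is installed.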

This release also improves some documentation, marks the allocation functions in the C client library with deallocation functions for GCC 11, and fixes some issues with the Python and Ruby bindings that were spotted by Ken Dreyer, as well as the normal update of portability support.

I still do plan to move the language bindings into separate packages, since this will make it easier to upload them to their per-language module repositories and that, in turn, will make them easier to use, but this version doesn't have those changes. I wanted to flush the portability changes and PCRE update out first before starting that project.

You can get the latest version from the remctl distribution page.

09 May, 2022 04:49AM

rra-c-util 10.2

rra-c-util is my collection of utility functions, mostly but not entirely for C, that I use with my various software releases.

There are two major changes in this release. The first is Autoconf support for PCRE2, the new version of the Perl-Compatible Regular Expression library (PCRE1 is now deprecated), which was the motivation for a new release. The second is a huge update to the Perl formatting rules due to lots of work by Julien ÉLIE for INN.

This release also tags deallocation functions, similar to the change mentioned for C TAP Harness 4.8, for all the utility libraries provided by rra-c-util, and fixes an issue with the systemd support.

You can get the latest version from the rra-c-util distribution page.

09 May, 2022 04:43AM

C TAP Harness 4.8

C TAP Harness is my C implementation of the Perl "Test Anything Protocol" test suite framework. It includes a test runner and libraries for both C and shell.

This is mostly a cleanup release to resync with other utility libraries. It does fix an installation problem by managing symlinks correctly, and adds support for GCC 11's new deallocation warnings.

The latter is a rather interesting new GCC feature. There is a Red Hat blog post about the implementation with more details, but the short version is that the __malloc__ attribute can now take an argument that specifies the function that should be used to deallocate the allocated object. GCC 11 and later can use that information to catch some deallocation bugs, such as deallocating things with the wrong function.
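
To make the feature a bit more concrete, here is a minimal sketch of the attribute as I understand it from the GCC 11 documentation; the widget type and functions are invented for illustration and are not part of C TAP Harness or the other libraries mentioned here.

    #include <stdlib.h>

    struct widget { int id; };

    static void widget_free(struct widget *w);

    /* Pointers returned by widget_new must be released with widget_free
       (argument 1). GCC 11 and later use this to diagnose code that hands
       such a pointer to a different deallocator (-Wmismatched-dealloc). */
    static struct widget *widget_new(void)
        __attribute__((malloc, malloc(widget_free, 1)));

    static struct widget *widget_new(void) {
        return calloc(1, sizeof(struct widget));
    }

    static void widget_free(struct widget *w) {
        free(w);
    }

    int main(void) {
        struct widget *w = widget_new();
        widget_free(w);   /* correct; passing w to plain free() would be flagged */
        return 0;
    }

In the real libraries the annotations presumably sit on the public allocation functions in the installed headers, so that callers get the same diagnostics; the sketch above only shows the general shape of the feature.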

You can get the latest version from the C TAP Harness distribution page.

09 May, 2022 04:26AM

May 08, 2022

hackergotchi for Sean Whitton

Sean Whitton

lispreading

I recently released Consfigurator 1.0.0 and I’m now returning to my Common Lisp reading. Building Consfigurator involved the ad hoc development of a cross between a Haskell-style functional DSL and a Lisp-style macro DSL. I am hoping that it will be easier to retain lessons about building these DSLs more systematically, and making better use of macros, by finishing my studying of macrology books and papers only after having completed the ad hoc DSL. Here’s my current list:

  • Finishing off On Lisp and Let Over Lambda.

  • Richard C. Waters. 1993. “Macroexpand-All: an example of a simple lisp code walker.” In Newsletter ACM SIGPLAN Lisp Pointers 6 (1).

  • Naive vs. proper code-walking.

  • Michael Raskin. 2017. “Writing a best-effort portable code walker in Common Lisp.” In Proceedings of 10th European Lisp Symposium (ELS2017).

  • Culpepper et al. 2019. “From Macros to DSLs: The Evolution of Racket”. Summit on Advances in Programming Languages (SNAPL 2019).

One thing that I would like to understand better is the place of code walking in macro programming. The Raskin paper explains that it is not possible to write a fully correct code walker in ANSI CL. Consfigurator currently uses Raskin’s best-effort portable code walker. Common Lisp: The Language 2 includes a few additional functions which didn’t make it into the ANSI standard that would make it possible to write a fully correct code walker, and most implementations of CL provide them under one name or another. So one possibility is to write a code walker in terms of ANSI CL + those few additional functions, and then use a portability layer to get access to those functions on different implementations (e.g. trivial-cltl2).

However, On Lisp and Let Over Lambda, the two most substantive texts on CL macrology, both explicitly put code walking out-of-scope. I am led to wonder: does the Zen of Common Lisp-style macrology involve doing without code walking? One key idea with macros is to productively blur the distinction between designing languages and writing code in those languages. If your macros require code walking, have you perhaps ended up too far to the side of designing whole languages? Should you perhaps rework things so as not to require the code walking? Then it would matter less that those parts of CLtL2 didn’t make it into ANSI. Graham notes in ch. 17 of On Lisp that read macros are technically more powerful than defmacro because they can do everything that defmacro can and more. But it would be a similar sort of mistake to conclude that Lisp is about read macros rather than defmacro.

There might be some connection between arguments for and against avoiding code walking in macro programming and the maintenance of homoiconicity. One extant CL code walker, hu.dwim.walker, works by converting back and forth between conses and CLOS objects (Raskin’s best-effort code walker has a more minimal interface), and hygienic macro systems in Scheme similarly trade away homoiconicity for additional metadata (one Lisp programmer I know says this is an important sense in which Scheme could be considered not a Lisp). Perhaps arguments against involving much code walking in macro programming are equivalent to arguments against Racket’s idea of language-oriented programming. When Racket’s designers say that Racket’s macro system is “more powerful” than CL’s, they would be right in the sense that the system can do all that defmacro can do and more, but wrong if indeed the activity of macro programming is more powerful when kept further away from language design. Anyway, these are some hypotheses I am hoping to develop some more concrete ideas about in my reading.

08 May, 2022 08:41PM

Thorsten Alteholz

My Debian Activities in April 2022

FTP master

This month I accepted 186 and rejected 26 packages. The overall number of packages that got accepted was 188.

Debian LTS

This was my ninety-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2973-1] minidlna security update for one CVE
  • [DLA 2974-1] fribidi security update for three CVEs
  • [DLA 2988-1] tinyxml security update for one CVE
  • [DLA 2987-1] libarchive security update for three CVEs
  • [#1009076] buster-pu: minidlna/1.2.1+dfsg-2+deb10u3
  • [#1009077] bullseye-pu: minidlna/1.3.0+dfsg-2+deb11u1
  • [#1009251] buster-pu: fribidi/1.0.5-3.1+deb10u2
  • [#1009250] bullseye-pu: fribidi/1.0.8-2+deb11u1
  • [#1010380] buster-pu: flac/1.3.2-3+deb10u2

Further, I worked on libvirt; the dependency problems in unstable have been resolved and fixing in other releases can continue.

I also continued to work on security support for golang packages.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the forty-sixth ELTS month.

During my allocated time I uploaded:

  • ELA-591-1 for minidlna
  • ELA-592-1 for fribidi
  • ELA-602-1 for tinyxml
  • ELA-603-1 for libarchive

Last but not least I did some days of frontdesk duties.

Debian Printing

This month I uploaded new upstream versions or improved packaging of:

As I already became the maintainer of usb-modeswitch, I also adopted usb-modeswitch-data.

Debian Astro

Unfortunately I didn’t do anything for this group, but in May I will upload a new version of openvlbi and several indi-3rdparty packages.

Other stuff

Last but not least I uploaded several new upstream versions of golang packages, but not before checking with ratt that all dependencies still work.

08 May, 2022 10:01AM by alteholz

May 07, 2022

Free Software Fellowship

Charles Fussell, Jonathan Cohen & Debian SLAPP microsite, resignations

The Debian web site has an interesting definition of freedom:

Many people new to this subject are confused because of the word "free". It isn't used the way they expect – "free" means "free of cost" to them. If you look at an English dictionary, it lists almost twenty different meanings for "free", and only one of them is "at no cost". The rest refer to "liberty" and "lack of constraint". So, when we speak of Free Software, we mean freedom, not payment.

The biggest rort in town

A large sum of money from Debian funds is now paying for Jonathan Cohen, a lawyer at Charles Fussell & Co LLP, to pursue a SLAPP lawsuit and try to shut down the Debian Community News and the Uncensored Debian Planet web sites.

Jonathan Cohen, Charles Fussell, Debian, embezzlement, WIPO, UDRP

Debian Community News has published a blog about the initiation of the lawsuit.

A Charles Fussell parody microsite has appeared at FussellCharles.com.

Resignations

DPL candidate Felix Lechner has resigned from the Debian Trademark Team, citing his disagreement with the lawsuit:

Subject: Resignation from Trademark Team
Date: Wed, 20 Apr 2022 05:57:56 -0700
From: Felix Lechner <felix.lechner@lease-up.com>
To: leader@debian.org
CC: secretary@debian.org, debian-www@lists.debian.org, debian-project@lists.debian.org

Dear Mr. Leader,

Congratulations on your re-election as Project Leader. I wish you the best for your third term.

Please accept herewith my resignation as your trademark delegate, effective immediately.

Since two of your trademark delegates remain, you are probably staffed adequately until a replacement is found.

With this step, I hope to make room for someone whose attitudes toward business and legal matters can reflect more closely the preferences of the membership as a whole.

Thank you for the opportunity to serve.

Respectfully,
Felix Lechner

cc: Secretary Kurt Roeckx
Web Team
d-project mailing list

07 May, 2022 09:30AM

May 06, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RProtoBuf 0.4.19 on CRAN: Updates

A new release 0.4.19 of RProtoBuf arrived on CRAN earlier today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release contains a pull request contribution by Michael Chirico to add support for the TextFormat API, a minor maintenance fix ensuring (standard) strings are referenced as std::string to avoid a hiccup on Arch builds, some repo updates, plus reporting of (package and library) versions on startup. The following section from the NEWS.Rd file has more details.

Changes in RProtoBuf version 0.4.19 (2022-05-06)

  • Small cleanups to repository

  • Raise minimum Protocol Buffers version to 3.3 (closes #83)

  • Update package version display, added to startup message

  • Expose TextFormat API (Michael Chirico in #88 closing #87)

  • Add missing explicit std:: on seven string instances in one file (closes #89)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 May, 2022 11:33PM

Antoine Beaupré

Wallabako 1.4.0 released

I don't particularly like it when people announce their personal projects on their blog, but I'm making an exception for this one, because it's a little special for me.

You see, I have just released Wallabako 1.4.0 (and a quick, mostly irrelevant 1.4.1 hotfix) today. It's the first release of that project in almost 3 years (the previous was 1.3.1, before the pandemic).

The other reason I figured I would mention it is that I have almost never talked about Wallabako on this blog at all, so many of my readers probably don't even know I sometimes meddle in Golang, which surprises even me sometimes.

What's Wallabako

Wallabako is a weird little program I designed to read articles on my E-book reader. I use it to spend less time on the computer: I save articles in a read-it-later app named Wallabag (hosted by a generous friend), and then Wallabako connects to that app, downloads an EPUB version of the book, and then I can read it on the device directly.

When I'm done reading the book, Wallabako notices and sets the article as read in Wallabag. I also set it to delete the book locally, but you can actually configure it to keep those books around forever if you feel like it.

Wallabako supports syncing read status with the built-in Kobo interface (called "Nickel"), Koreader and Plato. I happen to use Koreader for everything nowadays, but it should work equally well on the others.

Wallabako is actually set up to be started by udev when there's a connection change detected by the kernel, which is kind of a gross hack. It's clunky, but it actually works. I thought for a while about switching to something else, but it's really the easiest way to go, and it's the approach that requires the least interaction from the user.

Why I'm (still) using it

I wrote Wallabako because I read a lot of articles on the internet. It's actually most of my reading. I read about 10 books a year (which I don't think is much), but I probably read more in terms of time and pages in Wallabag. I haven't actually done the math, but I estimate I spend at least twice as much time reading articles as I spend reading books.

If I didn't have Wallabag, I would have hundreds of tabs open in my web browser all the time. So at least that problem is easily solved: throw everything in Wallabag, sort and read later.

If I didn't have Wallabako, however, I would either spend that time reading on the computer -- which I prefer to spend working on free software or work -- or on my phone -- which is kind of better, but really cramped.

I had actually stopped using (and developing) Wallabako for a while. Around 2019, I got tired of always reading those technical articles (basically work stuff!) at home. I realized I was just not "reading" (as in books! fiction! fun stuff!) anymore, at least not as much as I wanted.

So I tried to make this separation: the ebook reader is for cool book stuff. The rest is work. But because I had the Wallabag Android app on my phone and tablet, I could still read those articles there, which I thought was pretty neat. But that meant that I was constantly looking at my phone, which is something I'm generally trying to avoid, as it sets a bad example for the kids (small and big) around me.

Then I realized there was one stray ebook reader lying around at home. I had recently bought a Kobo Aura HD to read books, and I like that device. And it's going to stay locked down to reading books. But there's still that old battered Kobo Glo HD reader lying around, and I figured I could just borrow it to read Wallabag articles.

What is this new release

But oh boy that was a lot of work. Wallabako was kind of a mess: it was using the deprecated go dep tool, which lost the battle with go mod. Cross-compilation was broken for older devices, and I had to implement support for Koreader.

go mod

So I had to learn go mod. I'm still not sure I got that part right: LSP is yelling at me because it can't find the imports, and I'm generally just "YOLO everything" every time I get anywhere close to it. That's not the way to do Go, in general, and not how I like to do it either.

But I guess that, given time, I'll figure it out and make it work for me. It certainly works now. I think.

Cross compilation

The hard part was different. You see, Nickel uses SQLite to store metadata about books, so Wallabako actually needs to tap into that SQLite database to propagate read status. Originally, I just linked against some sqlite3 library I found lying around. It's basically a wrapper around the C-based SQLite and generally works fine. But that means you actually link your Golang program against a C library. And that's when things get a little nutty.

If you just built Wallabako naively, it would fail when deployed on the Kobo Glo HD. That's because the device runs a really old kernel: the prehistoric Linux kobo 2.6.35.3-850-gbc67621+ #2049 PREEMPT Mon Jan 9 13:33:11 CST 2017 armv7l GNU/Linux. That was built in 2017, but the kernel was actually released in 2010, a whole 5 years before the Glo HD was released in 2015, which is kind of outrageous. And yes, that is with the latest firmware release.

My bet is they just don't upgrade the kernel on those things, as the Glo was probably bought around 2017...

In any case, the problem is we are cross-compiling here. And Golang is pretty good about cross-compiling, but because we have C in there, we're actually cross-compiling with "CGO" which is really just Golang with a GCC backend. And that's much, much harder to figure out because you need to pass down flags into GCC and so on. It was a nightmare.

That's until I found this outrageous "little" project called modernc.org/sqlite. What that thing does (with a hefty dose of dependencies that would make any Debian developer recoil in horror) is to transpile the SQLite C source code to Golang. You read that right: it rewrites SQLite in Go. On the fly. It's nuts.

But it works. And you end up with a "pure go" program, and that thing compiles much faster and runs fine on older kernel.

I still wasn't sure I wanted to just stick with that forever, so I kept the old sqlite3 code around, behind a compile-time tag. At the top of the nickel_modernc.go file, there's this magic string:

//+build !sqlite3

And at the top of nickel_sqlite3.go file, there's this magic string:

//+build sqlite3

So now, by default, the modernc file gets included, but if I pass --tags sqlite3 to the Go compiler (to go install or whatever), it will actually switch to the other implementation. Pretty neat stuff.

Koreader port

The last part was something I had been hesitant to do for a long time, but it turned out to be pretty easy. I have basically switched to using Koreader to read everything. Books, PDF, everything goes through it. I really like that it stores its metadata in sidecar files: I synchronize all my books with Syncthing, which means I can carry my read status, annotations and all that stuff without having to think about it. (And yes, I installed Syncthing on my Kobo.)

The koreader.go port was less than 80 lines, and I could even make a nice little test suite so that I don't have to redeploy that thing to the ebook reader at every code iteration.

I had originally thought I should add some sort of graphical interface in Koreader for Wallabako as well, and had requested that feature upstream. Unfortunately (or fortunately?), they took my idea and just ran with it. Some courageous soul actually wrote a full Wallabag plugin for koreader, in Lua of course.

Compared to the Wallabako implementation however, the koreader plugin is much slower, probably because it downloads articles serially instead of concurrently. It is, however, much more usable as the user is given a visible feedback of the various steps. I still had to enable full debugging to diagnose a problem (which was that I shouldn't have a trailing slash, and that some special characters don't work in passwords). It's also better to write the config file with a normal text editor, over SSH or with the Kobo mounted to your computer instead of typing those really long strings over the kobo.

There's no sample config file which makes that harder but a workaround is to save the configuration with dummy values and fix them up after. Finally I also found the default setting ("Remotely delete finished articles") really dangerous as it can basically lead to data loss (Wallabag article being deleted!) for an unsuspecting user...

So basically, I started working on Wallabag again because the koreader implementation of their Wallabag client was not up to spec for me. It might be good enough for you, but I guess if you like Wallabako, you should thank the koreader folks for their sloppy implementation, as I'm now working again on Wallabako.

Actual release notes

Those are the actual release notes for 1.4.0.

Ship a lot of fixes that have accumulated in the 3 years since the last release.

Features:

  • add timestamp and git version to build artifacts
  • cleanup and improve debugging output
  • switch to pure go sqlite implementation, which helps
  • update all module dependencies
  • port to wallabago v6
  • support Plato library changes from 0.8.5+
  • support reading koreader progress/read status
  • Allow containerized builds, use gomod and avoid GOPATH hell
  • overhaul Dockerfile
  • switch to go mod

Documentation changes:

  • remove instability warning: this works well enough
  • README: replace branch name master by main in links
  • tweak mention of libreoffice to clarify concern
  • replace "kobo" references by "nickel" where appropriate
  • make a section about related projects
  • mention NickelMenu
  • quick review of the koreader implementation

Bugfixes:

  • handle errors in http request creation
  • Use OutputDir configuration instead of hardcoded wallabako paths
  • do not noisily fail if there's no entry for book in plato
  • regression: properly detect read status again after koreader (or plato?) support was added

How do I use this?

This is amazing. I can't believe someone did something that awesome. I want to cover you with gold and Tesla cars and fresh water.

You're weird please stop. But if you want to use Wallabako, head over to the README file which has installation instructions. It basically uses a hack in Kobo e-readers that will happily overwrite their root filesystem as soon as you drop this file named KoboRoot.tgz in the .kobo directory of your e-reader.

Note that there is no uninstall procedure and it messes with the reader's udev configuration (to trigger runs on wifi connect). You'll also need to create a JSON configuration file and configure a client in Wallabag.

And if you're looking for Wallabag hosting, Wallabag.it offers a 14-day free trial. You can also, obviously, host it yourself. Which is not the case for Pocket, even years after Mozilla bought the company. All this wouldn't actually be necessary if Pocket was open-source because Nickel actually ships with a Pocket client.

Shame on you, Mozilla. But you still make an awesome browser, so keep doing that.

06 May, 2022 04:39PM

hackergotchi for Holger Levsen

Holger Levsen

20220506-i-had-an-abortion

I had an abortion...

Well, it wasn't me, but when I was 18 my partner thankfully was able to take a 'morning-after-pill' because we were seriously not ready to have a baby. As one data point: We were both still in high school.

It's not possible to ban abortions. It's only possible to ban safe abortions.

06 May, 2022 01:43PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.16 on CRAN: Small Updates

A new release 0.4.16 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

The release of RQuantLib comes again about four months after the previous release, and brings a few small updates for daycounters, all thanks to Kai Lin, plus a small parameter change to avoid an error in an example, and small updates to the Docker files.

Changes in RQuantLib version 0.4.16 (2022-05-05)

  • Documentation for daycounters was updated and extended (Kai Lin)

  • Deprecated daycounters were appropriately updated (Kai Lin)

  • One example parameterization was changed to avoid error (Dirk)

  • The Docker files were updated

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 May, 2022 12:50AM

May 05, 2022

Free Software Fellowship

FSFE Youth Hacking 4 Freedom: radio silence

In October 2021, FSFE announced their Youth Hacking 4 Freedom unpaid internships for children aged 14 to 17. The Fellowship was quick to denounce it due to the risk of child labor.

Six months later, we notice there has been radio silence from FSFE.

We can't even find the most basic details about how many children registered to participate in the competition.

It looks like the FSFE donors and volunteers have been duped into supporting child labor.

The Fellowship called it out and it looks like the FSFE has quietly mothballed this controversial program.

On the other hand, if this program has proceeded in secret, it would be inconsistent with the principles of free software development. Free software is all about open and transparent collaboration.

In the 1940s, children from Ukraine, as young as 14 years, were forced to work in German factories:

FSFE, Germany, child labor, yh4f

To be considered for the prize, the participants are told that they must publish all their code under a free software license: in other words, they must give up all rights to any future payment. This doesn't happen in any other industry, certainly not with children.

Fellowship believes all children of working age should be paid for hours worked in accordance with human rights obligations.

05 May, 2022 07:30PM

Reproducible Builds

Reproducible Builds in April 2022

Welcome to the April 2022 report from the Reproducible Builds project! In these reports, we try to summarise the most important things that we have been up to over the past month. If you are interested in contributing to the project, please take a few moments to visit our Contribute page on our website.

News

Cory Doctorow published an interesting article this month about the possibility of Undetectable backdoors for machine learning models. Given that machine learning models can provide unpredictably incorrect results, Doctorow recounts that there exists another category of “adversarial examples” that comprise “a gimmicked machine-learning input that, to the human eye, seems totally normal — but which causes the ML system to misfire dramatically” that permit the possibility of planting “undetectable back doors into any machine learning system at training time”.


Chris Lamb published two ‘supporter spotlights’ on our blog: the first about Amateur Radio Digital Communications (ARDC) and the second about the Google Open Source Security Team (GOSST).


Piergiorgio Ladisa, Henrik Plate, Matias Martinez and Olivier Barais published a new academic paper titled A Taxonomy of Attacks on Open-Source Software Supply Chains (PDF):

This work proposes a general taxonomy for attacks on open-source supply chains, independent of specific programming languages or ecosystems, and covering all supply chain stages from code contributions to package distribution. Taking the form of an attack tree, it covers 107 unique vectors, linked to 94 real-world incidents, and mapped to 33 mitigating safeguards.


Elsewhere in academia, Ly Vu Duc published his PhD thesis. Titled Towards Understanding and Securing the OSS Supply Chain (PDF), Duc’s abstract reads as follows:

This dissertation starts from the first link in the software supply chain, ‘developers’. Since many developers do not update their vulnerable software libraries, thus exposing the user of their code to security risks. To understand how they choose, manage and update the libraries, packages, and other Open-Source Software (OSS) that become the building blocks of companies’ completed products consumed by end-users, twenty-five semi-structured interviews were conducted with developers of both large and small-medium enterprises in nine countries. All interviews were transcribed, coded, and analyzed according to applied thematic analysis


Upstream news

Filippo Valsorda published an informative blog post recently called How Go Mitigates Supply Chain Attacks outlining the high-level features of the Go ecosystem that helps prevent various supply-chain attacks.


There was new/further activity on a pull request filed against openssl by Sebastian Andrzej Siewior in order to prevent saved CFLAGS (which may contain the -fdebug-prefix-map=<PATH> flag that is used to strip an arbitrary build path from the debug info) from being recorded; if this information remains recorded, then the binary is no longer reproducible if the build directory changes.


Events

The Linux Foundation’s SupplyChainSecurityCon will take place June 21st — 24th 2022, both virtually and in Austin, Texas. Long-time Reproducible Builds and openSUSE contributor Bernhard M. Wiedemann learned that he had his talk accepted, and will speak on Reproducible Builds: Unexpected Benefits and Problems on June 21st.


There will be an in-person “Debian Reunion” in Hamburg, Germany later this year, taking place from 23 — 30 May. Although this is a “Debian” event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information. 41 people have registered so far, and there are approximately 10 “on-site” beds still left.


The minutes and logs from our April 2022 IRC meeting have been published. In case you missed this one, our next IRC meeting will take place on May 31st at 15:00 UTC on #reproducible-builds on the OFTC network.


Debian

Roland Clobus wrote another in-depth status update about ‘live’ Debian images, summarising the current situation: all major desktops build reproducibly with bullseye, bookworm and sid, including the Cinnamon desktop on bookworm and sid, “but at a small functionality cost: 14 words will be incorrectly abbreviated”. This work incorporated:

  • Reporting an issue about unnecessarily modified timestamps in the daily Debian installer images. []
  • Reporting a bug against debian-installer in order to use a suitable kernel version. (#1006800)
  • Reporting a bug in texlive-binaries regarding the unreproducible content of .fmt files. (#1009196)
  • Adding hacks to make the Cinnamon desktop image reproducible in bookworm and sid. []
  • Adding a script to rebuild a live-build ISO image from a given timestamp. []
  • etc.

On our mailing list, Venkata Pyla started a thread on the issue of the Debian debconf cache being non-reproducible when creating system images, and Vagrant Cascadian posted an excellent summary of the reproducibility status of core package sets in Debian, soliciting similar information from other distributions.


Lastly, 122 reviews of Debian packages were added, 44 were updated and 193 were removed this month, adding to our extensive knowledge about identified issues. A number of issue types have been updated as well, including timestamps_generated_by_hevea, randomness_in_ocaml_preprocessed_files, build_path_captured_in_emacs_el_file, golang_compiler_captures_build_path_in_binary and build_path_captured_in_assembly_objects.


Other distributions

Happy birthday to GNU Guix, which recently turned 10 years old! People have been sharing their stories, in which reproducible builds and bootstrappable builds are a recurring theme as a feature important to its users and developers. The experiences are available on the GNU Guix blog as well as in a post on fossandcrafts.org.


In openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 210 and 211 to Debian unstable. He also noticed that some Python .pyc files are reported as data, so diffoscope should support .pyc as a fallback filename extension [].

In addition, Mattia Rizzolo disabled the Gnumeric tests in Debian as the package is not currently available [] and dropped mplayer from Build-Depends too []. Mattia also fixed an issue to ensure that the PATH environment variable is properly modified for all actions, not just when running the comparator. []


Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Daniel Golle:

    • Prefer a different solution to avoid building all OpenWrt packages; skip packages from optional community feeds. []
  • Holger Levsen:

    • Detect Python deprecation warnings in the node health check. []
    • Detect failure to build the Debian Installer. []
  • Mattia Rizzolo:

    • Install disorderfs for building OpenWrt packages. []
  • Paul Spooren (OpenWrt-related changes):

    • Don’t build all packages whilst the core packages are not yet reproducible. []
    • Add a missing RUN directive to node_cleanup. []
    • Be less verbose during a toolchain build. []
    • Use disorderfs for rebuilds and update the documentation to match. [][][]
  • Roland Clobus:

    • Publish the last reproducible Debian ISO image. []
    • Use the rebuild.sh script from the live-build package. []

Lastly, node maintenance was also performed by Holger Levsen [][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website, where you will also find the various ways to get in touch with us.

05 May, 2022 07:18PM

hackergotchi for Bits from Debian

Bits from Debian

Google Platinum Sponsor of DebConf22

Google logo

We are very pleased to announce that Google has committed to support DebConf22 as a Platinum sponsor. This is the third year in a row that Google is sponsoring The Debian Conference at the highest tier!

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner, sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

With this additional commitment as Platinum Sponsor for DebConf22, Google contributes to making our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Google, for your support of DebConf22!

Become a sponsor too!

DebConf22 will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

And DebConf22 is still accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf22 website at https://debconf22.debconf.org/sponsors/become-a-sponsor.

DebConf22 banner open registration

05 May, 2022 08:00AM by The Debian Publicity Team

hackergotchi for Norbert Preining

Norbert Preining

KDE Gears 22.04 and Plasma 5.24.5 for Debian

I have updated my OBS builds to contain the new KDE Gears 22.04 as well as the last point release of KDE Plasma 5.24.5.

As usual, the packages are provided via my OBS builds. If you have used my packages till now, then you only need to change the apps2112 line to read apps2204. To give full details, I repeat (and update) instructions for all here: First of all, you need to add my OBS key, say in /etc/apt/trusted.gpg.d/obs-npreining.asc, and add a file /etc/apt/sources.list.d/obs-npreining-kde.list, containing the following lines, replacing the DISTRIBUTION part with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma524/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2204/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/DISTRIBUTION/ ./
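
Putting it together for, say, Debian unstable, the whole dance is roughly this; note that the Release.key URL below just follows the generic OBS repository layout and is only an illustration:

# as root: fetch the repository signing key (the URL is illustrative, adjust to the actual key location)
curl -fsSL https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/Release.key \
  -o /etc/apt/trusted.gpg.d/obs-npreining.asc
# refresh the package lists and pull in the updated Plasma/Gears packages
apt update && apt full-upgrade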

Some programs in the other group have been recompiled against the Gears 22.04 libraries.

Enjoy!

PS: Considering that I don’t have a user-facing Debian computer anymore, all these packages are only tested by third parties and not by myself. Be aware!

PPS: Funny to read Point 4 of the Debian Social Contract, “Our priorities are our users and free software”; obviously I care a lot about my users.

05 May, 2022 05:37AM by Norbert Preining

May 03, 2022

hackergotchi for Sean Whitton

Sean Whitton

for-bullseye

Consfigurator has long had the combinators OS:TYPECASE and OS:ETYPECASE to conditionalise on a host’s operating system. For example:

(os:etypecase
  (debian-stable (apt:installed-backport "notmuch"))
  (debian-unstable (apt:installed "notmuch")))

You can’t distinguish between stable releases of Debian like this, however, because while that information is known, it’s not represented at the level of types. You can manually conditionalise on Debian suite using something like this:

(defpropspec notmuch-installed :posix ()
  (switch ((os:debian-suite (get-hostattrs-car :os)) :test #'string=)
    ("bullseye" '(apt:installed-backport "notmuch"))
    (t          '(apt:installed "notmuch"))))

but that means stepping outside of Consfigurator’s DSL, which has various disadvantages, such as a reduction in readability. So today I’ve added some new combinators, so that you can say

(os:debian-suite-case
  ("bullseye" (apt:installed-backport "notmuch"))
  (t          (apt:installed "notmuch")))

For my own use I came up with this additional simple wrapper:

(defmacro for-bullseye (atomic)
  `(os:debian-suite-case
     ("buster")
     ("bullseye" ,atomic)
     ;; Check the property is actually unapplicable.
     ,@(and (get (car atomic) 'punapply) `((t (unapplied ,atomic))))))

So now I can say

(for-bullseye (apt:pinned '("elpa-org-roam") '(os:debian-unstable) 900))

which is a succinct expression of the following: “on bullseye, pin elpa-org-roam to sid with priority 900, drop the pin when we upgrade the machine to bookworm, and don’t do anything at all if the machine is still on buster”.

As a consequence of my doing Debian development but running Debian stable everywhere, I accumulate a number of tweaks like this one over the course of each Debian stable release. In the past I’ve gone through and deleted them all when it’s time to upgrade to the next release, but then I’ve had to add properties to undo changes made for the last stable release, and write comments saying why those are there and when they can be safely removed, which is tedious and verbose. This new combinator is cleaner.

03 May, 2022 11:13PM

hackergotchi for Steve Kemp

Steve Kemp

A plea for books ..

Recently I've been getting much more interested in the "retro" computers of my youth, partly because I've been writing crazy code in Z80 assembly-language, and partly because I've been preparing to introduce our child to his first computer:

  • An actual 1982 ZX Spectrum, cassette deck and all.
    • No internet
    • No hi-rez graphics
    • Easily available BASIC
    • And as a nice bonus the keyboard is wipe-clean!

I've got a few books, books I've hoarded for 30+ years, but I'd love to collect some more. So here's my request:

  • If you have any books covering either the Z80 processor, or the ZX Spectrum, please consider dropping me an email.

I'd be happy to pay €5-10 each for any book I don't yet own, and I'd also be more than happy to cover the cost of postage to Finland.

I'd be particularly pleased to see anything from Melbourne House, and while low-level is best, the coding-books from Usborne (The Mystery Of Silver Mountain, etc, etc) wouldn't go amiss either.

I suspect most people who have collected and kept these wouldn't want to part with them, but just in case ..

03 May, 2022 06:12PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Using a RPi as a display adapter

Almost ten months ago, I mentioned on this blog I bought an ARM laptop, which is now my main machine while away from home — a Lenovo Yoga C630 13Q50. Yes, yes, I am still not as much away from home as I used to before, as this pandemic is still somewhat of a thing, but I do move more.

My main activity in the outside world with my laptop is teaching. I teach twice a week, and… well, having a display for my slides and for showing examples in the terminal and such is a must. However, as I said back in August, one of the hardware support issues for this machine is:

No HDMI support via the USB-C displayport. While I don’t expect
to go to conferences or even classes in the next several months,
I hope this can be fixed before I do. It’s a potential important
issue for me.

It has sadly… not yet been solved ☹ While many things have improved since kernel 5.12 (the first I used), the Device Tree does not yet hint at where external video might sit.

So, I went to the obvious: Many people carry different kinds of video adaptors… I carry a slightly bulky one: an RPi3.

For two months already (time flies!), I had an ugly contraption where the RPi3 connected via Ethernet and displayed a VNC client, and my laptop had a VNC server. Oh, but did I mention — My laptop works so much better with Wayland than with Xorg that I switched, and am now a happy user of the Sway compositor (a drop-in replacement for the i3 window manager). It is built over WLRoots, which is a great and (relatively) simple project, but will thankfully not carry some of Gnome or KDE’s ideas — not even those I’d rather have. So it took a bit of searching; I was very happy to find WayVNC, a VNC server for wlroots-based Wayland compositors. I launched a second Wayland session, to be able to have my main session undisturbed and present only a window from it.

Only that… VNC is slow and laggy, and sometimes awkward. So I kept searching for something better. And something better is, happily, what I was finally able to do!

In the laptop, I am using wf-recorder to grab an area of the screen and funnel it into a V4L2 loopback device (which allows it to be used as a camera, solving the main issue with grabbing parts of a Wayland screen):

/usr/bin/wf-recorder -g '0,32 960x540' -t --muxer=v4l2 --codec=rawvideo --pixelformat=yuv420p --file=/dev/video10

(yes, my V4L2Loopback device is set to /dev/video10). You will note I’m grabbing a 960×540 rectangle, which is the top ¼ of my screen (1920x1080) minus the Waybar. I think I’ll increase it to 960×720, as the projector to which I connect the Raspberry has a 4×3 output.
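
Just for context, that loopback device has to exist before wf-recorder can write to it; a minimal sketch, assuming the v4l2loopback kernel module is installed:

# create /dev/video10 (video_nr picks the device number; card_label is only cosmetic)
sudo modprobe v4l2loopback video_nr=10 card_label=wf-recorder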

After this is sent to /dev/video10, I tell ffmpeg to send it via RTP to the fixed address of the Raspberry:

/usr/bin/ffmpeg -i /dev/video10 -an -f rtp -sdp_file /tmp/video.sdp rtp://10.0.0.100:7000/

Yes, some uglier things happen here. You will note /tmp/video.sdp is created in the laptop itself; this file describes the stream’s metadata so it can be used from the client side. I cheated and copied it over to the Raspberry, doing an ugly hardcode along the way:

user@raspi:~ $ cat video.sdp
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 10.0.0.100
t=0 0
a=tool:libavformat 58.76.100
m=video 7000 RTP/AVP 96
b=AS:200
a=rtpmap:96 MP4V-ES/90000
a=fmtp:96 profile-level-id=1

People familiar with RTP will scold me: How come I’m streaming to the unicast client address? I should do it to an address in the 224.0.0.0–239.0.0.0 range. And it worked, sometimes. I switched over to 10.0.0.100 because it works, basically always ☺

Finally, upon bootup, I have configured NoDM to start a session with the user user, and dropped the following in my user’s .xsession:

setterm -blank 0 -powersave off -powerdown 0
xset s off
xset -dpms
xset s noblank

mplayer -msglevel all=1 -fs /home/usuario/video.sdp

Anyway, as a result, my students are able to much better follow the pace of my presentation, and I’m able to do some tricks better (particularly when it requires quick reaction times, as often happens when dealing with concurrency and such issues).

Oh, and of course — in case it’s of interest to anybody, knowing that SD cards are all but reliable in the long run, I wrote a vmdb2 recipe to build the images. You can grab it here; it requires some local files to be present to be built — some are the ones I copied over above, and the other ones are surely of no interest to you (such as my public ssh key or such :-] )

What am I still missing? (read: Can you help me with some ideas? 😉)

  • I’d prefer having Ethernet-over-USB. I have the USB-C Ethernet adapter, which powers the RPi and provides a physical link, but I’m sure I could do away with the fugly cable wrapped around the machine…
  • Of course, if that happens, I would switch to a much sexier Zero RPi. I have to check whether the video codec is light enough for a plain ol’ Zero (armel) or I have to use the much more powerful Zero 2… I prefer sticking to the lowest possible hardware!
  • Naturally… The best would be to just be able to connect my USB-C-to-{HDMI,VGA} adapter, that has been sitting idly… 😕 One day, I guess…

Of course, this is a blog post published to brag about my stuff, but also to serve me as persistent memory in case I need to recreate this…

03 May, 2022 04:16PM

May 01, 2022

hackergotchi for Norbert Preining

Norbert Preining

TLContrib on CTAN

For a few years now I have also been managing tlcontrib – the supplementary TeX Live package repository. It contains quite a lot of packages which cannot make it into TeX Live proper for various reasons. Thanks to the CTAN team, package pages on CTAN now also show whether a package is available via tlcontrib. Here is an example from the classico package:

Big thanks to all who made this possible, and who make the TeX ecosystem even more friendly!

Enjoy

01 May, 2022 09:31AM by Norbert Preining

Paul Wise

FLOSS Activities April 2022

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

  • Spam: reported 33 Debian mailing list posts
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The libpst, gensim, SPTAG work was sponsored. All other work was done on a volunteer basis.

01 May, 2022 12:26AM

April 30, 2022

hackergotchi for Junichi Uekawa

Junichi Uekawa

Already May.

Already May. I've been writing some code in rust and a bit of javascript. But real life is too busy.

30 April, 2022 11:57PM by Junichi Uekawa

hackergotchi for Sean Whitton

Sean Whitton

consfigurator 1.0.0

I am pleased to announce Consfigurator 1.0.0.

Reaching version 1.0.0 signifies that we will try to avoid API breaks. You should be able to use Consfigurator to manage production systems.

You can find the source at https://git.spwhitton.name/consfigurator for browsing online or git cloning.

Releases are made by publishing signed git tags to that repository. The tag for this release is named ‘v1.0.0’, and is signed by me.

On Debian/etc. systems, apt-get install cl-consfigurator

-8<-

Consfigurator is a system for declarative configuration management using Common Lisp. You can use it to configure hosts as root, deploy services as unprivileged users, build and deploy containers, install operating systems, produce disc images, and more. Some key advantages:

  • Apply configuration by transparently starting up another Lisp image on the machine to be configured, so that you can use the full power of Common Lisp to inspect and control the host.

  • Also define properties of hosts in a more restricted language, that of :POSIX properties, to configure machines, containers and user accounts where you can’t install Lisp. These properties can be applied using just an SSH or serial connection, but they can also be applied by remote Lisp images, enabling code reuse.

  • Flexibly chain and nest methods of connecting to hosts. For example, you could have Consfigurator SSH to a host, sudo to root, start up Lisp, use the setns(2) system call to enter a Linux container, and then deploy a service. Secrets, and other prerequisite data, are properly passed along.

  • Combine declarative semantics for defining hosts and services with a multiparadigmatic general-purpose programming language that won’t get in your way.

Declarative configuration management systems like Consfigurator and Propellor share a number of goals with projects like the GNU Guix System and NixOS. However, tools like Consfigurator and Propellor try to layer the power of declarative and reproducible configuration semantics on top of traditional, battle-tested UNIX system administration infrastructure like distro package managers, package archives and daemon configuration mechanisms, rather than seeking to replace any of those. Let’s get as much as we can out of all that existing distro policy-compliant work!

30 April, 2022 07:50PM

April 29, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

hyperlinked PDF planner

The Year page

A day page

I've been having reasonable success with time blocking, a technique I learned from Cal Newport's writings, in particular Deep Work. I'd been doing it on paper for a while, but I wanted to try and move to a digital solution.

There's a cottage industry of people making (and selling) various types of diary and planner as PDF files for use on tablets such as the Remarkable. Some of these use PDF hyperlinks to greatly improve navigating around. This one from Clou Media is particularly good, but I found that I wanted something slightly different from what I could find out there, so I decided to build my own.

I explored a couple of different approaches for how to do this. One was Latex, and here's one example of a latex-based planner, but I decided against it, as I already spend too much time wrestling with Latex for my PhD work.

Another approach might have been Pandoc, but as far as I could tell its PDF pipeline went via Latex, so I thought I might as well cut out the middleman.

Eventually I stumbled across tools to build PDFs from HTML, via "CSS Paged Media". This appealed, because I've done plenty of HTML generation. print-css.rocks is a fantastic resource to explore the print-specific CSS features. Weasyprint is a fantastic open source tool to convert appropriately-written HTML/CSS into PDF.

Finally I wanted to use a templating system to take shortcuts on writing HTML. I settled on embedded Ruby, which is something I haven't touched in over a decade. This was a relatively simple project and I found it surprisingly fun.
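
To give an idea of the shape of the pipeline, a small sketch; the file names are illustrative, not the ones used in the repository:

# render the embedded-Ruby template into plain HTML
erb day.html.erb > day.html
# let WeasyPrint apply the paged-media CSS and produce the final PDF
weasyprint day.html planner.pdf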

The results are available on GitHub: https://github.com/jmtd/planner. Right now, you get exactly what I have described. But my next plan is to add support for re-generating a planner, incorporating new information: pulling diary info from iCal, and any annotations made (such as with the Remarkable tablet) on top of the last generation and preserving them on the next.

29 April, 2022 09:29PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Should we stop teaching the normal distribution?

I guess Betteridge's law of headlines gives you the answer, but bear with me. :-)

Like most engineers, I am a layperson in statistics; I had some in high school, then an intro course in university and then used it in a couple of random courses (like speech recognition). (I also took a multivariate statistics course on my own after I had graduated.) But pretty much every practical tool I ever learned was, eventually, centered around the normal distribution; we learned about Student's t-test in various scenarios, made confidence intervals, learned about the central limit theorem that showed its special place in statistics, how the binomial distribution converges to the normal distribution under reasonable circumstances (not the least due to the CLT), and so on.

But then I got out in the wild and started trying to make sense out of the troves of data coming my way (including some stemming from experiments I designed on my own). And it turns out… a lot of things really are not normal. I'd see distributions with heavy tails, with skew, or that were bimodal. And here's the thing—people, who had the same kind of non-statistics-specialized education as me, continued to treat these as Gaussian. And it still appears to work. You get the beautiful confidence intervals and low p-values that seem to make sense… it's just so odd that you get “p<0.05 significant“ tests way too often from random noise. You just assume that's how it is, without really realizing that you're doing junk statistics. And even if you do, you don't have the tools to do anything about it, because everything else is hidden away in obscure R libraries or somewhere on Math Stack Exchange.

So I ask: If we're really going to teach people one thing, is the normal distribution really the best tool? (Yes, sure, we learned about the Poisson and Weibull and many others, but we never really did hypothesis testing on them, and we never really learned what to do when things didn't follow a tidy mathematical formula. Or even how to identify that.) It's beautiful and simple (“simple”) and mathematical and you only need a huge table and then you can almost do calculations by hand, but perhaps that's not really what we want? I understand we want to teach fundamental understanding and not just “use this computer tool”, but again, we're sending people out with a really limited tool set to make sense of the world.

I don't know what we should do instead—again, I am a layperson, and my understanding of this is limited. But it feels like we should be able to come up with fairly simple techniques that don't break down fatally if the data doesn't follow one given distribution, no matter how important. Bootstrap? Wilcoxon signed-rank test? I know, of course, that if the data really is normal, you will need a lot less data for the same-quality result (and some natural processes, like, I guess, radioactive decay, surely follow normal distributions), but perhaps we should leave the Gaussians and other parametric tools for the advanced courses? I don't know. But it's worth a thought. And I need to learn more statistics.

29 April, 2022 06:59PM

hackergotchi for Holger Levsen

Holger Levsen

20220429-Debian-Reunion-Hamburg-2022

Debian Reunion Hamburg 2022 from May 23 to 30

This is just a quick reminder for the Debian Reunion Hamburg 2022 happening in a bit more than 3 weeks.

So far 43 people have registered, and thus there is still some on-site accommodation available. There's no real deadline for registration; however, if you register after May 1st you might not get a t-shirt in your preferred size.

Also: if you intend to give a presentation but haven't replied to the CfP, please do so.

The wiki page linked above has all the details.

29 April, 2022 12:34PM

Russ Allbery

Review: Interesting Times

Review: Interesting Times, by Terry Pratchett

Series: Discworld #17
Publisher: Harper
Copyright: 1994
Printing: February 2014
ISBN: 0-06-227629-8
Format: Mass market
Pages: 399

Interesting Times is the seventeenth Discworld novel and certainly not the place to start. At the least, you will probably want to read The Colour of Magic and The Light Fantastic before this book, since it's a sequel to those (although Rincewind has had some intervening adventures).

Lord Vetinari has received a message from the Counterweight Continent, the first in ten years, cryptically demanding the Great Wizzard be sent immediately.

The Agatean Empire is one of the most powerful states on the Disc. Thankfully for everyone else, it normally suits its rulers to believe that the lands outside their walls are inhabited only by ghosts. No one is inclined to try to change their minds or otherwise draw their attention. Accordingly, the Great Wizard must be sent, a task that Vetinari efficiently delegates to the Archchancellor. There is only the small matter of determining who the Great Wizzard is, and why it was spelled with two z's.

Discworld readers with a better memory than I will recall Rincewind's hat. Why the Counterweight Continent would be demanding a wizard notorious for his near-total inability to perform magic is a puzzle for other people. Rincewind is promptly located by a magical computer, and nearly as promptly transported across the Disc, swapping him for an unnecessarily exciting object of roughly equivalent mass and hurling him into an unexpected rescue of Cohen the Barbarian. Rincewind predictably reacts by running away, although not fast or far enough to keep him from being entangled in a glorious popular uprising. Or, well, something that has aspirations of being glorious, and popular, and an uprising.

I hate to say this, because Pratchett is an ethically thoughtful writer to whom I am willing to give the benefit of many doubts, but this book was kind of racist.

The Agatean Empire is modeled after China, and the Rincewind books tend to be the broadest and most obvious parodies, so that was already a recipe for some trouble. Some of the social parody is not too objectionable, albeit not my thing. I find ethnic stereotypes and making fun of funny-sounding names in other languages (like a city named Hunghung) to be in poor taste, but Pratchett makes fun of everyone's names and cultures rather equally. (Also, I admit that some of the water buffalo jokes, despite the stereotypes, were pretty good.) If it had stopped there, it would have prompted some eye-rolling but not much comment.

Unfortunately, a significant portion of the plot depends on the idea that the population of the Agatean Empire has been so brainwashed into obedience that they have a hard time even imagining resistance, and even their revolutionaries are so polite that the best they can manage for slogans are things like "Timely Demise to All Enemies!" What they need are a bunch of outsiders, such as Rincewind or Cohen and his gang. More details would be spoilers, but there are several deliberate uses of Ankh-Morpork as a revolutionary inspiration and a great deal of narrative hand-wringing over how awful it is to so completely convince people they are slaves that you don't need chains.

There is a depressingly tedious tendency of western writers, even otherwise thoughtful and well-meaning ones like Pratchett, to adopt a simplistic ranking of political systems on a crude measure of freedom. That analysis immediately encounters the problem that lots of people who live within systems that rate poorly on this one-dimensional scale seem inadequately upset about circumstances that are "obviously" horrific oppression. This should raise questions about the validity of the assumptions, but those assumptions are so unquestionable that the writer instead decides the people who are insufficiently upset about their lack of freedom must be defective. The more racist writers attribute that defectiveness to racial characteristics. The less racist writers, like Pratchett, attribute that defectiveness to brainwashing and systemic evil, which is not quite as bad as overt racism but still rests on a foundation of smug cultural superiority.

Krister Stendahl, a bishop of the Church of Sweden, coined three famous rules for understanding other religions:

  1. When you are trying to understand another religion, you should ask the adherents of that religion and not its enemies.
  2. Don't compare your best to their worst.
  3. Leave room for "holy envy."

This is excellent advice that should also be applied to politics. Most systems exist for some reason. The differences from your preferred system are easy to see, particularly those that strike you as horrible. But often there are countervailing advantages that are less obvious, and those are more psychologically difficult to understand and objectively analyze. You might find they have something that you wish your system had, which causes discomfort if you're convinced you have the best political system in the world, or are making yourself feel better about the abuses of your local politics by assuring yourself that at least you're better than those people.

I was particularly irritated to see this sort of simplistic stereotyping in Discworld given that Ankh-Morpork, the setting of most of the Discworld novels, is an authoritarian dictatorship. Vetinari quite capably maintains his hold on power, and yet this is not taken as a sign that the city's inhabitants have been brainwashed into considering themselves slaves. Instead, he's shown as adept at maintaining the stability of a precarious system with a lot of competing forces and a high potential for destructive chaos. Vetinari is an awful person, but he may be better than anyone who would replace him. Hmm.

This sort of complexity is permitted in the "local" city, but as soon as we end up in an analog of China, the rulers are evil, the system lacks any justification, and the peasants only don't revolt because they've been trained to believe they can't. Gah.

I was muttering about this all the way through Interesting Times, which is a shame because, outside of the ham-handed political plot, it has some great Pratchett moments. Rincewind's approach to any and all danger is a running (sorry) gag that keeps working, and Cohen and his gang of absurdly competent decrepit barbarians are both funnier here than they have been in any previous book and the rare highly-positive portrayal of old people in fantasy adventures who are not wizards or crones. Pretty Butterfly is a great character who deserved to be in a better plot. And I loved the trouble that Rincewind had with the Agatean tonal language, which is an excuse for Pratchett to write dialog full of frustrated non-sequiturs when Rincewind mispronounces a word.

I do have to grumble about the Luggage, though. From a world-building perspective its subplot makes sense, but the Luggage was always the best character in the Rincewind stories, and the way it lost all of its specialness here was oddly sad and depressing. Pratchett also failed to convince me of the drastic retcon of The Colour of Magic and The Light Fantastic that he does here (and which I can't talk about in detail due to spoilers), in part because it's entangled in the orientalism of the plot.

I'm not sure Pratchett could write a bad book, and I still enjoyed reading Interesting Times, but I don't think he gave the politics his normal care, attention, and thoughtful humanism. I hope later books in this part of the Disc add more nuance, and are less confident and judgmental. I can't really recommend this one, even though it has some merits.

Also, just for the record, "may you live in interesting times" is not a Chinese curse. It's an English saying that likely was attributed to China to make it sound exotic, which is the sort of landmine that good-natured parody of other people's cultures needs to be wary of.

Followed in publication order by Maskerade, and in Rincewind's personal timeline by The Last Continent.

Rating: 6 out of 10

29 April, 2022 02:50AM

April 28, 2022

hackergotchi for Jonathan McDowell

Jonathan McDowell

Resizing consoles automatically

I have 2 very useful shell scripts related to resizing consoles. The first is imaginatively called resize and just configures the terminal to be the requested size, neatly resizing an xterm or gnome-terminal:

#!/bin/sh

# resize <rows> <columns>
/bin/echo -e '\033[8;'$1';'$2't'
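
For example, to grow the current terminal to 50 rows by 132 columns:

resize 50 132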

The other is a bit more complicated and useful when connecting to a host via a serial console, or when driving a qemu VM with -display none -nographic and all output coming over a “serial console” on stdio. It figures out the size of the terminal it’s running in and correctly sets the local settings to match so you can take full advantage of a larger terminal than the default 80x24:

#!/bin/bash

# save the cursor, then try to move it far beyond the bottom-right corner;
# the terminal clamps this to its actual size
echo -ne '\e[s\e[5000;5000H'
# ask the terminal to report the resulting cursor position (row;column)
IFS='[;' read -p $'\e[6n' -d R -a pos -rs
# restore the saved cursor position
echo -ne '\e[u'

# cols / rows
echo "Size: ${pos[2]} x ${pos[1]}"

stty cols "${pos[2]}" rows "${pos[1]}"

export TERM=xterm-256color

Generally I source this with . fix-term or the TERM export doesn’t get applied. Both of these exist in various places around the ‘net (and there’s a resize binary shipped along with xterm) but I always forget the exact terms to find it again when I need it. So this post is mostly intended to serve as future reference next time I don’t have them handy.

28 April, 2022 07:03PM

Antoine Beaupré

building Debian packages under qemu with sbuild

I've been using sbuild for a while to build my Debian packages, mainly because it's what is used by the Debian autobuilders, but also because it's pretty powerful and efficient. Configuring it just right, however, can be a challenge. In my quick Debian development guide, I had a few pointers on how to configure sbuild with the normal schroot setup, but today I finished a qemu based configuration.

Why

I want to use qemu mainly because it provides better isolation than a chroot. I sponsor packages sometimes and while I typically audit the source code before building, it still feels like the extra protection shouldn't hurt.

I also like the idea of unifying my existing virtual machine setup with my build setup. My current VM setup is kind of all over the place: libvirt, vagrant, GNOME Boxes, etc. I've been slowly converging on libvirt, however, and most solutions I use right now rely on qemu under the hood, certainly not chroots...

I could also have decided to go with containers like LXC, LXD, Docker (with conbuilder, whalebuilder, docker-buildpackage), systemd-nspawn (with debspawn), unshare (with schroot --chroot-mode=unshare), or whatever: I didn't feel those offer the level of isolation that is provided by qemu.

The main downside of this approach is that it is (obviously) slower than native builds. But on modern hardware, that cost should be minimal.

How

Basically, you need this:

sudo mkdir -p /srv/sbuild/qemu/
sudo apt install sbuild-qemu
sudo sbuild-qemu-create -o /srv/sbuild/qemu/unstable.img unstable https://deb.debian.org/debian

Then to make this used by default, add this to ~/.sbuildrc:

# run autopkgtest inside the schroot
$run_autopkgtest = 1;
# tell sbuild to use autopkgtest as a chroot
$chroot_mode = 'autopkgtest';
# tell autopkgtest to use qemu
$autopkgtest_virt_server = 'qemu';
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ '--', '/srv/sbuild/qemu/%r-%a.img' ];
# tell plain autopkgtest to use qemu, and the right image
$autopkgtest_opts = [ '--', 'qemu', '/srv/sbuild/qemu/%r-%a.img' ];
# no need to cleanup the chroot after build, we run in a completely clean VM
$purge_build_deps = 'never';
# no need for sudo
$autopkgtest_root_args = '';

Note that the above will use the default autopkgtest (1GB, one core) and qemu (128MB, one core) configuration, which might be a little low on resources. You probably want to be explicit about this, with something like this:

# extra parameters to pass to qemu
# --enable-kvm is not necessary, detected on the fly by autopkgtest
my @_qemu_options = ('--ram-size=4096', '--cpus=2');
# tell autopkgtest-virt-qemu the path to the image
# use --debug there to show what autopkgtest is doing
$autopkgtest_virt_server_options = [ @_qemu_options, '--', '/srv/sbuild/qemu/%r-%a.img' ];
$autopkgtest_opts = [ '--', 'qemu', @_qemu_options, '/srv/sbuild/qemu/%r-%a.img'];

This configuration will:

  1. create a virtual machine image in /srv/sbuild/qemu for unstable
  2. tell sbuild to use that image to create a temporary VM to build the packages
  3. tell sbuild to run autopkgtest (which should really be default)
  4. tell autopkgtest to use qemu for builds and for tests

Note that the VM created by sbuild-qemu-create has an unlocked root account with an empty password.

Other useful tasks

  • enter the VM to test things; changes will be discarded (thanks Nick Brown for the sbuild-qemu-boot tip!):

     sbuild-qemu-boot /srv/sbuild/qemu/unstable-amd64.img
    

    That program is shipped only with bookworm and later, an equivalent command is:

     qemu-system-x86_64 -snapshot -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    

    The key argument here is -snapshot.

  • enter the VM to make permanent changes, which will not be discarded:

     sudo sbuild-qemu-boot --readwrite /srv/sbuild/qemu/unstable-amd64.img
    

    Equivalent command:

     sudo qemu-system-x86_64 -enable-kvm -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -m 2048 -nographic /srv/sbuild/qemu/unstable-amd64.img
    
  • update the VM (thanks lavamind):

     sudo sbuild-qemu-update /srv/sbuild/qemu/unstable-amd64.img
    
  • build in a specific VM regardless of the suite specified in the changelog (e.g. UNRELEASED, bookworm-backports, bookworm-security, etc):

     sbuild --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    

    Note that you'd also need to pass --autopkgtest-opts if you want autopkgtest to run in the correct VM as well:

     sbuild --autopkgtest-opts="-- qemu /var/lib/sbuild/qemu/unstable.img" --autopkgtest-virt-server-opts="-- qemu /var/lib/sbuild/qemu/bookworm-amd64.img"
    

    You might also need parameters like --ram-size if you customized it above.

And yes, this is all quite complicated and could be streamlined a little, but that's what you get when you have years of legacy and just want to get stuff done. It seems to me autopkgtest-virt-qemu should have a magic flag that starts a shell for you, but it doesn't look like that's a thing. When that program starts, it just says ok and sits there.

Maybe because the authors consider the above to be simple enough (see also bug #911977 for a discussion of this problem).

Live access to a running test

When autopkgtest starts a VM, it uses this funky qemu commandline:

qemu-system-x86_64 -m 4096 -smp 2 -nographic -net nic,model=virtio -net user,hostfwd=tcp:127.0.0.1:10022-:22 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0,id=rng-device0 -monitor unix:/tmp/autopkgtest-qemu.w1mlh54b/monitor,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS0,server,nowait -serial unix:/tmp/autopkgtest-qemu.w1mlh54b/ttyS1,server,nowait -virtfs local,id=autopkgtest,path=/tmp/autopkgtest-qemu.w1mlh54b/shared,security_model=none,mount_tag=autopkgtest -drive index=0,file=/tmp/autopkgtest-qemu.w1mlh54b/overlay.img,cache=unsafe,if=virtio,discard=unmap,format=qcow2 -enable-kvm -cpu kvm64,+vmx,+lahf_lm

... which is a typical qemu commandline, I'm sorry to say. That gives us a VM with those settings (paths are relative to a temporary directory, /tmp/autopkgtest-qemu.w1mlh54b/ in the above example):

  • the shared/ directory is, well, shared with the VM
  • port 10022 is forwarded to the VM's port 22, presumably for SSH, but no SSH server is started by default
  • the ttyS0 and ttyS1 UNIX sockets are mapped to the first two serial ports (use nc -U to talk with those)
  • the monitor UNIX socket is a qemu control socket (see the QEMU monitor documentation, also nc -U)

In other words, it's possible to access the VM with:

nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1

The nc socket interface is ... not great, but it works well enough. And you can probably fire up an SSHd to get a better shell if you feel like it.
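
If you do want that nicer shell, here is a rough sketch of the idea (assuming the guest has network access to install packages; the temporary directory is the one from the example above):

# attach to a serial console of the running VM and log in as root
nc -U /tmp/autopkgtest-qemu.w1mlh54b/ttyS1
# inside the VM: install and start an SSH server, and give root a password or an authorized key
#   apt install openssh-server
# back on the host, use the 10022 -> 22 port forward that autopkgtest set up
# (you may need to permit root logins in the guest's sshd configuration first)
ssh -p 10022 root@127.0.0.1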

Nitty-gritty details no one cares about

Fixing hang in sbuild cleanup

I'm having a hard time making heads or tails of this, but please bear with me.

In sbuild + schroot, there's this notion that we don't really need to clean up after ourselves inside the schroot, as the schroot will just be deleted anyway. This behavior seems to be handled by the internal "Session Purged" parameter.

At least in lib/Sbuild/Build.pm, we can see this:

my $is_cloned_session = (defined ($session->get('Session Purged')) &&
             $session->get('Session Purged') == 1) ? 1 : 0;

[...]

if ($is_cloned_session) {
    $self->log("Not cleaning session: cloned chroot in use\n");
} else {
    if ($purge_build_deps) {
        # Removing dependencies
        $resolver->uninstall_deps();
    } else {
        $self->log("Not removing build depends: as requested\n");
    }
}

The schroot builder defines that parameter as:

    $self->set('Session Purged', $info->{'Session Purged'});

... which is ... a little confusing to me. $info is:

my $info = $self->get('Chroots')->get_info($schroot_session);

... so I presume that depends on whether the schroot was correctly cleaned up? I stopped digging there...

ChrootUnshare.pm is way more explicit:

$self->set('Session Purged', 1);

I wonder if we should do something like this with the autopkgtest backend. I guess people might technically use it with something other than qemu, but qemu is the typical use case of the autopkgtest backend, in my experience. Or at least certainly with things that clean up after themselves. Right?

For some reason, before I added this line to my configuration:

$purge_build_deps = 'never';

... the "Cleanup" step would just completely hang. It was quite bizarre.

Disgression on the diversity of VM-like things

There are a lot of different virtualization solutions one can use (e.g. Xen, KVM, Docker or Virtualbox). I have also found libguestfs to be useful to operate on virtual images in various ways. Libvirt and Vagrant are also useful wrappers on top of the above systems.

There are particularly a lot of different tools which use Docker, Virtual machines or some sort of isolation stronger than chroot to build packages. Here are some of the alternatives I am aware of:

Take, for example, Whalebuilder, which uses Docker to build packages instead of pbuilder or sbuild. Docker provides more isolation than a simple chroot: in whalebuilder, packages are built without network access and inside a virtualized environment. Keep in mind there are limitations to Docker's security and that pbuilder and sbuild do build under a different user which will limit the security issues with building untrusted packages.

On the upside, some of things are being fixed: whalebuilder is now an official Debian package (whalebuilder) and has added the feature of passing custom arguments to dpkg-buildpackage.

None of those solutions (except the autopkgtest/qemu backend) are implemented as a sbuild plugin, which would greatly reduce their complexity.

I was previously using Qemu directly to run virtual machines, and had to create VMs by hand with various tools. This didn't work so well so I switched to using Vagrant as a de-facto standard to build development environment machines, but I'm returning to Qemu because it uses a similar backend as KVM and can be used to host longer-running virtual machines through libvirt.

The great thing now is that autopkgtest has good support for qemu and sbuild has bridged the gap and can use it as a build backend. I originally had found those bugs in that setup, but all of them are now fixed:

  • #911977: sbuild: how do we correctly guess the VM name in autopkgtest?
  • #911979: sbuild: fails on chown in autopkgtest-qemu backend
  • #911963: autopkgtest qemu build fails with proxy_cmd: parameter not set
  • #911981: autopkgtest: qemu server warns about missing CPU features

So we have unification! It's possible to run your virtual machines and Debian builds using a single VM image backend storage, which is no small feat, in my humble opinion. See the sbuild-qemu blog post for the announcement.

Now I just need to figure out how to merge Vagrant, GNOME Boxes, and libvirt together, which should be a matter of placing images in the right place... right? See also hosting.

pbuilder vs sbuild

I was previously using pbuilder and switched in 2017 to sbuild. AskUbuntu.com has a good comparative between pbuilder and sbuild that shows they are pretty similar. The big advantage of sbuild is that it is the tool in use on the buildds and it's written in Perl instead of shell.

My concerns about switching were POLA (I'm used to pbuilder), the fact that pbuilder runs as a separate user (works with sbuild as well now, if the _apt user is present), and setting up COW semantics in sbuild (can't just plug cowbuilder there, need to configure overlayfs or aufs, which was non-trivial in Debian jessie).

Ubuntu folks, again, have more documentation there. Debian also has extensive documentation, especially about how to configure overlays.

I was ultimately convinced by stapelberg's post on the topic which shows how much simpler sbuild really is...

Who

Thanks lavamind for the introduction to the sbuild-qemu package.

28 April, 2022 04:02PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf22 bursary applications and call for papers are closing in less than 72 hours!

If you intend to apply for a DebConf22 bursary and/or submit an event proposal and have not yet done so, please proceed as soon as possible!

Bursary applications for DebConf22 will be accepted until May 1st at 23:59 UTC. Applications submitted after this deadline will not be considered.

You can apply for a bursary when you register for the conference.

Remember that giving a talk or organising an event is considered towards your bursary; if you have a submission to make, submit it even if it is only sketched-out. You will be able to detail it later. DebCamp plans can be entered in the usual Sprints page at the Debian wiki.

Please make sure to double-check your accommodation choices (dates and venue). Details about accommodation arrangements can be found on the accommodation page.

Event proposals will be accepted until May 1st at 23:59 UTC too.

Events are not limited to traditional presentations or informal sessions (BoFs): we welcome submissions of tutorials, performances, art installations, debates, or any other format of event that you think would be of interest to the Debian community.

Regular sessions may either be 20 or 45 minutes long (including time for questions), other kinds of sessions (workshops, demos, lightning talks, and so on) could have different durations. Please choose the most suitable duration for your event and explain any special requests. You can submit it here.

The the 23rd edition of DebConf will take place from July 17th to 24th, 2022 at the Innovation and Training Park (ITP) in Prizren, Kosovo, and will be preceded by DebCamp, from July 10th to 16th.

See you in Prizren!

DebConf22 banner open registration

28 April, 2022 07:30AM by The Debian Publicity Team

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - April 2022

After two long years of COVID hiatus, local Debian events in Montreal are back! Last Sunday, nine of us met at Koumbit to work on Debian (and other stuff!), chat and socialise.

Even though these events aren't always the most productive, it was super fun and definitely helps keeping me motivated to work on Debian in my spare time.

Many thanks to Debian for providing us a budget to rent the venue for the day and for the pizzas! Here are a few pictures I took during the event:

Pizza boxes on a wooden bench

Whiteboard listing TODO items for some of the participants

A table with a bunch of laptops, and LeLutin :)

If everything goes according to plan, our next meeting should be sometime in June. If you are interested, the best way to stay in touch is either to subscribe to our mailing list or to join our IRC channel (#debian-quebec on OFTC). Events are also posted on Quebec's Agenda du libre.

28 April, 2022 04:00AM by Louis-Philippe Véronneau

April 27, 2022

Antoine Beaupré

Using LSP in Emacs and Debian

The Language Server Protocol (LSP) is a neat mechanism that provides a common interface to what used to be language-specific lookup mechanisms (like, say, running a Python interpreter in the background to find function definitions).

There is also ctags shipped with UNIX since forever, but that doesn't support looking backwards ("who uses this function"), linting, or refactoring. In short, LSP rocks, and how do I use it right now in my editor of choice (Emacs, in my case) and OS (Debian) please?

Editor (emacs) setup

First, you need to setup your editor. The Emacs LSP mode has pretty good installation instructions which, for me, currently mean:

apt install elpa-lsp-mode

and this .emacs snippet:

(use-package lsp-mode
  :commands (lsp lsp-deferred)
  :hook ((python-mode go-mode) . lsp-deferred)
  :demand t
  :init
  (setq lsp-keymap-prefix "C-c l")
  ;; TODO: https://emacs-lsp.github.io/lsp-mode/page/performance/
  ;; also note re "native compilation": <+varemara> it's the
  ;; difference between lsp-mode being usable or not, for me
  :config
  (setq lsp-auto-configure t))

(use-package lsp-ui
  :config
  (setq lsp-ui-flycheck-enable t)
  (add-to-list 'lsp-ui-doc-frame-parameters '(no-accept-focus . t))
  (define-key lsp-ui-mode-map [remap xref-find-definitions] #'lsp-ui-peek-find-definitions)
  (define-key lsp-ui-mode-map [remap xref-find-references] #'lsp-ui-peek-find-references))

Note: this configuration might have changed since I wrote this, see my init.el configuration for the most recent config.

The main reason for choosing lsp-mode over eglot is that it's in Debian (and eglot is not). (Apparently, eglot has more chance of being upstreamed, "when it's done", but I guess I'll cross that bridge when I get there.)

I already had lsp-mode partially setup in Emacs so I only had to do this small tweak to switch and change the prefix key (because s-l or mod is used by my window manager). I also had to pin LSP packages to bookworm here so that it properly detects pylsp (the older version in Debian bullseye only supports pyls, not packaged in Debian).
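
As a rough sketch of what that pinning can look like (the package list here is only an example, and depending on your sources.list you may want to match on a suite rather than the codename), an /etc/apt/preferences.d/ snippet along these lines keeps bookworm at a low priority in general while preferring its LSP-related packages:

Package: *
Pin: release n=bookworm
Pin-Priority: 100

Package: elpa-lsp-mode elpa-lsp-ui python3-pylsp
Pin: release n=bookworm
Pin-Priority: 990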

This won't do anything by itself: Emacs will need something to talk with to provide the magic. Those "servers" are basically separate programs, one for each programming language, that do the actual work.

Servers setup

The Emacs package provides a way (M-x lsp-install-server) to install some of them, but I prefer to manage those tools through Debian packages if possible, just like lsp-mode itself. Those are the servers I currently know of in Debian:

package                  languages
ccls                     C, C++, ObjectiveC
clangd                   C, C++, ObjectiveC
elpa-lsp-haskell         Haskell
fortran-language-server  Fortran
gopls                    Golang
python3-pyls             Python
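
For example (picking package names from the table above to match the languages you actually use), installing a couple of servers is a single apt invocation:

apt install gopls python3-pyls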

There might be more such packages, but they are surprisingly hard to find. I found a few with apt search "Language Server Protocol", but that didn't find ccls, for example, because its description only says "Language Server". Searching for that shorter phrase also turned up a few more pyls plugins (e.g. black support).

Note that the Python packages, in particular, need to be upgraded to their bookworm releases to work properly (here). It seems like there's some interoperability problems there that I haven't quite figured out yet. See also my Puppet configuration for LSP.

Finally, note that I have now completely switched away from Elpy to pyls, and I'm quite happy with the results. lsp-mode feels slower than elpy but I haven't done any of the performance tuning and this will improve even more with native compilation. And lsp-mode is much more powerful. I particularly like the "rename symbol" functionality, which ... mostly works.

Remaining work

Puppet and Ruby

I still have to figure out how to actually use this: I mostly spend my time in Puppet these days, and while there is no server listed in the Emacs lsp-mode language list, there is one listed over at the upstream language list, the puppet-editor-services server.

But it's not packaged in Debian, and seems somewhat... involved. It could still be a good productivity boost. The Voxpupuli team have vim install instructions which also suggest installing solargraph, the Ruby language server, also not packaged in Debian.

Bash

I guess I do a bit of shell scripting from time to time nowadays, even though I don't like it. So the bash-language-server may prove useful as well.

Other languages

Here are more language servers available:

27 April, 2022 08:55PM

Russ Allbery

Review: Sorceress of Darshiva

Review: Sorceress of Darshiva, by David Eddings

Series: The Malloreon #4
Publisher: Del Rey
Copyright: December 1989
Printing: November 1990
ISBN: 0-345-36935-1
Format: Mass market
Pages: 371

This is the fourth book of the Malloreon, the sequel series to the Belgariad. Eddings as usual helpfully summarizes the plot of previous books (the one thing about his writing that I wish more authors would copy), this time by having various important people around the world briefed on current events. That said, you don't want to start reading here (although you might wish you could).

This is such a weird book.

One could argue that not much happens in the Belgariad other than map exploration and collecting a party, but the party collection involves meddling in local problems to extract each new party member. It's a bit of a random sequence of events, but things clearly happen. The Malloreon starts off with a similar structure, including an explicit task to create a new party of adventurers to save the world, but most of the party is already gathered at the start of the series since they carry over from the previous series. There is a ton of map exploration, but it's within the territory of the bad guys from the previous series. Rather than local meddling and acquiring new characters, the story is therefore chasing Zandramas (the big bad of the series) and books of prophecy.

This could still be an effective plot trigger but for another decision of Eddings that becomes obvious in Demon Lord of Karanda (the third book): the second continent of this world, unlike the Kingdoms of Hats world-building of the Belgariad, is mostly uniform. There are large cities, tons of commercial activity, and a fairly effective and well-run empire, with only a little local variation. In some ways it's a welcome break from Eddings's previous characterization by stereotype, but there isn't much in the way of local happenings for the characters to get entangled in.

Even more oddly, this continental empire, which the previous series set up as the mysterious and evil adversaries of the west akin to Sauron's domain in Lord of the Rings, is not mysterious to some of the party at all. Silk, the Drasnian spy who is a major character in both series, has apparently been running a vast trading empire in Mallorea. Not only has he been there before, he has houses and factors and local employees in every major city and keeps being distracted from the plot by his cutthroat capitalist business shenanigans. It's as if the characters ventured into the heart of the evil empire and found themselves in the entirely normal city next door, complete with restaurant recommendations from one of their traveling companions.

I think this is an intentional subversion of the normal fantasy plot by Eddings, and I kind of like it. We have met the evil empire, and they're more normal than most of the good guys, and both unaware and entirely uninterested in being the evil empire. But in terms of dramatic plot structure, it is such an odd choice. Combined with the heroes being so absurdly powerful that they have no reason to take most dangers seriously (and indeed do not), it makes this book remarkably anticlimactic and weirdly lacking in drama.

And yet I kind of enjoyed reading it? It's oddly quiet and comfortable reading. Nothing bad happens, nor seems very likely to happen. The primary plot tension is Belgarath trying to figure out the plot of the series by tracking down prophecies in which the plot is written down with all of the dramatic tension of an irritated rare book collector. In the middle of the plot, the characters take a detour to investigate an alchemist who is apparently immortal, featuring a university on Melcena that could have come straight out of a Discworld novel, because investigating people who spontaneously discover magic is of arguably equal importance to saving the world. Given how much the plot is both on rails and clearly willing to wait for the protagonists to catch up, it's hard to argue with them. It felt like a side quest in a video game.

I continue to find the way that Eddings uses prophecy in this series to be highly amusing, although there aren't nearly enough moments of the prophecy giving Garion stage direction. The basic concept of two competing prophecies that are active characters in the world attempting to create their own sequence of events is one that would support a better series than this one. It's a shame that Zandramas, the main villain, is rather uninteresting despite being female in a highly sexist society, highly competent, a different type of shapeshifter (I won't say more to avoid spoilers for earlier books), and the anchor of the other prophecy. It's good material, but Eddings uses it very poorly, on top of making the weird decision to have her talk like an extra in a Shakespeare play.

This book was astonishingly pointless. I think the only significant plot advancement besides map movement is picking up a new party member (who was rather predictable), and the plot is so completely on rails that the characters are commenting about the brand of railroad ties that Eddings used. Ce'Nedra continues to be spectacularly irritating. It's not, by any stretch of the imagination, a good book, and yet for some reason I enjoyed it more than the other books of the series so far. Chalk one up for brain candy when one is in the right mood, I guess.

Followed by The Seeress of Kell, the epic (?) conclusion.

Rating: 6 out of 10

27 April, 2022 04:30AM

Russell Coker

PIN for Login

Windows 10 added a new “PIN” login method, which is an optional login method instead of an Internet based password through Microsoft or a Domain password through Active Directory. Here is a web page explaining some of the technology (don’t watch the YouTube video) [1]. There are three issues here, whether a PIN is any good in concept, whether the specifics of how it works are any good, and whether we can copy any useful ideas for Linux.

Is a PIN Any Good?

A PIN is, in concept, a shorter password. I think that less secure methods of screen unlocking (fingerprint, face unlock, or a PIN) can reasonably be used in less hostile environments. For example, if you go to the bathroom or get a drink in a relatively secure environment like a typical home or office, you don't need to enter a long password afterwards. Having a short password that works for short periods of screen locking and a long password for longer periods could be a viable option.

It could also be an option to allow short passwords when the device is in a certain area (determined by GPS or Wifi connection). Android devices have in the past had options to disable passwords when at home.

Is the Windows 10 PIN Any Good?

The Windows 10 PIN is based on TPM security, which can provide real benefits, but this is more a failure of Windows local passwords in not using the TPM than a benefit of the PIN. When you log in to a Windows 10 system you will be given a choice of the PIN or the configured password (local password or AD password).

As a general rule providing a user a choice of ways to login is bad for security as an attacker can use whichever option is least secure.

The configuration options for Windows 10 allow either group policy in AD or the registry to determine whether PIN login is allowed, but don't provide any control over when the PIN can be used, which seems like a major limitation to me.

The claim that the PIN is more secure than a password would only make sense if it was a viable option to disable the local password or AD domain password and only use the PIN. That’s unreasonably difficult for home users and usually impossible for people on machines with corporate management.

Ideas For Linux

I think it would be good to have separate options for short term and long term screen locks. This could be implemented by having a screen locking program use two different PAM configurations for unlocking after short term and long term lock periods.
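
As a minimal sketch of that idea (the screenlock-short/screenlock-long service names and the check-short-pin helper are hypothetical, and the locker itself would have to pick the right PAM service based on how long the screen has been locked), the two configurations could look something like this:

# /etc/pam.d/screenlock-short -- hypothetical service used after a short lock:
# accept a short PIN via a helper, otherwise fall back to the normal password stack
auth    sufficient    pam_exec.so expose_authtok quiet /usr/local/bin/check-short-pin
auth    include       common-auth

# /etc/pam.d/screenlock-long -- hypothetical service used after a long lock:
# full password only
auth    include       common-auth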

Having local passwords based on the TPM might be useful. But if you have the root filesystem encrypted via the TPM using systemd-cryptenroll it probably doesn’t gain you a lot. One benefit of the TPM is limiting the number of incorrect password guesses in hardware; the default is to allow 32 wrong attempts and then one more every 10 minutes. Trying to do that in software would allow 32 guesses followed by a hardware reset, which could average out to something like 32 guesses per minute instead of 32 guesses per 320 minutes. Maybe something like fail2ban could help with this (a similar algorithm, but for password authentication guesses instead of network access).

Having a local login method to use when there is no Internet access and network authentication can’t work could be useful. But if the local login method is easier, then an attacker could disrupt Internet access to force the less secure login method.

Is there a good federated authentication system for Linux? Something to provide comparable functionality to AD but with distributed operation as a possibility?

27 April, 2022 03:18AM by etbe

April 26, 2022

Tim Retout

Exploring StackRox

At the end of March, the source code to StackRox was released, following the 2021 acquisition by Red Hat. StackRox is a Kubernetes security tool which is now badged as Red Hat Advanced Cluster Security (RHACS), offering features such as vulnerability management, validating cluster configurations against CIS benchmarks, and some runtime behaviour analysis. In fact, it’s such a diverse range of features that I have trouble getting my head round it from the product page or even the documentation.

Source code is available via the StackRox organisation on GitHub, and the most obviously interesting repositories seem to be:

  • stackrox/stackrox, containing the main application, written in Go
  • stackrox/scanner, the vulnerability scanner, also in Go. From a first glance at the go.mod file, it does not seem to share much code with Clair, which is interesting.
  • stackrox/collector, the runtime analysis component, in C++ but also with hooks into the kernel.

My initial curiosity has been around the ‘collector’, to better understand what runtime behaviour the tool can actually pick up. I was intrigued to find that the actual kernel component is a patched version of Falco’s kernel module/eBPF probes; a few features are disabled compared to Falco, e.g. page faults and signal events.

There’s a list of supported syscalls in driver/syscall_table.c, which seems to have drifted slightly or be slightly behind the upstream Falco version? In particular I note the absence of io_uring, but given RHACS is mainly deployed on Linux 4.18 at the moment (RHEL 8) this is probably a non-issue. (But relevant if anyone were to run it on newer kernels.)

That’s as far as I’ve got for now. Red Hat are making great efforts to reach out to the community; there’s a Slack channel, and office hours recordings, and a community hub to explore further. It’s great to see new free software projects created through acquisition in this way - I’m not sure I remember seeing a comparable example.

26 April, 2022 08:07PM

hackergotchi for Steve Kemp

Steve Kemp

Porting a game from CP/M to the ZX Spectrum 48k

Back in April 2021 I introduced a simple text-based adventure game, The Lighthouse of Doom, which I'd written in Z80 assembly language for CP/M systems.

As it was recently the 40th Anniversary of the ZX Spectrum 48k, the first computer I had, and the reason I got into programming in the first place, it crossed my mind that it might be possible to port my game from CP/M to the ZX Spectrum.

To recap, my game is a simple text-based adventure which you can complete in fifteen minutes or less, with a bunch of Paw Patrol easter-eggs.

  • You enter simple commands such as "up", "down", "take rug", etc etc.
  • You receive text-based replies "You can't see a telephone to use here!".

My code is largely table-based, having structures that cover objects, locations, and similar state-things. Most of the code involves working with those objects, with only a few small platform-specific routines being necessary:

  • Clearing the screen.
  • Pausing for "a short while".
  • Reading a line of input from the user.
  • Sending a $-terminated string to the console.
  • etc.

My feeling was that I could replace the use of those CP/M functions with something custom, and I'd have done 99% of the work. Of course, the devil is always in the details.

Let's start. To begin with I'm lucky in that I'm using the pasmo assembler which is capable of outputting .TAP files, which can be loaded into ZX Spectrum emulators.

I'm not going to walk through all the code here, because that is available within the project repository, but here's a very brief getting-started guide which demonstrates writing some code on a Linux host and generating a TAP file which can be loaded into your favourite emulator. As I needed similar routines I started by working out how to read keyboard input, clear the screen, and output messages, which is what the following sample demonstrates.

First of all you'll need to install the dependencies, specifically the assembler and an emulator to run the thing:

# apt install pasmo spectemu-x11

Now we'll create a simple assembly-language file, to test things out - save the following as hello.z80:

    ; Code starts here
    org 32768

    ; clear the screen
    call cls

    ; output some text
    ld   de, instructions                  ; DE points to the text string
    ld   bc, instructions_end-instructions ; BC contains the length
    call 8252                              ; ROM print-string routine (0x203C = 8252)

    ; wait for a key
    ld hl,0x5c08        ; LASTK
    ld a,255
    ld (hl),a
wkey:
    cp (hl)             ; wait for the value to change
    jr z, wkey

    ; get the key and save it
    ld a,(HL)
    push af

    ; clear the screen
    call cls

    ; show a second message
    ld de, you_pressed
    ld bc, you_pressed_end-you_pressed
    call 8252

    ;; Output the ASCII character in A
    ld a,2
    call 0x1601
    pop af
    call 0x0010

    ; loop forever.  simple demo is simple
endless:
    jr endless

cls:
    ld a,2
    call 0x1601  ; ROM_OPEN_CHANNEL
    call 0x0DAF  ; ROM_CLS
    ret

instructions:
    defb 'Please press a key to continue!'
instructions_end:

you_pressed:
    defb 'You pressed:'
you_pressed_end:

end 32768

Now you can assemble that into a TAP file like so:

$ pasmo --tapbas hello.z80 hello.tap

The final step is to load it in the emulator:

$ xspect -quick-load -load-immed -tap hello.tap

The reason I specifically chose that emulator was because it allows easily loading of a TAP file, without waiting for the tape to play, and without the use of any menus. (If you can tell me how to make FUSE auto-start like that, I'd love to hear!)

I wrote a small number of "CP/M emulation functions" allowing me to clear the screen, pause, prompt for input, and output text, which will work via the primitives available within the standard ZX Spectrum ROM. Then I reworked the game a little to cope with the different screen resolution (though only minimally, some of the text still breaks lines in unfortunate spots):

The end result is reasonably playable, even if it isn't quite as nice as the CP/M version (largely because of the unfortunate word-wrapping, and smaller console-area). So now my repository contains a .TAP file which can be loaded into your emulator of choice, available from the releases list.

Here's a brief teaser of what you can expect:

Outstanding bugs? Well the line-input is a bit horrid, and unfortunately this was written for CP/M accessed over a terminal - so I'd assumed a "standard" 80x25 resolution, which means that line/word-wrapping is broken in places.

That said it didn't take me too long to make the port, and it was kinda fun.

26 April, 2022 08:00PM

Reproducible Builds

Supporter spotlight: Google Open Source Security Team (GOSST)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.

This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org.

We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as a recent one about ARDC. However, today, we’ll be talking with Meder Kydyraliev of the Google Open Source Security Team (GOSST).


Chris Lamb: Hi Meder, thanks for taking the time to talk to us today. So, for someone who has not heard of the Google Open Source Security Team (GOSST) before, could you tell us what your team is about?

Meder: Of course. The Google Open Source Security Team (or ‘GOSST’) was created in 2020 to work with the open source community at large, with the goal of making the open source software that everyone relies on more secure.


Chris: What kinds of activities is the GOSST involved in?

Meder: The range of initiatives that the team is involved in recognizes the diversity of the ecosystem and unique challenges that projects face on their security journey. For example, our sponsorship of sos.dev ensures that developers are rewarded for security improvements to open source projects, whilst the long term work on improving Linux kernel security tackles specific kernel-related vulnerability classes.

Many of the projects GOSST is involved with aim to make it easier for developers to improve security through automated assessment (Scorecards and Allstar) and vulnerability discovery tools (OSS-Fuzz, ClusterFuzzLite, FuzzIntrospector), in addition to contributing to infrastructure to make adopting certain ‘best practices’ easier. Two great examples of best practice efforts are Sigstore for artifact signing and OSV for automated vulnerability management.


Chris: The usage of open source software has exploded in the last decade, but supply-chain hygiene and best practices has seemingly not kept up. How does GOSST see this issue and what approaches is it taking to ensure that past and future systems are resilient?

Meder: Every open source ecosystem is a complex environment and that awareness informs our approaches in this space. There are, of course, no ‘silver bullets’, and long-lasting and material supply-chain improvements require infrastructure and tools that will make the lives of open source developers easier, all whilst improving the state of the wider software supply chain.

As part of a broader effort, we created the Supply-chain Levels for Software Artifacts framework that has been used internally at Google to protect production workloads. This framework describes the best practices for source code and binary artifact integrity, and we are engaging with the community on its refinement and adoption. Here, package managers (such as PyPI, Maven Central, Debian, etc.) are an essential link in the software supply chain due to their near-universal adoption; users do not download and compile their own software anymore. GOSST is starting to work with package managers to explore ways to collaborate together on improving the state of the supply chain and helping package maintainers and application developers do better… all with the understanding that many open source projects are developed in spare time as a hobby! Solutions like this, which are the result of collaboration between GOSST and GitHub, are very encouraging as they demonstrate a way to materially strengthen software supply chain security with readily available tools, while also improving development workflows.

For GOSST, the problem of supply chain security also covers vulnerability management and solutions to make it easier for everyone to discover known vulnerabilities in open source packages in a scalable and automated way. This has been difficult in the past due to lack of consistently high-quality data in an easily-consumable format. To address this, we’re working on infrastructure (OSV.dev) to make vulnerability data more easily accessible as well as a widely adopted and automation friendly data format.


Chris: How does the Reproducible Builds effort help GOSST achieve its goals?

Meder: Build reproducibility has a lot of attributes that are attractive as part of generally good ‘build hygiene’. As an example, hermeticity, one of the requirements to meet SLSA level 4, makes it much easier to reason about the dependencies of a piece of software. This is an enormous benefit during vulnerability or supply chain incident response.

On a higher level, however, we always think about reproducibility from the viewpoint of a user and the threats that reproducibility protects them from. Here, a lot of progress has been made, of course, but a lot of work remains to make reproducibility part of everyone’s software consumption practices.


Chris: So if someone wanted to know more about GOSST or follow the team’s work, where might they go to look?

Meder: We post regular updates on Google’s Security Blog and on the Linux hardening mailing list. We also welcome community participation in the projects we work on! See any of the projects linked above or OpenSSF’s GitHub projects page for a list of efforts you can contribute to directly if you want to get involved in strengthening the open source ecosystem.


Chris: Thanks for taking the time to talk to us today.

Meder: No problem. :)




For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

26 April, 2022 10:00AM

hackergotchi for Sean Whitton

Sean Whitton

Newcomers to Emacs and Emacs configurations

I like to look at the Emacs subreddit and something I’ve noticed recently is people asking “should I start by writing my own Emacs config, or should I use this or that prepackaged one?” There is also this new config generator published by Philip Kaludercic. I find implicit in these the idea that one’s init.el is a singular product. To start using Emacs, newcomers seem to think, you need to couple it with a completed init.el, and so there is the question of writing your own or using one someone else has written. I think that an appropriate analogy is certain shell scripts. If you want to burn backups to DVDs you might download someone’s DVD burning shell script which tries to make that easy, or you might write your own. In both cases, you are likely to want to tweak the script after you’ve started using it, but there is nevertheless a discrete point at which you go from having part of a script and not being able to burn DVDs, to having a completed script and now being able to burn DVDs. Similarly, the idea that you can’t start using Emacs until you couple it with an init.el is like thinking that there is a process of producing or downloading an init.el, and only after that can you begin using Emacs.

This thinking makes sense if you’re developing one of the large Emacs configuration frameworks like Spacemacs or Doom Emacs. The people behind those projects are seeking to build something quite different from Emacs, using Emacs as a base, and for many people using that new, quite different thing is preferable to using Emacs. Then indeed, until you’ve finished developing your configuration framework’s init.el to a degree that you’re ready to release version 0.1 of your framework, you haven’t got something that’s ready to use. Like the shell script, there’s a discrete point after which you have a product, and there’s lots of labour that must precede it. (I think it’s pretty cool that Emacs is flexible enough to be something on its own and also a base for these projects.)

However, this temporal structure does not make sense to me when it comes to using just Emacs. I find the idea that one’s init.el is a singular product strange. In particular, my init.el is not something which transforms Emacs into something else, like the init.el that’s part of Doom Emacs does. It’s just a long collection of incrementally developed, largely unrelated hacks and tweaks. You could insist that it transforms default Emacs into Sean’s Emacs, but I think that misleadingly implies that there’s an overarching design and cohesion to my init.el, which just isn’t there – and it would be weird if it was there, because then I would be doing something more like the developers behind Doom Emacs. So if you’re not going to use one of the large configuration frameworks, then there is no task of “writing your own init.el” that stands before you. You just start using Emacs, and as part of that you’re going to write functions and rebind keys, etc., and your init.el is the file in which those changes are collected. The choice is not between writing your own init.el or downloading a prepackaged one. It’s between using Emacs, or using another product that has been built out of Emacs. The latter necessarily involves a completed init.el, but that’s an implementation detail.

I am very happy Kaludercic’s configuration generator has been made available, but I would be inclined to rename it. A new user of Emacs is likely to be overwhelmed with unintuitive defaults that have stuck around for mostly historical reasons. There are a lot of them, so it is a lot to ask of new users that they just identify the defaults that don’t suit them and add lines to their init.el to change those. When too many things are unintuitive, it’s hard to know where to start. Kaludercic’s configuration generator is a way to walk newcomers through the most significant defaults in a way that something structured like a reference manual would struggle to do. The result is some Lisp code, but I would prefer not to refer to that result as an Emacs configuration. It’s a series of configuration snippets that you can add to your Emacs configuration to help deal with the newcomer’s problem of too many unintuitive defaults.

I’m not sure it’s important to actually rename Kaludercic’s tool to something which says it’s a generator of configuration snippets rather than a generator of configurations. But I would like to challenge the idea that to start using Emacs you first need to couple it with a completed init.el. If you’re going to use Emacs, rather than Spacemacs or Doom Emacs, you can just start using it. If you find yourself butting up against a lot of unintuitive defaults, then you can use a walkthrough tool like Kaludercic’s to figure out what you need to add to your init.el to deal with those. But that is better understood as just more of the tweaking and customisation that Emacs users are always getting up to, not some prerequisite labour.

26 April, 2022 06:01AM

April 23, 2022

hackergotchi for Bálint Réczey

Bálint Réczey

Firefox on Ubuntu 22.04 from .deb (not from snap)

It is now widely known that Ubuntu 22.04 LTS (Jammy Jellyfish) ships Firefox as a snap, but some people (like me) may prefer installing it from .deb packages to retain control over upgrades or to keep extensions working.

Luckily there is still a PPA serving firefox (and thunderbird) debs at https://launchpad.net/~mozillateam/+archive/ubuntu/ppa maintained by the Mozilla Team. (Thank you!)

You can block the Ubuntu archive’s version that just pulls in the snap by pinning it:

$ cat /etc/apt/preferences.d/firefox-no-snap 
Package: firefox*
Pin: release o=Ubuntu*
Pin-Priority: -1

Now you can remove the transitional package and the Firefox snap itself:

sudo apt purge firefox
sudo snap remove firefox
sudo add-apt-repository ppa:mozillateam/ppa
sudo apt update
sudo apt install firefox
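
To double-check that the pin worked and that the installed package really comes from the PPA rather than the Ubuntu archive, you can inspect the candidate version and its origins (the PPA version should be the one selected):

apt policy firefox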

Since the package comes from a PPA, unattended-upgrades will not upgrade it automatically unless you enable this origin:

echo 'Unattended-Upgrade::Allowed-Origins:: "LP-PPA-mozillateam:${distro_codename}";' | sudo tee /etc/apt/apt.conf.d/51unattended-upgrades-firefox
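
To confirm that unattended-upgrades will actually pick the PPA up, a dry run prints the origins it considers allowed (the LP-PPA-mozillateam entry should show up among them):

sudo unattended-upgrade --dry-run --debug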

Happy browsing!

Update: I have found a few other, similar guides at https://fostips.com/ubuntu-21-10-two-firefox-remove-snap and https://ubuntuhandbook.org/index.php/2022/04/install-firefox-deb-ubuntu-22-04 and I’ve updated the pinning configuration based on them.

23 April, 2022 02:38PM by Réczey Bálint