January 27, 2022

Jonathan Dowland

Using an iPad for note-taking in talks

I've found that using a laptop during conference talks means you either end up doing something else and missing important bits of the talk, or at least look like you're doing something else. But it's extremely helpful to be able to look up the person who is talking, or their project, or expand an acronym that's mentioned or read around the subject.

At December's uksystems21 conference, I experimented with using an iPad as a kind of compromise. Modern iOS versions let you split the display between two apps[1], so I put the built-in Notes app on one side and a web browser on the other. I took notes using the Apple Pencil. I've got a "paper-like" rough surface display protector on the front which vastly improves the experience of using the Apple Pencil for writing[2].

An example of note-taking and researching talks on an iPad.

I mostly took notes on the active talk, but I also tweaked my own slides[3], looked up supplementary information about the topic and the speaker, and things like that. It worked really well: much better than I expected.

Apple's split-screen implementation is clunky but essential to make this work. The textured surface protector is a serious improvement over the normal surface for writing. But most importantly I didn't lose focus over the talks, and I don't think I looked like I did either.

  1. Yes, this is something that some Android vendors have supported for years. I remember playing around with Samsung Galaxy Notes when I was still in IT and being pretty impressed. On the other hand, I'd bet not 1% of those tablets are still running today. My iPad Mini from that time still is, albeit vastly diminished.
  2. It's still not as good as the Remarkable, but that's a topic for another blog post.
  3. A tiny bit. Not serious reworking. Just nervous last minute tweaks that I could probably have not bothered with at all. I'm one of those people who does that right up to the wire.

27 January, 2022 09:39PM

Timo Jyrinki

Unboxing Dell XPS 13 - openSUSE Tumbleweed alongside preinstalled Ubuntu

A look at the 2021 model of Dell XPS 13 - available with Linux pre-installed

I received a new laptop for work - a Dell XPS 13. Dell has long been famous for offering certain models with pre-installed Linux as a supported option, and opting for those is a nice way to move some euros/dollars from a certain PC desktop OS monopoly towards Linux desktop engineering. Notably, Lenovo also offers Ubuntu and Fedora options on many models these days (like the Carbon X1 and P15 Gen 2).
black box

opened box

accessories and a leaflet about Linux support

laptop lifted from the box, closed

laptop with lid open

Ubuntu running

openSUSE running

Obviously a smooth, ready-to-rock Ubuntu installation is nice for most people already, but I need openSUSE, so after checking everything is fine with Ubuntu, I continued to install openSUSE Tumbleweed as a dual boot option. As I’m a funny little tinkerer, I obviously went with some special things. I wanted:
  • Ubuntu to remain as the reference supported OS on a small(ish) partition, useful to compare to if trying out new development versions of software on openSUSE and finding oddities.
  • openSUSE as the OS consuming most of the space.
  • LUKS encryption for openSUSE without LVM.
  • ext4’s new fancy ‘fast_commit’ feature in use during filesystem creation.
As a result of all that, I ended up juggling back and forth between installation screens a couple of times (even more than shown below, partly because I forgot the first time around that I wanted to use encryption).
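A minimal sketch of the manual preparation this implies, with hypothetical device names (the real target would be a partition such as /dev/nvme0n1p6; a loopback image stands in here so the filesystem commands can be tried without touching a disk):

```shell
# Create a small image file to stand in for the real partition.
truncate -s 64M /tmp/tw-root.img

# LUKS without LVM means formatting the target directly (illustration
# only; these would run against the real partition, not the image):
#   cryptsetup luksFormat /dev/nvme0n1p6
#   cryptsetup open /dev/nvme0n1p6 cr_root
#   # ...then create the filesystem on /dev/mapper/cr_root

# ext4 with the fast_commit journal feature enabled at creation time
# (requires e2fsprogs 1.46 or newer):
mkfs.ext4 -q -F -O fast_commit /tmp/tw-root.img

# Verify the feature made it into the superblock; prints "fast_commit"
# among the feature flags if it was enabled:
dumpe2fs -h /tmp/tw-root.img 2>/dev/null | grep -o fast_commit
```

The encrypted device then gets pointed at from the installer's partition editor as a pre-created target, which is what the notes below refer to.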
First boots to pre-installed Ubuntu and installation of openSUSE Tumbleweed as the dual-boot option: 
(if the embedded video is not shown, use a direct link)
Some notes from the openSUSE installation:
  • openSUSE installer’s partition editor apparently does not support resizing, or automatically installing side by side with, another existing Linux distribution, so I did part of the setup completely on my own.
  • The installation package download hung a couple of times, and only succeeded once I entered a mirror manually. On my Tumbleweed systems I’ve also noticed download problems recently; there might be a problem with some mirror that I need to escalate.
  • The installer doesn’t clearly show the encryption status of the target installation - it took me a couple of attempts before I even noticed the small “encrypted” column and icon (very small indeed, see below), which also did not spell out the device-mapper name, only the main partition name. In the end it was going to do the right thing right away and use my pre-created encrypted target partition as I wanted, but the UX could be better. Then again, I was doing my very own tweaks anyway.
  • Let’s not go to the details why I’m so old-fashioned and use ext4 :)
  • openSUSE’s installer does not cope well with HiDPI screens. Funnily enough, the tty consoles seem to be fine, and with a big font.
  • At the end of the video I install the two GNOME extensions I can’t live without, Dash to Dock and Sound Input & Output Device Chooser.

27 January, 2022 06:49AM by TJ (noreply@blogger.com)

Russ Allbery

Review: I Didn't Do the Thing Today

Review: I Didn't Do the Thing Today, by Madeleine Dore

Publisher: Avery
Copyright: 2022
ISBN: 0-593-41914-6
Format: Kindle
Pages: 291

At least from my narrow view of it, the world of productivity self-help literature is a fascinating place right now. The pandemic overturned normal work patterns and exacerbated schedule inequality, creating vastly different experiences for the people whose work continued to be in-person and the people whose work could become mostly or entirely remote. Self-help literature, which is primarily aimed at the more affluent white-collar class, primarily tracked the latter disruption: newly-remote work, endless Zoom meetings, the impossibility of child care, the breakdown of boundaries between work and home, and the dawning realization that much of the mechanics of day-to-day office work are neither productive nor defensible.

My primary exposure these days to the more traditional self-help productivity literature is via Cal Newport. The stereotype of the productivity self-help book is a collection of life hacks and list-making techniques that will help you become a more efficient capitalist cog, but Newport has been moving away from that dead end for as long as I've been reading him, and his recent work focuses more on structural issues with the organization of knowledge work. He also shares with the newer productivity writers a willingness to tell people to use the free time they recover via improved efficiency on some life goal other than improved job productivity. But he's still prickly and defensive about the importance of personal productivity and accomplishing things. He gives lip service on his podcast to the value of the critique of productivity, but then usually reverts to characterizing anti-productivity arguments as saying that productivity is a capitalist invention to control workers. (Someone has doubtless said this on Twitter, but I've never seen a serious critique of productivity make this simplistic of an argument.)

On the anti-productivity side, as it's commonly called, I've seen a lot of new writing in the past couple of years that tries to break the connection between productivity and human worth so endemic to US society. This is not a new analysis; disabled writers have been making this point for decades, it's present in both Keynes and in Galbraith's The Affluent Society, and Kathi Weeks's The Problem with Work traces some of its history in Marxist thought. But what does feel new to me is its widespread mainstream appearance in newspaper articles, viral blog posts, and books such as Jenny Odell's and Devon Price's Laziness Does Not Exist. The pushback against defining life around productivity is having a moment.

Entering this discussion is Madeleine Dore's I Didn't Do the Thing Today. Dore is the author of the Extraordinary Routines blog and host of the Routines and Ruts podcast. Extraordinary Routines began as a survey of how various people organize their daily lives. I Didn't Do the Thing Today is, according to the preface, a summary of the thoughts Dore has had about her own life and routines as a result of those interviews.

As you might guess from the subtitle (Letting Go of Productivity Guilt), Dore's book is superficially on the anti-productivity side. Its chapters are organized around gentle critiques of productivity concepts, with titles like "The Hopeless Search for the Ideal Routine," "The Myth of Balance," or "The Harsh Rules of Discipline." But I think anti-productivity is a poor name for this critique; its writers are not opposed to being productive, only to its position as an all-consuming focus and guilt-generating measure of personal worth.

Dore structures most chapters by naming an aspect, goal, or concern of a life defined by productivity, such as wasted time, ambition, busyness, distraction, comparison, or indecision. Each chapter sketches the impact of that idea and then attempts to gently dismantle the grip that it may have on the reader's life. All of these discussions are nuanced; it's rare for Dore to say that one of these aspects has no value, and she anticipates numerous objections. But her overarching goal is to help the reader be more comfortable with imperfection, more willing to live in the moment, and less frustrated with the limitations of life and the human brain. If striving for productivity is like lifting weights, Dore's diagnosis is that we've tried too hard for too long, and have overworked that muscle until it is cramping. This book is a gentle massage to induce the muscle to relax and let go.

Whether this will work is, as with all self-help books, individual. I found it was best read in small quantities, perhaps a chapter per day, since it otherwise began feeling too much the same. I'm also not the ideal audience; Dore is a creative freelancer and primarily interviewed other creative people, which I think has a different sort of productivity rhythm than the work that I do. She's also not a planner to the degree that I am; more on that below. And yet, I found this book worked on me anyway. I can't say that I was captivated all the way through, but I found myself mentally relaxing while I was reading it, and I may re-read some chapters from time to time.

How does this relate to the genre of productivity self-help? With less conflict than I think productivity writers believe, although there seems to be one foundational difference of perspective.

Dore is not opposed to accomplishing things, or even to systems that help people accomplish things. She is more attuned than the typical productivity writer to the guilt and frustration that can accumulate when one has a day in which one does not do the thing, but her goal is not to talk you out of attempting things. It is, instead, to convince you to hold those attempts and goals more lightly, to allow them to move and shift and change, and to not treat a failure to do the thing today as a reason for guilt. This is wholly compatible with standard productivity advice. It's adding nuance at one level of abstraction higher: how tightly to cling to productivity goals, and what to do when they don't work out. Cramping muscles are not strong muscles capable of lifting heavy things. If one can massage out the cramp, one's productivity by even the strict economic definition may improve.

Where I do see a conflict is that most productivity writers are planners, and Dore is not. This is, I think, a significant blind spot in productivity self-help writing. Cal Newport, for example, advocates time-block planning, where every hour of the working day has a job. David Allen advocates a complex set of comprehensive lists and well-defined next actions. Mark Forster builds a flurry of small systems for working through lists. The standard in productivity writing is to add structure to your day and cultivate the self-discipline required to stick to that structure.

For many people, including me, this largely works. I'm mostly a planner, and when my life gets chaotic, adding more structure and focusing on that structure helps me. But the productivity writers I've read are quite insistent that their style of structure will work for everyone, and on that point I am dubious. Newport, for example, advocates time-block planning for everyone without exception, insisting that it is the best way to structure a day. Dore, in contrast, describes spending years trying to perfect a routine before realizing that elastic possibilities work better for her than routines. For those who are more like Dore than Newport, I Didn't Do the Thing Today is more likely to be helpful than Newport's instructions. This doesn't make Newport's ideas wrong; it simply makes them not universal, something that the productivity self-help genre seems to have trouble acknowledging.

Even for readers like myself who prefer structure, I Didn't Do the Thing Today is a valuable corrective to the emphasis on ever-better systems. For those who never got along with too much structure, I think it may strike a chord. The standard self-help caveat still applies: Dore has the most to say to people who are in a similar social class and line of work as her. I'm not sure this book will be of much help to someone who has to juggle two jobs with shift work and child care, where the problem is more one of sharp external constraints than of internalized productivity guilt. But for its target audience, I think it's a valuable, calming message. Dore doesn't have a recipe to sort out your life, but may help you feel better about the merits of life unsorted.

Rating: 7 out of 10

27 January, 2022 03:53AM

Dirk Eddelbuettel

td 0.0.6 on CRAN: Minor Bugfix

The td package for accessing the twelvedata API for financial data has been updated once more on CRAN and is now at version 0.0.6.

The release comes in response to an email from CRAN, who noticed (via r-devel) that I was sloppy (in one spot, as it turns out) with a logical expression, resulting in an expression of length greater than one. Fixed by wrapping an all() around it—and the package was back at CRAN minutes later thanks to automated processing on their end.
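The class of problem r-devel now flags can be sketched in a few lines of R (a generic illustration, not the actual td code):

```r
x <- c("open", "high", "low")

# A vectorised comparison yields a logical vector, not a single value;
# using it directly as an if() condition is now an error under r-devel.
cond <- x == "open"
length(cond)   # 3, so `if (cond)` would be rejected

# Wrapping the comparison in all() collapses it to a length-one logical,
# which is always a valid condition:
if (all(x == x)) message("length-one condition, r-devel is happy")
```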

The NEWS entry follows.

Changes in version 0.0.6 (2022-01-26)

  • Correct one equality comparison by wrapping in all() to ensure a length-one logical expression is created

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 January, 2022 12:32AM

January 26, 2022

Gunnar Wolf

Progvis — Now in Debian proper! (unstable)

Progvis finally made it into Debian! What is it, you ask? It is a great tool to teach about memory management and concurrency.

I first saw progvis in the poster presentation its author, Filip Strömbäck, gave last year at SIGCSE 2021 (https://sigcse2021.sigcse.org/). I immediately recognized it as a tool I wanted to use in my classes and, it being free software, to make available to all interested Debian users. Quoting from https://storm-lang.org/index.php?q=06-Programs%2F01-Progvis.md:

This is a program visualization tool aimed at concurrent programs and related issues. The tool itself is mostly language agnostic, and relies on Storm to compile the provided code and provide basic debug information. The generated code is then inspected and instrumented to provide an experience similar to a basic debugger. The tool emphasizes a visual representation of the object hierarchy that is manipulated by the executed program to make it easy to understand how it looks. In particular, a visual representation is beneficial over a text representation since it makes it easier to find shared data that might need to be synchronized in a concurrent program.

As mentioned, the tool is aimed at concurrent programs. Therefore, it allows spawning multiple threads running the same program to see if that affects the program’s execution (this is mostly interesting if global variables are used). Furthermore, any spawned threads also appear in the tool, and the user may control them independently to explore possible race conditions or other synchronization errors. If enabled from the menu bar, the tool keeps track of reads and writes to the data structure in order to highlight basic race conditions in addition to deadlocks.

So, what is this Storm thing? Filip promptly informed me that Progvis is not just a pedagogical tool… Or rather, that it is part of something bigger. Progvis is a program built using Storm (https://storm-lang.org/), which is more than a compiler: it presents itself as a framework for creating languages, designed to make it easy to implement languages that can be extended with new syntax and semantics. Storm is much more than what I have explored, and can be used as an interactive compiler, or as a language server providing highlighting and completion to IDEs. But I won’t dig much more into Storm (which is, of course, now in Debian as well (https://tracker.debian.org/pkg/storm-lang), along with the libraries built from the same source).

Back to progvis: it implements a very-close-to-C++ language, with some details changed to better suit its purpose. For example, instead of using the usual pthread implementation, it uses its own thread model: thread creation is handled via int thread_id = thread_name(funcname, &params) instead of the more complex pthread_create(3) function (https://manpages.debian.org/bullseye/manpages-dev/pthread_create.3.en.html), including details such as the thread object being passed by reference as a parameter…

All in all, while I have not yet taken full advantage of this tool in my teaching, it has helped me show somewhat hard-to-grasp concepts such as:

  • Understanding pointers and indirection

  • How strings are handled

  • Heap allocation vs. stack allocation

  • Shared access to global data

All in all, a great tool. I hope you find it useful and enjoyable as well!

PS- I suggest you install the progvis-examples package to get started. You will find some dozens of sample programs in /usr/share/doc/progvis-examples/examples; playing with them will help you better understand the tool and write your own programs.

26 January, 2022 04:43PM

Antoine Beaupré

The Neo-Colonial Internet

This article was translated into French. An interview about this article was also published in Slovene.

I grew up with the Internet and its ethics and politics have always been important in my life. But I have also been involved at other levels, against police brutality, for Food, Not Bombs, worker autonomy, software freedom, etc. For a long time, that all seemed coherent.

But the more I look at the modern Internet -- and the mega-corporations that control it -- the less confidence I have in my original political analysis of the liberating potential of technology. I have come to believe that most of our technological development is harmful to the large majority of the population of the planet, and of course to the rest of the biosphere. And I now feel this is not a new problem.

This is because the Internet is a neo-colonial device, and has been from the start. Let me explain.

What is Neo-Colonialism?

The term "neo-colonialism" was coined by Kwame Nkrumah, first president of Ghana. In Neo-Colonialism, the Last Stage of Imperialism (1965), he wrote:

In place of colonialism, as the main instrument of imperialism, we have today neo-colonialism ... [which] like colonialism, is an attempt to export the social conflicts of the capitalist countries. ...

The result of neo-colonialism is that foreign capital is used for the exploitation rather than for the development of the less developed parts of the world. Investment, under neo-colonialism, increases, rather than decreases, the gap between the rich and the poor countries of the world.

So basically, if colonialism is Europeans bringing genocide, war, and their religion to Africa, Asia, and the Americas, neo-colonialism is the Americans (note the "n") bringing capitalism to the world.

Before we see how this applies to the Internet, we must therefore make a detour into US history. This matters, because anyone would be hard-pressed to decouple neo-colonialism from the empire under which it evolves, and here we can only name the United States of America.

US Declaration of Independence

Let's start with the United States declaration of independence (1776). Many Americans may roll their eyes at this, possibly because that declaration is not actually part of the US constitution and therefore may have questionable legal standing. Still, it was obviously a driving philosophical force in the founding of the nation. As its author, Thomas Jefferson, stated:

it was intended to be an expression of the American mind, and to give to that expression the proper tone and spirit called for by the occasion

In that aging document, we find the following pearl:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

As a founding document, the Declaration still has an impact in the sense that the above quote has been called an:

"immortal declaration", and "perhaps [the] single phrase" of the American Revolutionary period with the greatest "continuing importance." (Wikipedia)

Let's read that "immortal declaration" again: "all men are created equal". "Men", in that context, is limited to a certain number of people, namely "property-owning or tax-paying white males, or about 6% of the population". Back when this was written, women didn't have the right to vote, and slavery was legal. Jefferson himself owned hundreds of slaves.

The declaration was aimed at the King and was a list of grievances. A concern of the colonists was that the King:

has excited domestic insurrections amongst us, and has endeavoured to bring on the inhabitants of our frontiers, the merciless Indian Savages whose known rule of warfare, is an undistinguished destruction of all ages, sexes and conditions.

This is a clear mark of the frontier myth which paved the way for the US to exterminate and colonize the territory some now call the United States of America.

The declaration of independence is obviously a colonial document, having been written by colonists. None of this is particularly surprising, historically, but I figured it serves as a good reminder of where the Internet is coming from, since it was born in the US.

A Declaration of the Independence of Cyberspace

Two hundred and twenty years later, in 1996, John Perry Barlow wrote a declaration of independence of cyberspace. By this point, (almost) everyone had the right to vote (including women), and slavery had been abolished (although some argue it still exists in the form of the prison system); the US had made tremendous progress. Surely this text will have aged better than the previous declaration from which it is obviously derived. Let's see how it reads today and how it maps to how the Internet is actually built now.

Borders of Independence

One of the key ideas that Barlow brings up is that "cyberspace does not lie within your borders". In that sense, cyberspace is the final frontier: having failed to colonize the moon, Americans turn inwards, deeper into technology, but still in the frontier ideology. And indeed, Barlow is one of the co-founders of the Electronic Frontier Foundation (the beloved EFF), founded six years prior.

But there are other problems with this idea. As Wikipedia quotes:

The declaration has been criticized for internal inconsistencies.[9] The declaration's assertion that 'cyberspace' is a place removed from the physical world has also been challenged by people who point to the fact that the Internet is always linked to its underlying geography.[10]

And indeed, the Internet is definitely a physical object. First controlled and severely restricted by "telcos" like AT&T, it was somewhat "liberated" from that monopoly in 1982 when an anti-trust lawsuit broke up the monopoly, a key historical event that, one could argue, made the Internet possible.

(From there on, "backbone" providers could start competing and emerge, and eventually coalesce into new monopolies: Google has a monopoly on search and advertisement, Facebook on communications for a few generations, Amazon on storage and computing, Microsoft on hardware, etc. Even AT&T is now pretty much as consolidated as it was before.)

The point is: all those companies have gigantic data centers and intercontinental cables. And those definitely prioritize the western world, the heart of the empire. Take for example Google's latest 3,900 mile undersea cable: it does not connect Argentina to South Africa or New Zealand, it connects the US to the UK and Spain. Hardly a revolutionary prospect.

Private Internet

But back to the Declaration:

Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.

In Barlow's mind, the "public" is bad, and private is good, natural. Or, in other words, a "public construction project" is unnatural. And indeed, the modern "nature" of development is private: most of the Internet is now privately owned and operated.

I must admit that, as an anarchist, I loved that sentence when I read it. I was rooting for "us", the underdogs, the revolutionaries. And, in a way, I still do: I am on the board of Koumbit and work for a non-profit that has pivoted towards censorship and surveillance evasion. Yet I cannot help but think that, as a whole, we have failed to establish that independence and put too much trust in private companies. It is obvious in retrospect, but it was not, 30 years ago.

Now, the infrastructure of the Internet has zero accountability to traditional political entities supposedly representing the people, or even its users. The situation is actually worse than when the US was founded (e.g. "6% of the population can vote"), because the owners of the tech giants are only a handful of people who can override any decision. There's only one Amazon CEO, he's called Jeff Bezos, and he has total control. (Update: Bezos actually ceded the CEO role to Andy Jassy, AWS and Amazon music founder, while remaining executive chairman. I would argue that, as the founder and the richest man on earth, he still has strong control over Amazon.)

Social Contract

Here's another claim of the Declaration:

We are forming our own Social Contract.

I remember the early days, back when "netiquette" was a word; it did feel like we had some sort of a contract. Not written in standards of course -- or barely (see RFC1855) -- but as a tacit agreement. How wrong we were. One just needs to look at Facebook to see how problematic that idea is on a global network.

Facebook is the quintessential "hacker" ideology put in practice. Mark Zuckerberg explicitly refused to be "arbiter of truth" which implicitly means he will let lies take over its platforms.

He also sees Facebook as place where everyone is equal, something that echoes the Declaration:

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

(We note, in passing, the omission of gender in that list, also mirroring the infamous "All men are created equal" claim of the US declaration.)

As the Wall Street Journal's (WSJ) Facebook files later showed, both of those "contracts" have serious limitations inside Facebook. There are VIPs who systematically bypass moderation systems, including fascists and rapists. Drug cartels and human traffickers thrive on the platform. Even when Zuckerberg himself tried to tame the platform -- to get people vaccinated or to make it healthier -- he failed: "vaxxer" conspiracies multiplied and Facebook got angrier.

This is because the "social contract" behind Facebook and those large companies is a lie: their concern is profit and that means advertising, "engagement" with the platform, which causes increased anxiety and depression in teens, for example.

Facebook's response to this is that they are working really hard on moderation. But the truth is that even that system is severely skewed. The WSJ showed that Facebook has translators for only 50 languages. It's surprisingly hard to count human languages, but estimates of the number of distinct languages range between 2,500 and 7,000. So while 50 languages seems big at first, it's actually a tiny fraction of the human population using Facebook. Taking the first 50 from the Wikipedia list of languages by native speakers, we omit languages like Dutch (52), Greek (74), and Hungarian (78), and those are just a few random picks from Europe.

As an example, Facebook has trouble moderating even a major language like Arabic. It censored content from legitimate Arab news sources when they mentioned the word al-Aqsa, because Facebook associates it with the al-Aqsa Martyrs' Brigades, even when the sources were talking about the Al-Aqsa Mosque... This bias against Arabs also shows how Facebook reproduces American colonizer politics.

The WSJ also pointed out that Facebook spends only 13% of its moderation effort outside of the US, even though that represents 90% of its users. Facebook spends three times more on moderating for "brand safety", which shows that its priority is not the safety of its users, but of the advertisers.

Military Internet

Sergey Brin and Larry Page are the Lewis and Clark of our generation. Just like the latter were sent by Jefferson (the same) to declare sovereignty over the entire US west coast, Google declared sovereignty over all human knowledge, with its mission statement "to organize the world's information and make it universally accessible and useful". (It should be noted that Page somewhat questioned that mission but only because it was not ambitious enough, Google having "outgrown" it.)

The Lewis and Clark expedition, just like Google, had a scientific pretext, because that is what you do to colonize a world, presumably. Yet both men were military and had to receive scientific training before they left. The Corps of Discovery was made up of a few dozen enlisted men and a dozen civilians, including York, an African American slave owned by Clark and sold after the expedition, his final fate lost to history.

And just like Lewis and Clark, Google has a strong military component. For example, Google Earth was not originally built at Google but is the acquisition of a company called Keyhole which had ties with the CIA. Those ties were brought inside Google during the acquisition. Google's increasing investment in the military-industrial complex eventually led to Google workers organizing a revolt, although it is currently unclear to me how much Google is involved in the military apparatus. (Update: this November 2021 post says they "will proudly work with the DoD".) Other companies, obviously, show no such reserve, with Microsoft, Amazon, and plenty of others happily bidding on military contracts all the time.

Spreading the Internet

I am obviously not the first to identify colonial structures in the Internet. In an article titled The Internet as an Extension of Colonialism, Heather McDonald correctly identifies fundamental problems with the "development" of new "markets" of Internet "consumers", primarily arguing that it creates a digital divide which creates a "lack of agency and individual freedom":

Many African people have gained access to these technologies but not the freedom to develop content such as web pages or social media platforms in their own way. Digital natives have much more power and therefore use this to create their own space with their own norms, shaping their online world according to their own outlook.

But the digital divide is certainly not the worst problem we have to deal with on the Internet today. Going back to the Declaration, we originally believed we were creating an entirely new world:

This governance will arise according to the conditions of our world, not yours. Our world is different.

How I dearly wished that was true. Unfortunately, the Internet is not that different from the offline world. Or, to be more accurate, the values we have embedded in the Internet, particularly of free speech absolutism, sexism, corporatism, and exploitation, are now exploding outside of the Internet, into the "real" world.

The Internet was built with free software which, fundamentally, was based on the quasi-volunteer labour of an elite force of white men with obviously too much time on their hands (and also: no children). The mythical writing of GCC and Emacs by Richard Stallman is a good example of this, but the entirety of the Internet now seems to be running on random bits and pieces built by hit-and-run programmers working in their copious free time. Whenever any of those pieces fails, it can compromise or bring down entire systems. (Heck, I wrote this article on my day off...)

This model of what is fundamentally "cheap labour" is spreading out from the Internet. Delivery workers are being exploited to the bone by apps like Uber -- although it should be noted that workers organise and fight back. Amazon workers are similarly exploited beyond belief, forbidden breaks to the point that they pee in bottles, with ambulances nearby to carry out the bodies. During the peak of the pandemic, workers were dangerously exposed to the virus in warehouses. All this while Amazon is basically taking over the entire economy.

The Declaration culminates with this prophecy:

We will spread ourselves across the Planet so that no one can arrest our thoughts.

This prediction, which first felt revolutionary, is now chilling.

Colonial Internet

The Internet is, if not neo-colonial, plain colonial. The US colonies had cotton fields and slaves; we have disposable cell phones and Foxconn workers. Canada has its cultural genocide; Facebook has its own genocides in Ethiopia and Myanmar, and mob violence in India. Apple is at least implicitly accepting the Uyghur genocide. And just like the slaves of the colony, those atrocities are what makes the empire run.

The Declaration actually ends like this, a quote which I have in my fortune cookies file:

We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

That is still inspiring to me. But if we want to make "cyberspace" more humane, we need to decolonize it. Work on cyberpeace instead of cyberwar. Establish clear codes of conduct, discuss ethics, and question your own privileges, biases, and culture. For me, the first step in decolonizing my own mind is writing this article. Breaking up tech monopolies might be an important step, but it won't be enough: we have to make a culture shift as well, and that's the hard part.

Appendix: an apology to Barlow

I kind of feel bad going through Barlow's declaration like this, point by point. It is somewhat unfair, especially since Barlow passed away a few years ago and cannot mount a response (even humbly assuming that he might read this). But then again, he himself recognized he was a bit too "optimistic" in 2009, saying: "we all get older and smarter":

I'm an optimist. In order to be libertarian, you have to be an optimist. You have to have a benign view of human nature, to believe that human beings left to their own devices are basically good. But I'm not so sure about human institutions, and I think the real point of argument here is whether or not large corporations are human institutions or some other entity we need to be thinking about curtailing. Most libertarians are worried about government but not worried about business. I think we need to be worrying about business in exactly the same way we are worrying about government.

And, in a sense, it was a little naive to expect Barlow not to be a colonist. Barlow was, among many things, a cattle rancher who grew up on a colonial ranch in Wyoming. The ranch was founded in 1907 by his great uncle, 17 years after the state joined the Union, and only a generation or two after the Powder River War (1866-1868) and Black Hills War (1876-1877), during which the US took over lands occupied by the Lakota, Cheyenne, Arapaho, and other Native American nations, in some of the last major First Nations Wars.

Appendix: further reading

There is another article that almost has the same title as this one: Facebook and the New Colonialism. (Interestingly, the <title> tag on the article is actually "Facebook the Colonial Empire" which I also find appropriate.) The article is worth reading in full, but I loved this quote so much that I couldn't resist reproducing it here:

Representations of colonialism have long been present in digital landscapes. (“Even Super Mario Brothers,” the video game designer Steven Fox told me last year. “You run through the landscape, stomp on everything, and raise your flag at the end.”) But web-based colonialism is not an abstraction. The online forces that shape a new kind of imperialism go beyond Facebook.

It goes on:

Consider, for example, digitization projects that focus primarily on English-language literature. If the web is meant to be humanity’s new Library of Alexandria, a living repository for all of humanity’s knowledge, this is a problem. So is the fact that the vast majority of Wikipedia pages are about a relatively tiny square of the planet. For instance, 14 percent of the world’s population lives in Africa, but less than 3 percent of the world’s geotagged Wikipedia articles originate there, according to a 2014 Oxford Internet Institute report.

And they introduce another definition of Neo-colonialism, while warning about abusing the word like I am sort of doing here:

“I’m loath to toss around words like colonialism but it’s hard to ignore the family resemblances and recognizable DNA, to wit,” said Deepika Bahri, an English professor at Emory University who focuses on postcolonial studies. In an email, Bahri summed up those similarities in list form:

  1. ride in like the savior
  2. bandy about words like equality, democracy, basic rights
  3. mask the long-term profit motive (see 2 above)
  4. justify the logic of partial dissemination as better than nothing
  5. partner with local elites and vested interests
  6. accuse the critics of ingratitude

“In the end,” she told me, “if it isn’t a duck, it shouldn’t quack like a duck.”

Another good read is the classic Code and other laws of cyberspace (1999, free PDF) which is also critical of Barlow's Declaration. In "Code is law", Lawrence Lessig argues that:

computer code (or "West Coast Code", referring to Silicon Valley) regulates conduct in much the same way that legal code (or "East Coast Code", referring to Washington, D.C.) does (Wikipedia)

And now it feels like the west coast has won over the east coast, or maybe it recolonized it. In any case, the Internet now christens emperors.

26 January, 2022 02:45PM

Russell Coker

Australia/NZ Linux Meetings

I am going to start a new Linux focused FOSS online meeting for people in Australia and nearby areas. People can join from anywhere but the aim will be to support people in nearby areas.

To cover the time zone range for Australia, this requires a meeting on a weekend. I'm thinking of the first Saturday of the month at 1PM Melbourne/Sydney time, which would be 10AM in WA and 3PM in NZ. We may have corner cases of daylight savings starting and ending on different days, but that shouldn't be a big deal as I think those times can vary by an hour either way without being too inconvenient for anyone.

Note that I describe the meeting as Linux focused because my plans include having a meeting dedicated to different versions of BSD Unix and a meeting dedicated to the HURD. But those meetings will be mainly for Linux people to learn about other Unix-like OSs.

One focus I want to have for the meetings is hands-on work, live demonstrations, and short highly time relevant talks. There are more lectures on YouTube than anyone could watch in a lifetime (see the Linux.conf.au channel for some good ones [1]). So I want to run events that give benefits that people can’t gain from watching YouTube on their own.

Russell Stuart and I have been kicking around ideas for this for a while. I think that the solution is to just do it. I know that Saturday won’t work for everyone (no day will) but it will work for many people. I am happy to discuss changing the start time by an hour or two if that seems likely to get more people. But I’m not particularly interested in trying to make it convenient for people in Hawaii or India, my idea is for an Australia/NZ focused event. I would be more than happy to share lecture notes etc with people in other countries who run similar events. As an aside I’d be happy to give a talk for an online meeting at a Hawaiian LUG as the timezone is good for me.

Please pencil in 1PM Melbourne time on the 5th of Feb for the first meeting. The meeting requirements will be a PC with good Internet access running a recent web browser, plus an ssh client for the hands-on stuff. A microphone or webcam is NOT required; any questions you wish to ask can be asked via text if that's what you prefer.

Suggestions for the name of the group are welcome.

26 January, 2022 06:44AM by etbe

January 25, 2022

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo on CRAN: Upstream Updates

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 950 other packages on CRAN, downloaded over 22.9 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint/vignette) by Conrad and myself has been cited 451 times according to Google Scholar.

This release brings another upstream update, 10.8.0, and its first bug-fix release, 10.8.1. As updates by Conrad can come a little more quickly than the monthly cadence CRAN desires, we skipped the 10.8.0 release for CRAN only, but of course provided it via the Rcpp drat repo, along with general updates to the repo and full reverse-dependency testing (for which results are always logged here).

The full set of changes (since the last CRAN release) follows.

Changes in RcppArmadillo version (2022-01-23)

  • Upgraded to Armadillo release 10.8.1 (Realm Raider)

    • fix interaction between OpenBLAS and LAPACK

    • emit warning if find() is incorrectly used to locate NaN elements

Changes in RcppArmadillo version (2022-01-02)

  • Upgraded to Armadillo release 10.8 (Realm Raider)

    • faster handling of symmetric matrices by pinv() and rank()

    • faster handling of diagonal matrices by inv_sympd(), pinv(), rank()

    • expanded norm() to handle integer vectors and matrices

    • added datum::tau to replace

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 January, 2022 08:33PM

hackergotchi for Daniel Pocock

Daniel Pocock

Free technology in housing and construction

A friend recently purchased a new kitchen appliance and found that they could not use some of the features without having a smartphone and app. Even worse, the app insists that they provide an email address and they are forced to disclose their mobile phone number through a text-message "authentication" gimmick. The product is rather large and they had already broken up the packaging so there was no easy way to simply send it back and refuse these conditions.

After hearing about this, I couldn't provide an immediate solution but I felt it would be useful to put together some notes about free software and open hardware products. Hopefully this will help all of us to be more aware of the choices before buying and installing the wrong products.

FreeCAD, house

Design and construction

When building, self-building, renovating or extending a home, software and hardware products are almost indispensable.

For floorplans and CAD drawings there is FreeCAD and LibreCAD. The former, FreeCAD, appears to offer more features for 3D and a BIM workbench for Building Information Modelling. Even if you use architects and engineers to do most of the drawing and design work, it can be really helpful being able to view their drawings at home using one of these tools.

Once you have a plan for a building it is important to make calculations about energy requirements. One of the most well known tools for this is the Passive House Planning Package (PHPP). Some web sites refer to it as open source software but it is neither free nor open source. It is a spreadsheet and there is a charge for downloading it. There are discussions about an equivalent feature for FreeCAD and another discussion in OSArch.

For the construction phase, some of the tools promoted by Open Source Ecology offer the possibility to help with everything from earthworks to decorating.

Open Source Ecology

Products for installation in the building

To meet modern requirements for energy efficiency it is almost essential to have some intelligent devices in the management of heating, hot water and maybe a renewable energy source. From an ethical perspective, it seems vital to ensure these devices are using free, open source technology that can be supported locally into the future.

The Arduino and the more rugged Industruino can be used as building blocks for a range of systems.


Here are some examples of devices that could be managed by an Arduino-derived solution:

  • Water softening system. Requires sensors for water flow and salt tank capacity.
  • Water booster pumps. Typically a pair of pumps are required for N+1 redundancy with an algorithm to operate them in turns. Pressure sensors provide input to activate one or both pumps and control their speed.
  • Thermal store. Sensors read the temperature at multiple levels in the tank. The control unit can activate one or more external heat sources, like a boiler or heat pump, based on defined thresholds, the season and the outdoor temperature. The control units are typically detachable or sold independently of the tank so it is very easy to substitute the proprietary default with an Arduino.
  • Boilers and stoves. Modern boilers and stoves are highly efficient and they try to match fuel consumption to actual demand. It may not be safe or practical to replace their built-in microcontroller. On the other hand, some of them have an expansion port for an overpriced wifi module. This provides an opportunity to integrate an Arduino or some other generic wireless gateway.
  • Solar inverter and battery management. Many homeowners are keen to have stored electricity so they can benefit from cost savings and independence from the grid. The batteries, solar panels, inverters and management controllers are each typically purchased independently. Therefore, this is a domain where it may be quite straightforward to substitute a proprietary controller module with a completely free and open solution. The controller needs to be aware of power collected, power demand, battery capacity, grid capacity, grid pricing for import/export and also battery lifecycle factors.
  • Home automation system. This is typically a central hub that provides a single point of monitoring and control for all the other systems. I've previously written about the free, open source products in this space: Domoticz, Home Assistant and openHAB. These open solutions eliminate all reliance on the cloud.
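The thermal store logic described above can be sketched in a few lines. To be clear, this is purely illustrative: the function name, the sensor layout, the threshold values and the source names are all assumptions, not the firmware of any real product, and a real controller would be an Arduino sketch in C++ rather than Python.

```python
# Hypothetical sketch of threshold-based thermal store control.
# Sensor positions, thresholds and heat-source names are made up for illustration.

def heat_sources_to_activate(tank_temps_c, outdoor_temp_c):
    """Decide which heat sources to enable from tank and outdoor readings."""
    top, middle, bottom = tank_temps_c      # three tank levels (middle unused in this toy sketch)
    # Demand a hotter usable zone in cold weather (a stand-in for "the season").
    demand_threshold = 55 if outdoor_temp_c < 5 else 50
    sources = []
    if top < demand_threshold:
        sources.append('boiler')            # fast top-up of the usable zone
    if bottom < 30 and outdoor_temp_c > 0:
        sources.append('heat_pump')         # efficient slow charge, but not in freezing weather
    return sources
```

On an Arduino, the same decision would run in loop() against analog sensor reads; the point is only that a handful of thresholds, not the proprietary hardware, carries the logic.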

25 January, 2022 08:00PM

Debian Community News

Coroner's Report: Lucy Wayland & Debian Abuse Culture

As we approach the third anniversary of Lucy Wayland (aardvark)'s death, we want to consider the possibility that Debian contributed to the death. Let us look at the series of events and put the Coroner's report in the correct place in the timeline.

5 December 2018: Favoritism

Lucy Wayland was a Senior Applications Engineer at ARM in Cambridge, UK. Wayland was placed in the lowest rank in Debian: the title Debian Contributor.

Wayland notes the following competencies on her LinkedIn page:

Software engineer with large amounts of experience of working in design, implementation and test of embedded systems, both in-unit and toolchains, especially in the safety-critical domain.

Specialties: Assembler, C, C++ and Ada for embedded, real-time and safety-critical applications. Deep knowledge of fixed-point, floating-point, and associated DSP functionality. ARM NEON/Advanced SIMD.

Lucy Wayland, ARM, Debian, LinkedIn

On 5 December 2018, the Debian leader's girlfriend, Molly de Blanc, who never did any technical work, was given the highest rank, Debian Developer. How would Wayland and all the other women feel? When women see a promotion like that, they feel that their skills are being ignored and the only way to get ahead is to sleep with somebody.

17 December 2018: Wrecking Christmas

Molly de Blanc couldn't do any technical work. She decided to use her new status to intimidate other people. On 17 December 2018 she was involved in the plot that secretly expelled Dr Norbert Preining. They began to blackmail him: he must bow down before them or they would tell everybody he was expelled.

Dr Preining and other victims bravely spoke out publicly. For several weeks, Debian volunteers were exposed to hundreds of negative emails about Molly's blackmail recipes.

24 December 2018: Wrecking Christmas 2.0

On Christmas eve, the blackmailers sent a nasty email to thousands of people asserting that another volunteer is a pedophile. They used their Debian titles to send the defamation and make it look credible.

Christmas is normally a season when organizations thank their volunteers and give them the time and space to relax. Debian stole this rest from people including Lucy Wayland. Wayland started 2019 stressed.

30 January 2019: Wayland's death, the Coroner's report

There is significant research showing that volunteers who are exposed to bullying suffer more than the victims. In other words, Wayland, as a witness, may have suffered more psychological stress than Dr Preining and all the other blackmail victims. Wayland and many other women may have also felt humiliated by the way Molly was promoted.

These are the words of the coroner:

Medical cause of death:

  • 1a) Multi-organ failure
  • 1b) Ischaemic hepatopathy and pulmonary fat embolism
  • 1c) Fall with rib, clavicular and comminuted humeral fracture
  • 2. Alcohol dependence syndrome

How, when and where and for investigations where section 5 (2) of the Coroners and Justice Act 2009 applies, in what circumstances the deceased came by his or her death:

From events precipitated by a fall downstairs at home under the influence of alcohol and suffering alcohol abuse dependence syndrome. The fall resulted in multiple fractures, causing substantial blood loss. The deceased was on the floor for a long time before calling for assistance, resulting in hypotensive shock and liver ischaemia. The deceased then suffered fat emboli in both lungs. Their origin could have been her liver or as a result of the numerous fractures or both. The injury to her liver and the pulmonary fat embolism led to multi-organ failure; on 30th January 2019; at Addenbrooke's Hospital.

Conclusion of the Coroner as to the death: Accident.

Original documents from the Cambridgeshire and Peterborough Coroner Service

We should not assume that Lucy's alcoholism was an intrinsic fault. Research shows that stress, such as the Debian lynchings that started at Christmas, contribute to alcoholism.

Behavior is described as an interaction between genetic constitution and environmental influences. Of the environmental factors affecting an individual, one of the most potent is external stress. Although it generally is held that stress increases drinking, the articles in this issue clearly demonstrate the complexities of this simple construct.

2 February 2019: Molly promotes bullying at FOSDEM

FOSDEM 2019. Molly de Blanc displays a slide suggesting volunteers should be locked up if they don't obey the leader's girlfriend. This is the slide chosen by Molly:

Molly de Blanc, FOSDEM, Debian, volunteers, bars, jail, prison

At this moment, Lucy Wayland's body was lying in the morgue.

11 August 2019: Molly promotes bullying at FrOSCon

Molly was invited to give a keynote speech at FrOSCon.

She displayed a hand-drawn diagram showing three users pushing a developer. It looks a lot like an assault.

Bully/Molly de Blanc: Well we can use our collective power to push others


Molly's slides did not appear spontaneously. These slides are a reflection of the Debian gangster culture. Molly is simply a symptom of the problem.

We believe it is well within reason to suggest that a culture like this is consistent with stress, alcoholism and accidents.

Lucy Wayland. Rest in peace.

25 January, 2022 01:30PM

January 23, 2022

Matthieu Caneill

Debsources, python3, and funky file names

Rumors are running that python2 is not a thing anymore.

Well, I'm certainly late to the party, but I'm happy to report that sources.debian.org is now running python3.

Wait, it wasn't?

Back when development started, python3 was very much a real language, but it was hard to adopt because it was not supported by many libraries. So python2 was chosen, meaning print-based debugging was used in lieu of print()-based debugging, and str were bytes, not unicode.

And things were working just fine. One day python2 EOL was announced, with a date far in the future. Far enough to procrastinate for a long time. Combine this with a codebase stable enough not to see many commits, and the fact that Debsources is a volunteer-based project that happens at best on week-ends, and you end up with dormant software and a missed deadline.

But, as dormant as the codebase is, the instance hosted at sources.debian.org is very popular and gets 200k to 500k hits per day. Easily enough to be worth proper maintenance and a transition to python3.

Funky file names

While transitioning to python3 and juggling left and right with str, bytes and unicode for internal objects, files, database entries and HTTP content, I stumbled upon a bug that has been there since day 1.

Quick recap if you're unfamiliar with this tool: Debsources displays the content of the source packages in the Debian archive. In other words, it's a bit like GitHub, but for the Debian source code.

And some pieces of software out there, that ended up in Debian packages, happen to contain files whose names can't be decoded to UTF-8. Interestingly enough, there's no such thing as a standard for file names: with a few exceptions that vary by operating system, any sequence of bytes can be a legit file name. And some sequences of bytes are not valid UTF-8.

Of course those files are rare, and using ASCII characters to name a file is a much more common practice than using bytes in a non-UTF-8 character encoding. But when you deal with almost 100 billion files on which you have no control (those files come from free software projects, and make their way into Debian without any renaming), it happens.
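To make this concrete, here is a quick illustration (the file name itself is an assumption; the byte 0xED is "í" in Latin-1):

```python
name = b'\xedslenska.alias'    # "íslenska.alias" encoded in Latin-1, not UTF-8

try:
    name.decode('utf-8')
except UnicodeDecodeError as err:
    print(err)                 # 0xED opens a 3-byte UTF-8 sequence that 's' cannot continue

print(name.decode('latin-1'))  # a perfectly legitimate name in another charset
```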

Now back to the bug: when trying to display such a file through the web interface, Debsources would crash because it couldn't convert the file name to UTF-8, which is needed for the HTML representation of the page.


An often valid approach when trying to represent invalid UTF-8 content is to ignore errors, and replace the offending bytes with a placeholder such as ? or the Unicode replacement character. This is what Debsources actually does to display non-UTF-8 file content.

Unfortunately, this best-effort approach is not suitable for file names, as file names are also identifiers in Debsources: among other places, they are part of URLs. If a URL were to use placeholder characters to replace those bytes, there would be no deterministic way to match it with a file on disk anymore.
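A two-line illustration of why the lossy decode can't serve as an identifier: two different on-disk names (both made up here) collapse to the very same text.

```python
a = b'f\xedle'   # two distinct, hypothetical on-disk file names...
b = b'f\xeele'

# ...that decode to identical replacement-character strings:
assert a.decode('utf-8', errors='replace') == b.decode('utf-8', errors='replace')
print(a.decode('utf-8', errors='replace'))   # which of the two files was meant?
```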

The representation of binary data as text is a known problem. Multiple lossless solutions exist, such as base64 and its variants, but URLs looking like https://sources.debian.org/src/Y293c2F5LzMuMDMtOS4yL2Nvd3NheS8= are not readable at all compared to https://sources.debian.org/src/cowsay/3.03-9.2/cowsay/. Plus, they would not be backwards-compatible with existing links.

The solution I chose is to use double-percent encoding: this allows the representation of any byte in a URL, while keeping allowed characters unchanged - and preventing CGI gateways from trying to decode non-UTF-8 bytes. This is the best of both worlds: regular file names appear normally and stay human-readable, and funky file names only have percent signs and hex digits where needed.

Here is an example of such an URL: https://sources.debian.org/src/aspell-is/0.51-0-4/%25EDslenska.alias/. Notice the %25ED to represent the percentage symbol itself (%25) followed by an invalid UTF-8 byte (%ED).
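The idea can be sketched as follows. This is not Debsources' actual implementation, just a minimal reconstruction: unsafe bytes become %XX, and since that string then travels through the usual URL-encoding pass, each '%' it contains is itself encoded as %25 (hence "double-percent"), so a gateway's single decode leaves the byte-level %XX intact.

```python
# Minimal sketch (not Debsources' actual code) of percent-encoding arbitrary
# file-name bytes: safe ASCII passes through, anything else becomes %XX.

SAFE = frozenset(b'abcdefghijklmnopqrstuvwxyz'
                 b'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                 b'0123456789-._~')

def encode_name(raw: bytes) -> str:
    """Lossless, human-readable text form of an arbitrary byte string."""
    return ''.join(chr(b) if b in SAFE else '%%%02X' % b for b in raw)

def decode_name(text: str) -> bytes:
    """Exact inverse of encode_name: %XX becomes one raw byte."""
    out, i = bytearray(), 0
    while i < len(text):
        if text[i] == '%':
            out.append(int(text[i + 1:i + 3], 16))
            i += 3
        else:
            out.append(ord(text[i]))
            i += 1
    return bytes(out)
```

With this sketch, encode_name(b'\xedslenska.alias') gives '%EDslenska.alias', whose leading % becomes %25 after the outer URL pass, and decode_name reverses the mapping deterministically, which the lossy ?-replacement cannot.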

Transitioning to this was quite a challenge, as those file names don't only appear in URLs, but also in web pages themselves, log files, database tables, etc. And everything was done with str: that made sense in python2, when str were bytes, but not so much in python3.

What are those files? What's their network?

I was wondering too. Let's list them!

import os

with open('non-utf-8-paths.bin', 'wb') as f:
    for root, folders, files in os.walk(b'/srv/sources.debian.org/sources/'):
        for path in folders + files:
            try:
                path.decode('utf-8')
            except UnicodeDecodeError:
                f.write(root + b'/' + path + b'\n')

Running this on the Debsources main instance, which hosts pretty much all Debian packages that were part of a Debian release, I could find 307 files (among a total of almost 100 billion files).

Without looking deep into them, they seem to fall into 2 categories:

  • File names that are not valid UTF-8, but are valid in a different charset. Not all software is developed in English or on UTF-8 systems.
  • File names that can't be decoded to UTF-8 on purpose, to be used as input to test suites, and assert resilience of the software to non-UTF-8 data.

That last point hits home, as it was clearly lacking in Debsources. A funky file name is now part of its test suite. ;)

23 January, 2022 11:00PM

Antoine Beaupré

Switching from OpenNTPd to Chrony

A friend recently reminded me of the existence of chrony, a "versatile implementation of the Network Time Protocol (NTP)". The excellent introduction is worth quoting in full:

It can synchronise the system clock with NTP servers, reference clocks (e.g. GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network.

It is designed to perform well in a wide range of conditions, including intermittent network connections, heavily congested networks, changing temperatures (ordinary computer clocks are sensitive to temperature), and systems that do not run continuously, or run on a virtual machine.

Typical accuracy between two machines synchronised over the Internet is within a few milliseconds; on a LAN, accuracy is typically in tens of microseconds. With hardware timestamping, or a hardware reference clock, sub-microsecond accuracy may be possible.

Now that's already great documentation right there. What it is, why it's good, and what to expect from it. I want more. They have a very handy comparison table between chrony, ntp and openntpd.

My problem with OpenNTPd

Following concerns surrounding the security (and complexity) of the venerable ntp program, I have, a long time ago, switched to using openntpd on all my computers. I hadn't thought about it until I recently noticed a lot of noise on one of my servers:

jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s

You read that right: openntpd was constantly rewinding the clock, sometimes less than two minutes apart. The above log was taken while doing diagnostics, looking at the last 30 minutes of logs. So, on average, one 1.5-second rewind every 6 minutes!
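For the record, that average can be sanity-checked with a throwaway script over the five log lines quoted above (the 30-minute window is the diagnostic window mentioned in the text):

```python
import re

# The five journal lines quoted above, verbatim.
log = """\
jan 18 10:09:49 curie ntpd[1069]: adjusting local clock by -1.604366s
jan 18 10:08:18 curie ntpd[1069]: adjusting local clock by -1.577608s
jan 18 10:05:02 curie ntpd[1069]: adjusting local clock by -1.574683s
jan 18 10:04:00 curie ntpd[1069]: adjusting local clock by -1.573240s
jan 18 10:02:26 curie ntpd[1069]: adjusting local clock by -1.569592s
"""

adjustments = [float(s) for s in
               re.findall(r'adjusting local clock by (-?\d+\.\d+)s', log)]
window_minutes = 30

print(f"{len(adjustments)} rewinds, "
      f"{sum(adjustments) / len(adjustments):.3f}s each on average, "
      f"one every {window_minutes / len(adjustments):.0f} minutes")
```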

That might be due to a dying real time clock (RTC) or some other hardware problem. I know for a fact that the CMOS battery on that computer (curie) died and I wasn't able to replace it (!). So that's partly garbage-in, garbage-out here. But still, I was curious to see how chrony would behave... (Spoiler: much better.)

But I also had trouble on another workstation, that one a much more recent machine (angela). First, it seems OpenNTPd would just fail at boot time:

anarcat@angela:~(main)$ sudo systemctl status openntpd
● openntpd.service - OpenNTPd Network Time Protocol
     Loaded: loaded (/lib/systemd/system/openntpd.service; enabled; vendor pres>
     Active: inactive (dead) since Sun 2022-01-23 09:54:03 EST; 6h ago
       Docs: man:openntpd(8)
    Process: 3291 ExecStartPre=/usr/sbin/ntpd -n $DAEMON_OPTS (code=exited, sta>
    Process: 3294 ExecStart=/usr/sbin/ntpd $DAEMON_OPTS (code=exited, status=0/>
   Main PID: 3298 (code=exited, status=0/SUCCESS)
        CPU: 34ms

jan 23 09:54:03 angela systemd[1]: Starting OpenNTPd Network Time Protocol...
jan 23 09:54:03 angela ntpd[3291]: configuration OK
jan 23 09:54:03 angela ntpd[3297]: ntp engine ready
jan 23 09:54:03 angela ntpd[3297]: ntp: recvfrom: Permission denied
jan 23 09:54:03 angela ntpd[3294]: Terminating
jan 23 09:54:03 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 09:54:03 angela systemd[1]: openntpd.service: Succeeded.

After a restart, somehow it worked, but it took a long time to sync the clock. At first, it would just not consider any peer at all:

anarcat@angela:~(main)$ sudo ntpctl -s all
0/20 peers valid, clock unsynced

   wt tl st  next  poll          offset       delay      jitter
from pool 0.debian.pool.ntp.org
    1  5  2    6s    6s             ---- peer not valid ----
from pool 0.debian.pool.ntp.org
    1  5  2    6s    7s             ---- peer not valid ----
from pool 0.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
from pool 0.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
from pool 1.debian.pool.ntp.org
    1  4  2    2s    8s             ---- peer not valid ----
from pool 1.debian.pool.ntp.org
    1  4  2    0s    5s             ---- peer not valid ----
from pool 1.debian.pool.ntp.org
    1  5  2    5s    5s             ---- peer not valid ----
from pool 1.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
from pool 2.debian.pool.ntp.org
    1  4  3    1s    6s             ---- peer not valid ----
from pool 2.debian.pool.ntp.org
    1  4  1    6s    9s             ---- peer not valid ----
from pool 2.debian.pool.ntp.org
    1  5  2    8s    9s             ---- peer not valid ----
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  4  3    2s    6s             ---- peer not valid ----
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  4  2    5s    9s             ---- peer not valid ----
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  4  4    1s    6s             ---- peer not valid ----
from pool 3.debian.pool.ntp.org
    1  5  2    5s    6s             ---- peer not valid ----
from pool 3.debian.pool.ntp.org
    1  4  2    0s    6s             ---- peer not valid ----
from pool 3.debian.pool.ntp.org
    1  4  1    2s    9s             ---- peer not valid ----
from pool 3.debian.pool.ntp.org
    1  4  3    4s    7s             ---- peer not valid ----

Then it would accept them, but still wouldn't sync the clock:

anarcat@angela:~(main)$ sudo ntpctl -s all
20/20 peers valid, clock unsynced

   wt tl st  next  poll          offset       delay      jitter
from pool 0.debian.pool.ntp.org
    1  8  2    5s    6s         0.672ms    13.507ms     0.442ms
from pool 0.debian.pool.ntp.org
    1  7  2    4s    8s         1.260ms    13.388ms     0.494ms
from pool 0.debian.pool.ntp.org
    1  7  1    3s    5s        -0.390ms    47.641ms     1.537ms
from pool 0.debian.pool.ntp.org
    1  7  2    1s    6s        -0.573ms    15.012ms     1.845ms
from pool 1.debian.pool.ntp.org
    1  7  2    3s    8s        -0.178ms    21.691ms     1.807ms
from pool 1.debian.pool.ntp.org
    1  7  2    4s    8s        -5.742ms    70.040ms     1.656ms
from pool 1.debian.pool.ntp.org
    1  7  2    0s    7s         0.170ms    21.035ms     1.914ms
from pool 1.debian.pool.ntp.org
    1  7  3    5s    8s        -2.626ms    20.862ms     2.032ms
from pool 2.debian.pool.ntp.org
    1  7  2    6s    8s         0.123ms    20.758ms     2.248ms
from pool 2.debian.pool.ntp.org
    1  8  3    4s    5s         2.043ms    14.138ms     1.675ms
from pool 2.debian.pool.ntp.org
    1  6  1    0s    7s        -0.027ms    14.189ms     2.206ms
from pool 2.debian.pool.ntp.org
    1  7  2    1s    5s        -1.777ms    53.459ms     1.865ms
2001:678:8::123 from pool 2.debian.pool.ntp.org
    1  6  2    1s    8s         0.195ms    14.572ms     2.624ms
2606:4700:f1::1 from pool 2.debian.pool.ntp.org
    1  7  3    6s    9s         2.068ms    14.102ms     1.767ms
2607:5300:205:200::1991 from pool 2.debian.pool.ntp.org
    1  6  2    4s    9s         0.254ms    21.471ms     2.120ms
2607:5300:201:3100::345c from pool 2.debian.pool.ntp.org
    1  7  4    5s    9s        -1.706ms    21.030ms     1.849ms
from pool 3.debian.pool.ntp.org
    1  7  2    0s    7s         8.907ms    75.070ms     2.095ms
from pool 3.debian.pool.ntp.org
    1  7  2    6s    9s        -1.729ms    53.823ms     2.193ms
from pool 3.debian.pool.ntp.org
    1  7  1    1s    7s        -1.265ms    46.355ms     4.171ms
from pool 3.debian.pool.ntp.org
    1  7  3    4s    8s         1.732ms    35.792ms     2.228ms

It took a solid five minutes to sync the clock, even though the peers were considered valid within a few seconds:

jan 23 15:58:41 angela systemd[1]: Started OpenNTPd Network Time Protocol.
jan 23 15:58:58 angela ntpd[84086]: peer now valid
jan 23 15:58:58 angela ntpd[84086]: peer now valid
jan 23 15:58:58 angela ntpd[84086]: peer now valid
jan 23 15:58:58 angela ntpd[84086]: peer now valid
jan 23 15:58:59 angela ntpd[84086]: peer now valid
jan 23 15:58:59 angela ntpd[84086]: peer now valid
jan 23 15:58:59 angela ntpd[84086]: peer now valid
jan 23 15:58:59 angela ntpd[84086]: peer 2607:5300:201:3100::345c now valid
jan 23 15:59:00 angela ntpd[84086]: peer 2606:4700:f1::1 now valid
jan 23 15:59:00 angela ntpd[84086]: peer now valid
jan 23 15:59:01 angela ntpd[84086]: peer now valid
jan 23 15:59:01 angela ntpd[84086]: peer now valid
jan 23 15:59:01 angela ntpd[84086]: peer now valid
jan 23 15:59:01 angela ntpd[84086]: peer now valid
jan 23 15:59:02 angela ntpd[84086]: peer now valid
jan 23 15:59:04 angela ntpd[84086]: peer now valid
jan 23 15:59:05 angela ntpd[84086]: peer now valid
jan 23 15:59:05 angela ntpd[84086]: peer 2001:678:8::123 now valid
jan 23 15:59:05 angela ntpd[84086]: peer now valid
jan 23 15:59:07 angela ntpd[84086]: peer 2607:5300:205:200::1991 now valid
jan 23 16:03:47 angela ntpd[84086]: clock is now synced

That seems kind of odd. It was also frustrating to have very little information from ntpctl about the state of the daemon. I understand it's designed to be minimal, but it could inform me of its known offset, for example. It does tell me about the offset with the different peers, but not as clearly as one would expect. It's also unclear how it disciplines the RTC at all.
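Rather than watching the journal for that "clock is now synced" line, one could poll ntpctl itself. A hedged sketch: this just classifies the summary line format shown above, run here against a captured sample rather than a live `ntpctl -s status`:

```shell
# Classify the sync state from ntpctl's summary line. The $status value
# is a captured sample; in practice you would feed in `ntpctl -s status`.
status='20/20 peers valid, clock unsynced'
case "$status" in
  *'clock synced'*)   echo "synced" ;;
  *'clock unsynced'*) echo "still waiting" ;;
  *)                  echo "unknown" ;;
esac
# → still waiting
```

Wrapped in a loop with a sleep, that would have made the five-minute wait visible without tailing logs.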

Compared to chrony

Now compare with chrony:

jan 23 16:07:16 angela systemd[1]: Starting chrony, an NTP client/server...
jan 23 16:07:16 angela chronyd[87765]: chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
jan 23 16:07:16 angela chronyd[87765]: Initial frequency 3.814 ppm
jan 23 16:07:16 angela chronyd[87765]: Using right/UTC timezone to obtain leap second data
jan 23 16:07:16 angela chronyd[87765]: Loaded seccomp filter
jan 23 16:07:16 angela systemd[1]: Started chrony, an NTP client/server.
jan 23 16:07:21 angela chronyd[87765]: Selected source (2.debian.pool.ntp.org)
jan 23 16:07:21 angela chronyd[87765]: System clock TAI offset set to 37 seconds

First, you'll notice there's none of that "clock synced" nonsense: it picks a source, and then... it's just done, because the clock on this computer is not drifting that much, and openntpd had (presumably) just sync'd it anyways. And indeed, if we look at detailed stats from the powerful chronyc client:

anarcat@angela:~(main)$ sudo chronyc tracking
Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:07:21 2022
System time     : 0.000000311 seconds slow of NTP time
Last offset     : +0.000807989 seconds
RMS offset      : 0.000807989 seconds
Frequency       : 3.814 ppm fast
Residual freq   : -24.434 ppm
Skew            : 1000000.000 ppm
Root delay      : 0.013200894 seconds
Root dispersion : 65.357254028 seconds
Update interval : 1.4 seconds
Leap status     : Normal

We see that we are nanoseconds away from NTP time. That was run very quickly after starting the server (literally in the same second as chrony picked a source), so the stats are a bit weird (e.g. the Skew is huge). After a minute or two, it looks more reasonable:

Reference ID    : CE6C0083 (ntp1.torix.ca)
Stratum         : 2
Ref time (UTC)  : Sun Jan 23 21:09:32 2022
System time     : 0.000487002 seconds slow of NTP time
Last offset     : -0.000332960 seconds
RMS offset      : 0.000751204 seconds
Frequency       : 3.536 ppm fast
Residual freq   : +0.016 ppm
Skew            : 3.707 ppm
Root delay      : 0.013363549 seconds
Root dispersion : 0.000324015 seconds
Update interval : 65.0 seconds
Leap status     : Normal

Now it's learning how good or bad the RTC clock is ("Frequency"), and is smoothly adjusting the System time to follow the average offset (RMS offset, more or less). You'll also notice the Update interval has risen, and will keep expanding as chrony learns more about the internal clock, so it doesn't need to constantly poll the NTP servers to sync the clock. In the above, we're 487 microseconds (less than a millisecond!) away from NTP time.
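Those fields are also easy to scrape for monitoring. A hedged sketch, parsing a captured one-line sample of the `chronyc tracking` output above instead of running chronyc live:

```shell
# Extract the "System time" offset from chronyc tracking output.
# $sample is a captured line; on a real system, pipe `chronyc tracking` in.
sample='System time     : 0.000487002 seconds slow of NTP time'
offset=$(printf '%s\n' "$sample" | awk -F': *' '/^System time/ {print $2}' | awk '{print $1}')
echo "offset: ${offset}s"
# → offset: 0.000487002s
```

(chrony also ships `chronyc -c` for machine-readable CSV output, which is the more robust option for real monitoring.)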

(People interested in the explanation of every single one of those fields can read the excellent chronyc manpage. That thing made me want to nerd out on NTP again!)

On the machine with the bad clock, chrony also did a roughly 1.6 second adjustment, but just once, at startup:

jan 18 11:54:33 curie chronyd[2148399]: Selected source (2.debian.pool.ntp.org) 
jan 18 11:54:33 curie chronyd[2148399]: System clock wrong by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock was stepped by -1.606546 seconds 
jan 18 11:54:31 curie chronyd[2148399]: System clock TAI offset set to 37 seconds 

Then it would still struggle to keep the clock in sync, but not as badly as openntpd. Here's the offset a few minutes after that above startup:

System time     : 0.000375352 seconds slow of NTP time

And again a few seconds later:

System time     : 0.001793046 seconds slow of NTP time

I don't currently have access to that machine, and will update this post with the latest status, but so far I've had a very good experience with chrony there, which is a testament to its resilience; it also just works on my other machines.


On top of "just working" (as demonstrated above), I feel that chrony's feature set is so much superior... Here's an excerpt of the extras in chrony, taken from the comparison table:

  • source frequency tracking
  • source state restore from file
  • temperature compensation
  • ready for next NTP era (year 2036)
  • replace unreachable / falseticker servers
  • aware of jitter
  • RTC drift tracking
  • RTC trimming
  • Restore time from file w/o RTC
  • leap seconds correction, in slew mode
  • drops root privileges

I even understand some of that stuff. I think.

So kudos to the chrony folks, I'm switching.


One thing to keep in mind in the above, however, is that it's quite possible chrony does as bad a job as openntpd on that old machine, and just doesn't tell me about it. For example, here's another log sample from another server (marcos):

jan 23 11:13:25 marcos ntpd[1976694]: adjusting clock frequency by 0.451035 to -16.420273ppm

I get those basically every day, which seems to show that it's at least trying to keep track of the hardware clock.

In other words, it's quite possible I have no idea what I'm talking about and you definitely need to take this article with a grain of salt. I'm not an NTP expert.

Switching to chrony

Because the default configuration in chrony (at least as shipped in Debian) is sane (good default peers, no open network by default), installing it is as simple as:

apt install chrony

And because it somehow conflicts with openntpd, that takes care of removing that cruft as well.

23 January, 2022 10:18PM

January 22, 2022

hackergotchi for Steve Kemp

Steve Kemp

Visiting the UK was difficult, but worth it

So in my previous post I mentioned that we were going to spend the Christmas period in the UK, which we did.

We spent a couple of days there, meeting my parents and family. We also persuaded my sister to drive us to Scarborough so that we could hang out on the beach for an afternoon.

Finland has lots of lakes, but it doesn't have proper waves. So it was surprisingly good just to wade in the sea and see waves! Unfortunately our child was a wee bit too scared to ride on a donkey!

Unfortunately upon our return to Finland we all tested positive for COVID-19, me first, then the child, and about three days later my wife. We had negative tests in advance of our flights home, so we figure that either the tests were broken, or we were infected in the airplane/airport.

Thankfully things weren't too bad, we stayed indoors for the appropriate length of time, and a combination of a couple of neighbours and online shopping meant we didn't run out of food.

Since I've been back home I've been automating AWS activities with aws-utils, and updating my simple host-automation system, marionette.

Marionette is something that was inspired by puppet, the configuration management utility, but it runs upon localhost only. Despite the small number of integrated primitives it actually works surprisingly well, and although I don't expect it will ever become popular it was an interesting research project.

The aws-utilities? They were specifically put together because I've worked in a few places where infrastructure is set up with terraform, or cloudformation, but there is always the odd thing that is configured manually. Typically we'll have an openvpn gateway which uses a manually maintained IP allow-list, or some admin-server which has a security-group maintained somewhat manually.

Having the ability to update a bunch of rules with your external IP, as a single command, across a number of AWS accounts/roles, and a number of security-groups is an enormous time-saver when your home IP changes.

I'd quite like to add more things to that collection, but there's no particular rush.

22 January, 2022 05:45AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Homebrewing recipes

Looking at my blog, it seems I haven't written anything about homebrewing in a while. In fact, the last time I did was when I had a carboy blow out on me in the middle of the night...

Fear not, I haven't stopped brewing since then. I have in fact decided to publish my homebrew recipes. Not on this blog though, as it would get pretty repetitive.

So here are my recipes. So far, I've brewed around 30 different beers!

The format is pretty simple (no fancy HTML, just plain markdown) and although I'm not the most scientific brewer, you should be able to replicate some of those if that's what you want to try.


22 January, 2022 04:35AM by Louis-Philippe Véronneau

Goodbye Nexus 5

I've blogged a few times already about my Nexus 5, the Android device I have/had been using for 8 years. Sadly, it died a few weeks ago, when the WiFi chip stopped working. I could probably have attempted a mainboard swap, but at this point, getting a new device seemed like the best choice.

In a world where most Android devices are EOL after less than 3 years, it is amazing I was able to keep this device for so long, always running the latest Android version with the latest security patch. The Nexus 5 originally shipped with Android 4.4 and when it broke, I was running Android 11, with the November security patch! I'm very grateful to the FOSS Android community that made this possible, especially the LineageOS community.

I've replaced my Nexus 5 with a used Pixel 3a, mostly because of the similar form factor, relatively affordable price and the presence of a headphone jack. Google also makes flashing a custom ROM easy, although I had more trouble with this than I first expected.

The first Pixel 3a I bought on eBay was a scam: I ordered an "Open Box" phone and it arrived all scratched1 and with a broken rear camera. The second one I got (from the Amazon Renewed program) arrived in perfect condition, but happened to be a Verizon model. As I found out, Verizon locks the bootloader on their phones, making it impossible to install LineageOS2. The vendor was kind enough to let me return it.

As they say, third time's the charm. This time around, I explicitly bought a phone on eBay listed with an unlocked bootloader. I'm very satisfied with my purchase, but all in all, dealing with all the returns and the shipping was exhausting.

Hopefully this phone will last as long as my Nexus 5!

  1. There was literally a whole layer missing at the back, as if someone had sanded the phone... 

  2. Apparently, an "Unlocked phone" means it is "SIM unlocked", i.e. you can use it with any carrier. What I should have been looking for is a "Factory Unlocked phone", one where the bootloader isn't locked. 

22 January, 2022 04:35AM by Louis-Philippe Véronneau

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

qlcal 0.0.2 on CRAN: Updates

The second release of the still fairly new qlcal package arrived at CRAN today.

qlcal is based on the calendaring subset of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more.

This release brings a further package simplification from removing a few more files not needed for just calendaring, as well as an updated 2022 calendar for China from the just-released 1.25 version of QuantLib.

Changes in version 0.0.2 (2022-01-21)

  • Further minimize set of files needed for calendaring

  • Update China calendar from QuantLib 1.25 release

See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 January, 2022 03:03AM

January 21, 2022

hackergotchi for Neil McGovern

Neil McGovern

Further investments in desktop Linux

This was originally posted on the GNOME Foundation news feed

The GNOME Foundation was supported during 2020-2021 by a grant from Endless Network which funded the Community Engagement Challenge, strategy consultancy with the board, and a contribution towards our general running costs. At the end of last year we had a portion of this grant remaining, and after the success of our work in previous years directly funding developer and infrastructure work on GTK and Flathub, we wanted to see whether we could use these funds to invest in GNOME and the wider Linux desktop platform.

We’re very pleased to announce that we got approval to launch three parallel contractor engagements, which started over the past few weeks. These projects aim to improve our developer experience, make more applications available on the GNOME platform, and move towards equitable and sustainable revenue models for developers within our ecosystem. Thanks again to Endless Network for their support on these initiatives.

Flathub – Verified apps, donations and subscriptions (Codethink and James Westman)

This project is described in detail on the Flathub Discourse, but the goal is to add a process to verify first-party apps on Flathub (i.e. uploaded by a developer or an authorised representative) and then make it possible for those developers to collect donations or subscriptions from users of their applications. We also plan to publish a separate repository that contains only these verified first-party uploads (without any of the community contributed applications), as well as providing a repository with only free and open source applications, allowing users to choose what they are comfortable installing and running on their system.

Creating the user and developer login system to manage your apps will also set us up well for future enhancements, such as managing tokens for direct binary uploads (e.g. from a CI/CD system hosted elsewhere, as is already done with Mozilla Firefox and OBS) and making it easier to publish apps from systems such as Electron which can be hard to use within a flatpak-builder sandbox. For updates on this project you can follow the Discourse thread, check out the work board on GitHub or join us on Matrix.

PWAs – Integrating Progressive Web Apps in GNOME (Phaedrus Leeds)

While everyone agrees that native applications can provide the best experience on the GNOME desktop, the web platform, and particularly PWAs (Progressive Web Apps) which are designed to be downloadable as apps and offer offline functionality, makes it possible for us to offer equivalent experiences to other platforms for app publishers who have not specifically targeted GNOME. This allows us to attract and retain users by giving them the choice of using applications from a wider range of publishers than are currently directly targeting the Linux desktop.

The first phase of the GNOME PWA project involves adding back support to Software for web apps backed by GNOME Web, and making this possible when Web is packaged as a Flatpak.  So far some preparatory pull requests have been merged in Web and libportal to enable this work, and development is ongoing to get the feature branches ready for review.

Discussions are also in progress with the Design team on how best to display the web apps in Software and on the user interface for web apps installed from a browser. There has also been discussion among various stakeholders about what web apps should be included as available with Software, and how they can provide supplemental value to users without taking priority over apps native to GNOME.

Finally, technical discussion is ongoing in the portal issue tracker to ensure that the implementation of a new dynamic launcher portal meets all security and robustness requirements, and is potentially useful not just to GNOME Web but Chromium and any other app that may want to install desktop launchers. Adding support for the launcher portal in upstream Chromium, to facilitate Chromium-based browsers packaged as a Flatpak, and adding support for Chromium-based web apps in Software are stretch goals for the project should time permit.

GTK4 / Adwaita – To support the adoption of Gtk4 by the community (Emmanuele Bassi)

With the release of GTK4 and renewed interest in GTK as a toolkit, we want to continue improving the developer experience and ease of use of GTK and ensure we have a complete and competitive offering for developers considering using our platform. This involves identifying missing functionality or UI elements that applications need to move to GTK4, as well as informing the community about the new widgets and functionality available.

We have been working on documentation and bug fixes for GTK in preparation for the GNOME 42 release and have also started looking at the missing widgets and API in Libadwaita, in preparation for the next release. The next steps are to work with the Design team and the Libadwaita maintainers and identify and implement missing widgets that did not make the cut for the 1.0 release.

In the meantime, we have also worked on writing a beginners tutorial for the GNOME developers documentation, including GTK and Libadwaita widgets so that newcomers to the platform can easily move between the Interface Guidelines and the API references of various libraries. To increase the outreach of the effort, Emmanuele has been streaming it on Twitch, and published the VOD on YouTube as well. 

21 January, 2022 03:31PM by Neil McGovern

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2021 edition

For the third time now, I've asked Société de Transport de Montréal, Montreal's transit agency, for the foot traffic data of Montreal's subway. I think this has become an annual thing now :)

The original blog post and the 2019-2020 edition can be read here:

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.


  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. It's pretty much the same code as last year. I've also added a converter script this time around. It takes the manually cleaned 2021 source data and turns it into something that can be merged with the global dataset. I had one last year and deleted it, for some reason...

21 January, 2022 05:00AM by Louis-Philippe Véronneau

January 20, 2022

Sven Hoexter

Running OpenWRT x86 in qemu

Sometimes it's nice for testing purposes to have the OpenWRT userland available locally. Since there is an x86 build available, one can just run it within qemu.

wget https://downloads.openwrt.org/releases/21.02.1/targets/x86/64/openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
gunzip openwrt-21.02.1-x86-64-generic-squashfs-combined.img.gz
qemu-img convert -f raw -O qcow2 openwrt-21.02.1-x86-64-generic-squashfs-combined.img openwrt-21.02.1.qcow2
qemu-img resize openwrt-21.02.1.qcow2 200M
qemu-system-x86_64 -M q35 \
  -drive file=openwrt-21.02.1.qcow2,id=d0,if=none,bus=0,unit=0 \
  -device ide-hd,drive=d0,bus=ide.0 -nic user,hostfwd=tcp::5556-:22
# you've to change the network configuration to retrieve an IP via
# dhcp for the lan bridge br-lan
vi /etc/config/network
  - change option proto 'static' to 'dhcp'
  - remove IP address and netmask setting
/etc/init.d/network restart
# now you should've an ip out of
ssh root@localhost -p 5556
# remember ICMP does not work but otherwise you should have
# IP networking available
opkg update
opkg install curl
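The manual edit of /etc/config/network above can also be scripted. A hedged sketch, applied here to a sample fragment rather than the real file (inside the guest you would target /etc/config/network itself, or make the same change with OpenWRT's uci tool; the addresses below are just sample values):

```shell
# Switch the lan interface from static to dhcp and drop the now-unneeded
# address options. network.sample stands in for /etc/config/network.
cat > network.sample <<'EOF'
config interface 'lan'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
EOF
sed -i -e "s/option proto 'static'/option proto 'dhcp'/" \
       -e "/option ipaddr/d" \
       -e "/option netmask/d" network.sample
cat network.sample
```

Followed by `/etc/init.d/network restart` in the guest, as above.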

20 January, 2022 08:20PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.15: Regular Update

A new release 0.4.15 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

The release of RQuantLib comes four months after the previous release, and brings a minor update for the just-released QuantLib 1.25 version along with a few small cleanups to calendars and daycounters.

Changes in RQuantLib version 0.4.15 (2022-01-19)

  • Changes in RQuantLib code:

    • Calendar support has been updated and completed to current QuantLib standards (Dirk in #161)

    • More daycounters have been added (Kai Lin in #163 fixing #162, #164)

    • The bonds pricers were updated to changes in QuantLib 1.25 (Dirk)

  • Changes in RQuantLib package and setup:

    • Some package metadata was removed from the README.md (Dirk)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 January, 2022 12:44PM

Russ Allbery

DocKnot 7.01

Continuing to flush out bugs in the recent changes to my static web site generator.

I had missed some Unicode implications for how output from external programs was handled, and also missed Unicode decoding of the output from Pod::Thread, since Pod::Simple always encodes its output even if that output is to a scalar. I also missed an implication for how symlinks were handled in Path::Iterator::Rule, causing docknot spin to fail to copy files into the output tree that were symlinks in the input tree. Both of those bugs are fixed in this release.

I also fixed a minor output issue from the \size command, which was using SI units when it meant IEC units.

You can get the latest release from CPAN or from the DocKnot distribution page.

20 January, 2022 05:17AM

January 19, 2022

Joerg Jaspert

Funny CPU usage

Munin plugin and its CPU usage (shell fixup)

So at work we do have a munin server running, and one of the graphs we do for every system is a network statistics one with a resolution of 1 second. That's a simple enough script to have, and it is working nicely - on 98% of our machines. You just don't notice the data gatherer at all, so we also have some other graphs done with a 1 second resolution. For some, this really helps.


The basic code for this is simple. There is a bunch of stuff to start the background gathering, some to print out the config, and some to hand out the data when munin wants it. Plenty standard.

The interesting bit that goes wrong and uses too much CPU on one Linux Distribution is this:

run_acquire() {
   echo "$$" > ${pidfile}

   while :; do
     TSTAMP=$(date +%s)
     echo ${IFACE}_tx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/tx_bytes ) >> ${cache}
     echo ${IFACE}_rx.value ${TSTAMP}:$(cat /sys/class/net/${IFACE}/statistics/rx_bytes ) >> ${cache}
     # Sleep for the rest of the second
     sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
   done
}

That code works, and on none of Debian wheezy, stretch or buster, nor RedHat 6 or 7, does it show up: it just works, no noticeable load generated.

Now, Oracle Linux 7 thinks differently. The above code run there generates between 8 and 15% CPU usage (on fairly recent Intel CPUs, but that shouldn't matter). (CPU usage measured with the highly accurate method of running top and looking at what it tells…)



Ok, well, the code above isn’t all the nicest shell, actually. There is room for improvement. But beware, the older the bash, the less one can fix it.

  • So, first off, there are two useless uses of cat. Bash can do that for us, just use the $(< /PATH/TO/FILE ) way.
  • Oh, Bash5 knows the epoch directly, we can replace the date call for the timestamp and use ${EPOCHSECONDS}.
  • Too bad Bash4 can't do that. But hey, its builtin printf can help out: a nice TSTAMP=$(printf '%(%s)T\n' -1) works.
  • Unfortunately that only works in Bash 4.2 and later, not 4.1, and meh, we have a 4.1 system, so that one has to stay with the date call.
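As an aside, the trailing sleep line in all variants deserves a word: date +%4N yields the first four digits of the nanosecond field, i.e. tenths of milliseconds into the current second, and the 10# prefix forces base 10 so a value with a leading zero isn't parsed as octal. With a fixed sample value instead of a live date call:

```shell
# Compute "sleep for the rest of the second" from a sample fraction.
# 0317 would be invalid octal without the 10# base prefix.
frac=0317                # i.e. 0.0317s into the current second
printf 'sleep 0.%04d\n' $((10000 - 10#$frac))
# → sleep 0.9683
```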

Taking that, we end up with 3 different possible versions, depending on the Bash on the system.

obtain5() {
  ## Purest bash version, Bash can tell us epochs directly
  echo ${IFACE}_tx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${EPOCHSECONDS}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain42() {
  ## Bash cant tell us epochs directly, but the builtin printf can
  TSTAMP=$(printf '%(%s)T\n' -1)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

obtain41() {
  ## Bash needs help from a tool to get epoch, means one exec() all the time
  TSTAMP=$(date +%s)
  echo ${IFACE}_tx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/tx_bytes) >> ${cache}
  echo ${IFACE}_rx.value ${TSTAMP}:$(</sys/class/net/${IFACE}/statistics/rx_bytes) >> ${cache}
  # Sleep for the rest of the second
  sleep 0.$(printf '%04d' $((10000 - 10#$(date +%4N))))
}

run_acquire() {
   echo "$$" > ${pidfile}

   case ${BASH_VERSINFO[0]} in
     5) while :; do obtain5; done ;;
     4) if [[ ${BASH_VERSINFO[1]} -ge 2 ]]; then
          while :; do obtain42; done
        else
          while :; do obtain41; done
        fi ;;
     *) while :; do obtain41; done ;;
   esac
}

Does it help?

Oh yes, it does. Oracle Linux 7 appears to use Bash 4.2, so it uses obtain42, and hey: removing one date and two cat calls brings it down to a sane CPU usage of 0 (again, a highly accurate number generated from top…). Appears OL7 is doing heck-what-do-I-know extra when calling other tools, for whatever gains; removing that does help (who would have thought).

(None of RedHat or Oracle Linux has SELinux turned on, so that one shouldn't bite. But it is clear OL7 is doing something extra for everything that bash spawns.)
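The difference is easy to reproduce with a crude micro-benchmark (hedged: absolute numbers depend entirely on the system, but the variant that execs an external tool always pays for the extra fork/exec):

```shell
# Time N command substitutions that exec an external tool vs. N loop
# iterations that stay inside the shell. GNU date's %N is assumed.
N=200
start=$(date +%s%N)
for i in $(seq $N); do t=$(date +%s); done    # fork + exec date(1) each time
mid=$(date +%s%N)
for i in $(seq $N); do t=$((i + 1)); done     # pure shell arithmetic
end=$(date +%s%N)
echo "external: $(( (mid - start) / 1000000 ))ms  builtin: $(( (end - mid) / 1000000 ))ms"
```

On the affected OL7 box, the "external" column is where the mystery overhead shows up.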

19 January, 2022 08:56PM

January 18, 2022

hackergotchi for Daniel Pocock

Daniel Pocock

Ashling Murphy, Jill Meagher, can apps counter harassment of women?

On 10 January, the British Government endorsed an app for women's safety. The app allows women to broadcast their movements to somebody else. Similar apps already exist in Saudi Arabia, keeping husbands and fathers in control of women's lives. Inside the UK app, we might find binary code copied directly from the Saudi app, sugar coated to look like a tool of empowerment.

Two days later and a woman in Ireland, Ashling Murphy, was murdered in broad daylight. The Irish press was quick to compare this with the murder of Sarah Everard in London but the tragedy that came to my mind was the 2012 murder of Jill Meagher. Meagher emigrated from Ireland to Melbourne, much like my own mother. Meagher was abducted in a main road and shopping district where I used to go almost every day. I was born in the same district.

Murphy's death is also a good moment to contemplate the actions of Brittany Higgins, the woman who stood up to the Australian government after they covered up her rape on the defence minister's sofa. It appears Murphy demonstrated similar courage and strength: after strangling her to death, the prime suspect checked himself into hospital. When Higgins called out both the rape and cover-up, Australia's defence minister checked in to hospital.

Higgins waived her anonymity while it appears Murphy fought with all she had. Both of these women have made a huge sacrifice that will make other women safer.

Brittany Higgins

Meagher's murder received blanket news coverage in Australia, partly because Meagher had been employed in the media. It quickly led to debate about the phenomenon of the single white female victim. Some commentators pointed out that teenage migrants from Sudan and Ethiopia had disappeared without any public interest in their plight. This phenomenon doesn't diminish victims like Meagher and Murphy, yet it is important to remember that their deaths represent a wider problem.

Irish Police (Garda Síochána) may have already examined data from mobile phone towers to discover the suspect's movements. This data may reveal whether it was a random attack or whether he had been following women every day, making notes about their behavior, trying to anticipate their movements. Ironically, Ireland's tech sector includes large offices for Facebook and Google, where a predominantly male workforce creates algorithms to monitor and predict the decisions and movements of their targets. Frances Haugen's brave testimony before the US Congress confirms these men actively make choices that hurt women and children.

Public leaders have been quick to proclaim new policies to protect women. Personally, I feel that cultural change may do more. For example, if a tech monopoly uses coercive and deceptive practices to trap people in their service, we need to be in the habit of equating that with the men who use substances or deception to control women.

One of my female interns in the Outreachy program wrote about a sponsor, Google, stalking her. In response, all she received were threats and insults. Google set about discrediting Renata and me, just as Facebook has tried to discredit Frances Haugen. The Google employees claim they are the victims of harassment and abuse: how can they equate themselves with the trauma experienced by women like Ashling, Jill and Brittany?

Google, Stalking, Harassment, women, interns, Outreachy

18 January, 2022 09:00AM

hackergotchi for Joey Hess

Joey Hess

January 17, 2022

Russ Allbery

DocKnot 7.00

The recent 6.01 release of my static web site generator was kind of a buggy mess, which uncovered a bunch of holes in my test suite and immediately turned up problems when I tried to use it to rebuild my actual web site. Most of the problems were Unicode-related; this release hopefully sorts out Unicode properly and handles it consistently.

Other bugs fixed include processing of old-style pointers in a spin input tree, several rather obvious bugs in the new docknot release command, and a few long-standing issues with docknot dist that should make its results more consistent and reliable.

I also got on a roll and finished the Path::Tiny transition in DocKnot, so now (nearly) all paths are internally represented as Path::Tiny objects. This meant changing some APIs, hence the version bump to 7.00.

For anyone who still does a lot of Perl, I highly recommend the Path::Tiny module. If you also write Python, you will be reminded (in a good way) of Python's pathlib module, which I now use whenever possible.

You can get the latest version of DocKnot from CPAN or from its distribution page.

17 January, 2022 09:41PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Different types of Backups

In my previous post, I explained how I recently set up backups for my home server to be synced using Amazon's services. I received a (correct) comment on that by Iustin Pop which pointed out that while it is reasonably cheap to upload data into Amazon's offering, the reverse -- extracting data -- is not as cheap.

He is right, in that extracting data from S3 Glacier Deep Archive costs over an order of magnitude more than it costs to store it there on a monthly basis -- in my case, I expect to have to pay somewhere in the vicinity of 300-400 USD for a full restore. However, I do not consider this to be a major problem, as these backups only exist to cover the rarer of the two types of backup cases.

There are two reasons why you should have backups.

The first is the most common one: "oops, I shouldn't have deleted that file". This happens reasonably often; people will occasionally delete or edit a file that they did not mean to, and then they will want to recover their data. At my first job, a significant part of my job was to handle recovery requests from users who had accidentally deleted a file that they still needed.

Ideally, backups to handle this type of situation are easily accessible to end users, and are performed reasonably frequently. A system that automatically creates and deletes filesystem snapshots (such as the zfsnap script for ZFS snapshots, which I use on my server) works well. The crucial bit here is to ensure that it is easier to copy an older version of a file than it is to start again from scratch -- if a user must file a support request that may or may not be answered within a day or so, it is likely they will not do so for a file they were working on for only half a day, which means they lose half a day of work in such a case. If, on the other hand, they can just go into the snapshots directory themselves and it takes them all of two minutes to copy their file, then they will also do that for files they only created half an hour ago, so they don't even lose half an hour of work and can get right back to it. This means that backup strategies to mitigate the "oops I lost a file" case ideally do not involve off-site file storage, and instead are performed online.
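That self-service flow can be sketched filesystem-agnostically with plain copies (the paths and snapshot name here are made up; on ZFS the snapshots appear read-only under .zfs/snapshot/, and zfsnap automates their creation and rotation):

```shell
# Toy model of the "oops, I shouldn't have deleted that file" workflow
d=$(mktemp -d)
mkdir -p "$d/home"
echo "important work" > "$d/home/report.txt"
cp -a "$d/home" "$d/snapshot-2022-01-17"            # the periodic snapshot
rm "$d/home/report.txt"                             # oops
cp "$d/snapshot-2022-01-17/report.txt" "$d/home/"   # the two-minute self-service restore
cat "$d/home/report.txt"
```

The point of the real thing is exactly this friction level: no support ticket, no waiting, so users bother to restore even half an hour of work.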

The second case is the much rarer one, but (when required) has the much bigger impact: "oops, the building burned down". Variants of this can involve things like lightning strikes, thieves, earthquakes, and the like; in all cases, the point is that you want to be able to recover all your files, even if every piece of equipment you own is no longer usable.

That being the case, you will first need to replace that equipment, which is not going to be cheap, and it is also not going to be an overnight thing. In order to still be useful after you have lost all your equipment, these backups must be stored off-site, and should preferably be offline backups, too. Since replacing your equipment is going to cost you time and money, it's fine if restoring the backups takes a while -- you can't really restore from backup any time soon anyway. And since you will already lose a number of days in which you can't create new content while you rebuild, it's fine if the backups themselves are a few days stale and you have to re-create that recent work.

All in all, the two types of backups have opposing requirements: "oops I lost a file" backups should be performed often and should be easily available; "oops I lost my building" backups should not be easily available, and are ideally done less often, so you don't pay a high amount of money for storage of your off-sites.

In my opinion, if you have good "lost my file" backups, then it's also fine if the recovery of your disaster backups is a bit more expensive. You don't expect to ever have to pay for these; you may end up in a situation where you have no choice, and then you'll be happy that the choice is there, but as long as you can reasonably pay for the worst-case scenario of a full restore, it's not something you should worry about much.

As such, and given that a full restore from Amazon Storage Gateway is going to be somewhere between 300 and 400 USD for my case -- a price I can afford, although it's not something I want to pay every day -- I don't think it's a major issue that extracting data is significantly more expensive than uploading data.

But of course, this is something everyone should consider for themselves...

17 January, 2022 03:43PM

hackergotchi for Matthew Garrett

Matthew Garrett

Boot Guard and PSB have user-hostile defaults

Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.
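The "measurement" in measured boot is a hash chain: each PCR extend computes new = H(old || H(component)), so tampering with any component anywhere in the chain changes the final value. A toy sketch of the extend operation (hashing hex strings for brevity, whereas a real TPM extends the binary digests):

```shell
# Toy PCR extend chain: new_pcr = SHA256(old_pcr || SHA256(component))
pcr=$(head -c 64 /dev/zero | tr '\0' '0')   # PCRs start out all-zero
for component in "firmware" "bootloader" "kernel"; do
  digest=$(printf '%s' "$component" | sha256sum | awk '{print $1}')
  pcr=$(printf '%s%s' "$pcr" "$digest" | sha256sum | awk '{print $1}')
done
echo "final PCR: $pcr"   # change any component and this value changes
```

Because each step folds in everything before it, the final PCR value attests the entire boot chain at once; that is the value secrets get sealed against.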

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


17 January, 2022 04:37AM

January 16, 2022

hackergotchi for Chris Lamb

Chris Lamb

Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less of my 'favourite films' than the ones worth remarking on — after all, nobody needs to hear that The Godfather is a good movie.)

It's probably helpful to remark that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to “make a tour of the landmarks of the first century of cinema,” and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have encountered through other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I might otherwise have put off forever — the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here.

Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001) — the latter being an almost perfect example of late-20th century cultural exhaustion.

But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.


Dogtooth (2009)

A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, completely mystifying its very existence. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements.

Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.


Holy Motors (2012)

There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each leaden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest.

This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!")

Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I, and it ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the psychosocial function of work.

Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.


Manchester by the Sea (2016)

An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.


McCabe & Mrs. Miller (1971)

Roger Ebert called this movie “one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come.” But whilst it is difficult to disagree with his sentiment, Ebert's choice of “sad” is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some.

Nevertheless, the plot of this film concerns a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise boom, they come to the attention of a large mining corporation that wants to bully or buy its way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone — the town and its inhabitants seem to be thrown together out of raw lumber, covered alternately in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure.

As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: “This is not the kind of movie where the characters are introduced. They are all already here.” Furthermore, we can see some of Altman's trademark overlapping conversations, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter at a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be found in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least at the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.)

On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told at a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.


Detour (1945)

Detour was filmed in less than a week, and it's difficult to decide — out of the actors and the screenplay — which is its weakest point.... Yet it still somehow seemed to drag me in.

The plot revolves around luckless Al, who is hitchhiking to California. Al gets a lift from a man called Haskell, who quickly falls down dead from a heart attack. Al hurriedly buries the body and takes Haskell's money, car and identification, fearing that the police will believe Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (i.e. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control.

It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour — its eerie nature means that we are not only frequently second-guessed about where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema.

In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)


Safe (1995)

Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she “doesn't feel right,” Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she sees a psychiatrist.

Yet Carol's episodes soon escalate. For example, as a 'homemaker' with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, Carol consults an 'allergist', who tells her she has "Environmental Illness," and she eventually checks herself into a new-age commune filled with alternative therapies.

On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps since the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.)

As I remarked when opening this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.


The Driver (1978)

Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, instead of as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), but it wasn't in 1978, so the critics' misconceptions were entirely reasonable at the time.

The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by promising another gang a pardon if they help convict the Driver via a set-up robbery. To gain an edge, however, the Driver seeks help from the femme fatale 'Player' in order to mislead the Detective.

If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and again by Edgar Wright as Baby Driver (2017). Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes; rather, the vehicles come across like wild animals hunting one another. This is especially so when the police are hunting the Driver: the pursuit feels less like a low-stakes game of cat and mouse than a pack of feral animals working together — a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban-realist backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well.

To be sure, most of this is present in the truly-excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost-exclusively citational and commemorational.


The Souvenir (2019)

The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that outline doesn't quite capture the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film.

Still, one of the genius strokes of this truly heartbreaking movie is that none of these many elements crowds out the others. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.


The Wrestler (2008)

Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback.

The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler.

In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps it is simply when The Wrestler was produced that makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis and the concomitant precarity of the 2010s.

Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because our relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, the pandemic-era 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat...

But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.


Thief (1981)

Frank is an expert professional safecracker who specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot of any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity.

Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less like it is about the money it would bring him and more like a displacement from (or proxy for) fulfilling some deep-down desire to have a family or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse not to pursue a relationship, putting it off indefinitely in favour of 'work'. (And since his thefts are Federal crimes, it also means Frank cannot put down meaningful roots in a community.) All of this is communicated extremely subtly in the justly-lauded lowkey diner scene, by far the best scene in the movie.

The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps.

I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.


Uncut Gems (2019)

The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?)

No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices — mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster, the only question remaining is precisely how and what.

Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.


Woman in the Dunes (1964)

I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medicean intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and instead it reveals the enormity of its plot slowly and deliberately. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life.

Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings without resorting to being deliberately obscurantist or just plain random — it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but the question somehow applies to this thoughtfully self-contained piece of cinema.


A Quiet Place (2018)

Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed, it engages on a visceral level too: it is rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), but I did here.

The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrorific' elements to make it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply conservative American film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet — that is to say, by staying stoic — suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.)

I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

16 January, 2022 06:44PM

hackergotchi for Daniel Pocock

Daniel Pocock

Novak Djokovic, Sport, Politics, Harassment, Defamation and Misdirection

I was recently filming some people at an event. Some participants were really happy to be recorded but other people told me they did not consent so we simply didn't record them.

In the circus that has followed Novak Djokovic this week many people have descended into groupthink. What's good for me is good for you. Australian newspapers have conducted surveys of public opinion. Their reports are absurd: you cannot hold a referendum on whether somebody should undergo a medical procedure, even if the risk of vaccination is known to be incredibly low. The only vote that matters is the single vote of the person making a decision about their own body. The WHO confirmed this.

In early August 2021, Australian athletes were returning home from the Tokyo Olympics carrying a fistful of medals. Australia had finished in 6th place. Their success dominated the media 24 hours per day. In this context, few people noticed when justice officials announced two significant prosecutions: the Prime Minister's personal pastor and the Parliament serial rapist.

Hillsong, covid party, lockdown party, civil unrest, Novak Djokovic, Alex Hawke

Oddly enough, the Hillsong church had to make an apology for their own Covid misconduct on Friday morning. The immigration minister, who is part of the Hillsong congregation, conveniently buried the Hillsong news when he declared Novak Djokovic to be the source of civil unrest. This is farcical: John McEnroe was far more controversial, he was expelled from the tournament in 1990 but he never suffered political interference.

It is not easy to defend Djokovic after revelations he flouted isolation rules in his own country. Nonetheless, he had been selected as a scapegoat even before those breaches were known.

More is to come: the PM's pastor Brian Houston abuse case goes to court on 27 January. It is the same day that television screens across Australia will be tuned into coverage of women's tennis semi-finals.

British newspapers have given heavy coverage to the association between Prince Andrew and the Epstein saga. Yet the Australian press, faced with strict defamation laws, cannot give the same level of attention to the relationship between the Prime Minister and the Hillsong allegations.

What we have just seen this week can be summarized as the defamation and harassment of a foreigner, the distraction from more serious issues and most critical of all, the trivialization of consent.

16 January, 2022 01:30PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

Backing up my home server with Bacula and Amazon Storage Gateway

I have a home server.

Initially conceived and sized so I could digitize my (rather sizeable) DVD collection, I started using it for other things; I added a few play VMs on it, started using it as a destination for the deja-dup-based backups of my laptop and the Time Machine-based ones of the various Macs in the house, and used it as the primary location of all the photos I've taken with my cameras over the years (currently taking up somewhere around 500G) as well as those that were taken at our wedding (another 100G). To add to that, I've copied the data that my wife had on various older laptops and external hard drives onto this home server as well, so that we don't lose the data should something happen to one or more of these bits of older hardware.

Needless to say, the server was running full, so a few months ago I replaced the 4x2T hard drives that I originally put in the server with 4x6T ones, and there was much rejoicing.

But then I started considering what I was doing. Originally, the intent was for the server to contain DVD rips of my collection; if I were to lose the server, I could always re-rip the collection and recover that way (unless something happened that caused me to lose both at the same time, of course, but I consider that sufficiently unlikely that I don't want to worry about it). Much of the new data on the server, however, cannot be recovered like that; if the server dies, I lose my photos forever, with no way of recovering them. Obviously that can't be okay.

So I started looking at options to create backups of my data, preferably in ways that make it easily doable for me to automate the backups -- because backups that have to be initiated are backups that will be forgotten, and backups that are forgotten are backups that don't exist. So let's not try that.

When I was still self-employed in Belgium and running a consultancy business, I sold a number of lower-end tape libraries for which I then configured bacula, and I preferred a solution that would be similar to that without costing an arm and a leg. I did have a look at a few second-hand tape libraries, but even second hand these are still way outside what I can budget for this kind of thing, so that was out too.

After looking at a few solutions that seemed very hackish and would require quite a bit of handholding (which I don't think is a good idea), I remembered that a few years ago, I had a look at the Amazon Storage Gateway for a customer. This gateway provides a virtual tape library with 10 drives and 3200 slots (half of which are import/export slots) over iSCSI. The idea is that you install the VM on a local machine, you connect it to your Amazon account, you connect your backup software to it over iSCSI, and then it syncs the data that you write to Amazon S3, with the ability to archive data to S3 Glacier or S3 Glacier Deep Archive. I didn't end up using it at the time because it required a VMWare virtualization infrastructure (which I'm not interested in), but I found out that these days, they also provide VM images for Linux KVM-based virtual machines (amongst others), so that changes things significantly.

After making a few calculations, I figured out that for the amount of data that I would need to back up, I would require a monthly budget of somewhere between 10 and 20 USD if the bulk of the data would be on S3 Glacier Deep Archive. This is well within my means, so I gave it a try.
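As a rough sketch of that calculation (the per-GB rate below is an assumed, illustrative figure, not a quoted price; actual S3 Glacier Deep Archive pricing varies by region and over time, and request, retrieval and gateway charges come on top):

```python
# Back-of-the-envelope storage cost for an assumed ~2T of backed-up data.
# The rate is an assumption for illustration only.
usd_per_gb_month = 0.00099   # assumed Glacier Deep Archive rate
data_gb = 2000               # roughly 2T of backup data

storage_cost = data_gb * usd_per_gb_month
print(f"~${storage_cost:.2f}/month for storage alone")
```

The storage component alone stays small even with several full backups retained, which is consistent with a total monthly budget in the 10 to 20 USD range once the other charges are added.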

The VM's technical requirements state that you need to assign four vCPUs and 16GiB of RAM, which just so happens to be the exact amount of RAM and CPU that my physical home server has. Obviously we can't do that. I tried getting away with 4GiB and 2 vCPUs, but that didn't work; the backup failed out after about 500G out of 2T had been written, due to the VM running out of resources. On the VM's console I found complaints that it required more memory, and I saw it mention something in the vicinity of 7GiB instead, so I decided to try again, this time with 8GiB of RAM rather than 4. This worked, and the backup was successful.

As far as bacula is concerned, the tape library is just a (very big...) normal tape library, and I got data throughput of about 30MB/s while the VM's upload buffer hadn't yet run full, with things slowing down to pretty much my Internet line speed once it had. At those speeds, Bacula finished the backup successfully in "1 day 6 hours 43 mins 45 secs", although the storage gateway was still uploading things to S3 Glacier for a few hours after that.
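Working backwards from Bacula's reported duration, and assuming the full backup was roughly 2T in total (an assumption on my part), the effective average throughput comes out as:

```python
# Convert Bacula's "1 day 6 hours 43 mins 45 secs" into seconds,
# then derive the average throughput for an assumed ~2T backup.
seconds = 1 * 86400 + 6 * 3600 + 43 * 60 + 45
data_bytes = 2e12  # assumed total backup size

avg_mb_per_s = data_bytes / seconds / 1e6
print(round(avg_mb_per_s))  # average MB/s over the whole job
```

That lands well below the initial 30MB/s burst, which fits the description of the upload buffer filling and the job settling down to Internet line speed.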

All in all, this seems like a viable backup solution for large(r) amounts of data, although I haven't yet tried to perform a restore.

16 January, 2022 09:06AM

Russell Coker

SSD Endurance

I previously wrote about the issue of swap potentially breaking SSD [1]. My conclusion was that swap wouldn’t be a problem as no normally operating systems that I run had swap using any significant fraction of total disk writes. In that post the most writes I could see was 128GB written per day on a 120G Intel SSD (writing the entire device once a day).

My post about swap and SSD was based on the assumption that you could get many thousands of writes to the entire device, which was incorrect. Here's a background on the terminology from WD [2]. So in the case of the 120G Intel SSD I was doing over 1 DWPD (Drive Writes Per Day), which is in the middle of the range of SSD capability; Intel doesn't specify the DWPD or TBW (Terabytes Written) for that device.

The most expensive and high end NVMe device sold by my local computer store is the Samsung 980 Pro which has a warranty of 150TBW for the 250G device and 600TBW for the 1TB device [3]. That means that the system which used to have an Intel SSD would have exceeded the warranty in 3 years if it had a 250G device.

My current workstation has been up for just over 7 days and has averaged 110GB written per day. It has some light VM use and the occasional kernel compile, a fairly typical developer workstation. Its storage is 2*Crucial 1TB NVMe devices in a BTRFS RAID-1; the NVMe devices are from the older Crucial series and are rated for 200TBW, which means that they can be expected to last for 5 years under the current load. This isn't a real problem for me as the performance of those devices is lower than I hoped for, so I will buy faster ones before they are 5 years old anyway.
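The TBW arithmetic can be sketched the same way (per-device figures, since a RAID-1 mirrors every write to both devices):

```python
def lifetime_years(tbw_rating_tb: float, written_gb_per_day: float) -> float:
    """Years until the rated TBW is exhausted at a constant write rate."""
    days = tbw_rating_tb * 1000 / written_gb_per_day
    return days / 365

# Crucial 1TB NVMe rated 200TBW, with the workstation averaging 110GB/day
print(round(lifetime_years(200, 110), 1))  # ~5 years
```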

My home server (and my wife's workstation) is averaging 325GB per day on the SSDs used for the RAID-1 BTRFS filesystem for root and for most frequently-written data (including VMs). The SSDs are 500G Samsung 850 EVOs [4], which are rated at 150TBW, which means just over a year of expected lifetime. The SSDs are much more than a year old; I think Samsung stopped selling them more than a year ago. Between the 2 SSDs, SMART reports 18 uncorrectable errors and “btrfs device stats” reports 55 errors on one of them. I'm not about to immediately replace them, but it appears that they are well past their prime.

The server which runs my blog (among many other things) is averaging over 1TB written per day. It currently has a RAID-1 of hard drives for all storage but it’s previous incarnation (which probably had about the same amount of writes) had a RAID-1 of “enterprise” SSDs for the most written data. After a few years of running like that (and some time running with someone else’s load before it) the SSDs became extremely slow (sustained writes of 15MB/s) and started getting errors. So that’s a pair of SSDs that were burned out.


The amounts of data being written are steadily increasing. Recent machines with more RAM can decrease storage usage in some situations but that doesn’t compare to the increased use of checksummed and logged filesystems, VMs, databases for local storage, and other things that multiply writes. The amount of writes allowed under warranty isn’t increasing much and there are new technologies for larger SSD storage that decrease the DWPD rating of the underlying hardware.

For the systems I own it seems that they are all going to exceed the rated TBW for the SSDs before I have other reasons to replace them, and they aren’t particularly high usage systems. A mail server for a large number of users would hit it much earlier.

RAID of SSDs is a really good thing. Replacement of SSDs is something that should be planned for and a way of swapping SSDs to less important uses is also good (my parents have some SSDs that are too small for my current use but which work well for them). Another thing to consider is that if you have a server with spare drive bays you could put some extra SSDs in to spread the wear among a larger RAID-10 array. Instead of having a 2*SSD BTRFS RAID-1 for a server you could have 6*SSD to get a 3* longer lifetime than a regular RAID-1 before the SSDs wear out (BTRFS supports this sort of thing).
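The 3* figure comes from how a mirrored-and-striped layout divides writes; a simplified sketch (ignoring metadata overhead and uneven striping):

```python
def lifetime_multiplier(n_devices: int, copies: int = 2) -> float:
    """Relative to a 2-device RAID-1, where every device sees all writes:
    with `copies` replicas spread over n_devices, each device receives
    copies/n_devices of the total writes, so lifetime scales by
    n_devices/copies."""
    return n_devices / copies

print(lifetime_multiplier(2))  # 2*SSD RAID-1 baseline: 1.0
print(lifetime_multiplier(6))  # 6*SSD array: 3.0, i.e. 3x the lifetime
```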

Based on these calculations and the small number of errors I’ve seen on my home server I’ll add a 480G SSD I have lying around to the array to spread the load and keep it running for a while longer.

16 January, 2022 05:33AM by etbe

Russ Allbery

Review: The Brightest Fell

Review: The Brightest Fell, by Seanan McGuire

Series: October Daye #11
Publisher: DAW
Copyright: 2017
ISBN: 0-698-18352-5
Format: Kindle
Pages: 353

This is the eleventh book in the October Daye urban fantasy series, not counting various novellas and side stories. You really cannot start here, particularly given how many ties this book has to the rest of the series.

I would like to claim there's some sort of plan or strategy in how I read long series, but there are just a lot of books to read and then I get distracted and three years have gone by. The advantage of those pauses, at least for writing reviews, is that I return to the series with fresh eyes and more points of comparison. My first thought this time around was "oh, these books aren't that well written, are they," followed shortly thereafter by staying up past midnight reading just one more chapter.

Plot summaries are essentially impossible this deep into a series, when even the names of the involved characters can be a bit of a spoiler. What I can say is that we finally get the long-awaited confrontation between Toby and her mother, although it comes in an unexpected (and unsatisfying) form. This fills in a few of the gaps in Toby's childhood, although there's not much there we didn't already know. It fills in considerably more details about the rest of Toby's family, most notably her pure-blood sister.

The writing is indeed not great. This series is showing some of the signs I've seen in other authors (Mercedes Lackey, for instance) who wrote too many books per year to do each of them justice. I have complained before about McGuire's tendency to reuse the same basic plot structure, and this instance seemed particularly egregious. The book opens with Toby enjoying herself and her found family, feeling like they can finally relax. Then something horrible happens to people she cares about, forcing her to go solve the problem. This in theory requires her to work out some sort of puzzle, but in practice is fairly linear and obvious because, although I love Toby as a character, she can't puzzle her way out of a wet sack. Everything is (mostly) fixed in the end, but there's a high cost to pay, and everyone ends the book with more trauma.

The best books of this series are the ones where McGuire manages to break with this formula. This is not one of them. The plot is literally on magical rails, since The Brightest Fell skips even pretending that Toby is an actual detective (although it establishes that she's apparently still working as one in the human world, a detail that I find baffling) and gives her a plot compass that tells her where to go. I don't really mind this since I read this series for emotional catharsis rather than Toby's ingenuity, but alas that's mostly missing here as well. There is a resolution of sorts, but it's the partial and conditional kind that doesn't include awful people getting their just deserts.

This is also not a good series entry for world-building. McGuire has apparently been dropping hints for this plot back at least as far as Ashes of Honor. I like that sort of long-term texture to series like this, but the unfortunate impact on this book is a lot of revisiting of previous settings and very little in the way of new world-building. The bit with the pixies was very good; I wanted more of that, not the trip to an Ashes of Honor setting to pick up a loose end, or yet another significant scene in Borderland Books.

As an aside, I wish authors would not put real people into their books as characters, even when it's with permission as I'm sure it was here. It's understandable to write a prominent local business into a story as part of the local color (although even then I would rather it not be a significant setting in the story), but having the actual owner and staff show up, even in brief cameos, feels creepy and weird to me. It also comes with some serious risks because real people are not characters under the author's control. (All the content warnings for that link, which is a news story from three years after this book was published.)

So, with all those complaints, why did I stay up late reading just one more chapter? Part of the answer is that McGuire writes very grabby books, at least for me. Toby is a full-speed-ahead character who is constantly making things happen, and although the writing in this book had more than the usual amount of throat-clearing and rehashing of the same internal monologue, the plot still moved along at a reasonable clip. Another part of the answer is that I am all-in on these characters: I like them, I want them to be happy, and I want to know what's going to happen next. It helps that McGuire has slowly added characters over the course of a long series and given most of them a chance to shine. It helps even more that I like all of them as people, and I like the style of banter that McGuire writes. Also, significant screen time for the Luidaeg is never a bad thing.

I think this was the weakest entry in the series in a while. It wrapped up some loose ends that I wasn't that interested in wrapping up, introduced a new conflict that it doesn't resolve, spent a bunch of time with a highly unpleasant character I didn't enjoy reading about, didn't break much new world-building ground, and needed way more faerie court politics. But some of the banter was excellent, the pixies and the Luidaeg were great, and I still care a lot about these characters. I am definitely still reading.

Followed by Night and Silence.

Continuing a pattern from Once Broken Faith, the ebook version of The Brightest Fell includes a bonus novella. (I'm not sure if it's also present in the print version.)

"Of Things Unknown": As is usual for the short fiction in this series, this is a side story from the perspective of someone other than Toby. In this case, that's April O'Leary, first introduced all the way back in A Local Habitation, and the novella focuses on loose ends from that novel. Loose ends are apparently the theme of this book.

This was... fine. I like April, I enjoyed reading a story from her perspective, and I'm always curious to see how Toby looks from the outside. I thought the plot was strained and the resolution a bit too easy and painless, and I was not entirely convinced by April's internal thought processes. It felt like McGuire left some potential for greater plot complications on the table here, and I found it hard to shake the impression that this story was patching an error that McGuire felt she'd made in the much earlier novel. But it was nice to have an unambiguously happy ending after the more conditional ending of the main story. (6)

Rating: 6 out of 10

16 January, 2022 03:06AM

DocKnot 6.01

This release of my static site generator and software release manager finishes incorporating the last piece of my old release script that I was still using: copying a new software release into a software distribution archive tree, updating symlinks, updating the version database used to generate my web pages, and archiving the old version.

I also added a new docknot update-spin command that updates an input tree for the spin static site generator, fixing any deprecations or changes in the input format. Currently, all this does is convert the old-style *.rpod pointer files to new-style *.spin pointers.

This release also has a few other minor bug fixes, including for an embarrassing bug that required docknot spin be run from a package source tree because it tried to load per-package metadata (even though it doesn't use that data).

You can get the latest release from CPAN or from the DocKnot distribution page.

16 January, 2022 01:34AM

January 14, 2022

hackergotchi for Norbert Preining

Norbert Preining

Future of “my” packages in Debian

After having been (again) demoted (timed perfectly to my round birthday!) based on flimsy arguments, I have been forced to rethink the level of contribution I want to do for Debian. Considering in particular that I have switched my main desktop to dual-boot into Arch Linux (all on the same btrfs fs with subvolumes, great!) and have run Arch now for several days exclusively, I think it is time to review the packages I am somehow responsible for (full list of packages).

After about 20 years in Debian, it is time to send off quite a lot of stuff that has accumulated over time.

KDE/Plasma, frameworks, Gears, and related packages

All these packages are group maintained, so there is not much to worry about. Furthermore, a few new faces have joined the team and are actively working on the packages, although mostly on Qt6. I guess that with me not taking action, frameworks, gears, and plasma will fall behind over time (frameworks: Debian 5.88 versus current 5.90; gears: Debian 21.08 versus current 21.12; plasma up to date at the moment).

With respect to my packages on OBS, they will probably also go stale over time. Using Arch nowadays, I lack the development tools necessary to build Debian packages and, above all, the motivation.

I am sorry for all those who have learned to rely on my OBS packages over the last years, bringing a modern and up-to-date KDE/Plasma to Debian/stable; please direct your complaints at the responsible entities in Debian.


As I have written here already, I have reduced my involvement quite a lot, and nowadays Fabio and Joshua are doing the work. But neither of them is even a DM (AFAIR), and I am the only one doing uploads (I got DM upload permissions for it). But I am not sure how long I will continue doing this. This also means that in the near future, Cinnamon will also go stale.

TeX related packages

Hilmar has DM upload permissions and is very actively caring for the packages, so I don’t see any source of concern here. New packages will need to find a new uploader, though. With myself also being part of upstream, I can surely help out in the future with difficult problems.

Calibre and related packages

Yokota-san (another DM I have sponsored) has DM upload permissions and is very actively caring for the packages, so here, too, there is not much of concern.


This is already badly outdated, and I recommend using the OBS builds which are current and provide binaries for Ubuntu and Debian for various versions.


Here, fortunately, a new generation of developers has taken over maintenance and everything is going smoothly, much better than I could have done. Yay to that!

Qalculate related packages

These are group maintained, but unfortunately nobody else but me has touched the repos for quite some time. I fear that the packages will go stale rather soon.


I have recently salvaged this package, and use it daily, but I guess it needs to be orphaned sooner or later.


While I am also part of upstream here, I guess it will be orphaned.


Julia is group maintained, but unfortunately nobody but me has touched the repo for quite some time, and we are already far behind the normal releases (and julia got removed from testing). It will go stale/orphaned. I recommend installing upstream binaries.


Another package that is group maintained in the Python team, but with only me as uploader I guess it will go stale and effectively be orphaned soon.


It has already been orphaned.


No upstream development, so not much to do, but will be orphaned, too.

14 January, 2022 02:17AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.8: Updated, Strict Headers

rcpp logo

The Rcpp team is thrilled to share the news of the newest release 1.0.8 of Rcpp, which hit CRAN today and has already been uploaded to Debian as well. Windows and macOS builds should appear at CRAN in the next few days. This release continues with the six-month cycle started with release 1.0.5 in July 2020. As a reminder, interim ‘dev’ or ‘rc’ releases will always be available in the Rcpp drat repo; this cycle there were once again seven (!!) – times two, as we also tested the modified header (more below). These rolling releases tend to work just as well, and are also fully tested against all reverse dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. Right now, around 2478 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 242 in BioConductor.

This release finally brings a change we have worked on quite a bit over the last few months. The idea of enforcing the setting of STRICT_R_HEADERS was proposed years ago, in 2016 and again in 2018. But making such a change against a widely-deployed code base has repercussions, and we were not ready then. Last April, this was revisited in issue #1158. Over the course of numerous lengthy runs of tests of a changed Rcpp package against (essentially) all reverse dependencies (i.e. packages which use Rcpp), we identified ninety-four packages in total which needed a change. We provided either a patch we emailed, or a GitHub pull request, to all ninety-four. And we are happy to say that eighty cases were resolved via a new CRAN upload, with seven more having merged the pull request but not yet uploaded.

Hence, we could make the case to CRAN (who were always CC’ed on the monthly ‘nag’ emails we sent to maintainers of packages needing a change) that an upload was warranted. And after a brief period for their checks and inspection, our January 11 release of Rcpp 1.0.8 arrived on CRAN on January 13.

So with that, a big and heartfelt Thank You! to all eighty maintainers for updating their packages to permit this change at the Rcpp end, to CRAN for the extra checking, and to everybody else whom I bugged with the numerous emails and updates to the seemingly never-ending issue #1158. We all got this done, and that is a Good Thing (TM).

Other than the aforementioned change, which now sets STRICT_R_HEADERS automatically (unless one explicitly opts out, which remains possible), a number of nice pull requests by several contributors are included in this release:

  • Iñaki generalized use of finalizers for external pointers in #1180
  • Kevin ensured include paths are always quoted in #1189
  • Dirk added new headers to allow a more fine-grained choice of Rcpp features for faster builds in #1191
  • Travers Ching extended the function signature generator to allow for a default R argument in #1184 and #1187
  • Dirk extended documentation, removed old example code, updated references and refreshed CI setup in several PRs (see below)

The full list of details follows.

Changes in Rcpp release version 1.0.8 (2022-01-11)

  • Changes in Rcpp API:

    • STRICT_R_HEADERS is now enabled by default, see extensive discussion in #1158 closing #898.

    • A new #define allows default setting of finalizer calls for external pointers (Iñaki in #1180 closing #1108).

    • Rcpp:::CxxFlags() now quotes the include path generated, (Kevin in #1189 closing #1188).

    • New header files Rcpp/Light, Rcpp/Lighter, Rcpp/Lightest and default Rcpp/Rcpp for fine-grained access to features (and compilation time) (Dirk #1191 addressing #1168).

  • Changes in Rcpp Attributes:

    • A new option signature allows customization of function signatures (Travers Ching in #1184 and #1187 fixing #1182)
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ has a new entry on how not to grow a vector (Dirk in #1167).

    • Some long-spurious calls to RNGScope have been removed from examples (Dirk in #1173 closing #1172).

    • DOI reference in the bibtex files have been updated per JSS request (Dirk in #1186).

  • Changes in Rcpp Deployment:

    • Some continuous integration components have been updated (Dirk in #1174, #1181, and #1190).

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2822 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 January, 2022 01:03AM

January 13, 2022

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Douglas Andrew Torrance (dtorrance)
  • Mark Lee Garrett (lee)

The following contributors were added as Debian Maintainers in the last two months:

  • Lukas Matthias Märdian
  • Paulo Roberto Alves de Oliveira
  • Sergio Almeida Cipriano Junior
  • Julien Lamy
  • Kristian Nielsen
  • Jeremy Paul Arnold Sowden
  • Jussi Tapio Pakkanen
  • Marius Gripsgard
  • Martin Budaj
  • Peymaneh
  • Tommi Petteri Höynälänmaa


13 January, 2022 04:00PM by Jean-Pierre Giraud

January 12, 2022

hackergotchi for Michael Prokop

Michael Prokop

Revisiting 2021


Uhm yeah, so this shirt didn’t age well. :) Mainly to recall what happened, I’m once again revisiting my previous year (previous edition: 2020).

2021 was quite challenging overall. It started with four weeks of distance learning at school. Luckily at least at school things got back to "some kind of normal" afterwards. The lockdowns turned out to be an excellent opportunity for practising Geocaching though, and that’s what I started to do with my family. It’s a great way to grab some fresh air, get to know new areas, and spend time with family and friends – I plan to continue doing this. :)

We bought a family season ticket for Freibäder (open-air baths) in Graz; this turned out to be a great investment – I very much enjoyed the open-air swimming with family, as well as going for swimming laps on my own, and plan to do the same in 2022. Due to the lockdowns and the pandemic, the weekly Badminton sessions sadly didn’t really take place, so I pushed towards the above-mentioned outdoor swimming and also some running; with my family we managed to do some cycling, inline skating and even practiced some boulder climbing.

For obvious reasons, plenty of concerts I was looking forward to didn’t take place. With my parents we at least managed to attend a concert performance of Puccini’s Tosca with Jonas Kaufmann at Schloßbergbühne Kasematten/Graz, and with the kids we saw "Robin Hood" in Oper Graz and "Pippi Langstrumpf" at Studiobühne of Oper Graz. The lack of concerts and rehearsals once again and still severely impacts my playing the drums, including at HTU BigBand Graz. :-/

Grml-wise we managed to publish release 2021.07, codename JauKerl. Debian-wise we got version 11 AKA bullseye released as new stable release in August.

For 2021 I planned to and also managed to minimize buying (new) physical stuff, except for books and other reading stuff. Speaking of reading, 2021 was nice — I managed to finish more than 100 books (see “Mein Lesejahr 2021“), and I’d like to keep the reading pace.

Now let’s hope for better times in 2022!

12 January, 2022 05:30PM by mika

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Training apps

I've been using various training apps (and their associated web sites) since 2010 now, forward-porting data to give me twelve years of logs. (My primary migration path has been CardioTrainer → Endomondo → Strava.) However, it strikes me that they're just becoming worse and worse, and I think I've figured out why: What I want is a training site with some social functions, but what companies are creating are social networks. Not social networks about training; just social networks.

To be a bit more concrete: I want something that's essentially a database. I want to quickly search for workouts in a given area and of a given length, and then bring up key information, compare, contrast, get proper graphs (not something where you can't see the difference between 3:00/km and 4:00/km!), and so on. (There's a long, long list of features and bugs I'd like to get fixed, but I won't list them all.)
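As a toy illustration of the database-style query being asked for here (the data, fields, and file path are entirely made up):

```shell
# Made-up workout log: date,area,distance_km. The awk query below is the
# kind of lookup the post wants: workouts in a given area, within a
# given distance range.
cat > /tmp/workouts.csv <<'EOF'
2021-05-01,Oslo,10.2
2021-06-12,Oslo,5.0
2021-07-03,Bergen,10.5
EOF
awk -F, '$2 == "Oslo" && $3 >= 8 && $3 <= 12' /tmp/workouts.csv
```

A real training site has this data already; the complaint is that its UI surfaces a stream rather than this sort of query.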

But Strava is laser-focused on what's happened recently; there's a stream with my own workouts and my friends', just like a social network, and that's pretty much the main focus (and they have not even tried to solve the stream problem; if I have a friend who's too active, I don't see anything else). I'd need to pay a very steep price (roughly $110/year, the price of a used GPS watch!) to even get a calendar; without that, I need to go back through a super-slow pagination UI, 20 workouts at a time, to even see something older.

Garmin Connect is somewhat better here; at least I can query on length and such. (Not so strange; Garmin is in the business of selling devices, not making social networks.) But it's very oriented around one specific watch brand, and it's far from perfect either. My big issue is that nobody's even trying, it seems. But I guess there's simply no money in that.

12 January, 2022 05:27PM

January 11, 2022

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

ThinkPad AMD Debian

After a hiatus of 6 years, it was nice to be back with the ThinkPad. This blog post briefly touches upon my impressions with the current generation ThinkPad T14 Gen2 AMD variant.

ThinkPad T14 Gen2 AMD

ThinkPad T14 Gen2 AMD


It took 8 weeks to get my hands on the machine. Given the pandemic, restrictions and uncertainties, I'm not sure if I should call it an on-time delivery. This was a CTO (Customise-to-order) machine, so it was nice to get rid of things I really didn’t care about or use much. It also meant I could save on some power, and the machine came comparatively cheaper overall.

  • No fingerprint reader
  • No Touch screen

There are still areas where Lenovo could improve, or at least frustrate a customer less. I don’t understand why a company would provide a full customization option on their portal while, at the same time, not providing an explicit option to choose the make/model of the hardware one wants. Lenovo deliberately chooses not to show/specify which WiFi adapter one gets. So, as I suspected, I ended up with a MEDIATEK Corp. Device 7961 WiFi adapter.


For the first time in my computing life, I’m now using AMD at the core. I was pretty frustrated with annoying Intel graphics bugs, so I decided to take the plunge and give AMD/ATI a shot, knowing that the radeon driver does have decent support. So far, on the graphics side of things, I’m glad that things look bright. The stock in-kernel radeon driver has been working perfectly for my needs and I haven’t had to tinker even once so far, in my 30 days of use.

On overall system performance, I have not done any benchmarks, nor do I want to. But on the whole, the system performance is smooth.


This is where things need more improvement on the AMD side. This AMD laptop draws a terrible amount of power in suspend mode. And it isn’t just this machine: the previous T14 Gen1 has similar problems. I’m not sure if this is a generic ThinkPad problem or an AMD-specific problem, but coming from the Dell XPS 13 9370 (Intel), this draws a lot more power. So much that I chose to use hibernation instead.

Similarly, on the thermal side, this machine doesn’t cool down as well as the Dell XPS Intel one. On an idle machine, its temperatures are comparatively higher. Looking at powertop reports, it shows an average consumption of around 10 watts even while idle.

I’m hoping these are Linux integration issues and that Lenovo/AMD will improve things in the coming months. But given the user feedback on the ThinkPad T14 Gen1 thread, it may just be wishful thinking.


The overall hardware support has been surprisingly decent. The MediaTek WiFi driver had some glitches, but with Linux 5.15+ things have considerably improved, and I hope the trend will continue with forthcoming Linux releases. My previous device-driver experience with MediaTek wasn’t good, but I took the plunge, considering that in the worst scenario I’d have the option to swap the card.

There’s a lot of marketing about Linux + Intel, but I took a chance on Linux + AMD. There are glitches, but nothing so far that has been a dealbreaker. If anything, I wish Lenovo/AMD would seriously work on the power/thermal issues.


Other than what’s mentioned above, I haven’t had any serious issues. I may have had some rare occasional hangs, but they’ve been so infrequent that I haven’t spent time investigating them.

Upon receiving the machine, my biggest task was switching my current workstation from the Dell XPS to the Lenovo ThinkPad. I’ve been using btrfs for some time now and, over the years, have built my own practice for how to structure it: provisioning [sub]volumes based on use case, like keeping separate subvols for cache/temporary data, copy-on-write data, swap, etc. I wish these things could be simplified, either on the btrfs tooling side or with some different tool on top of it.

Below is a filtered list of subvols created over the years that were worth moving to the new machine.

rrs@priyasi:~$ cat btrfs-volume-layout 
ID 550 gen 19166 top level 5 path home/foo/.cache
ID 552 gen 1522688 top level 5 path home/rrs
ID 553 gen 1522688 top level 552 path home/rrs/.cache
ID 555 gen 1426323 top level 552 path home/rrs/rrs-home/Libvirt-Images
ID 618 gen 1522672 top level 5 path var/spool/news
ID 634 gen 1522670 top level 5 path var/tmp
ID 635 gen 1522688 top level 5 path var/log
ID 639 gen 1522226 top level 5 path var/cache
ID 992 gen 1522670 top level 5 path disk-tmp
ID 1018 gen 1522688 top level 552 path home/rrs/NoBackup
ID 1196 gen 1522671 top level 5 path etc
ID 23721 gen 775692 top level 5 path swap

btrfs send/receive

This did come in handy, but I sorely missed some features. Maybe they aren’t there, or maybe they are and I didn’t look closely enough. Over the years, different attributes were set on different subvols, and over time I forget what was set where. From a migration point of view, it’d be nice to be able to say, “take this volume with all its attributes”. I didn’t find that functionality in send/receive.

There’s get/set-property, which I noticed later, but by then it was too late. So some sort of tooling, ideally something like btrfs migrate or some such, would be nicer.

In the file system world, we already have nice tools to take care of similar scenarios. Like with rsync, I can request it to carry all file attributes.

Also, IIRC, send/receive works only on read-only volumes. So there’s more work one needs to do:

  1. create ro vol
  2. send
  3. receive
  4. don’t forget to set rw property
  5. And then somehow find out other properties set on each individual subvols and [re]apply the same on the destination
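As a dry-run sketch of those five steps (SRC and DST are hypothetical placeholders, and every command is echoed rather than executed, so nothing here touches a real filesystem):

```shell
# Echoed sketch of the manual send/receive round-trip described above.
# Swap run() for direct execution to actually perform the migration.
SRC=/mnt/old/home
DST=/mnt/new
run() { echo "$@"; }
run btrfs property set -ts "$SRC" ro true          # 1. create ro vol
run "btrfs send $SRC | btrfs receive $DST/"        # 2-3. send and receive
run btrfs property set -ts "$DST/home" ro false    # 4. restore rw on the copy
run btrfs property get -ts "$SRC"                  # 5. list properties to re-apply by hand
```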

I wish this could all be condensed into a sub-command.

For my own sake, for this migration, the steps used were:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/TOSHIBA/Migrate/ | cut -d ' ' -f9 | grep -v ROOTVOL | grep -v etc | grep -v btrbk`; do echo $volume; sudo btrfs send /media/user/TOSHIBA/$volume | sudo btrfs receive /media/user/BTRFSROOT/ ; done
At subvol /media/user/TOSHIBA/Migrate/snapshot_disk-tmp
At subvol snapshot_disk-tmp
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_foo_.cache
At subvol snapshot-home_foo_.cache
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs
At subvol snapshot-home_rrs
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_.cache
At subvol snapshot-home_rrs_.cache
ERROR: crc32 mismatch in command
At subvol /media/user/TOSHIBA/Migrate/snapshot-home_rrs_rrs-home_Libvirt-Images
At subvol snapshot-home_rrs_rrs-home_Libvirt-Images
ERROR: crc32 mismatch in command
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_spool_news
At subvol snapshot-var_spool_news
At subvol /media/user/TOSHIBA/Migrate/snapshot-var_lib_machines
At subvol snapshot-var_lib_machines
..... snipped .....

And then, follow-up with:

user@debian:~$ for volume in `sudo btrfs sub list /media/user/BTRFSROOT/ | cut -d ' ' -f9`; do echo $volume; sudo btrfs property set -ts /media/user/BTRFSROOT/$volume ro false; done
ERROR: Could not open: No such file or directory

And then finally, renaming everything to match proper:

user@debian:/media/user/BTRFSROOT$ for x in snapshot*; do vol=$(echo $x | cut -d '-' -f2 | sed -e "s|_|/|g"); echo $x $vol; sudo mv $x $vol; done
snapshot-var_lib_machines var/lib/machines
snapshot-var_lib_machines_Apertisv2020ospackTargetARMHF var/lib/machines/Apertisv2020ospackTargetARMHF
snapshot-var_lib_machines_Apertisv2021ospackTargetARM64 var/lib/machines/Apertisv2021ospackTargetARM64
snapshot-var_lib_machines_Apertisv2022dev3ospackTargetARMHF var/lib/machines/Apertisv2022dev3ospackTargetARMHF
snapshot-var_lib_machines_BusterArm64 var/lib/machines/BusterArm64
snapshot-var_lib_machines_DebianBusterTemplate var/lib/machines/DebianBusterTemplate
snapshot-var_lib_machines_DebianJessieTemplate var/lib/machines/DebianJessieTemplate
snapshot-var_lib_machines_DebianSidTemplate var/lib/machines/DebianSidTemplate
snapshot-var_lib_machines_DebianSidTemplate_var_lib_portables var/lib/machines/DebianSidTemplate/var/lib/portables
snapshot-var_lib_machines_DebSidArm64 var/lib/machines/DebSidArm64
snapshot-var_lib_machines_DebSidArmhf var/lib/machines/DebSidArmhf
snapshot-var_lib_machines_DebSidMips var/lib/machines/DebSidMips
snapshot-var_lib_machines_JenkinsApertis var/lib/machines/JenkinsApertis
snapshot-var_lib_machines_v2019 var/lib/machines/v2019
snapshot-var_lib_machines_v2019LinuxSupport var/lib/machines/v2019LinuxSupport
snapshot-var_lib_machines_v2020 var/lib/machines/v2020
snapshot-var_lib_machines_v2021dev3Slim var/lib/machines/v2021dev3Slim
snapshot-var_lib_machines_v2021dev3SlimTarget var/lib/machines/v2021dev3SlimTarget
snapshot-var_lib_machines_v2022dev2OspackMinimal var/lib/machines/v2022dev2OspackMinimal
snapshot-var_lib_portables var/lib/portables
snapshot-var_log var/log
snapshot-var_spool_news var/spool/news
snapshot-var_tmp var/tmp
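The rename loop above recovers each destination path from the snapshot name alone: take everything after the first `-`, then map `_` back to `/`. Just that transformation, in isolation:

```shell
# Derive the destination path from a snapshot name, as in the loop above.
x=snapshot-var_lib_machines
vol=$(echo "$x" | cut -d '-' -f2 | sed -e "s|_|/|g")
echo "$vol"   # var/lib/machines
```

Note that a snapshot name containing a further `-` (like the Libvirt-Images one above) would be truncated by `cut -f2`; `cut -d '-' -f2-` would be safer there.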


Entirely independent of this, but indirectly related: I use snapper as my snapshotting tool. It worked perfectly on my previous machine. While everything got migrated, the only thing that fell apart was snapper; it just wouldn’t start/run properly. The funny thing is that I just removed the snapper configs and reinitialized with the exact same config again, and voilà, snapper was happy.
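Assuming a standard snapper setup, that reset amounts to something like the following (the config name `root` and subvolume `/` are placeholders, and the commands are echoed here rather than run):

```shell
# Echoed sketch of re-initializing a snapper config as described above.
run() { echo "$@"; }
run snapper -c root delete-config       # drop the misbehaving config
run snapper -c root create-config /     # recreate it with the same settings
```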


That was pretty much it. With the above done, it was just a matter of also migrating /boot and then chrooting to install the boot loader. At some point I’d like to explore other boot options, but given that that is such a non-essential task, it is low on the list.

The good part was that I booted into my new machine with my exact workstation setup as it was. All the way to the user cache and the desktop session. So it was nice on that part.

But I surely think there’s room for a better migration experience here; if not directly as btrfs migrate, then maybe as an independent tool. The problem is that such a tool is only going to be used once every few years, so I didn’t find the motivation to write one. But this would surely be a good use case for distribution vendors.

11 January, 2022 02:07PM by Ritesh Raj Sarraf (rrs@researchut.com)

January 10, 2022

Free Software Fellowship

Ubuntu underage girl: child sex or child prostitution?

When Italian PM Silvio Berlusconi was prosecuted for his bunga bunga parties and a relationship with a teenage girl called Ruby, he didn't deny sleeping with her but he went out of his way to deny paying for it. At one point, he told the press that he doesn't need to pay because the women throw themselves at him.

If the payment for sex is not made immediately, can we still call it prostitution? The law does not require payment to be immediate. Here is the UK legal text from the Sexual Offences Act 2003:

54(1)(a)any financial advantage, including the discharge of an obligation to pay or the provision of goods or services (including sexual services) gratuitously or at a discount; or
54(1)(b)the goodwill of any person which is or appears likely, in time, to bring financial advantage.

Women have seen how $6,000 internships have been offered in Albania.

In the Ubuntu underage girl scandal, the Ubuntu employee, who is also a Mozilla tech speaker, has frequently been in a position of power over women. Women know they have to please these men if they want free trips. Let's see some examples.

We often talk about gatekeepers in software projects. What is the difference between a gatekeeper and a pimp? Women and money. Here we can see an example of how these gatekeepers were organizing women to attend the controversial FOSSCamp on a Greek island. One of the men is writing to the Debian project leader, Chris Lamb and telling him the name of the next woman in the artificial queue:

Subject: Re: Debian at FOSScamp - funding request
Date: Sun, 13 Aug 2017
From: Giannis Konstantinidis <giannis@konstantinidis.cc>
To: Chris Lamb <lamby@debian.org>, Silva Arapi <silva.arapi@gmail.com>
CC: leader@debian.org, treasurer@debian.ch, auditor@debian.org, Redon Skikuli <redon@skikuli.com>, ping@anisakuci.com

Hey everyone,
just wish to inform you that unfortunately, due to unforeseen external factors, I won't be able to make it. I'd like to thank the Debian community for the generous support. We will stay in touch.

To make sure Debian makes the maximum possible impact at FOSSCamp, I'd like to sugggest Anisa Kuci (cc'ed ) takes my place. Anisa has been a longtime experienced member of Open Labs Hackerspace, co-organized OSCAL and is very much interested in further contributing to Debian.

Thanks once more. I wish the best success to Debian and your participation FOSSCamp.

Kind regards,
-Giannis K.

Women know there is a queue and they are not permitted to ask for these funds directly. Welcome to Albania.

Here we see another example, in a Wikimedia funding request, Elio Qoshi is writing a reference for one of the women:

Elio: I support the proposal to have Wiki presence at FOSSCamp as it's all happening in a relaxed and laid back environment where participants can learn on their own pace and contribute as well. Nafie (Shehu) has been enthusiastic about leading efforts and has always delivered great results in past events. Her curiosity is her greatest strength and I think FOSSCamp is a great opportunity for her to gain experience, but also help others help Wikipedia as well. I am happily supporting her efforts. --[[User:ElioQoshi|ElioQoshi]] ([[User talk:ElioQoshi|talk]]) 21:19, 19 July 2017 (UTC)

Finally, we have the example of the woman getting a job as a system administrator in Elio's company Ura Design:

Ura Design, Elio Qoshi, Renata Gegaj, Anja Xhakani, Ergi Shkelzeni, Anxhelo Lushka

Therefore, after the underage relationship, in time, the woman had financial benefit. We are not simply talking about underage sex, we may be talking about child prostitution. This does not mean the girl was shared around, even if Elio only used her for himself, it fits the definition.

All the women in this group have seen how this process works. In time, they can also get free trips and become Outreachies.

If women were coding or creating packages, they wouldn't need to go through Elio Qoshi and his mates to request travel funds. If the women are not coding, it appears that the men are offering them an alternative path.

Here are the CPS Guidelines for prosecutors:

The context is frequently one of abuse of power, used by those that incite and control prostitution - the majority of whom are men - to control the sellers of sex - the majority of whom are women.

The CPS charging practice is to tackle those who recruit others into prostitution for their own gain or someone else’s, by charging offences of causing, inciting or controlling prostitution for gain, or trafficking for sexual exploitation.

10 January, 2022 09:00PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Grading using the Wacom Intuos S

I've been teaching economics for a few semesters already and, slowly but surely, I'm starting to get the hang of it. Having to deal with teaching remotely hasn't been easy though and I'm really hoping the winter semester will be in-person again.

Although I worked way too much last semester1, I somehow managed to transition to using a graphics tablet. I bought a Wacom Intuos S tablet (model CTL-4100) in late August 2021 and overall, I have been very happy with it. Wacom Canada offers a small discount for teachers and I ended up paying 115 CAD (~90 USD) for the tablet, an overall very reasonable price.

Unsurprisingly, the Wacom support on Linux is very good and my tablet worked out of the box. The only real problem I had was that, by default, the tablet sometimes boots up in Android mode, making it unusable. This is easily solved by pressing down on the pad's first and last buttons for a few seconds, until the LED turns white.

The included stylus came with hard plastic nibs, but I find them too slippery. I eventually purchased hard felt nibs, which increase the friction and makes for a more paper-like experience. They are a little less durable, but I wrote quite a fair bit and still haven't gone through a single one yet.

Learning curve

Learning how to use a graphical tablet took me at least a few weeks! When writing on a sheet of paper, the eyes see what the hand writes directly. This is not the case when using a graphical tablet: you are writing on a surface and see the result on your screen, a completely different surface. This dissociation takes a bit of practice to master, but after going through more than 300 pages of notes, it now feels perfectly normal.

Here is a side-by-side comparison of my very average hand-writing2:

  1. on paper
  2. using the tablet, the first week
  3. using the tablet, after a couple of months

Comparison of my writing, on paper, using the tablet and using the tablet after a few weeks

I still prefer the result of writing on paper, but I think this is mostly due to me not using the pressure sensitivity feature. The support in xournal wasn't great, but now that I've tried it in xournalpp (more on this below), I think I will be enabling it in the future. The result on paper is also more consistent, but I trust my skills will improve over time.

Pressure sensitivity on vs off

Use case

The first use case I have for the tablet is grading papers. I've been asking my students to submit their papers via Moodle for a few semesters already, but until now, I was grading them using PDF comments. The experience wasn't great3 and was rather slow compared to grading physical copies.

I'm also a somewhat old-school teacher: I refuse to teach using slides. Death by PowerPoint is real. I write on the blackboard a lot4 and I find it much easier to prepare my notes by hand than by typing them, as the end result is closer to what I actually end up writing down on the board.

Writing notes by hand on sheets of paper is a chore too, especially when you revisit the same material regularly. Being able to handwrite digital notes gives me a lot more flexibility and it's been great.

So far, I have been using xournal to write notes and grade papers, and although it is OK, it has a bunch of quirks I dislike. I was waiting for xournalpp to be packaged in Debian, and it now is5! I'm looking forward to using it next semester.

Towards a better computer monitor

I have also been feeling the age of my current computer monitor. I am currently using an old 32" 1080p TV from LG and up until now, I had been able to deal with the drawbacks. The colors are pretty bad and 1080p for such a large display isn't great, but I got used to it.

What I really noticed when I started using my graphics tablet was the input lag. It's bad enough that there's a clear jello effect when writing and it eventually gives me a headache. It's so bad I usually prefer to work on my laptop, which has a nicer but noticeably smaller panel.

I'm currently looking to replace this aging TV6 with something more modern. I have been holding out, since I would like to buy something that will last me another 10 years if possible. Sadly, 32" high refresh rate 4K monitors aren't exactly there yet and I haven't found anything matching my criteria. I would probably also need a new GPU, something that is not easy to come by these days.

  1. I worked at two colleges at the same time, teaching 2 different classes (one of which I was giving for the first time...) to 6 groups in total. I averaged more than 60h per week for sure. 

  2. Yes, I only write in small caps. Students love it, as it's much easier to read on the blackboard. 

  3. Although most PDF readers support displaying comments, some of my more clueless students still had trouble seeing them and I had to play tech support more than I wanted. 

  4. Unsurprisingly, my students also love it. One of the most common pieces of feedback I get at the end of the semester is that they hate slides too and are very happy I'm one of the few teachers who writes on the board. 

  5. Many thanks to Barak A. Pearlmutter for maintaining this package. 

  6. It dates back to 2010, when my mom replaced our old CRT with a flat screen. FullHD TVs were getting affordable and I wasn't sad to see our tiny 20-something-inch TV go. I eventually ended up with the LG flatscreen a few years later when I moved into my first apartment and my mom got something better. 

10 January, 2022 05:00AM by Louis-Philippe Véronneau

January 09, 2022

hackergotchi for Daniel Pocock

Daniel Pocock

Novak Djokovic & Codes of Conduct

Watching the spectacle between Novak Djokovic and Australian bureaucracy made me feel quite bad about the country where I grew up.

The facts are simple:

  • Just as there are too many Codes of Conduct in the free software world today, there are now too many sets of rules about Covid. Novak had to contemplate rules from the tennis federation, rules from the state government, rules from the federal government, rules from each airline and rules from health departments, both federal and state. It is a mess.
  • Athletes were given official documents telling them they could come to Australia if they had recently recovered from a prior Covid infection.
  • Novak Djokovic was completely honest with the Australian authorities. Everybody knows he is anti-vax. He also told the authorities he had Covid in December. People who tell the truth deserve to be treated with respect, even if we don't agree with them.

Court documents show that Australian Border Force officials tried to pressure Novak into accepting deportation after his 25 hour journey. They denied him access to lawyers and documents. Let us put that in context: Victoria Police do an excellent job promoting safety on the roads in our state. One of their campaigns tells us that lack of sleep is equivalent to intoxication and drugs. Therefore, if the border police ask for a traveller to give consent to a serious topic like deportation after 25 hours without proper sleep, it is not real consent. Their insistence is on par with date rape.

The Tampa affair in 2001 was just a few weeks before an election. The next election in Australia has to be between February and May. Novak Djokovic is the new Tampa. Around the world, the incarceration of Novak has provoked ridicule and anger at Australia's apartheid-like immigration system. Yet in Australia, the Government is hoping to win votes by bullying a foreign athlete.

Humiliating foreign athletes in competition is acceptable. In cricket, it is fairly common. Humiliating visiting athletes at the border is not acceptable.

When we tried to travel to Australia, we found ourselves dealing with officials of the Australian Border Force who can't even spell their job title or the name of their agency. Reading their forms and their questions, I felt betrayed and I felt ashamed to be an Australian. Contemplating the way people have died waiting for their visa, I felt like I was dealing with Nazis. How can such people be competent to make decisions about careers, families, Covid and the world's best tennis player?

I attach some examples:

From: QLD PP Processing <qld.pp.processing@immi.gov.au>
Sent: 3 September 201x

Please see the attached information.

We prefer contact with this office concerning your application to be by email. We try to respond to all email enquiries within seven (7) working days. If you do not have access to email or need to contact us urgently, refer to the details below.

Yours sincerely


Position Number: 60022884

Partner Permanet

Permanent Partner Processing Centre - Queensland

Department of Immigration and Border Protection

and another example:

From: QLD PP Processing <qld.pp.processing@immi.gov.au> Sent: 23 September 201x

Thank you for your e-mail responding to the letter I sent you via e-mail on the 03/09/2014. Please be advised that under Australian migration law I am required to assess your application against the legal criteria in regulation ...

Partner Permanent
Department of Immigration and Boarder Protection
Telephone: (07) 3136 7239
Email: [redacted]

Please note that I only work Tuesday, Wednesday and Thursday’s.

There is a video of a candidate in one previous election, boasting about his love for apartheid. In a four minute interview, he can't remember any of his policies, except the harassment of immigrants.

The quarantine hotel where Novak is imprisoned is in the middle of the University precinct.

I studied computer science and engineering in buildings barely 200 meters away from Novak's prison; I walked past that hotel almost every day.

Novak has conjured an anti-vax mob in the street barely 500 meters from the Doherty Institute. That was the first lab in the world to cultivate Covid and sequence the genome outside China. Their brilliance in health is on par with Novak's brilliance in tennis.

Novak is a leader in sport and Australians have great respect for that. The best leaders are willing to listen to all sides of the story. While Novak is in this unique corner of Melbourne, I hope he will take the time to seek the opinion and advice of world leaders on pandemics and vaccination. In equal measure, I hope to see Novak playing in the tournament without further excuses from the Boarder Force officials.

Novak Djokovic

09 January, 2022 09:15PM

Russell Coker

Video Conferencing (LCA)

I’ve just done a tech check for my LCA lecture. I had initially planned to do what I had done before: use my phone for recording audio and video and my PC for everything else. The problem is that I wanted an external microphone, and plugging a USB microphone into the phone turned off its speaker (it seemed to direct audio to a non-existent USB audio output). I tried using Bluetooth headphones with the USB microphone and that didn’t work. Eventually a viable option seemed to be using USB headphones on my PC with the phone for camera and microphone. Then it turned out that my phone (Huawei Mate 10 Pro) didn’t support resolutions higher than VGA in Chrome (it didn’t have the “advanced” settings menu to select resolution); this is probably an issue of Android build features. So the best option is to use a webcam on the PC. I was recommended a Logitech C922, but Officeworks only has a Logitech C920, which is apparently OK.

The free connection test from freeconference.com [1] is good for testing out how your browser works for videoconferencing. It tests each feature separately and is easy to run.

After buying the C920 webcam I found that it sometimes worked and sometimes caused a kernel panic like the following (partial panic log included for the benefit of people Googling this Logitech C920 problem):

[95457.805417] BUG: kernel NULL pointer dereference, address: 0000000000000000
[95457.805424] #PF: supervisor read access in kernel mode
[95457.805426] #PF: error_code(0x0000) - not-present page
[95457.805429] PGD 0 P4D 0 
[95457.805431] Oops: 0000 [#1] SMP PTI
[95457.805435] CPU: 2 PID: 75486 Comm: v4l2src0:src Not tainted 5.15.0-2-amd64 #1  Debian 5.15.5-2
[95457.805438] Hardware name: HP ProLiant ML110 Gen9/ProLiant ML110 Gen9, BIOS P99 02/17/2017
[95457.805440] RIP: 0010:usb_ifnum_to_if+0x3a/0x50 [usbcore]
[95457.805481] Call Trace:
[95457.805485]  usb_hcd_alloc_bandwidth+0x23d/0x360 [usbcore]
[95457.805507]  usb_set_interface+0x127/0x350 [usbcore]
[95457.805525]  uvc_video_start_transfer+0x19c/0x4f0 [uvcvideo]
[95457.805532]  uvc_video_start_streaming+0x7b/0xd0 [uvcvideo]
[95457.805538]  uvc_start_streaming+0x2d/0xf0 [uvcvideo]
[95457.805543]  vb2_start_streaming+0x63/0x100 [videobuf2_common]
[95457.805550]  vb2_core_streamon+0x54/0xb0 [videobuf2_common]
[95457.805555]  uvc_queue_streamon+0x2a/0x40 [uvcvideo]
[95457.805560]  uvc_ioctl_streamon+0x3a/0x60 [uvcvideo]
[95457.805566]  __video_do_ioctl+0x39b/0x3d0 [videodev]

It turns out that Ubuntu Launchpad bug #1827452 has great information on this problem [2]. Apparently if the device decides it doesn’t have enough power then it will reconnect and get a different USB bus device number and this often happens when the kernel is initialising it. There’s a race condition in the kernel code in which the code to initialise the device won’t realise that the device has been detached and will dereference a NULL pointer and then mess up other things in USB device management. The end result for me is that all USB devices become unusable in this situation, commands like “lsusb” hang, and a regular shutdown/reboot hangs because it can’t kill the user session because something is blocked on USB.

One of the comments on the Launchpad bug is that a powered USB hub can alleviate the problem while a USB extension cable (which I had been using) can exacerbate it. Officeworks currently advertises only one powered USB hub; it’s described as “USB 3” but also “maximum speed 480 Mbps” (USB 2 speed). So basically they are selling a USB 2 hub for four times the price that USB 2 hubs used to sell for.

When debugging this I used the “cheese” webcam utility program and ran it in a KVM virtual machine. The KVM parameters “-device qemu-xhci -usb -device usb-host,hostbus=1,hostaddr=2” (where 1 and 2 are replaced by the Bus and Device numbers from “lsusb”) allow the USB device to be passed through to the VM. Doing this meant that I didn’t have to reboot my PC every time a webcam test failed.
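As a side note, the bus and device numbers for that usb-host option can be pulled out of lsusb output mechanically. This is just a convenience sketch: the lsusb line is hardcoded here for illustration (046d:082d is the ID a C920 normally reports), and in real use you would pipe lsusb through grep instead of echoing a sample line.

```shell
# Derive hostbus/hostaddr for the qemu usb-host option from an lsusb line.
# Sample line hardcoded for illustration; really you would do:
#   line=$(lsusb | grep 046d:)
line="Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920"
bus=$(echo "$line" | awk '{print $2 + 0}')   # "001" -> 1
dev=$(echo "$line" | awk '{print $4 + 0}')   # "002:" -> 2
echo "-device qemu-xhci -usb -device usb-host,hostbus=$bus,hostaddr=$dev"
```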

For audio I’m using the Sades Wand gaming headset I wrote about previously [3].

09 January, 2022 07:20AM by etbe

François Marier

Removing an alias/domain from a Let's Encrypt certificate managed by certbot

I recently got an error during a certbot renewal:

Challenge failed for domain echo.fmarier.org
Failed to renew certificate jabber-gw.fmarier.org with error: Some challenges have failed.
The following renewals failed:
  /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem (failure)
1 renew failure(s), 0 parse failure(s)

due to the fact that I had removed the DNS entry for echo.fmarier.org.

I tried to find a way to remove that name from the certificate before renewing it, but it seems like the only way to do it is to create a new certificate without that alternative name.

First, I looked for the domains included in the certificate:

$ certbot certificates
  Certificate Name: jabber-gw.fmarier.org
    Serial Number: 31485424904a33fb2ab43ab174b4b146512
    Key Type: RSA
    Domains: jabber-gw.fmarier.org echo.fmarier.org fmarier.org
    Expiry Date: 2022-01-04 05:28:57+00:00 (VALID: 29 days)
    Certificate Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/fullchain.pem
    Private Key Path: /etc/letsencrypt/live/jabber-gw.fmarier.org/privkey.pem

Then, deleted the existing certificate:

$ certbot delete jabber-gw.fmarier.org

and finally created a new certificate with all other names except for the obsolete one:

$ certbot certonly -d jabber-gw.fmarier.org -d fmarier.org --duplicate
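With only two remaining names this is quick to type out, but when a certificate carries many alternative names, the replacement command can be built from the old domain list instead of retyping it. A small shell sketch (the domain list is hardcoded here from the certbot certificates output above):

```shell
# Build the replacement "certbot certonly" command from the old domain
# list, dropping the obsolete name.
domains="jabber-gw.fmarier.org echo.fmarier.org fmarier.org"
obsolete="echo.fmarier.org"

cmd="certbot certonly --duplicate"
for d in $domains; do
    [ "$d" = "$obsolete" ] || cmd="$cmd -d $d"
done
echo "$cmd"
```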

09 January, 2022 06:03AM

hackergotchi for Matthew Garrett

Matthew Garrett

Pluton is not (currently) a threat to software freedom

At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on page 12 and 13 of this slidedeck, suggest that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)

comment count unavailable comments

09 January, 2022 12:59AM

January 08, 2022

hackergotchi for Jonathan Dowland

Jonathan Dowland

2021 in Fiction

Cover for *This is How You Lose the Time War*
Cover for *Robot*
Cover for *The Glass Hotel*

Following on from last year's round-up of my reading, here's a look at the fiction I enjoyed in 2021.

I managed to read 42 books in 2021, up from 31 last year. That's partly to do with buying an ereader: 33% of my reading by pages (36% by books) was ebooks. I think this demonstrates that ebooks have mostly complemented paper books for me, rather than replacing them.

My book of the year (although it was published in 2019) was This is How You Lose the Time War by Amal El-Mohtar and Max Gladstone: A short epistolary love story between warring time travellers and quite unlike anything else I've read for a long time. Other notables were The Glass Hotel by Emily St John Mandel and Robot by Adam Wiśniewski-Snerg.

The biggest disappointment for me was The Ministry for the Future by Kim Stanley Robinson (KSR), which I haven't even finished. I love KSR's writing: I've written about him many times on this blog, at least in 2002, 2006 and 2009, and I think I've read every other novel he's published and most of his short stories. But this one was too much of something for me. He's described this novel as the end-point of a particular journey and approach to writing he's taken, which I felt relieved to learn; assuming he writes any more novels (and I really hope that he does), they will likely be in a different "mode".

My "new author discovery" for 2021 was Chris Beckett: I tore through Two Tribes and America City before promptly buying all his other work. He fits roughly into the same bracket as Adam Roberts and Christopher Priest, two of my other favourite authors.

5 of the books I read (12%) were from my "backlog" of already-purchased physical books. I'd like to try and reduce my backlog further, so I hope to push this figure up next year.

I made a small effort to read more diverse authors this year. 24% of the books I read (by book count and page count) were by women. 15% by page count were (loosely) BAME (19% by book count). Again I'd like to increase these numbers modestly in 2022.

Unlike 2020, I didn't complete any short story collections in 2021! This is partly because there was only one issue of Interzone published in all of 2021, a double-issue which I haven't yet finished. This is probably a sad data point in terms of Interzone's continued existence, but it's not dead yet.

08 January, 2022 09:32PM

John Goerzen

Make the Internet Yours Again With an Instant Mesh Network

I’m going to lead with the technical punch line, and then explain it:

Yggdrasil Network is an opportunistic mesh that can be deployed privately or as part of a global-scale network. Each node gets a stable IPv6 address (or even an entire /64) that is derived from its public key and is bound to that node as long as the node wants it (of course, it can generate a new keypair anytime) and is valid wherever the node joins the mesh. All traffic is end-to-end encrypted.

Yggdrasil will automatically discover peers on a LAN via broadcast beacons, and requires zero configuration to peer in such a way. It can also run as an overlay network atop the public Internet. Public peers serve as places to join the global network, and since it’s a mesh, if one device on your LAN joins the global network, the others will automatically have visibility on it also, thanks to the mesh routing.

It neatly solves a lot of problems of portability (my ssh sessions stay live as I move networks, for instance), VPN (incoming ports aren’t required since local nodes can connect to a public peer via an outbound connection), security, and so forth.

Now on to the explanation:

The Tyranny of IP rigidity

Every device on the Internet, at one time, had its own globally-unique IP address. This number was its identifier to the world; with an IP address, you can connect to any machine anywhere. Even now, when you connect to a computer to download a webpage or send a message, under the hood, your computer is talking to the other one by IP address.

Only, now it’s hard to get one. The Internet protocol we all grew up with, version 4 (IPv4), didn’t have enough addresses for the explosive growth we’ve seen. Internet providers and IT departments had to use a trick called NAT (Network Address Translation) to give you a sort of fake IP address, so they could put hundreds or thousands of devices behind a single public one. That, plus the mobility of devices — changing IPs whenever they change locations — has meant that a fundamental rule of the old Internet is now broken:

Every participant is an equal peer. (Well, not any more.)

Nowadays, you can’t host your own website from your phone. Or share files from your house. (Without, that is, the use of some third-party service that locks you down and acts as an intermediary.)

Back in the 90s, I worked at a university, and I, like every other employee, had a PC on my desk with an unfirewalled public IP. I installed a webserver, and poof – instant website. Nowadays, running a website from home is just about impossible. You may not have a public IP, and if you do, it likely changes from time to time. And even then, your ISP probably blocks you from running servers on it.

In short, you have to buy your way into the resources to participate on the Internet.

I wrote about these problems in more detail in my article Recovering Our Lost Free Will Online.

Enter Yggdrasil

I already gave away the punch line at the top. But what does all that mean?

  • Every device that participates gets an IP address that is fully live on the Yggdrasil network.
  • You can host a website, or a mail server, or whatever you like with your Yggdrasil IP.
  • Encryption and authentication are smaller (though not nonexistent) worries thanks to the built-in end-to-end encryption.
  • You can travel the globe, and your IP will follow you: onto a plane, from continent to continent, wherever. Yggdrasil will find you.
  • I’ve set up /etc/hosts on my laptop to use the Yggdrasil IPs for other machines on my LAN. Now I can just “ssh foo” and it will work — from home, from a coffee shop, from a 4G tether, wherever. Now, other tools like tinc can do this, obviously. And I could stop there; I could have a completely closed, private Yggdrasil network.

Or, I can join the global Yggdrasil network. Each device, in addition to accepting peers it finds on the LAN, can also be configured to establish outbound peering connections or accept inbound ones over the Internet. Put a public peer or two in your configuration and you’ve joined the global network. Most people will probably want to do that on every device (because why not?), but you could also do that from just one device on your LAN. Again, there’s no need to explicitly build routes via it; your other machines on the LAN will discover the route’s existence and use it.

This is one of many projects that are working to democratize and decentralize the Internet. So far, it has been quite successful, growing to over 2000 nodes. It is the direct successor to the earlier cjdns/Hyperboria and BATMAN networks, and aims to be a proof of concept and a viable tool for global expansion.

Finally, think about how much easier development is when you don’t have to necessarily worry about TLS complexity in every single application. When you don’t have to worry about port forwarding and firewall penetration. It’s what the Internet should be.
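To make the “public peer or two in your configuration” step concrete, here is a minimal sketch of the relevant part of /etc/yggdrasil.conf (the config format is HJSON, so comments are allowed). The peer address below is a placeholder, not a real peer; pick real ones from the project’s public peers list.

```
{
  Peers: [
    # Placeholder address; substitute a real public peer near you.
    "tls://peer.example.net:443"
  ]
  # LAN peers are discovered automatically via multicast;
  # no configuration is needed for that.
}
```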

08 January, 2022 03:57AM by John Goerzen

January 07, 2022

Ingo Juergensmann

Moving my repositories from Github to Codeberg.org

Some weeks ago I moved my repositories from Github (evil, Microsoft, blabla) to Codeberg. Codeberg is a non-profit organisation located in Germany. When you really dislike Microsoft products, it is a somewhat natural reaction (at least for me) to move away from Github, which was bought by Microsoft, to a more independent service provider for hosting source code. A nice thing about Codeberg is that it also offers a migration tool from Github to Codeberg. Additionally, Codeberg is on Mastodon as well. If you are looking for a good service to host your git repositories and want to move away from Github, please give Codeberg a try.

So, please update your git remotes from https://github.com/ingoj to https://codeberg.org/Windfluechter (or the specific repo).

07 January, 2022 10:50PM by ij

January 06, 2022

Jacob Adams

Linux Hibernation Documentation

Recently I’ve been curious about how hibernation works on Linux, as it’s an interesting interaction between hardware and software. There are some notes in the Arch wiki and the kernel documentation (as well as some kernel documentation on debugging hibernation and on sleep states more generally), and of course the ACPI Specification.

The Formal Definition

ACPI (Advanced Configuration and Power Interface) is, according to the spec, “an architecture-independent power management and configuration framework that forms a subsystem within the host OS” which defines “a hardware register set to define power states.”

ACPI defines four global system states: G0 (working/on), G1 (sleeping), G2 (soft off), and G3 (mechanical off)1. Within G1 there are 4 sleep states, numbered S1 through S4. There are also S0 and S5, which are equivalent to G0 and G2 respectively2.


According to the spec, the ACPI S1-S4 states all do the same thing from the operating system’s perspective, but each saves progressively more power, so the operating system is expected to pick the deepest of these states when entering sleep. However, most operating systems3 distinguish between S1-S3, which are typically referred to as sleep or suspend, and S4, which is typically referred to as hibernation.

S1: CPU Stop and Cache Wipe

The CPU caches are wiped and then the CPU is stopped, which the spec notes is equivalent to the WBINVD instruction followed by the STPCLK signal on x86. However, nothing is powered off.

S2: Processor Power off

The system stops the processor and most system clocks (except the real time clock), then powers off the processor. Upon waking, the processor will not continue what it was doing before, but instead use its reset vector4.

S3: Suspend/Sleep (Suspend-to-RAM)

Mostly equivalent to S2, but hardware ensures that only memory, and whatever other hardware memory requires, remain powered.

S4: Hibernate (Suspend-to-Disk)

In this state, all hardware is completely powered off and an image of the system is written to disk, to be restored from upon reapplying power. Writing the system image to disk can be handled by the operating system if supported, or by the firmware.

Linux Sleep States

Linux has its own set of sleep states which mostly correspond with ACPI states.


Suspend-to-Idle

This is a software-only sleep that puts all hardware into the lowest power state it can, suspends timekeeping, and freezes userspace processes.

All userspace and some kernel threads5, except those tagged with PF_NOFREEZE, are frozen before the system enters a sleep state. Frozen tasks are sent to the __refrigerator(), where they set TASK_UNINTERRUPTIBLE and PF_FROZEN and infinitely loop until PF_FROZEN is unset6.

This prevents these tasks from doing anything during the imaging process. Any userspace process running on a different CPU while the kernel is trying to create a memory image would cause havoc. This is also done because any filesystem changes made during this would be lost and could cause the filesystem and its related in-memory structures to become inconsistent. Also, creating a hibernation image requires about 50% of memory free, so no tasks should be allocating memory, which freezing also prevents.


Standby

This is equivalent to ACPI S1.


Suspend-to-RAM

This is equivalent to ACPI S3.


Hibernation

Hibernation is mostly equivalent to ACPI S4 but does not require S4, only requiring “low-level code for resuming the system to be present for the underlying CPU architecture” according to the Linux sleep state docs.

    To hibernate, everything is stopped and the kernel takes a snapshot of memory. Then, the system writes out the memory image to disk. Finally, the system either enters S4 or turns off completely.

    When the system restores power it boots a new kernel, which looks for a hibernation image and loads it into memory. It then overwrites itself with the hibernation image and jumps to a resume area of the original kernel7. The resumed kernel restores the system to its previous state and resumes all processes.

    Hybrid Suspend

    Hybrid suspend does not correspond to an official ACPI state, but instead is effectively a combination of S3 and S4. The system writes out a hibernation image, but then enters suspend-to-RAM. If the system wakes up from suspend it will discard the hibernation image, but if the system loses power it can safely restore from the hibernation image.
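    On Linux, these states are exposed through the /sys/power interface: writing "suspend" to /sys/power/disk selects the hybrid behaviour, and writing "disk" to /sys/power/state then enters it. A minimal sketch, assuming a kernel with hibernation support and root privileges (the directory is parameterised so the sketch can be exercised safely):

```python
# Requesting hybrid suspend via /sys/power (root required on a real system).
from pathlib import Path

def hybrid_suspend(power_dir="/sys/power"):
    p = Path(power_dir)
    (p / "disk").write_text("suspend\n")   # hibernation image + suspend-to-RAM
    (p / "state").write_text("disk\n")     # enter the selected disk state
```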

    1. The difference between soft and mechanical off is that mechanical off is “entered and left by a mechanical means (for example, turning off the system’s power through the movement of a large red switch)” 

    2. It’s unclear to me why G and S states overlap like this. I assume this is a relic of an older spec that only had S states, but I have not as yet found any evidence of this. If someone has any information on this, please let me know and I’ll update this footnote. 

    3. Of the operating systems I know of that support ACPI sleep states (I checked Windows, Mac, Linux, and the three BSDs8), only macOS does not allow the user to deliberately enable hibernation, instead supporting a hybrid suspend it calls Safe Sleep. 

    4. “The reset vector of a processor is the default location where, upon a reset, the processor will go to find the first instruction to execute. In other words, the reset vector is a pointer or address where the processor should always begin its execution. This first instruction typically branches to the system initialization code.” Xiaocong Fan, Real-Time Embedded Systems, 2015 

    5. All kernel threads are tagged with PF_NOFREEZE by default, so they must specifically opt-in to task freezing. 

    6. This is not from the docs, but from kernel/freezer.c which also notes “Refrigerator is place where frozen processes are stored :-).” 

    7. This is the operation that requires “special architecture-specific low-level code”. 

    8. Interestingly, NetBSD has a setting to enable hibernation, but does not actually support hibernation. 

    06 January, 2022 12:00AM

    January 05, 2022

    Reproducible Builds

    Reproducible Builds in December 2021

    Welcome to the December 2021 report from the Reproducible Builds project! In these reports, we try and summarise what we have been up to over the past month, as well as what else has been occurring in the world of software supply-chain security.

    As a quick recap of what reproducible builds is trying to address, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. As always, if you would like to contribute to the project, please get in touch with us directly or visit the Contribute page on our website.
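    The promise can be stated very concretely: two independent builds are “reproducible” if their artifacts are bit-for-bit identical, which any third party can verify by comparing cryptographic hashes. A toy illustration (the byte strings stand in for real build outputs):

```python
# Toy statement of the reproducibility check: identical inputs and build
# environment should yield artifacts with identical digests.
import hashlib

def artifact_digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

build_one = b"output from builder one"
build_two = b"output from builder one"   # a second, independent builder
assert artifact_digest(build_one) == artifact_digest(build_two)
```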

    Early in December, Julien Voisin blogged about setting up a rebuilderd instance in order to reproduce Tails images. Building on previous work from 2018, Julien has now set up a public-facing instance which is providing build attestations.

    As Julien dryly notes in his post, “Currently, this isn’t really super-useful to anyone, except maybe some Tails developers who want to check that the release manager didn’t backdoor the released image.” Naturally, we would contend — sincerely — that this is indeed useful.

    The secure/anonymous Tor project now supports reproducible source releases. According to the project’s changelog, Tor can now build reproducible tarballs via the make dist-reprod command. This issue was tracked via Tor issue #26299.

    Fabian Keil posted a question to our mailing list this month asking how they might analyse differences in images produced with the FreeBSD and ElectroBSD’s mkimg and makefs commands:

    After rebasing ElectroBSD from FreeBSD stable/11 to stable/12
    I recently noticed that the "memstick" images are unfortunately
    still not 100% reproducible.

    Fabian’s original post generated a short back-and-forth with Chris Lamb regarding how diffoscope might be able to support the particular format of images generated by this command set.


    diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 195, 196, 197 and 198 to Debian, as well as making the following changes:

    • Support showing Ordering differences only within .dsc field values. []
    • Add support for ‘XMLb’ files. []
    • Also add, for example, /usr/lib/x86_64-linux-gnu to our local binary search path. []
    • Support OCaml versions 4.11, 4.12 and 4.13. []
    • Drop some unnecessary has_same_content_as logging calls. []
    • Replace token variable with an anonymously-named variable instead to remove extra lines. []
    • Don’t use the runtime platform’s native endianness when unpacking .pyc files. This fixes test failures on big-endian machines. []

    Mattia Rizzolo also made a number of changes to diffoscope this month as well, such as:

    • Also recognize GnuCash files as XML. []
    • Support the pgpdump PGP packet visualiser version 0.34. []
    • Ignore the new Lintian tag binary-with-bad-dynamic-table. []
    • Fix the Enhances field in debian/control. []

    Finally, Brent Spillner fixed the version detection for Black ‘uncompromising code formatter’ [], Jelle van der Waa added an external tool reference for Arch Linux [] and Roland Clobus added support for reporting when the GNU_BUILD_ID field has been modified []. Thank you for your contributions!

    Distribution work

    In Debian this month, 70 reviews of packages were added, 27 were updated and 41 were removed, adding to our database of knowledge about specific issues. A number of new issue types were created as well.

    strip-nondeterminism version 1.13.0-1 was uploaded to Debian unstable by Holger Levsen. It included contributions already covered in previous months as well as new ones from Mattia Rizzolo, particularly that the dh_strip_nondeterminism Debian integration interface uses the new get_non_binnmu_date_epoch() utility when available: this is important to ensure that strip-nondeterminism does not break some kinds of binNMUs.

    In the world of openSUSE, Bernhard M. Wiedemann posted his monthly reproducible builds status report.

    In NixOS, work towards the longer-term goal of making the graphical installation image reproducible is ongoing. For example, Artturin made the gnome-desktop package reproducible.

    Upstream patches

    The Reproducible Builds project attempts to fix as many currently-unreproducible packages as possible. In December, we wrote a large number of such patches.

    Testing framework

    The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

    • Holger Levsen:

      • Run the Debian scheduler less often. []
      • Fix the name of the Debian ‘testing’ suite name. []
      • Detect builds that are rescheduling due to problems with the diffoscope container. []
      • No longer special-case particular machines having a different /boot partition size. []
      • Automatically fix failed apt-daily and apt-daily-upgrade services [], failed e2scrub_all.service & user@ systemd units [][] as well as ‘generic’ build failures [].
      • Simplify a script to powercycle arm64 architecture nodes hosted at/by codethink.co.uk. []
      • Detect if the udd-mirror.debian.net service is down. []
      • Various miscellaneous node maintenance. [][]
    • Roland Clobus (Debian ‘live’ image generation):

      • If the latest snapshot is not complete yet, try to use the previous snapshot instead. []
      • Minor: whitespace correction + comment correction. []
      • Use unique folders and reports for each Debian version. []
      • Turn off debugging. []
      • Add a better error description for incorrect/missing arguments. []
      • Report non-reproducible issues in Debian sid images. []

    Lastly, Mattia Rizzolo updated the automatic logfile parsing rules in a number of ways (eg. to ignore a warning about the Python setuptools deprecation) [][] and Vagrant Cascadian adjusted the config for the Squid caching proxy on a node. []

    If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website, or get in touch with us directly.

    05 January, 2022 02:44PM

    hackergotchi for Thomas Lange

    Thomas Lange

    FAI.me service now supports backports for Debian 11 (bullseye)

    The FAI.me service for creating customized installation and cloud images now supports a backports kernel for the stable release Debian 11 (aka bullseye). If you enable the backports option, you will currently get kernel 5.14. This will help you if you have newer hardware that is not supported by the default kernel 5.10. The backports option is also still available for images using the old Debian 10 (buster) release.

    The URL of the FAI.me service is



    05 January, 2022 11:49AM

    January 04, 2022

    Russell Coker

    Terrorists Inspired by Fiction

    The Tom Clancy book Debt of Honor published in August 1994 first introduced the concept of a heavy passenger aircraft being used as a weapon by terrorists against a well-defended building. In April 1994 there was an attempt to hijack and deliberately crash FedEx flight 705. It’s possible for a book to be changed 4 months before publication, but it seems unlikely that a significant plot point in a series of books was changed in such a short amount of time, so it’s likely that Tom Clancy got the idea first. There have been other variations on that theme, such as the Yokosuka MXY-7 kamikaze flying bomb (known to the Allies as “Baka”, which is Japanese for idiot). But Tom Clancy seemed to pioneer the idea of a commercial passenger jet being subverted for the purpose of ground attack.

    7 years after Tom Clancy’s book was published, the 9/11 hijackings happened.

    The TV series Black Mirror first aired in 2011, and the first episode was about terrorists kidnapping a princess and demanding that the UK PM perform an indecent act with a pig for her release. While the plot was a little extreme (the entire series is extreme) the basic concept of sexual extortion based on terrorist acts is something that could be done in real life, and if terrorists were inspired by this they are taking longer than expected to do it.

    Most democracies seem to end up with two major parties that are closely matched. Even if a government was strict about not negotiating with terrorists it seems likely that terrorists demanding that a politician perform an unusual sex act on TV would change things, supporters would be divided into groups that support and oppose negotiating. Discussions wouldn’t be as civil as when the negotiation involves money or freeing prisoners. If an election result was perceived to have been influenced by such terrorism then supporters of the side that lost would claim it to be unfair and reject the result. If the goal of terrorists was to cause chaos then that would be one way of achieving it, and they have had over 10 years to consider this possibility.

    Are we overdue for a terror attack inspired by Black Mirror?

    04 January, 2022 11:00PM by etbe

    Jelmer Vernooij

    Personal Streaming Audio Server

    For a while now, I’ve been looking for a good way to stream music from my home music collection on my phone.

    There are quite a few options for music servers that support streaming. However, Android apps that can stream music from one of those servers tend to be unmaintained, clunky or slow (or more than one of those).

    It is possible to use something that runs in a web server, but that means no offline caching - which can be quite convenient in spots without connectivity, such as the Underground or other random bits of London with poor cell coverage.


    Most music servers today support some form of the subsonic API.
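    For reference, a Subsonic API request is just an HTTP GET with a handful of query parameters. A sketch of the ping endpoint (host and credentials are placeholders; real clients normally use the API’s salted-token scheme rather than a plain password):

```python
# Building a Subsonic "ping" URL; u/p/v/c/f are the standard API parameters.
import urllib.parse

def ping_url(base, user, password, client="example-client", api="1.16.1"):
    query = urllib.parse.urlencode(
        {"u": user, "p": password, "v": api, "c": client, "f": "json"})
    return f"{base}/rest/ping.view?{query}"

print(ping_url("https://music.example.com", "alice", "secret"))
```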

    I’ve tried a couple, with mixed results:

    • supysonic; Python. Slow. Ran into some issues with subsonic clients. No real web UI.
    • gonic; Go. Works well & fast enough. Minimal web UI, i.e. no ability to play music from a browser.
    • airsonic; Java. Last in a chain of (abandoned) forks. More effort to get to work, and resource intensive.

    Eventually, I’ve settled on Navidrome. It’s got a couple of things going for it:

    • Good subsonic implementation that worked with all the Android apps I used it with.
    • Great Web UI for use in a browser

    I run Navidrome in Kubernetes. It’s surprisingly easy to get going. Here’s the deployment I’m using:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: navidrome
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: navidrome
      template:
        metadata:
          labels:
            app: navidrome
        spec:
          containers:
            - name: navidrome
              image: deluan/navidrome:latest
              imagePullPolicy: Always
              resources:
                limits:
                  cpu: ".5"
                  memory: "2Gi"
                requests:
                  cpu: "0.1"
                  memory: "10M"
              ports:
                - containerPort: 4533
              volumeMounts:
                - name: navidrome-data-volume
                  mountPath: /data
                - name: navidrome-music-volume
                  mountPath: /music
              env:
                - name: ND_SCANSCHEDULE
                  value: 1h
                - name: ND_LOGLEVEL
                  value: info
                - name: ND_SESSIONTIMEOUT
                  value: 24h
                - name: ND_BASEURL
                  value: /navidrome
              livenessProbe:
                httpGet:
                  path: /navidrome/app
                  port: 4533
                initialDelaySeconds: 30
                periodSeconds: 3
                timeoutSeconds: 90
          volumes:
            - name: navidrome-data-volume
              hostPath:
                path: /srv/navidrome
                type: Directory
            - name: navidrome-music-volume
              hostPath:
                path: /srv/media/music
                type: Directory
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: navidrome
    spec:
      ports:
        - port: 4533
          name: web
      selector:
        app: navidrome
      type: ClusterIP

    At the moment, this deployment is still tied to the machine with my music on it since it relies on hostPath volumes, but I’m planning to move that to ceph in the future.

    I then expose this service on /navidrome on my private domain (here replaced with example.com) using an Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: navidrome
    spec:
      ingressClassName: nginx
      rules:
        - host: example.com
          http:
            paths:
              - backend:
                  service:
                    name: navidrome
                    port:
                      name: web
                path: /navidrome(/|$)(.*)
                pathType: Prefix


    On the desktop, I usually just use navidrome’s web interface. Clementine’s support for subsonic is also okay. sublime-music is meant to be a music player specifically for Subsonic, but I’ve not really found it stable enough for day-to-day usage.

    There are various Android clients for Subsonic, but I’ve only really considered the Open Source ones that are hosted on F-Droid. Most of those are abandoned, but D-Sub works pretty well - as does my preferred option, Subtracks.

    04 January, 2022 06:00PM by Jelmer Vernooij

    hackergotchi for Jonathan McDowell

    Jonathan McDowell

    Upgrading from a CC2531 to a CC2538 Zigbee coordinator

    Previously I set up a CC2531 as a Zigbee coordinator for my home automation. This has turned out to be a good move, with the 4 gang wireless switch being particularly useful. However the range of the CC2531 is fairly poor; it has a simple PCB antenna. It’s also a very basic device. I set about trying to improve the range and scalability and settled upon a CC2538 + CC2592 device, which features an MMCX antenna connector. This device also has the advantage that it’s ARM based, which I’m hopeful means I might be able to build some firmware myself using a standard GCC toolchain.

    For now I fetched the JetHome firmware from https://github.com/jethome-ru/zigbee-firmware/tree/master/ti/coordinator/cc2538_cc2592 (JH_2538_2592_ZNP_UART_20211222.hex) - while it’s possible to do USB directly with the CC2538 my board doesn’t have those bits so going the external USB UART route is easier.

    The device had some existing firmware on it, so I needed to erase this to force a drop into the boot loader. That means soldering up the JTAG pins and hooking it up to my Bus Pirate for OpenOCD goodness.

    OpenOCD config
    source [find interface/buspirate.cfg]
    buspirate_port /dev/ttyUSB1
    buspirate_mode normal
    buspirate_vreg 1
    buspirate_pullup 0
    transport select jtag
    source [find target/cc2538.cfg]
    Steps to erase
    $ telnet localhost 4444
    Trying ::1...
    Connected to localhost.
    Escape character is '^]'.
    Open On-Chip Debugger
    > mww 0x400D300C 0x7F800
    > mww 0x400D3008 0x0205
    > shutdown
    shutdown command invoked
    Connection closed by foreign host.

    At that point I can switch to the UART connection (on PA0 + PA1) and flash using cc2538-bsl:

    $ git clone https://github.com/JelmerT/cc2538-bsl.git
    $ cc2538-bsl/cc2538-bsl.py -p /dev/ttyUSB1 -e -w -v ~/JH_2538_2592_ZNP_UART_20211222.hex
    Opening port /dev/ttyUSB1, baud 500000
    Reading data from /home/noodles/JH_2538_2592_ZNP_UART_20211222.hex
    Firmware file: Intel Hex
    Connecting to target...
    CC2538 PG2.0: 512KB Flash, 32KB SRAM, CCFG at 0x0027FFD4
    Primary IEEE Address: 00:12:4B:00:22:22:22:22
        Performing mass erase
    Erasing 524288 bytes starting at address 0x00200000
        Erase done
    Writing 524256 bytes starting at address 0x00200000
    Write 232 bytes at 0x0027FEF88
        Write done
    Verifying by comparing CRC32 calculations.
        Verified (match: 0x74f2b0a1)

    I then wanted to migrate from the old device to the new without having to repair everything. So I shut down Home Assistant and backed up the CC2531 network information using zigpy-znp (which is already installed for Home Assistant):

    python3 -m zigpy_znp.tools.network_backup /dev/zigbee > cc2531-network.json

    I copied the backup to cc2538-network.json and modified the coordinator_ieee to be the new device’s MAC address (rather than end up with 2 devices claiming the same MAC if/when I reuse the CC2531) and did:
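    That edit can also be scripted. A sketch, assuming the backup is a single JSON document with a top-level coordinator_ieee field, which is what the zigpy-znp backup here looks like:

```python
# Copy a zigpy-znp network backup, swapping in the new coordinator's IEEE
# (MAC) address so the old and new sticks never claim the same address.
import json

def retarget_backup(src, dst, new_ieee):
    with open(src) as f:
        backup = json.load(f)
    backup["coordinator_ieee"] = new_ieee
    with open(dst, "w") as f:
        json.dump(backup, f, indent=2)
```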

    python3 -m zigpy_znp.tools.network_restore --input cc2538-network.json /dev/ttyUSB1

    The old CC2531 needed to be unplugged first, otherwise I got the following error: RuntimeError: Network formation refused, RF environment is likely too noisy. Temporarily unscrew the antenna or shield the coordinator with metal until a network is formed.

    After that I updated my udev rules to map the CC2538 to /dev/zigbee and restarted Home Assistant. To my surprise it came up and detected the existing devices without any extra effort on my part. However that resulted in 2 coordinators being shown in the visualisation, with the old one turning up as unk_manufacturer. Fixing that involved editing /etc/homeassistant/.storage/core.device_registry and removing the entry which had the old MAC address, removing the device entry in /etc/homeassistant/.storage/zha.storage for the old MAC and then finally firing up sqlite to modify the Zigbee database:

    $ sqlite3 /etc/homeassistant/zigbee.db
    SQLite version 3.34.1 2021-01-20 14:10:07
    Enter ".help" for usage hints.
    sqlite> DELETE FROM devices_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM endpoints_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM in_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM neighbors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11' OR device_ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM node_descriptors_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> DELETE FROM out_clusters_v6 WHERE ieee = '00:12:4b:00:11:11:11:11';
    sqlite> .quit
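    Those per-table DELETEs can be wrapped in one small script for next time (a sketch, using the devices_v6 family of table names from the session above):

```python
# Purge one device's rows from the zigbee.db tables shown above.
import sqlite3

TABLES = ["devices_v6", "endpoints_v6", "in_clusters_v6",
          "node_descriptors_v6", "out_clusters_v6"]

def purge_device(db_path, ieee):
    con = sqlite3.connect(db_path)
    with con:  # one transaction for all the deletes
        for table in TABLES:
            con.execute(f"DELETE FROM {table} WHERE ieee = ?", (ieee,))
        con.execute(
            "DELETE FROM neighbors_v6 WHERE ieee = ? OR device_ieee = ?",
            (ieee, ieee))
    con.close()
```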

    So far it all seems a bit happier than with the CC2531; I’ve been able to pair a light bulb that was previously detected but would not integrate, which suggests the range is improved.

    (This post another in the set of “things I should write down so I can just grep my own website when I forget what I did to do foo”.)

    04 January, 2022 03:50PM

    Russell Coker

    Big Smart TVs

    Recently a relative who owned a 50″ Plasma TV asked me for advice on getting a new TV. Looking at the options, all the TVs seem to be smart TVs (running Android with built-in support for YouTube and Netflix) and most of them seem to be 4K resolution. 4K doesn’t provide much benefit now as most people don’t have Blu-ray players and discs, there aren’t a lot of 4K YouTube videos, and most streaming services don’t offer 4K resolution. But as 4K doesn’t cost much more it doesn’t make sense not to get it.

    I gave my relative a list of good options from Kogan (the Australian company that has the cheapest consumer electronics) and they chose a 65″ 4K Smart TV from Kogan. That only cost $709 plus delivery which is reasonably affordable for something that will presumably last for a long time and be used by many people.

    Netflix on a web browser won’t do more than FullHD resolution unless you use Edge on Windows 10. But Netflix on the smart TV has a row advertising 4K shows, which indicates that 4K is supported. There are some 4K videos on YouTube but not a lot at this time.


    It turns out that 65″ is very big. It didn’t fit on the table that had been used for the 50″ Plasma TV.

    Rtings.com has a good article about TV size vs distance [1]. According to their calculations if you want to sit 2 meters away from a TV and have a 30 degree field of view (recommended for “mixed” use) then a 45″ TV is ideal.

    According to their calculations on pixel sizes, if you have a FullHD display (or the common modern case a FullHD signal displayed on a 4K monitor) that is between 1.8 and 2.5 meters away from you then a 45″ TV is the largest that will be useful. To take proper advantage of a monitor larger than 45″ at a distance of 2 meters you need a 4K signal. If you have a 4K signal then you can get best results by having a 45″ monitor less than 1.8 meters away from you. As most TV watching involves less than 3 people it shouldn’t be inconvenient to be less than 1.8 meters away from the TV.
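    The geometry behind those numbers is straightforward trigonometry (a sketch; rtings’ own calculator may round things differently):

```python
# 16:9 screen diagonal that fills a given horizontal field of view at a
# given viewing distance.
import math

def diagonal_inches(distance_m, fov_deg):
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg / 2))
    diagonal_m = width_m * math.hypot(16, 9) / 16   # 16:9 aspect ratio
    return diagonal_m / 0.0254                      # metres to inches

# 30 degrees at 2 metres comes out in the same ballpark as the 45" figure.
print(round(diagonal_inches(2.0, 30)))
```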

    The 65″ TV weighs 21kg according to the specs; that isn’t a huge amount by itself, but for something as large and inconvenient as a 65″ TV it’s impossible for one person to move safely. Kogan sells 43″ TVs that weigh 6kg, which is something most adults could move with one hand. I think that a medium size TV that can be easily moved to a convenient location would probably give an equivalent viewing result to an extremely large TV that can’t be moved at all. I currently have a 40″ LCD TV; the only reason I have that is because a friend didn’t need it, and the previous 32″ TV that I used was adequate for my needs. Most of my TV viewing is on a 28″ monitor, which I find adequate for 2 or 3 people. So I generally wouldn’t recommend a 65″ TV for anyone.

    Android for TVs

    Android wasn’t designed for TVs and doesn’t work that well on them. Having buttons on the remote for Netflix and YouTube is handy, but it would be nice if there were programmable buttons for other commonly used apps or a way to switch between the last few apps (like ALT-TAB on a PC).

    One good feature of Android for TV is that it can display a set of rows of shows (similar to the Netflix method of displaying) where each row is from a different app. The apps I’ve installed on that TV which support the row view are Netflix, YouTube, YouTube Music, ABC iView (that’s Australian ABC), 7plus, 9now, and SBS on Demand. That’s nice, now we just need channel 10’s app to support that to have coverage for all Australian free TV stations in the Android TV interface.


    It’s a nice TV and it generally works well. Android is OK for TV use but far from great. It is running Android version 9, maybe a newer version of Android works better on TVs.

    It’s too large for reasonable people to use in a home. I’ve seen smaller TVs used for 20 people in an office in a video conference. It’s cheap enough that most people can afford it, but it’s easier and more convenient to have something smaller and lighter.

    04 January, 2022 11:37AM by etbe

    January 03, 2022

    Paul Wise

    FLOSS Activities December 2021


    This month I didn't have any particular focus. I just worked on issues in my info bubble.




    • Spam: reported 166 Debian mailing list posts
    • Patches: reviewed libpst upstream patches
    • Debian packages: sponsored nsis, memtest86+
    • Debian wiki: RecentChanges for the month
    • Debian BTS usertags: changes for the month
    • Debian screenshots:


    • libpst: setup GitHub presence, migrate from hg to git, requested details from bug reporters
    • plac: cleaned up git repo anomalies
    • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages: stardict, node-carto
    • Debian wiki: unblock IP addresses, approve accounts


    • Respond to queries from Debian users and contributors on the mailing lists and IRC


    The purple-discord, python-plac, sptag, smart-open, libpst, memtest86+, oci-python-sdk work was sponsored. All other work was done on a volunteer basis.

    03 January, 2022 11:43PM

    Debian Community News

    Albanian women, Brazilian women & Debian Outreachy racism under Chris Lamb

    We previously looked at the vast amounts of money spent on travel for Albanian women to come to DebConf19 in Curitiba and many other events.

    Before DebConf19, Debian tried to organize a warm-up event, MiniDebConf Curitiba in the location that would host DebConf proper.

    Local women had found Chris Lamb so difficult to deal with that they had to start their own crowdfunding campaign to get there. Lamb only had eyes for Albanian women like the woman who won an Outreachy internship.

    Renata blogs about the crowdfunding campaign for five women: Alice, Anna e So, Miriam Retka, Ana Paula and Luciana.


    As a reminder, the Albanian women received free travel and accommodation two years in a row, both DebConf18 (Taiwan) and DebConf19 (Brazil). It looks like Debian prefers the European appearance of Albanian girls. This is an example of racism in diversity funding.

    DebConf18 Albanian women


    DebConf19 same Albanian women funded again


    Women who had to raise their own funds for Curitiba

    This was Chris Lamb's impression after his first visit to Albania:

    Date: 24 May 2017
    From: Chris Lamb

    Just to underline this. It was *extremely* remarkable and commendable that not only did the demographic skew of the organisers about 15-20 years younger than a typical conference, I would wager the gender split was around 70-80% female:male.

    The Albanian open source community is very healthy indeed.

    These are the women from Brazil, they had to raise their own funds for travel. None of them got a seat at the table with Chris Lamb.


    03 January, 2022 10:00PM

    Ian Jackson

    Debian’s approach to Rust - Dependency handling

    tl;dr: Faithfully following upstream semver, in Debian package dependencies, is a bad idea.


    I have been involved in Debian for a very long time. And I’ve been working with Rust for a few years now. Late last year I had cause to try to work on Rust things within Debian.

    When I did, I found it very difficult. The Debian Rust Team were very helpful. However, the workflow and tooling require very large amounts of manual clerical work - work which it is almost impossible to do correctly since the information required does not exist. I had wanted to package a fairly straightforward program I had written in Rust, partly as a learning exercise. But, unfortunately, after I got stuck in, it looked to me like the effort would be wildly greater than I was prepared for, so I gave up.

    Since then I’ve been thinking about what I learned about how Rust is packaged in Debian. I think I can see how to fix some of the problems. Although I don’t want to go charging in and try to tell everyone how to do things, I felt I ought at least to write up my ideas. Hence this blog post, which may become the first of a series.

    This post is going to be about semver handling. I see problems with other aspects of dependency handling and source code management and traceability as well, and of course if my ideas find favour in principle, there are a lot of details that need to be worked out, including some kind of transition plan.

    How Debian packages Rust, and build vs runtime dependencies

    Today I will be discussing almost entirely build-dependencies; Rust doesn’t (yet?) support dynamic linking, so built Rust binaries don’t have Rusty dependencies.

    However, things are a bit confusing because even the Debian “binary” packages for Rust libraries contain pure source code. So for a Rust library package, “building” the Debian binary package from the Debian source package does not involve running the Rust compiler; it’s just file-copying and format conversion. The library’s Rust dependencies do not need to be installed on the “build” machine for this.

    So I’m mostly going to be talking about Depends fields, which are Debian’s way of talking about runtime dependencies, even though they are used only at build-time. The way this works is that some ultimate leaf package (which is supposed to produce actual executable code) Build-Depends on the libraries it needs, and those Depends on their under-libraries, so that everything needed is installed.

    What do dependencies mean and what are they for anyway?

    In systems where packages declare dependencies on other packages, it generally becomes necessary to support “versioned” dependencies. In all but the most simple systems, this involves an ordering (or similar) on version numbers and a way for a package A to specify that it depends on certain versions of B.

    Both Debian and Rust have this. Rust upstream crates have version numbers and can specify their dependencies according to semver. Debian’s dependency system can represent that.

    So it was natural for the designers of the scheme for packaging Rust code in Debian to simply translate the Rust version dependencies to Debian ones. However, while the two dependency schemes seem equivalent in the abstract, their concrete real-world semantics are totally different.

    These different package management systems have different practices and different meanings for dependencies. (Interestingly, the Python world also has debates about the meaning and proper use of dependency versions.)

    The epistemological problem

    Consider some package A which is known to depend on B. In general, it is not trivial to know which versions of B will be satisfactory. I.e., whether a new B, with potentially-breaking changes, will actually break A.

    Sometimes tooling can be used which calculates this (eg, the Debian shlibdeps system for runtime dependencies) but this is unusual - especially for build-time dependencies. Which versions of B are OK can normally only be discovered by a human consideration of changelogs etc., or by having a computer try particular combinations.

    Few ecosystems with dependencies, in the Free Software community at least, make an attempt to precisely calculate the versions of B that are actually required to build some A. So it turns out that there are three cases for a particular combination of A and B: it is believed to work; it is known not to work; and: it is not known whether it will work.

    And, I am not aware of any dependency system that has an explicit machine-readable representation for the “unknown” state, so that they can say something like “A is known to depend on B; versions of B before v1 are known to break; version v2 is known to work”. (Sometimes statements like that can be found in human-readable docs.)
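    The three-state situation can be made concrete with a tiny sketch (the function and its argument names are hypothetical, purely for illustration):

```python
# Illustrative only: real dependency systems collapse these three
# epistemic states into a boolean "version requirement satisfied or not".
def compatibility(version, known_good, known_bad):
    """Classify a candidate version of B for use by some package A."""
    if version in known_good:
        return "believed to work"
    if version in known_bad:
        return "known not to work"
    return "unknown"  # the state no dependency syntax can express
```

    The point is the third branch: a Depends line or a Cargo.toml requirement has no way to say “unknown”, so it must round that state to either “works” or “breaks”.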

    That leaves two possibilities for the semantics of a dependency A depends B, version(s) V..W: Precise: A will definitely work if B matches V..W, and Optimistic: We have no reason to think B breaks with any of V..W.

    At first sight the latter does not seem useful: how would the package manager find a working combination? Taking Debian, which uses optimistic version dependencies, as an example, the answer is as follows: the primary information about which package versions to use is not the dependencies alone, but mostly which Debian release is being targeted. (Other systems using optimistic version dependencies could use the date of the build, i.e. use only packages that are “current”.)



    People involved in version management
        Rust: package developers; downstream developers/users.
        Debian: package developers; downstream developers/users; distribution QA and release managers.

    Package developers declare versions V and dependency ranges V..W so that
        Rust: it definitely works.
        Debian: a wide range of B can satisfy the declared requirement.

    The principal version data used by the package manager
        Rust: only dependency versions.
        Debian: contextual, eg, releases - set(s) of packages available.

    Version dependencies are for
        Rust: selecting working combinations (out of all that ever existed).
        Debian: sequencing (ordering) of updates; QA.

    Expected use pattern by a downstream
        Rust: downstream can combine any declared-good combination.
        Debian: use a particular release of the whole system. Mixing-and-matching requires additional QA and remedial work.

    Downstreams are protected from breakage by
        Rust: pessimistically updating versions and dependencies whenever anything might go wrong.
        Debian: whole-release QA.

    A substantial deployment will typically contain
        Rust: multiple versions of many packages.
        Debian: a single version of each package, except where there are actual incompatibilities which are too hard to fix.

    Package updates are driven by
        Rust: the depending package updating its declared metadata.
        Debian: the depended-on package being updated in the repository for the work-in-progress release.

    So, while Rust and Debian have systems that look superficially similar, they contain fundamentally different kinds of information. Simply translating the Rust versions directly into Debian ones doesn’t work.

    What is currently done by the Debian Rust Team is to manually patch the dependency specifications, to relax them. This is very labour-intensive, and there is little automation supporting either the decision-making or the actual application of the resulting changes.

    What to do

    Desired end goal

    To update a Rust package in Debian, that many things depend on, one need simply update that package.

    Debian’s sophisticated build and CI infrastructure will try building all the reverse-dependencies against the new version. Packages that actually fail against the new dependency are flagged as suffering from release-critical problems.

    Debian Rust developers then update those other packages too. If the problems turn out to be too difficult, it is possible to roll back.

    If a problem with a depending package is not resolved in a timely fashion, priority is given to updating core packages, and the depending package falls by the wayside (since it is empirically unmaintainable with the available effort).

    There is no routine manual patching of dependency metadata (or of anything else).

    Radical proposal

    Debian should not precisely follow upstream Rust semver dependency information. Instead, Debian should optimistically try the combinations of packages that we want to have. The resulting breakages will be discovered by automated QA; they will have to be fixed by manual intervention of some kind, but usually, simply updating the depending package will be sufficient.

    This no longer ensures (unlike the upstream Rust scheme) that the result is expected to build and work if the dependencies are satisfied. But as discussed, we don’t really need that property in Debian. More important is the new property we gain: that we are able to mix and match versions that we find work in practice, without a great deal of manual effort.

    Or to put it another way, in Debian we should do as a Rust upstream maintainer does when they do the regular “update dependencies for new semvers” task: we should update everything, see what breaks, and fix those.

    (In theory a Rust upstream package maintainer is supposed to do some additional checks or something. But the practices are not standardised and any checks one does almost never reveal anything untoward, so in practice I think many Rust upstreams just update and see what happens. The Rust upstream community has other mechanisms - often, reactive ones - to deal with any problems. Debian should subscribe to those same information sources, eg RustSec.)

    Nobbling cargo

    Somehow, when cargo is run to build Rust things against these Debian packages, cargo’s dependency system will have to be overridden so that the version of the package that is actually selected by Debian’s package manager is used by cargo without complaint.

    We probably don’t want to change the Rust version numbers of Debian Rust library packages, so this should be done by either presenting cargo with an automatically-massaged Cargo.toml where the dependency version restrictions are relaxed, or by using a modified version of cargo which has special option(s) to relax certain dependencies.
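    As a sketch of the first option, the “massaging” could be as simple as rewriting each requirement to keep only a lower bound on the major version. This is my own illustrative code, not anything the Debian Rust Team actually ships:

```python
import re

# Hypothetical sketch: relax a Cargo.toml version requirement so that
# cargo will accept whichever version dpkg actually installed, keeping
# only the major version as a lower bound.
def relax_requirement(req):
    m = re.match(r"\^?(\d+)", req)
    if m is None:
        return req  # leave wildcard/complex requirements untouched
    return ">=" + m.group(1)
```

    So "1.0.100" or "^1.0" would both become ">=1", which cargo treats as a bare lower bound with no upper limit.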

    Handling breakage

    Rust packages in Debian should already be provided with autopkgtests so that ci.debian.net will detect build breakages. Build breakages will stop the updated dependency from migrating to the work-in-progress release, Debian testing.

    To resolve this, and allow forward progress, we will usually upload a new version of the dependency containing an appropriate Breaks, and either file an RC bug against the depending package, or update it. This can be done after the upload of the base package.

    Thus, resolution of breakage due to incompatibilities will be done collaboratively within the Debian archive, rather than ad-hoc locally. And it can be done without blocking.

    My proposal prioritises the ability to make progress in the core, over stability, and in particular over retaining leaf packages. This is not Debian’s usual approach, but given the Rust ecosystem’s practical attitudes to API design, versioning, etc., I think the instability will be manageable. In practice fixing leaf packages is not usually that hard, but it is still work, and the question is what happens if the work doesn’t get done. After all, we always have a shortage of effort - and we probably still will, even if we get rid of the makework clerical task of patching dependency versions everywhere (so that usually no work is needed on depending packages).

    Exceptions to the one-version rule

    There will have to be some packages that we need to keep multiple versions of. We won’t want to update every depending package manually when this happens. Instead, we’ll probably want to set a version number split: rdepends which want version <X will get the old one.

    Details - a sketch

    I’m going to sketch out some of the details of a scheme I think would work. But I haven’t thought this through fully. This is still mostly at the handwaving stage. If my ideas find favour, we’ll have to do some detailed review and consider a whole bunch of edge cases I’m glossing over.

    The dependency specification consists of two halves: the depending .deb’s Depends (or, for a leaf package, Build-Depends), and the base .deb’s Version and perhaps Breaks and Provides.

    Even though libraries vastly outnumber leaf packages, we still want to avoid updating leaf Debian source packages simply to bump dependencies.

    Dependency encoding proposal

    Compared to the existing scheme, I suggest we implement the dependency relaxation by changing the depended-on package, rather than the depending one.

    So we retain roughly the existing semver translation for Depends fields. But we drop all local patching of dependency versions.

    Into every library source package we insert a new Debian-specific metadata file declaring the earliest version that we uploaded. When we translate a library source package to a .deb, the “binary” package build adds Provides for every previous version.
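    As a sketch (the package names, versions, and the exact encoding are hypothetical; the real scheme would be fixed during detailed design), the generated binary stanza might look like:

```
Package: librust-foo-1-dev
Version: 1.4.0-1
Provides: librust-foo-1.3-dev, librust-foo-1.2-dev, librust-foo-1.1-dev
```

    so that an rdependency built against any of the earlier uploads still has its Depends satisfied by the single current package.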

    The effect is that when one updates a base package, the usual behaviour is to simply try to use it to satisfy everything that depends on that base package. The Debian CI will report the build or test failures of all the depending packages which the API changes broke.

    We will have a choice, then:

    Breakage handling - update broken depending packages individually

    If there are only a few packages that are broken, for each broken dependency, we add an appropriate Breaks to the base binary package. (The version field in the Breaks should be chosen narrowly, so that it is possible to resolve it without changing the major version of the dependency, eg by making a minor source change.)
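    Sketching that with hypothetical names: if librust-bar-dev were the one rdependency broken by the update, the base binary package might gain something like:

```
Breaks: librust-bar-dev (<< 0.5.2-2~)
```

    narrow enough that uploading a fixed bar at Debian revision 0.5.2-2, with only a minor source change, resolves the Breaks without touching bar’s upstream major version.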

    We can then do one of the following:

    • Update the dependency from upstream, to a version which works with the new base. (Assuming there is one.) This should be the usual response.

    • Fix the dependency source code so that it builds and works with the new base package. If this wasn’t just a backport of an upstream change, we should send our fix upstream. (We should prefer to update the whole package, rather than to backport an API adjustment.)

    • File an RC bug against the dependency (which will eventually trigger autoremoval), or preemptively ask for the Debian release managers to remove the dependency from the work-in-progress release.

    Breakage handling - declare new incompatible API in Debian

    If the API changes are widespread and many dependencies are affected, we should represent this by changing the in-Debian-source-package metadata to arrange for fewer Provides lines to be generated - withdrawing the Provides lines for earlier APIs.

    Hopefully examination of the upstream changelog will show what the main compat break is, and therefore tell us which Provides we still want to retain.

    This is like declaring Breaks for all the rdepends. We should do it if many rdepends are affected.

    Then, for each rdependency, we must choose one of the responses in the bullet points above. In practice this will often be a mass bug filing campaign, or large update campaign.

    Breakage handling - multiple versions

    Sometimes there will be a big API rewrite in some package, and we can’t easily update all of the rdependencies because the upstream ecosystem is fragmented and the work involved in reconciling it all is too substantial.

    When this happens we will bite the bullet and include multiple versions of the base package in Debian. The old version will become a new source package with a version number in its name.

    This is analogous to how key C/C++ libraries are handled.

    Downsides of this scheme

    The first obvious downside is that assembling some arbitrary set of Debian Rust library packages, that satisfy the dependencies declared by Debian, is no longer necessarily going to work. The combinations that Debian has tested - Debian releases - will work, though. And at least, any breakage will affect only people building Rust code using Debian-supplied libraries.

    Another less obvious problem is that because there is no such thing as Build-Breaks (in a Debian binary package), the per-package update scheme may result in no way to declare that a particular library update breaks the build of a particular leaf package. In other words, old source packages might no longer build when exposed to newer versions of their build-dependencies, taken from a newer Debian release. This is a thing that already happens in Debian, with source packages in other languages, though.

    Semver violation

    I am proposing that Debian should routinely compile Rust packages against dependencies in violation of the declared semver, and ship the results to Debian’s millions of users.

    This sounds quite alarming! But I think it will not in fact lead to shipping bad binaries, for the following reasons:

    The Rust community strongly values safety (in a broad sense) in its APIs. An API which is merely capable of insecure (or other seriously bad) use is generally considered to be wrong. For example, such situations are regarded as vulnerabilities by the RustSec project, even if there is no suggestion that any actually-broken caller source code exists, let alone that actually-broken compiled code is likely.

    The Rust community also values alerting programmers to problems. Nontrivial semantic changes to APIs are typically accompanied not merely by a semver bump, but also by changes to names or types, precisely to ensure that broken combinations of code do not compile.

    Or to look at it another way, in Debian we would simply be doing what many Rust upstream developers routinely do: bump the versions of their dependencies, and throw it at the wall and hope it sticks. We can mitigate the risks the same way a Rust upstream maintainer would: when updating a package we should of course review the upstream changelog for any gotchas. We should look at RustSec and other upstream ecosystem tracking and authorship information.

    Difficulties for another day

    As I said, I see some other issues with Rust in Debian.

    • I think the library “feature flag” encoding scheme is unnecessary. I hope to explain this in a future essay.

    • I found Debian’s approach to handling the source code for its Rust packages quite awkward; and, it has some troubling properties. Again, I hope to write about this later.

    • I get the impression that updating rustc in Debian is a very difficult process. I haven’t worked on this myself and I don’t feel qualified to have opinions about it. I hope others are thinking about how to make things easier.

    Thanks all for your attention!


    03 January, 2022 06:35PM

    Thorsten Alteholz

    My Debian Activities in December 2021

    FTP master

    This month I accepted 412 and rejected 44 packages. The overall number of packages that got accepted was 423.

    Debian LTS

    This was my ninetieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

    This month my overall workload was 40h. During that time I did LTS and normal security uploads of:

    • [DLA 2846-1] raptor2 security update for one CVE
    • [DLA 2845-1] libsamplerate security update for one CVE
    • [DLA 2859-1] zziplib security update for one CVE
    • [DLA 2858-1] libzip security update for one CVE
    • [DLA 2869-1] xorg-server security update for three CVEs
    • [#1002912] for graphicsmagick in Buster
    • [debdiff] for sphinxearch/buster to maintainer and sec team
    • [debdiff] for zziplib/buster to maintainer
    • [debdiff] for zziplib/bullseye to maintainer
    • [debdiff] for raptor2/bullseye to maintainer

    I also started to work on libarchive.

    Further, I worked on packages in NEW on security-master. In order to process such packages faster, I added a notification for when work arrives there.

    Last but not least I did some days of frontdesk duties.

    Debian ELTS

    This month was the forty-second ELTS month.

    During my allocated time I uploaded:

    • ELA-527-1 for libsamplerate
    • ELA-528-1 for raptor2
    • ELA-529-1 for ufraw
    • ELA-532-1 for zziplib
    • ELA-534-1 for xorg-server

    Last but not least I did some days of frontdesk duties.

    Debian Astro

    Related to my previous article about fun with telescopes I uploaded new versions or did source uploads for:

    Besides the indi-stuff I also uploaded

    Other stuff

    I celebrated Christmas :-).

    03 January, 2022 12:23PM by alteholz

    hackergotchi for Joachim Breitner

    Joachim Breitner

    Telegram bots in Python made easy

    A while ago I set out to get some teenagers interested in programming, and thought about a good way to achieve that: a way that allows them to get started with very little friction, to build something relevant to their current lives quickly, and that avoids frustration.

    They were old enough to have their own smartphone, and they were already happily chatting with their friends, using the Telegram messenger. I have already experimented a bit with writing bots for Telegram (e.g. @Umklappbot or @Kaleidogen), and it occurred to me that this might be a good starting point: Chat bot interactions have a very simple data model: message in, response out, all simple text. Much simpler than anything graphical or even web programming. In a way it combines the simplicity of the typical initial programming exercises on the command-line with the impact and relevance of web programming.

    But of course “real” bot programming is still too hard – installing a programming environment, setting up a server, deploying, dealing with access tokens, understanding the Telegram Bot API and mapping it to your programming language.

    The IDE

    So I built a browser-based Python programming environment for Telegram bots that takes care of all of that. You simply write a single Python function, click the “Deploy” button, and the bot is live. That’s it!

    This environment provides a much simpler “API” for the bots: Define a function like the following:

      def private_message(sender, text):
          return "Hello!"

    This gets called upon a message, and if it returns a string, that string is the response. That’s it! Not enough to build every kind of Telegram bot, but sufficient for many fun applications.

    A chatbot

    In fact, my nephew and niece used this to build a simple interactive fiction game, where the player says where they are going (“house”, “forest”, “lake”) and thus explores the story, and in the end kills the dragon. And my girlfriend created a shopping list bot that we are using “productively”.
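    A shopping list bot fits the same single-function shape. Here is a hedged sketch of what such a bot might look like (this is not the actual bot’s code, and a real deployment would need the environment’s own persistence rather than a module-level list):

```python
# Hypothetical sketch: an in-memory shopping list driven entirely by
# the single private_message entry point described above.
items = []

def private_message(sender, text):
    if text.startswith("add "):
        items.append(text[4:])
        return "Added: " + text[4:]
    if text == "list":
        return "\n".join(items) if items else "The list is empty."
    if text == "clear":
        items.clear()
        return "List cleared."
    return "Commands: add <item>, list, clear"
```

    Even this toy already exercises state, parsing, and a tiny command language - which is exactly the kind of gentle ramp the environment is aiming for.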

    If you are curious, you can follow the instructions to create your own bot. There you can also find the source code and instructions for hosting your own instance (on Amazon Web Services).

    Help with the project (e.g. improving the sandbox for running untrusted Python code, or making the front-end work better) is of course highly appreciated, too. The frontend is written in PureScript, and the backend in Python, building on Amazon Lambda and Amazon DynamoDB.

    03 January, 2022 10:20AM by Joachim Breitner (mail@joachim-breitner.de)