January 26, 2021

Russ Allbery

Review: The City We Became

Review: The City We Became, by N.K. Jemisin

Series: The Great Cities Trilogy #1
Publisher: Orbit
Copyright: March 2020
ISBN: 0-316-50985-X
Format: Kindle
Pages: 449

At an unpredictable point after a city has accumulated enough history, character, and sense of itself, it is born as a living creature. It selects an avatar who both embodies the city and helps it be born. But in that moment of birth, a city is at its most vulnerable, and that is when the Enemy attacks.

The birth of cities and the Enemy attacks have happened for millennia, so the cities that have survived the process have developed a system. The most recently born goes to assist the avatar with the birthing process and help fight off Enemy attacks. But the process has become riskier and the last two cities have failed to be born, snuffed out in natural disasters despite that support.

Now, it's New York City's turn. It selects its avatar and survives the initial assault with the help of São Paulo. But something still goes wrong, and its primary avatar is severely injured in the process. Complicating matters, now there are five more avatars, one for each borough, who will have to unite to fight off the Enemy. And, for the first time, the Enemy has taken human form and is attacking with reason and manipulation and intelligence, not just force.

The City We Became has a great premise: take the unique sense of place that defines a city and turn it into a literalized character in a fantasy novel. The avatars are people who retain their own lives and understanding (with one exception that I'll get to in a moment), but gain an awareness of the city they represent. They can fight and repair their city through sympathetic magic and metaphor made real. The prelude that introduces this concept (adapted from Jemisin's earlier short story "The City Born Great") got too gonzo for me, but once Jemisin settles into the main story and introduces avatars with a bit more distance from the city they represent, the premise clicked.

The execution, on the other hand, I thought was strained. The biggest problem is that the premise requires an ensemble cast of five borough avatars, the primary avatar, São Paulo, and the Enemy. That's already a lot, but for the story to work each avatar has to be firmly grounded in their own unique experience of New York, which adds family members, colleagues, and roommates. That's too much to stuff into one novel, which means characters get short shrift. For example, Padmini, the avatar of Queens, gets a great introductory scene and a beautiful bit of characterization that made her one of my favorite characters, but then all but disappears for the remainder of the book. She's in the scenes, but not in a way that matters. Brooklyn and Aislyn get moments of deep characterization, but there's so much else going on that they felt rushed. And what ever happened to Manny's roommate?

The bulk of the characterization in this book goes to Bronca, the Bronx avatar, a Lenape woman and a tough-as-nails administrator of a community art museum and maker space. The dynamics between her and her co-workers, her mentorship of Veneza, and her early encounters with the Woman in White are my favorite parts of the book. I thought she and Brooklyn were a useful contrast: two very different ways of finding a base of power in the city without letting go of one's ideals.

But before we get to Bronca, we first meet Manny, the Manhattan avatar. Thematically, I thought what Jemisin did here was extremely clever. Manny's past is essentially erased at the start of the book, making him the reader insert character to start making sense of this world. This parallels the typical tourist experience of arriving in Manhattan and trying to use it to make sense of New York. He's disconnected from the rest of the city because he's the dangerous newcomer with power but not a lot of understanding, which works with my model of the political dynamics of Manhattan.

Unfortunately, he's not an interesting person. I appreciated what was happening at the metaphorical layer, but Manny veers between action hero and exposition prompt, and his amnesia meant I never got enough sense of him as a character to care that much about what happened to him. I thought his confrontation with the Woman in White near the start of the book, which establishes the major villain, felt clunky and forced compared to her later encounters with the other characters.

The Woman in White, though, is a great villain. It's clear from early on that the Enemy is Lovecraftian, but the Woman in White mixes mad scientist glee, manic enthusiasm, and a child-like amusement at the weirdness of humanity into the more typical tropes of tentacles, corruption, and horrific creatures. One of my qualms about reading this book is that I'm not a horror fan and don't enjoy the mental images of unspeakable monsters, but the Woman in White puts such a fascinating spin on them that I enjoyed the scenes in which she appeared. I think the book was at its best when she was trying to psychologically manipulate the characters or attack them with corrupted but pre-existing power structures. I was less interested when it turned into an action-movie fight against animated monsters.

The other place Jemisin caught me by surprise is too much of a spoiler to describe in detail (and skip the next paragraph in its entirety if you want to avoid all spoilers):

Jemisin didn't take the moral conflict of the book in the direction I was expecting. This book is more interested in supporting the people who are already acting ethically than in redeeming people who make bad choices. That produces a deus ex machina ending that's a bit abrupt, but I appreciated the ethical stance.

Overall, I thought the premise was great but the execution was unsteady and a bit overstuffed. There are some great characters and some great scenes, but to me they felt disjointed and occasionally rushed. You also need to enjoy characters taking deep pride in the feel of a specific place and advocating for it with the vigor of a sports rivalry, along with loving descriptions of metaphors turned into magical waves of force. But, if you can roll with that, there are moments of real awe. Jemisin captured for me the joy that comes from a deeply grounded sense of connection to a place.

Recommended, albeit with caveats, if you're in the mood for reading about people who love the city they live in.

This is the first book of a planned trilogy and doesn't resolve the main conflict, but it reaches a satisfying conclusion. The title of the next book had not yet been announced at the time of this review.

Rating: 7 out of 10

26 January, 2021 04:11AM

January 25, 2021

Enrico Zini

nspawn-runner: support for image selection

.gitlab-ci.yml supports an 'image' keyword for selecting the environment in which the script gets run. The documentation says "Used to specify a Docker image to use for the job", but that's clearly a bug in the documentation, because we can do it with nspawn-runner, too.
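
For example, a CI job can request one of the chroots by name. A minimal hypothetical sketch of a .gitlab-ci.yml job (the job name and script are made up):

tests:
  image: bullseye
  script:
    - cat /etc/debian_version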

It turns out that most of the environment variables available to CI runs are also available to custom runner scripts. In this case, the value passed as image can be found as $CUSTOM_ENV_CI_JOB_IMAGE in the custom runner scripts environment.

After some experimentation I made this commit that makes every chroot under /var/lib/nspawn-runner available as an image:

# Set up 3 new images for CI jobs:
nspawn-runner chroot-create buster
nspawn-runner chroot-create bullseye
nspawn-runner chroot-create sid

That's it: CI scripts can now use image: buster, image: bullseye or image: sid, as they please. You can manually set up other chroots under /var/lib/nspawn-runner and they'll be automatically available.
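
Conceptually the resolution inside the runner is tiny; a hypothetical prepare step could do something like the following sketch (not the actual nspawn-runner code, and the buster fallback is an assumption):

#!/bin/sh
# Sketch only: map the requested image name to a chroot under /var/lib/nspawn-runner,
# falling back to a default when the CI script doesn't set 'image:'.
IMAGE="${CUSTOM_ENV_CI_JOB_IMAGE:-buster}"
CHROOT="/var/lib/nspawn-runner/$IMAGE"
if [ ! -d "$CHROOT" ]; then
    echo "unknown image: $IMAGE" >&2
    exit 1
fi
# …then start the ephemeral machine based on $CHROOT…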

You can also now choose a default image in config.toml in case the CI script doesn't specify one:

prepare_args = ["--verbose", "prepare", "--default-image=buster"]

25 January, 2021 03:49PM

Martin-Éric Racine

Help needed: clean up and submit KMS driver for Geode LX to LKML

Ever since X.org switched to rootless operation, the days of the Geode X.org driver have been numbered. The old codebase dates back to Geode's early days at Cyrix; it was then updated by NSC to add support for their new GX2 architecture, after which AMD dropped GX1 support and added support for their new LX architecture. To put it mildly, that codebase is a serious mess.

However, at least the LX code comes with plenty of niceties, such as being able to detect when it runs on an OLPC XO-1 and to probe DDC pins to determine the optimal display resolution on other hardware. This still doesn't make the codebase cruft-free.

Anyhow, most Linux distributions have dropped support for anything older than i686 with PAE, which essentially means that the GX2 code is just for show. Debian is one of the very few distributions whose x86-32 port still supports i686 processors without PAE. In fact, the lowest common denominator kernel on i386 is configured for Geode (LX).

A while back, someone had started working on a KMS driver for the Geode LX. Through word of mouth, I got my hands on a copy of their Git tree. The driver worked reasonably well, but the codebase needs some polishing before it can be included in the Linux kernel tree.

Hence this call for help:

Is there anyone with good experience of the LKML coding standards who would be willing to clean up the driver's code and submit the patch to the LKML?

25 January, 2021 02:33PM by Martin-Éric (noreply@blogger.com)

Russ Allbery

Review: Laziness Does Not Exist

Review: Laziness Does Not Exist, by Devon Price

Publisher: Atria Books
Copyright: January 2021
ISBN: 1-9821-4013-5
Format: Kindle
Pages: 216

The premise of Laziness Does Not Exist is in the title: Laziness as a moral failing does not exist. It is a misunderstanding of other problems with physical or psychological causes, a belief system that is used to extract unsustainable amounts of labor, an excuse to withdraw our empathy, and a justification for not solving social problems. Price refers to this as the Laziness Lie, which they define with three main tenets:

  1. Your worth is your productivity.
  2. You cannot trust your own feelings and limits.
  3. There is always more you could be doing.

This book (an expansion of a Medium article) makes the case against all three tenets using the author's own burnout-caused health problems as the starting argument. They then apply that analysis to work, achievements, information overload, relationships, and social pressure. In each case, Price's argument is to prioritize comfort and relaxation, listen to your body and your limits, and learn who you are rather than who the Laziness Lie is trying to push you to be.

The reader reaction to a book like this will depend on where the reader is in thinking about the problem. That makes reviewing a challenge, since it's hard to simulate a reader with a different perspective. For myself, I found the content unobjectionable, but largely repetitive of other things I've read. The section on relationships in particular will be very familiar to Captain Awkward readers, just not as pointed. Similarly, the chapter on information overload is ground already covered by Digital Minimalism, among other books. That doesn't make this a bad book, but it's more of a survey, so if you're already well-read on this topic you may not get much out of it.

The core assertion is aggressive in part to get the reader to argue with it and thus pay attention, but I still came away convinced that laziness is not a useful word. The symptoms that cause us to call ourselves lazy — procrastination, burnout, depression, or executive function problems, for example — are better understood without the weight of moral reproach that laziness carries. I do think there is another meaning of laziness that Price doesn't cover, since they are aiming this book exclusively at people who are feeling guilty about possibly being lazy, and we need some term for people who use their social power to get other people to do all their work for them. But given how much the concept of laziness is used to attack and belittle the hard-working or exhausted, I'm happy to replace "laziness" with "exploitation" when talking about that behavior.

This is a profoundly kind and gentle book. Price's goal is to help people be less hard on themselves and to take opportunities to relax without guilt. But that also means that it stays in the frame of psychological analysis and self-help, and only rarely strays into political or economic commentary. That means it's more useful for taking apart internalized social programming, but less useful for thinking about the broader political motives of those who try to convince us to work endlessly and treat all problems as personal responsibilities rather than political failures. For that, I think Anne Helen Peterson's Can't Even is the more effective book. Price also doesn't delve much into history, and I now want to read a book on the origin of a work ethic as a defining moral trait.

One truly lovely thing about this book is that it's quietly comfortable with human variety of gender and sexuality in a way that's never belabored but that's obvious from the examples that Price uses. Laziness Does Not Exist felt more inclusive in that way, and to some extent on economic class, than Can't Even.

I was in the mood for a book that takes apart the political, social, and economic motivations behind convincing people that they have to constantly strive to not be lazy, so the survey nature of this book and its focus on self-help made it not the book for me. It also felt a bit repetitive despite its slim length, and the chapter structure didn't click for me. But it's not a bad book, and I suspect it will be the book that someone else needs to read.

Rating: 6 out of 10

25 January, 2021 05:00AM

Iustin Pop

Raspbian/Raspberry Pi OS with initrd

Background

While Raspbian, ahem, Raspberry Pi OS is mostly Debian, the biggest difference is the kernel, both in terms of code and packaging.

The packaging is weird since it needs to deal with the fact that there's no bootloader per se: the firmware parses /boot/config.txt and, depending on the 64-bit setting and/or the kernel line, loads a specific file, normally one of kernel7.img, kernel7l.img or kernel8.img. While this configuration file supports an initrd, it doesn't have a clean way to associate an initrd with a kernel; rather, you have to settle on a hard-coded initrd name (just like for the actual kernel).

Due to this, the normal way of using an initrd doesn't work, and one has to do three things:

  • enable building initrd’s at all
  • settle on the naming for the initrd
  • ensure the initrd is updated correctly

There are quite a few forum threads about this, but no official support. The best link I found was this Stack Exchange post, which goes most of the way but fails at the third point above.

My trivial solution

Instead of the naming tricks the above post suggests, I settled on having a fixed name. This risks boot failure if the kernel architecture changes, which could be worked around by hard-coding the kernel as well, but I haven't done that yet.
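
For reference, hard-coding the kernel too would be just one more line in config.txt, something like this hypothetical sketch for a 64-bit Pi:

# Sketch: pin both the kernel and the initrd name
kernel=kernel8.img
initramfs initrd.img followkernel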

First, enable initrd creation/update in /etc/default/raspberrypi-kernel, like the post says. Uncomment INITRD=yes, but not the RPI_INITRD. This will enable creating/updating the initrd when the kernel package is installed and/or other packages trigger it via hooks.

Second, naming: choose an initrd name. I simply have this in my config.txt:

initramfs initrd.img followkernel

So the value is fully hard-coded, and the actual work is done in the next part.

Last, add an initramfs hook in (e.g.) /etc/initramfs/post-update.d/rpi-initrd. Note that, by default (unless other packages have created it), the /etc/initramfs directory doesn't exist. It's not the confusingly-named /etc/initramfs-tools/ directory, which is related to building the initrd; /etc/initramfs is instead for doing things with the (already built) initrd. This directory is briefly explained in the Debian kernel-handbook guide.

These are the contents of mine:

#!/bin/bash

ABI="$1"
INITRD="$2"
BOOTDIR="$(dirname "$INITRD")"

# Note: the match below _must_ be synced with the boot kernel
if [[ "$ABI" == *-v8+ ]]; then
        echo "Building v8l+ image, updating top initrd"
        cp -p "$INITRD" "$BOOTDIR/initrd.img"
fi

This script seems to me much simpler, and in principle less prone to bugs, than all the renaming and config.txt editing I saw in that Stack Exchange post. As far as I know, all one needs to care about is to keep the ABI match in this hook in sync with the kernel one is running, and since there is only one kernel version installed at a time on Raspbian (as there's no versioning in the kernel names), this should work correctly.

With this hook, the update also works correctly when packages trigger initrd updates, not only when new kernels are installed.

Note that the cp there is needed since the boot partition is FAT, so no symbolic links/hard links are allowed.

Happy initrd’ing!

25 January, 2021 01:33AM

January 24, 2021

Daniel Pocock

Internet origins of the mob

For anybody who has ever been elected into office, from the student union to a national legislature, the recent scenes in Washington DC have been particularly disturbing.

When reports appear about the efforts Trump made to blackmail the State of Georgia into changing its results, it is even more disturbing for those of us who also experience threats or blackmail while holding some form of office or voluntary leadership role. In my case, the free software community elected me as a representative in 2017, and right up to the day I resigned in September 2018, people who opposed the election result sent me constant threats and harassment.

Even two years after resigning, the same mob still pushes doxing and defamation. This is blackmail: they want the questions about volunteers and elections to be withdrawn. Each time new revelations come to light, such as the notorious FSFE women court case, it demonstrates why they find an independent representative so inconvenient.

US Capitol mob, FSFE, Debian, Doxing volunteers

My experiences with bad actors go back a long way and give some startling insights into the rise of these practices in the online space from way before the birth of Facebook or Twitter.

US Capitol riot, predicted and censored

Pine Gap, intelligence down under

In a November 2020 discussion about elections in a free software project, I posted a comment that predicted exactly where Donald Trump was going:

Kevin Costner got it right in his 1997 flop, The Postman. Stay tuned for the sequel, The GNU Mailman. Quote from the trailer "You are a dangerous man!".

The email never appeared in the thread. It vanished.

In the movie, the country is ravaged by a plague. Cliven Bundy is the law. The only remaining trace of great national institutions is a postman's uniform, which seems to fit Costner. This quickly leads to physical conflict. The Bundies are the mob, the Postman is a proxy for Washington.

Paranoia from the grave

In 1996, an Australian political party realized they had made a mistake: they had selected a candidate who was too extreme even for Australian politics. During her campaign, her racist comments about Indigenous Australians prompted the party to withdraw her endorsement. The decision came too late; ballot papers had already been printed with her name beside the party name. The Division of Oxley elected Pauline Hanson by mistake.

There is a unique symbiosis between extreme politicians and the media. Not only does the media profit from giving these candidates airtime; as a bonus, rival groups staging protests and counter-protests guarantee further news stories. Within months of her election, Australia's police were devoting more resources to protecting Hanson than any other politician, including the Prime Minister.

In this context, Channel 7 saw an opportunity: Hanson cut a deal with the news desk to record a video to be played upon her assassination. On 25 November 1997, it was leaked (truncated version).

The video was dubbed the Video from the Grave, and the following lines jump out:

There was always a chance that I would be killed and many believe this would be a mortal blow to what began with my election. You must not allow this to happen, ... you must fight on.

Like a geologist examining a core sample, we can go back to this 60 Minutes report on the first 100 days of Hanson mania. Watching the first minutes of the video again today, the penny dropped. The Tea Party movement and Trump spent hundreds of millions of dollars on campaigns; Hanson was able to cook up an indistinguishable feast of extremism in a fish and chip shop. Trump had a budget like an Arnold Schwarzenegger movie while Pauline Hanson was operating like the producers of the Blair Witch Project, but you can't really tell them apart. Before the invention of social media, Hanson had perfected and exemplified techniques that America's Tea Party movement would not start imitating for another 10 years.

Unlike Trump, Hanson sold her business empire when she was elected. The fish and chipperie was subsequently acquired by immigrants. A crowdfunding campaign was established to buy them out and convert it into a kebab or halal takeaway.

Hanson now has her own political party. In 2019, her candidates were calling for Australians to fight like hell, and rival politicians were the targets.

Pauline Hanson, Fight like hell, Donald Trump
Quote from Pauline Hanson Party: Time to fight like hell against lazy or dishonest politicians

When the mob came for me

I had just completed my first year of undergraduate engineering and been elected to a role in the Melbourne University Student Union. As a side project, I created a web site supporting native title rights for indigenous Australians. The web site received a commendation from the leader of one of the main political parties, Kim Beazley:

Daniel Pocock, Kim Beazley, Canberra
Support for the maintenance of native title - and opposition to the Howard Government's Wik legislation - from all sections of the community has been unprecedented. Australians from all walks of life are coming together to proclaim their support for native title and reconciliation. This homepage provides yet another avenue for people around the world who support native title to make their feelings known to the Howard Government. <snip> This homepage is an excellent new way in which people in the community can contribute to the native title debate.
Kim Beazley
Leader of the Opposition
Canberra

The site was a runner-up in the Loud Festival run by the Australian Council for the Arts. This was well outside the traditional undergraduate engineering curriculum. Mr Beazley became America's most wanted Australian.


Daniel Pocock, Loud Award runner up, 1997


Three days after Hanson's video was leaked in 1997, pleading with people to fight on, I found myself in their crosshairs. Although social media did not exist, doxing had just been invented, and they chose to practice on Mr Beazley, Carlo Carli (the local member of parliament) and me. Some of the online attacks are captured here.

Looking at the doxings, you can find many synergies in the style of abuse between fascists on the far right and those who give orders to volunteers in open source. As Wikipedia notes, there is always some violation of the victim's privacy. The Australian fascists chose to publish my mobile phone number; it doesn't even belong to me any more, but it still lingers on that web site for any neo-Nazi who comes along and wants to call it. The more recent attack involved molesting my entry in the Debian keyring the night before my wedding anniversary. Fascists choose to add something personal like this to inflict personal pain, to deter people from speaking again. We can see the same tactic in the siege of the US Capitol, in the fascist note on Nancy Pelosi's desk:

Nancy Pelosi, Debian, FSFE, threats

Whether it is a threat on Pelosi's desk, a rogue Debian Developer desecrating my wedding anniversary or a dog leaving some dampness on a tree, the mindset behind it is equally intrusive and crude.

What I find really stunning from the earlier fascist doxing is the following quote; most ordinary readers would feel the efforts described here deserve praise:

What is interesting is that a little bit of on-line detective work reveals that Daniel Pocock is the technical contact for a domain vmore.org.au which is called Virtual Moreland and that this service provides FREE Internet services for Community Groups - just COMMUNITY GROUPS! (Au$100,000 has been sought from the Victorian state government to get this little baby going and based on the success of the participants in the past getting this tax payer funding should be a breeze).

Helping the government transfer taxpayer dollars back into highly transparent projects like Virtual Moreland would appear to be an incredible success.

You know your mob is special if the sight of a library has them frothing at the lips:

"(CO.AS.IT owns) A modern library service, ... <snip>" ... No guessing who paid for all of that...

Why do fascists hate libraries? There are usually lots of books. Some of those books, like some blogs, may confront their Code of Conduct mindset.

The doxing paints an image of community groups coming from the extreme left. Multiculturalism is a far more complex phenomenon. The philosophy of people like Mr Carli and me would probably be seen as mainstream in most civilized countries. The Moreland region is popular with the Italian diaspora, which is also a very Catholic community. At that time, the Archbishop of Melbourne was the conservative Cardinal George Pell. In addition to the state funding, we received generous donations of some spare computers and even a surplus file server from the Catholic administration.

It is fascinating to fast-forward 20 years and contrast the rants of Pauline Hanson's mob with the rogue elements of the open source software community. Hanson's mob started attacking me after students elected me as a representative. This pro-Google mob started attacking me after the free software community elected me as a representative. While Hanson's mob complained about my dedication to helping communities, Google's mob use me as a scapegoat, blaming me for all the strife in communities where some volunteers are disenfranchised. Yet in the latter case, they are not communities at all. They are exploitative organizations that keep volunteers off their membership rolls, a model that is barely a notch above modern day slavery. They don't allow members to join but if members ask questions about the money, they tell people we have been expelled. There is a strong smell of fraud in misrepresenting the true nature of membership.

While the defamation from Google is incredibly extreme, it is easy to see that my principles have remained constant over these decades. Whether in the case of the native title campaign (1997) or my efforts to document the doctored membership rolls of the FSFE in 2018, what drives me is a concern for all participants to have equity, dignity and justice. A system where some volunteers are excluded from elections in the open source world has an unusual odour, much like the Apartheid-esque phenomenon of excluding Indigenous Australians from the land.

When Google attacks independent volunteers, it is because they can't accept the principles outside their own worldview, just as Donald Trump can't accept the people who didn't vote for him. In a stunning role-reversal, while officials are releasing details of Trump's attempts to blackmail Georgia, a GAFA mob led by Google was exposed blackmailing Australia's parliament.

It is disturbing for me to see that rogue elements of Debian, FSFE and Pauline Hanson's One Nation have found something in common: doxing volunteers with personal attacks to drown out our principles. We find another synergy in the way fascist groups discredit people outside their monoculture, branding everything they don't agree with as spam. Like some archaeological discovery, we can trace this tool of groupthink back to a far-right web site from 1997. It is up there in the very first line of the doxing, the statement "Now I have said before that I don't like to receive unsolicited email". There is no way he could have received a message from my web site if he hadn't inserted his email address in the form to test it. To this day, fascists use the assertion of spamming to avoid questions and hobble people into groupthink.

Centenary of Federation, Australia, 2001

My efforts with Virtual Moreland were recognized with a Centenary of Federation certificate. Web sites hosted by the project were absorbed into other community hosting providers when I left Australia in 2002.

We also ran a training lab and portable Internet cafe based on thin-client computing, similar to the Linux Terminal Server Project, although that was only invented two years later.

Virtual Moreland lab

The government grant provided 1,000 hours of training to people from local community organizations making their first web site. Drupal and Wordpress didn't exist. I built my own Content Management System using PHP. This made it easier to train people. Thanks to the CMS, many of these local groups were keeping their web sites up to date.

This shows just how long I've been doing development with Debian. Earlier projects, while I was at high school, involved Slackware. When some newcomers arrive and start trying to erase the volunteers who lived in the era before Google, they are trying to erase a critical part of our heritage. Changing our history and our language is another synergy between the traditional fascists and those raiding the open source community. Google has even redefined the word fascism so it no longer includes their allies and apologists in society. It is both an attack upon society, which loses the insights from history, and an act of aggression against the target. When you've been doing something like Linux for this long, you don't just disappear on the whim of some Google puppet. When somebody stands up at a conference, pronounces herself to be a developer by fiat and incites a mob to humiliate real developers, it feels like she wants to cut off my arm. Maiming people like that is another tool of fascists. In Sierra Leone, the practice of canceling people has been taken to extremes by amputating hands and feet. The false claims of expulsions and demotions have the same intention: frustrating people's future ability to work, enforcing and perpetuating asymmetry between the fascist and their victim. Not everybody who wears a uniform behaves this way.

In any other domain, volunteers who give decades of service like this are given recognition and thanks.

From PUPs to Mobs

One of the more remarkable phenomena in Australian politics was the decision of businessman Clive Palmer to start his own party and pick novice candidates to run under his name in as many seats as possible.

Palmer, like Trump, is a businessman. Trump's slogan was Make America Great Again. Who copied who?

Clive Palmer, Make Australia Great

The billionaire Palmer won a seat in parliament and set a record for absenteeism. President Trump racked up a record amount of time and money spent on golf, mostly at his own properties.

But it turns out the copy-cat behaviour didn't start there. One of those almost randomly selected candidates of the Palmer United Party (PUP), Jacqui Lambie, was promising to fight like hell as early as 2014 in her campaign against poppy farms. In fact, Lambie has used the fight like hell slogan for everything from gay marriage to home defence.

The conclusion is that Trump's rabble-raising pitch may not even be his own creation; down under, it sounds like a cover act inspired by two of the most loathed politicians in Australia's far right, Pauline Hanson and Jacqui Lambie.

A deeper conclusion is that if fascists around the world are all a bunch of carbon copies, this debunks the central pillar of their platform, their argument that we should discriminate against people based on their place of origin.

24 January, 2021 04:20PM

January 23, 2021

Dirk Eddelbuettel

prrd 0.0.4: More tweaks

prrd facilitates the parallel running of reverse dependency checks when preparing R packages. It is used extensively for Rcpp, RcppArmadillo, RcppEigen, BH, and possibly others.

prrd screenshot image

The key idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development that is easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the (dated) screenshot (running six parallel workers, arranged in a split byobu session).

This release brings several smaller tweaks and improvements to the summary report that had accumulated in my use since the last release last April. We also updated the CI runners as one does these days.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.4 (2021-01-23)

  • Report summary mode is now compact, more robust and reports extended CRAN summaries. (Dirk via several changes)

  • Continuous Integration now uses run.sh from r-ci

My CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 January, 2021 10:31PM

Sylvain Beucler

Android Emulator Rebuild

Android Rebuilds provides freely-licensed builds of Android development tools from a Mountain View-based company.

The Emulator package moved to a separate component and build system.

Emulator 30 is now available, as unattended Docker build scripts and build documentation, as well as convenience binaries.

23 January, 2021 01:02PM

January 22, 2021

Robert McQueen

Launching Endless OS Foundation

Passion Led Us Here

How our for-profit company became a nonprofit, to better tackle the digital divide.

Originally posted on the Endless OS Foundation blog.

An 8-year journey to a nonprofit

On the 1st of April 2020, our for-profit Endless Mobile officially became a nonprofit as the Endless OS Foundation. Our launch as a nonprofit just as the global pandemic took hold was, predictably, hardly noticed, but for us the timing was incredible: as the world collectively asked “What can we do to help others in need?”, we framed our mission statement and launched our .org with the same very important question in mind. Endless always had a social impact mission at its heart, and the challenges related to students, families, and communities falling further into the digital divide during COVID-19 brought new urgency and purpose to our team’s decision to officially step into the social welfare space.

On April 1st 2020, our for-profit Endless Mobile officially became a nonprofit as the Endless OS Foundation, focused on the #DigitalDivide.

Our updated status was a long time coming: we began our transformation to a nonprofit organization in late 2019 with the realization that the true charter and passions of our team would be greatly accelerated without the constraints of for-profit goals, investors and sales strategies standing in the way of our mission of digital access and equity for all. 

But for 8 years we made a go of it commercially, headquartered in Silicon Valley and framing ourselves as a tech startup with access to the venture capital and partnerships on our doorstep. We believed that a successful commercial channel would be the most efficient way to scale the impact of bringing computer devices and access to communities in need. We still believe this – we’ve just learned through our experience that we don’t have the funding to enter the computer and OS marketplace head-on. With the social impact goal first, and the hope of any revenue a secondary goal, we have had many successes in those 8 years bridging the digital divide throughout the world, from Brazil, to Kenya, and the USA. We’ve learned a huge amount which will go on to inform our strategy as a nonprofit.

Endless always had a social impact mission at its heart. COVID-19 brought new urgency and purpose to our team’s decision to officially step into the social welfare space.

Our unique perspective

One thing we learned as a for-profit is that the OS and technology we’ve built has some unique properties which are hugely impactful as a working solution to digital equity barriers. And our experience deploying in the field around the world for 8 years has left us uniquely informed via many iterations and incremental improvements.

Endless OS designer in discussion with prospective user

With this knowledge in-hand, we’ve been refining our strategy throughout 2020 and now starting to focus on what it really means to become an effective nonprofit and make that impact. In many ways it is liberating to abandon the goals and constraints of being a for-profit entity, and in other ways it’s been a challenging journey for me and the team to adjust our way of thinking and let these for-profit notions and models go. Previously we exclusively built and sold a product that defined our success; and any impact we achieved was a secondary consequence of that success and seen through that lens. Now our success is defined purely in terms of social impact, and through our actions, those positive impacts can be made with or without our “product”. That means that we may develop and introduce technology to solve a problem, but it is equally as valid to find another organization’s existing offering and design a way to increase that positive impact and scale.

We develop technology to solve access equity issues, but it’s equally as valid to find another organization’s offering and partner in a way that increases their positive impact.

The analogy to Free and Open Source Software is very strong – while Endless has always used and contributed to a wide variety of FOSS projects, we’ve also had a tension where we’ve been trying to hold some pieces back and capture value – such as our own application or content ecosystem, our own hardware platform – necessarily making us competitors to other organisations even though they were hoping to achieve the same things as us. As a nonprofit we can let these ideas go and just pick the best partners and technologies to help the people we’re trying to reach.

School kids writing on paper

Digital equity … 4 barriers we need to overcome

In future, our decisions around which projects to build or engage with will revolve around 4 barriers to digital equity, and how our Endless OS, Endless projects, or our partners’ offerings can help to solve them. We define these 4 equity barriers as: barriers to devices, barriers to connectivity, barriers to literacy in terms of your ability to use the technology, and barriers to engagement in terms of whether using the system is rewarding and worthwhile.

We define the 4 digital equity barriers we exist to impact as:
1. barriers to devices
2. barriers to connectivity
3. barriers to literacy
4. barriers to engagement

It doesn’t matter who makes the solutions that break these barriers; what matters is how we assist in enabling people to use technology to gain access to the education and opportunities these barriers block. Our goal therefore is to simply ensure that solutions exist – building them ourselves and with partners such as the FOSS community and other nonprofits – proving them with real-world deployments, and sharing our results as widely as possible to allow for better adoption globally.

If we define our goal purely in terms of whether people are using Endless OS, we are effectively restricting the reach and scale of our solutions to the audience we can reach directly with Endless OS downloads, installs and propagation. Conversely, partnerships that scale impact are a win-win-win for us, our partners, and the communities we all serve. 

Engineering impact

Our Endless engineering roots and capabilities feed our unique ability to build and deploy all of our solutions, and the practical experience of deploying them gives us evidence and credibility as we advocate for their use. Either activity would be weaker without the other.

Our engineering roots and capabilities feed our unique ability to build and deploy digital divide solutions.

Our partners in various engineering communities will have already seen our change in approach. Particularly, with GNOME we are working hard to invest in upstream and reconcile the long-standing differences between our experience and GNOME. If successful, many more people can benefit from our work than just users of Endless OS. We’re working with Learning Equality on Kolibri to build a better app experience for Linux desktop users and bring content publishers into our ecosystem for the first time, and we’ve also taken our very own Hack, the immersive and fun destination for kids learning to code, released it for non-Endless systems on Flathub, and made it fully open-source.

Planning tasks with sticky notes on a whiteboard

What’s next for our OS?

What then is in store for the future of Endless OS, the place where we have invested so much time and planning through years of iterations? For the immediate future, we need the capacity to deploy everything we’ve built – all at once, to our partners. We built an OS that we feel is very unique and valuable, containing a number of world-firsts: first production OS shipped with OSTree, first Flatpak-only desktop, built-in support for updating OS and apps from USBs, while still providing a great deal of reliability and convenience for deployments in offline and educational-safe environments with great apps and content loaded on every system.

However, we need to find a way to deliver this Linux-based experience in a more efficient way, and we’d love to talk if you have ideas about how we can do this, perhaps as partners. Can the idea of “Endless OS” evolve to become a spec that is provided by different platforms in the future, maybe remixes of Debian, Fedora, openSUSE or Ubuntu? 

Build, Validate, Advocate

Beyond the OS, the Endless OS Foundation has identified multiple programs to help underserved communities, and in each case we are adopting our “build, validate, advocate” strategy. This approach underpins all of our projects: can we build the technology (or assist in the making), will a community in-need validate it by adoption, and can we inspire others by telling the story and advocating for its wider use?

We are adopting a “build, validate, advocate” strategy.
1. build the technology (or assist in the making)
2. validate by community adoption
3. advocate for its wider use

As examples, we have just launched the Endless Key (link) as an offline solution for students during the COVID-19 at-home distance learning challenges. This project is also establishing a first-ever partnership of well-known online educational brands to reach an underserved offline audience with valuable learning resources. We are developing a pay-as-you-go platform and new partnerships that will allow families to own laptops via micro-payments that are built directly into the operating system, even if they cannot qualify for standard retail financing. And during the pandemic, we’ve partnered with Teach For America to focus on very practical digital equity needs in the USA’s urban and rural communities.

One part of the world-wide digital divide solution

We are one solution provider for the complex matrix of issues known collectively as the #DigitalDivide, and these issues will not disappear after the pandemic. Digital equity was an issue long before COVID-19, and we are not so naive to think it can be solved by any single institution, or by the time the pandemic recedes. It will take time and a coalition of partnerships to win. We are in for the long-haul and we are always looking for partners, especially now as we are finding our feet in the nonprofit world. We’d love to hear from you, so please feel free to reach out to me – I’m ramcq on IRC, RocketChat, Twitter, LinkedIn or rob@endlessos.org.

22 January, 2021 05:00PM by ramcq

Bits from Debian

New Debian Maintainers (November and December 2020)

The following contributors were added as Debian Maintainers in the last two months:

  • Timo Röhling
  • Fabio Augusto De Muzio Tobich
  • Arun Kumar Pariyar
  • Francis Murtagh
  • William Desportes
  • Robin Gustafsson
  • Nicholas Guriev

Congratulations!

22 January, 2021 05:00PM by Jean-Pierre Giraud

Enrico Zini

Polishing nspawn-runner

This post is part of a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub.

gitlab-runner supports adding extra arguments to the custom scripts, and I can take advantage of that to pack all the various scripts that I prototyped so far into an all-in-one nspawn-runner command:

usage: nspawn-runner [-h] [-v] [--debug]
                     {chroot-create,chroot-login,prepare,run,cleanup,gitlab-config,toml}
                     ...

Manage systemd-nspawn machines for CI runs.

positional arguments:
  {chroot-create,chroot-login,prepare,run,cleanup,gitlab-config,toml}
                        sub-command help
    chroot-create       create a chroot that serves as a base for ephemeral
                        machines
    chroot-login        enter the chroot to perform maintenance
    prepare             start an ephemeral system for a CI run
    run                 run a command inside a CI machine
    cleanup             cleanup a CI machine after it's run
    gitlab-config       configuration step for gitlab-runner
    toml                output the toml configuration for the custom runner

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         verbose output
  --debug               verbose output

chroot maintenance

chroot-create and chroot-login are similar to what pbuilder, cowbuilder, schroot, debspawn and similar tools do.

They take only a chroot name, and default the remaining paths to where nspawn-runner expects things to be, under /var/lib/nspawn-runner.
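
For example, creating a new chroot and then entering it for maintenance looks like this:

nspawn-runner chroot-create bullseye
nspawn-runner chroot-login bullseye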

gitlab-runner setup

nspawn-runner toml <chroot-name> outputs a snippet to add to /etc/gitlab-runner/config.toml to configure the CI.

For example:

$ ./nspawn-runner toml buster
[[runners]]
  name="buster"
  url="TODO"
  token="TODO"
  executor = "custom"
  builds_dir = "/var/lib/nspawn-runner/.build"
  cache_dir = "/var/lib/nspawn-runner/.cache"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.custom]
    config_exec = "/home/enrico/…/nspawn-runner/nspawn-runner"
    config_args = ["gitlab-config"]
    config_exec_timeout = 200
    prepare_exec = "/home/enrico/…/nspawn-runner/nspawn-runner"
    prepare_args = ["prepare", "buster"]
    prepare_exec_timeout = 200
    run_exec = "/home/enrico/dev/nspawn-runner/nspawn-runner"
    run_args = ["run"]
    cleanup_exec = "/home/enrico/…/nspawn-runner/nspawn-runner"
    cleanup_args = ["cleanup"]
    cleanup_exec_timeout = 200
    graceful_kill_timeout = 200
    force_kill_timeout = 200

One needs to remember to set url and token, and the runner is configured.
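
Since the snippet is printed to standard output, wiring it up can be as simple as appending it (as root) and then editing those two fields:

nspawn-runner toml buster >> /etc/gitlab-runner/config.toml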

The end, for now

This is it, it works! Time will tell what issues or ideas will come up: for now, it's a pretty decent first version.

The various prepare, run, cleanup steps are generic enough that they can be used outside of gitlab-runner: feel free to build on them, and drop me a note if you find this useful!

Updated: issues noticed so far that could go into a new version:

  • updating the master chroot would disturb the running CI jobs that use it. Using nspawn's btrfs-specific features would prevent this problem, and possibly simplify the implementation even more.
  • New step! Trivially implementing support for multiple OS images

22 January, 2021 04:50PM

Assembling the custom runner

This post is part of a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub.

The plan

Back to custom runners, here's my plan:

  • config can be a noop
  • prepare starts the nspawn machine
  • run runs scripts inside the machine via systemd-run (machinectl shell doesn't forward exit codes)
  • cleanup runs machinectl stop

The scripts

Here are the scripts based on Federico's work:

base.sh with definitions sourced by all scripts:

MACHINE="run-$CUSTOM_ENV_CI_JOB_ID"
ROOTFS="/var/lib/gitlab-runner-custom-chroots/buster"
OVERLAY="/var/lib/gitlab-runner-custom-chroots/$MACHINE"

config.sh doing nothing:

#!/bin/sh
exit 0

prepare.sh starting the machine:

#!/bin/bash

source $(dirname "$0")/base.sh
set -eo pipefail

# trap errors as a CI system failure
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR

logger "gitlab CI: preparing $MACHINE"

mkdir -p "$OVERLAY"

systemd-run \
  -p 'KillMode=mixed' \
  -p 'Type=notify' \
  -p 'RestartForceExitStatus=133' \
  -p 'SuccessExitStatus=133' \
  -p 'Slice=machine.slice' \
  -p 'Delegate=yes' \
  -p 'TasksMax=16384' \
  -p 'WatchdogSec=3min' \
  systemd-nspawn --quiet -D $ROOTFS \
    --overlay="$ROOTFS:$OVERLAY:/"
    --machine="$MACHINE" --boot --notify-ready=yes

run.sh running the provided scripts in the machine:

#!/bin/bash
logger "gitlab CI: running $@"
source $(dirname "$0")/base.sh

set -eo pipefail
trap "exit $SYSTEM_FAILURE_EXIT_CODE" ERR

systemd-run --quiet --pipe --wait --machine="$MACHINE" /bin/bash < "$1"

cleanup.sh stopping the machine and removing the writable overlay directory:

#!/bin/bash
logger "gitlab CI: cleanup $@"
source $(dirname "$0")/base.sh

machinectl stop "$MACHINE"
rm -rf "$OVERLAY"

Trying out the plan

I tried a manual invocation of gitlab-runner, and it worked perfectly:

# mkdir /var/lib/gitlab-runner-custom-chroots/build/
# mkdir /var/lib/gitlab-runner-custom-chroots/cache/
# gitlab-runner exec custom \
    --builds-dir /var/lib/gitlab-runner-custom-chroots/build/ \
    --cache-dir /var/lib/gitlab-runner-custom-chroots/cache/ \
    --custom-config-exec /var/lib/gitlab-runner-custom-chroots/config.sh \
    --custom-prepare-exec /var/lib/gitlab-runner-custom-chroots/prepare.sh \
    --custom-run-exec /var/lib/gitlab-runner-custom-chroots/run.sh \
    --custom-cleanup-exec /var/lib/gitlab-runner-custom-chroots/cleanup.sh \
    tests
Runtime platform                                    arch=amd64 os=linux pid=18662 revision=775dd39d version=13.8.0
Running with gitlab-runner 13.8.0 (775dd39d)
Preparing the "custom" executor
Using Custom executor...
Running as unit: run-r1be98e274224456184cbdefc0690bc71.service
executor not supported                              job=1 project=0 referee=metrics
Preparing environment

Getting source from Git repository

Executing "step_script" stage of the job script
WARNING: Starting with version 14.0 the 'build_script' stage will be replaced with 'step_script': https://gitlab.com/gitlab-org/gitlab-runner/-/issues/26426

Job succeeded

Deploy

The remaining step is to deploy all this in /etc/gitlab-runner/config.toml:

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "nspawn runner"
  url = "http://gitlab.siweb.local/"
  token = "…"
  executor = "custom"
  builds_dir = "/var/lib/gitlab-runner-custom-chroots/build/"
  cache_dir = "/var/lib/gitlab-runner-custom-chroots/cache/"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.custom]
    config_exec = "/var/lib/gitlab-runner-custom-chroots/config.sh"
    config_exec_timeout = 200
    prepare_exec = "/var/lib/gitlab-runner-custom-chroots/prepare.sh"
    prepare_exec_timeout = 200
    run_exec = "/var/lib/gitlab-runner-custom-chroots/run.sh"
    cleanup_exec = "/var/lib/gitlab-runner-custom-chroots/cleanup.sh"
    cleanup_exec_timeout = 200
    graceful_kill_timeout = 200
    force_kill_timeout = 200

Next steps

My next step will be polishing all this in a way that makes deploying and maintaining a runner configuration easy.

22 January, 2021 04:40PM

Exploring nspawn for CIs

This post is part of a series about trying to setup a gitlab runner based on systemd-nspawn. I published the polished result as nspawn-runner on GitHub.

Here I try to figure out possible ways of invoking nspawn for the prepare, run, and cleanup steps of gitlab custom runners. The resulting invocations might be useful beyond GitLab's scope of application.

I begin with a chroot which will be the base for our build environments:

debootstrap --variant=minbase --include=git,build-essential buster workdir

Fully ephemeral nspawn

This would be fantastic: set up a reusable chroot, mount readonly, run the CI in a working directory mounted on tmpfs. It sets up quickly, it cleans up after itself, and it would make prepare and cleanup noops:

mkdir workdir/var/lib/gitlab-runner
systemd-nspawn --read-only --directory workdir --tmpfs /var/lib/gitlab-runner "$@"

However, run gets run multiple times, so I need the side effects of run to persist inside the chroot between runs.

Also, if the CI uses a large amount of disk space, tmpfs may get into trouble.

nspawn with overlay

Federico used --overlay to keep the base chroot readonly while allowing persistent writes on a temporary directory on the filesystem.

Note that using --overlay requires systemd and systemd-container from buster-backports because of systemd bug #3847.

Example:

mkdir -p tmp-overlay
systemd-nspawn --quiet -D workdir \
  --overlay="`pwd`/workdir:`pwd`/tmp-overlay:/"

I can run this twice, and changes in the file system will persist between systemd-nspawn executions. Great! However, any process will be killed at the end of each execution.

machinectl

I can give a name to systemd-nspawn invocations using --machine, and it allows me to run multiple commands during the machine lifespan using machinectl and systemd-run.

In theory machinectl can also fully manage chroots and disk images in /var/lib/machines, but I haven't found a way with machinectl to start multiple machines sharing the same underlying chroot.

It's ok, though: I managed to do that with systemd-nspawn invocations.

I can use the --machine=name argument to systemd-nspawn to make it visible to machinectl. I can use the --boot argument to systemd-nspawn to start enough infrastructure inside the container to allow machinectl to interact with it.
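
For example, two named machines can share the same base chroot through separate overlays; a sketch with made-up machine names and overlay directories, each command run in its own terminal:

mkdir -p overlay-a overlay-b
systemd-nspawn --quiet -D workdir \
    --overlay="`pwd`/workdir:`pwd`/overlay-a:/" \
    --machine=test-a --boot
systemd-nspawn --quiet -D workdir \
    --overlay="`pwd`/workdir:`pwd`/overlay-b:/" \
    --machine=test-b --boot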

This gives me any number of persistent, named running systems that share the same underlying chroot and can clean up after themselves. I can run commands in any of those systems as I like, and their side effects persist until a system is stopped.

The chroot needs systemd and dbus for machinectl to be able to interact with it:

debootstrap --variant=minbase --include=git,systemd,dbus,build-essential buster workdir

Let's boot the machine:

mkdir -p overlay
systemd-nspawn --quiet -D workdir \
    --overlay="`pwd`/workdir:`pwd`/overlay:/"
    --machine=test --boot

Let's try machinectl:

# machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
test    container systemd-nspawn debian 10      -

1 machines listed.
# machinectl shell --quiet test /bin/ls -la /
total 60
[…]

To run commands, rather than machinectl shell, I need to use systemd-run --wait --pipe --machine=name, otherwise machined won't forward the exit code. The result however is pretty good, with working stdin/stdout/stderr redirection and forwarded exit code.
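
As a quick check that the exit code really does come back (a sketch against the machine booted above):

# systemd-run --quiet --pipe --wait --machine=test /bin/sh -c 'exit 3'; echo $?
3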

Good, I'm getting somewhere.

The terminal where I ran systemd-nspawn is currently showing a nice getty for the booted system, which is cute, and not what I want for the setup process of a CI.

Spawning machines without needing a terminal

machinectl uses /lib/systemd/system/systemd-nspawn@.service to start machines. I suppose there's limited magic in there: start systemd-nspawn as a service, use --machine to give it a name, and machinectl manages it as if it started it itself.

What if, instead of installing a unit file for each CI run, I try to do the same thing with systemd-run?

systemd-run \
  -p 'KillMode=mixed' \
  -p 'Type=notify' \
  -p 'RestartForceExitStatus=133' \
  -p 'SuccessExitStatus=133' \
  -p 'Slice=machine.slice' \
  -p 'Delegate=yes' \
  -p 'TasksMax=16384' \
  -p 'WatchdogSec=3min' \
  systemd-nspawn --quiet -D `pwd`/workdir \
    --overlay="`pwd`/workdir:`pwd`/overlay:/" \
    --machine=test --boot

It works! I can interact with it using machinectl, and fine-tune DevicePolicy as needed to lock CI machines down.

This setup has a race condition where if I try to run a command inside the machine in the short time window before the machine has finished booting, it fails:

# systemd-run […] systemd-nspawn […] ; machinectl --quiet shell test /bin/ls -la /
Failed to get shell PTY: Protocol error
# machinectl shell test /bin/ls -la /
Connected to machine test. Press ^] three times within 1s to exit session.
total 60
[…]

systemd-nspawn has the option --notify-ready=yes that solves exactly this problem:

# systemd-run […] systemd-nspawn […] --notify-ready=yes ; machinectl --quiet shell test /bin/ls -la /
Running as unit: run-r5a405754f3b740158b3d9dd5e14ff611.service
total 60
[…]

On nspawn's side, I should now have all I need.

Next steps

My next step will be wrapping it all together in a GitLab runner; a sketch of the likely glue follows.
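For the curious: GitLab runner's "custom" executor delegates the prepare/run/cleanup stages to external scripts, so it looks like the natural glue here. A hypothetical sketch of the relevant config.toml section (all paths and names below are placeholders, not the final implementation):

[[runners]]
  name = "nspawn-runner"
  url = "https://gitlab.example.org/"
  token = "REDACTED"
  executor = "custom"
  [runners.custom]
    # Each stage maps onto one of the scripts discussed in this series.
    prepare_exec = "/usr/local/lib/nspawn-runner/prepare"
    run_exec = "/usr/local/lib/nspawn-runner/run"
    cleanup_exec = "/usr/local/lib/nspawn-runner/cleanup"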

22 January, 2021 04:30PM

January 21, 2021

Russell Coker

January 20, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

How others program

How do others program? I realized today that I've never actually seen it; in more than 30 years of coding, I've never really watched someone else write nontrivial code over a long period of time. I only see people's finished patches—and I know that the patches I send out for review sure don't look much like the code I initially wrote. (There are exceptions for small bugfixes and the like, of course.)

It's not like I'm dying to pair program with someone (it sounds like an incredibly draining task), and I would suppose what goes on on a coding livestream doesn't necessarily match what would happen off-stream, but it was a surprising thought.

20 January, 2021 06:57PM


hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, December 2020

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian project funding

In December, we put aside 2100 EUR to fund Debian projects. The first project proposal (a tracker.debian.org improvement for the security team) was received and quickly approved by the paid contributors; we then opened a request for bids, and the winner was announced today (it was easy, we had only one candidate). Hopefully this first project will be completed by the time of our next report.

We’re looking forward to receiving more projects from various Debian teams! Learn more about the rationale behind this initiative in this article.

Debian LTS contributors

In December, 12 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 7.0h (out of 14h assigned), thus carrying over 7h to January.
  • Ben Hutchings did 16.5h (out of 16h assigned and 9h from November), thus carrying over 8.5h to January.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 16.5h (out of 26h assigned), thus carrying over 9.5h to January.
  • Holger Levsen did 3.5h coordinating/managing the LTS team.
  • Markus Koschany did 26h (out of 26h assigned and 10.75h from November), thus carrying over 10.75h to January.
  • Ola Lundqvist did 9.5h (out of 12h assigned and 11h from November), thus carrying over 11.5h to January.
  • Roberto C. Sánchez did 18.5h (out of 26h assigned and 2.25h from November) and gave back the remaining 9.75h.
  • Sylvain Beucler did 26h (out of 26h assigned).
  • Thorsten Alteholz did 26h (out of 26h assigned).
  • Utkarsh Gupta did 26h (out of 26h assigned).

Evolution of the situation

December was a quiet month: we didn’t have a team meeting or any other unusual activity, and we released 43 DLAs.

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 25 packages needing an update.

This month we are pleased to welcome Deveryware as a new sponsor!

Thanks to our sponsors

Sponsors that joined recently are in bold.


20 January, 2021 09:39AM by Raphaël Hertzog

January 19, 2021

John Goerzen

Roundup of Secure Messengers with Off-The-Grid Capabilities (Distributed/Mesh Messengers)

Amid all the conversation about Signal, and the debate over decentralization, one thing has often not been raised: all of these things require an Internet connection.

“Of course,” you might say. “Internet is everywhere these days.” Well, not so much, and it turns out there are some very good reasons that people might want messengers that work offline. Here are some examples:

  • Internet-using messengers leak certain metadata (e.g., that a person is using it, or perhaps a sophisticated adversary could use timing analysis to determine that two people are talking using it)
  • Cell signal outages due to natural disaster, large influx of people (protests, unusual sporting events, festivals, etc), or other factors
  • Locations where cell signals are not available (rural areas, camping locations, wilderness areas, etc.)
  • Devices that don’t have cell data capability (many tablets, phones that have had service expire, etc.)

How do they work?

These all use some form of local radio signal. Some, such as Briar, may use short-range Bluetooth and Wifi, while others use radios such as LoRa that can reach several miles with low power. I’ve written quite a bit about LoRa before, and its unique low-speed but extreme-distance radio capabilities even on low power.

One common thread through these is that most of them are Android-only, though many are compatible with F-Droid and privacy-enhanced Android distributions.

Every item on this list uses full end-to-end encryption (E2EE).

Let’s dive on in.

Briar

Of all the options mentioned here, Briar is the one that bridges the traditional Internet-based approach with alternative options the best. It offers three ways for distributing data:

  • Over the Internet, via Tor onion services
  • Via Bluetooth to nearby devices
  • Via Wifi, to other devices connected to the same access point, even if Internet isn’t working on that AP

As far as I can tell, there is no centralized server in Briar at all. Your “account”, such as it is, lives entirely within your device; if you wipe your device, you will have to make a new account and re-establish contacts. The use of Tor is also neat to see; it ensures that an adversary can’t tell, just from that, that you’re using Briar at all, though of course timing analysis may still be possible (and Bluetooth and Wifi use may reveal something about who is communicating).

Briar features several types of messages (detailed in the manual), which really are just different spins on communication, framed with metaphors people are already familiar with:

  • Basic 1-to-1 private messaging
  • “Private groups”, in which one particular person invites people to the chat group, and can dissolve it at any time
  • “Forums”, similar to private groups, but any existing member can invite more people to them, and they continue to exist until the last member leaves (founder isn’t special)
  • “Blogs”, messages that are automatically shared with all your contacts

By default, Briar raises an audible notification for incoming messages of all types. This is configurable for each type.

“Blogs” have a way to reblog (there’s even a built-in RSS reader to facilitate that), but framed a different way, they are broadcast messages. They could, for instance, be useful for a “send help” message to everyone (assuming that people haven’t all shut off notifications of blogs due to others using them in different ways).

Briar’s how it works page has an illustration specifically of how blogs are distributed. I’m unclear on some of the details, and to what extent this applies to other kinds of messages, but one thing that you can notice from this is that a person A could write a broadcast message without Internet access, person B could receive it via Bluetooth or whatever, and then when person B gets Internet access again, the post could be distributed more widely. However, it doesn’t appear that Briar is really a full mesh, since only known contacts in the distribution path for the message would repeat it.

There are some downsides to Briar. One is that, since an account is fully localized to a device, one must have a separate account for each device. That can lead to contacts having to pick a specific device to send a message to. There is an online indicator, which may help, but it’s definitely not the kind of seamless experience you get from Internet-only messengers. Also, it doesn’t support migrating to a new phone, live voice/video calls, or attachments, but attachments are in the works.

All in all, a solid communicator, and is the only one on this list that works 100% with the hardware everyone already has. While Bluetooth and Wifi have far more limited range than the other entries, there is undeniably convenience in not needing any additional hardware, and it may be particularly helpful when extra bags/pockets aren’t available. Also, Briar is fully Open Source.

Meshtastic

Meshtastic is a radio-first LoRa mesh project. What do I mean by radio-first? Well, basically cell phones are how you interact with Meshtastic, but they are optional. The hardware costs about $30 and the batteries last about 8 days. Range between nodes is a few miles in typical conditions (up to 11km / 7mi in ideal conditions), but nodes act as repeaters, so it is quite conceivable to just drop a node “in the middle” if you and contacts will be far apart. The project estimates that around 2000 nodes are in operation, and the network is stronger the more nodes are around.

The getting started site describes how to build one.

Most Meshtastic device builds have a screen and some buttons. They can be used independently from the Android app to display received messages, distance and bearing to other devices (assuming both have a GPS enabled), etc. This video is an introduction showing it off, this one goes over the hardware buttons. So even if your phone is dead, you can at least know where your friends are. Incidentally, the phone links up to the radio board using Bluetooth, and can provide a location source if you didn’t include one in your build. There are ideas about solar power for Meshtastic devices, too.

Meshtastic doesn’t, as far as I know, have an option for routing communication over the Internet, but the devices appear to be very thoughtfully-engineered and easy enough to put together. This one is definitely on my list to try.

Ripple-based devices

This is based on the LoRa Mesh Radio Instructables project, and is similar in concept to Meshtastic. It uses similar hardware, a similar app, but also has an option with a QWERTY hardware keyboard available, for those that want completely phone-free operation while still being able to send messages.

There are a number of related projects posted at Instructables: a GPS tracker, some sensors, etc. These are variations on the same basic concept.

These use the Ripple firmware, which is not open source, so I haven’t pursued it further.

GoTenna

For people that want less of a DIY model, and don’t mind proprietary solutions, there are two I’ll mention. The first is GoTenna Mesh, which is LoRa-based and sells units for $90 each. However, there are significant community concerns about the longevity of the project, as GoTenna has re-focused on government and corporate work. The Android app hasn’t been updated in 6 months despite a number of reviews citing issues, and the iOS app is also crusty.

Beartooth

Even more expensive at $125 each is the Beartooth. Also a proprietary option; I haven’t looked into it more, but they are specifically targeting backwoods types of markets.

Do not use: Bridgefy

Bridgefy was briefly prominent since it was used during the Hong Kong protests. However, numerous vulnerabilities have been demonstrated, and the developers have said they are re-working the app to address them. I wouldn’t recommend it for now.

Alternatives: GMRS handhelds

In the USA, GMRS voice handhelds are widely available. Although a license is required, it is simple (no exam) and cheap ($35) and extends to a whole family. GMRS radios also interoperate with FRS radios, which require no license and share some frequencies, but are limited to lower power (though are often sufficient).

Handheld GMRS radios that use up to 5W of power are readily available. A voice signal is a lot harder to carry for a long distance than a very low-bandwidth digital one, so even with much more power you will probably not get the same kind of range you will with something like Meshtastic, and they don’t come with any kind of security or encryption at all. However, for basic communication, they are often a useful tool.

19 January, 2021 08:37PM by John Goerzen

Russ Allbery

Review: The Secret Barrister

Review: The Secret Barrister, by The Secret Barrister

Publisher: Picador
Copyright: 2018
Printing: 2019
ISBN: 1-5098-4115-6
Format: Kindle
Pages: 344

The Secret Barrister is a survey and critique of the criminal legal system of England and Wales. The author is an anonymous barrister who writes a legal blog of the same name (which I have not read).

A brief and simplified primer for those who, like me, are familiar with the US legal system but not the English one: A barrister is a lawyer who argues cases in court, as distinct from a solicitor who does all the other legal work (and may make limited court appearances). If you need criminal legal help in England and Wales, you hire a solicitor, and they are your primary source of legal advice. If your case goes to court, your solicitor will generally (not always) refer the work of arguing your case before a judge and jury to a barrister and "instruct" them in the details of your argument. The job of the barrister is then to handle the courtroom trial, offer trial-specific legal advice, and translate your defense (or the crown's prosecution) into persuasive courtroom arguments.

Unlike the United States, with its extremely sharp distinction between prosecutors and criminal defense attorneys, criminal barristers in England and Wales argue both prosecutions and defenses depending on who hires them. (That said, the impression I got from this book is that the creation of the Crown Prosecution Service is moving England closer to the US model and more prosecutions are now handled by barristers employed directly by the CPS, who I assume do not take defense cases.) Barristers follow the cab-rank rule, which means that, like a taxicab, they are professionally obligated to represent people on a first-come, first-served basis and are not allowed to pick and choose clients.

(Throughout, I'm referencing the legal system of England and Wales because the author restricts his comments to it. Presumably this is because the Scottish — and Northern Irish? — legal systems are different yet again in ways I do not know.)

If details like this sound surprising, you can see the appeal of this book to me. It's easy, in the US, to have a vast ignorance about the legal systems of other countries or even the possibility of different systems, which makes it hard to see how our system could be improved. I had a superficial assumption that since US law started as English common law, the US and English legal systems would be substantially similar. And they are to an extent; they're both adversarial rather than inquisitorial, for example (more on that in a moment). But the current system of criminal prosecution evolved long after US independence and thus evolved differently despite similar legal foundations. Those differences are helpful for this American to ponder the road not taken and the impact of our respective choices.

That said, explaining the criminal legal system to Americans isn't the author's purpose. The first fifty pages are that beginner's overview, since apparently even folks who live in England are confused by the ubiquity of US legal dramas (not that those are very accurate representations of the US legal system either). The rest of the book, and its primary purpose, is an examination of the system's failings, starting with the magistrates' courts (which often use lay judges and try what in the US would be called misdemeanors, although as discussed in this book their scope is expanding). Other topics include problems with bail, how prosecution is structured, how victims and witnesses are handled, legal aid, sentencing, and the absurd inadequacy of compensation for erroneous convictions.

The most useful part of this book for me, apart from the legal system introduction, was the two chapters the author spends arguing first for and then against replacing an adversarial system with an inquisitorial system (the French criminal justice system, for example). When one is as depressed about the state of one's justice system as both I and the author are, something radically different sounds appealing. The author first makes a solid case for the inquisitorial system and then tries to demolish it, still favoring the adversarial system, and I liked that argument construction.

The argument in favor of an adversarial system is solid and convincing, but it's also depressing. It's the argument of someone who has seen the corruption, sloppiness, and political motivations in an adversarial system and fears what would happen if they were able to run rampant under a fig leaf of disinterested objectivity. I can't disagree, particularly when starting from an adversarial system, but this argument feels profoundly cynical. It reminds me of the libertarian argument for capitalism: humans are irredeemably awful, greed and self-interest are the only reliable or universal human motives, and therefore the only economic system that can work is one based on and built to harness greed, because expecting any positive characteristics from humans collectively is hopelessly naive. The author of this book is not quite that negative in their argument for an adversarial system, but it's essentially the same reasoning: the only way a system can be vaguely honest is if it's constantly questioned and attacked. It can never be trusted to be objective on its own terms. I wish the author had spent more time on the obvious counter-argument: when the system is designed for adversarial combat, it normalizes and even valorizes every dirty tactic that might result in a victory. The system reinforces our worst impulses, not to mention grinding up and destroying people who cannot afford their own dirty tricks.

The author proposes several explanations for the problems they see in the criminal legal system, including "tough on crime" nonsense from politicians that sounds familiar to this American reader. Most problems, though, they trace back to lack of funding: of the police, of the courts, of the prosecutors, and of legal aid. I don't know enough about English politics to have an independent opinion on this argument, but the stories of outsourcing to the lowest bidder, overworked civil servants, ridiculously low compensation rates, flawed metrics like conviction rates, and headline-driven political posturing that doesn't extend to investing in necessary infrastructure like better case-tracking systems sounds depressingly familiar.

This is one of those books where I appreciated the content but not the writing. It's not horrible, but the sentences are ponderous and strained and the author is a bit too fond of two-dollar words. They also have a dramatic and self-deprecating way of describing their own work that I suspect they thought was funny but that I found grating. By the end of this book, I was irritated enough that I can't recommend it. But the content was interesting, even the critique of a political system that isn't mine, and it prompted some new thoughts on the difficulties of creating a fair justice system. If you can deal with the author's writing style, you may also enjoy it.

Rating: 6 out of 10

19 January, 2021 04:00AM

January 18, 2021

hackergotchi for Evgeni Golov

Evgeni Golov

building a simple KVM switch for 30€

Prompted by tweets from Lesley and Dave, I thought about KVM switches again and came up with a rather cheap solution to my individual situation (YMMV, as usual).

As I wrote last year, my desk has one monitor, keyboard and mouse and two computers. Since writing that post I got a new (bigger) monitor, but also a USB switch again (a DIGITUS USB 3.0 Sharing Switch) - this time one that doesn't freak out my dock \o/

However, having to switch the computer in use in two places (USB and monitor) is rather inconvenient, and getting a KVM switch that can do 4K@60Hz was out of the question.

Luckily, hackers gonna hack everything, and not only receipt printers (😉). There is a tool called ddcutil that can talk to your monitor and change various settings. And udev can execute commands when (USB) devices connect… You see where this is going?

After installing the package (available both in Debian and Fedora), we can inspect our system with ddcutil detect. You might have to load the i2c_dev module (thanks Philip!) before this works -- it seems to be loaded automatically on my Fedora, but you never know 😅.

$ sudo ddcutil detect
Invalid display
   I2C bus:             /dev/i2c-4
   EDID synopsis:
      Mfg id:           BOE
      Model:
      Serial number:
      Manufacture year: 2017
      EDID version:     1.4
   DDC communication failed
   This is an eDP laptop display. Laptop displays do not support DDC/CI.

Invalid display
   I2C bus:             /dev/i2c-5
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   DDC communication failed

Display 1
   I2C bus:             /dev/i2c-7
   EDID synopsis:
      Mfg id:           AOC
      Model:            U2790B
      Serial number:
      Manufacture year: 2020
      EDID version:     1.4
   VCP version:         2.2

The first detected display is the built-in one in my laptop, and those don't support DDC anyway. The second one is a ghost (see ddcutil#160) which we can ignore. But the third one is the one we can (and will) control. As this is the only valid display ddcutil found, we don't need to specify which display to talk to in the following commands. Otherwise we'd have to add something like --display 1 to them.

A ddcutil capabilities will show us what the monitor is capable of (or what it thinks it is capable of; I've heard some give rather buggy output here) -- we're mostly interested in the "Input Source" feature (Virtual Control Panel (VCP) code 0x60):

$ sudo ddcutil capabilities
…
   Feature: 60 (Input Source)
      Values:
         0f: DisplayPort-1
         11: HDMI-1
         12: HDMI-2
…

Seems mine supports it, and I should be able to switch the inputs by jumping between 0x0f, 0x11 and 0x12. You can see other values defined by the spec in ddcutil vcpinfo 60 --verbose, some monitors are using wrong values for their inputs 🙄. Let's see if ddcutil getvcp agrees that I'm using DisplayPort now:

$ sudo ddcutil getvcp 0x60
VCP code 0x60 (Input Source                  ): DisplayPort-1 (sl=0x0f)

And try switching to HDMI-1 using ddcutil setvcp:

$ sudo ddcutil setvcp 0x60 0x11
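That pair of commands is enough for a manual toggle as well. A small sketch of my own (not from the ddcutil docs) that parses the sl= value from the getvcp output shown above:

#!/bin/bash
# Switch between the two inputs, depending on the currently active one.
current=$(sudo ddcutil getvcp 0x60 | grep -o 'sl=0x[0-9a-f]*' | cut -d= -f2)
if [ "$current" = "0x0f" ]; then
    sudo ddcutil setvcp 0x60 0x11   # on DisplayPort-1? switch to HDMI-1
else
    sudo ddcutil setvcp 0x60 0x0f   # otherwise, back to DisplayPort-1
fi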

Cool, cool. So now we just need a way to trigger input source switching based on some event…

There are three devices connected to my USB switch: my keyboard, my mouse and my Yubikey. I do use the mouse and the Yubikey while the laptop is not docked too, so these are not good indicators that the switch has been turned to the laptop. But the keyboard is!

Let's see what vendor and product IDs it has, so we can write an udev rule for it:

$ lsusb
…
Bus 005 Device 006: ID 17ef:6047 Lenovo ThinkPad Compact Keyboard with TrackPoint
…

Okay, so let's call ddcutil setvcp 0x60 0x0f when the USB device 0x17ef:0x6047 is added to the system:

$ sudo vim /etc/udev/rules.d/99-ddcutil.rules
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="17ef", ATTR{idProduct}=="6047", RUN+="/usr/bin/ddcutil setvcp 0x60 0x0f"
$ sudo udevadm control --reload

And done! Whenever I connect my keyboard now, it will force my screen to use DisplayPort-1.

On my workstation, I deployed the same rule, but with ddcutil setvcp 0x60 0x11 to switch to HDMI-1 and my cheap not-really-KVM-but-in-the-end-KVM-USB-switch is done, for the price of one USB switch (~30€).

Note: if you want to use ddcutil with a Lenovo Thunderbolt 3 Dock (or any other dock using Displayport Multi-Stream Transport (MST)), you'll need kernel 5.10 or newer, which fixes a bug that prevents ddcutil from talking to the monitor using I²C.

18 January, 2021 01:25PM by evgeni

Craig Small

Percent CPU for processes

The ps program gives a snapshot of the processes running on your Unix-like system. On most Linux installations, this will be the ps program from the procps project.

While you can get a lot of information from the tool, a lot of the fields need further explanation or can give “wrong” or confusing information; or putting it another way, they provide the right information that looks wrong.

One of these confusing fields is the %CPU or pcpu field. You can see this as the third field with the ps aux command. You only really need the u option to see it, but ps aux is a pretty common invocation.

More than 100%?

This post was inspired by procps issue 186, where the submitter said that the sum of the %CPU of all processes cannot be more than the number of CPUs times 100%. If you have 1 CPU, then the sum of %CPU for all processes should be 100% or less; if you have 16 CPUs, then 1600% is your maximum number.

Some people put the oddity of over 100% CPU down to some rounding thing gone wrong, and at first I thought that too; except I know we get a lot of reports comparing the top header CPU load with the process load and finding they don’t line up, and that’s because “they’re different”.

The trick here is: ps is reporting a percentage of what? Or, perhaps to give a better clue, a percentage of when?

PCPU Calculations

So to get to the bottom of this, let’s look at the relevant code. In ps/output.c we have a function pr_pcpu that prints the percent CPU. The relevant lines are:

  total_time = pp->utime + pp->stime;
  if(include_dead_children) total_time += (pp->cutime + pp->cstime);
  seconds = cook_etime(pp);
  if(seconds) pcpu = (total_time * 1000ULL / Hertz) / seconds;

OK, ignoring the include_dead_children line (you get this from the S option; it means you include the time this process waited for its child processes) and the scaling (process times are in jiffies, and we keep the CPU value as 0 to 999 for reasons), you can reduce this down to:

%CPU = (T_utime + T_stime) / T_etime

So we find the amount of time the CPU(s) have been busy on this process, either in userland or in the system, add them together, then divide the sum by the elapsed time. The utime and stime counters increment like a car’s odometer. So if a process uses one jiffy of CPU time in userland, that counter goes to 1. If it does it again a few seconds later, then that counter goes to 2.

To give an example, if a process has run for ten seconds and within those ten seconds the CPU has been busy in userland for that process the whole time, then we get 10/10 = 100%, which makes sense.
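To make the arithmetic concrete, here is a rough bash sketch of my own (not from procps) that recomputes the same figure for a single process straight from /proc; it assumes Linux and a comm field without spaces (see proc(5) for the field numbers):

#!/bin/bash
pid=${1:-$$}
# Fields 14, 15 and 22 of /proc/PID/stat: utime, stime, starttime (in ticks).
read -r utime stime starttime <<< "$(awk '{print $14, $15, $22}' "/proc/$pid/stat")"
hertz=$(getconf CLK_TCK)
uptime=$(awk '{print $1}' /proc/uptime)
# Elapsed wall-clock seconds since the process started.
elapsed=$(echo "$uptime - $starttime / $hertz" | bc -l)
# Same formula as ps: CPU time used, divided by time alive.
echo "scale=1; 100 * ($utime + $stime) / $hertz / $elapsed" | bc -l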

Not all Start times are the same

Let’s take another example: a process still consumes ten seconds of CPU time but has been running for twenty seconds; the answer is 10/20 or 50%. On our single-CPU example system, both of these cannot be running at the same time, otherwise we would have 150% CPU utilisation, which is not possible.

However, let’s adjust this slightly. We have assumed uniform utilisation. But take the following scenario:

  • At time T: Process P1 starts and uses 100% CPU
  • At time T+10 seconds: Process P1 stops using CPU but still runs, perhaps waiting for I/O or sleeping.
  • Also at time T+10 seconds: Process P2 starts and uses 100% CPU
  • At time T+20 we run the ps command and look at the %CPU column

The output for ps -o times,etimes,pcpu,comm would look something like:

    TIME ELAPSED %CPU COMMAND
      10      20   50 P1
      10      10  100 P2

What we will see is P1 has 10/20 or 50% CPU and P2 has 10/10 or 100% CPU. Add those up, and you have 150% CPU, magic!

The key here is the ELAPSED column. P1 gives you the CPU utilisation across 20 seconds of system time, and P2 the CPU utilisation across only 10 seconds. If you directly add them together, you get the wrong answer.

What’s the point of %CPU?

The %CPU column probably gives results that a lot of people are not expecting, so what’s the point of it? Don’t use it to see why the CPU is running hot; you can see above that those two processes were working the CPU hard at different times. What it is useful for is to see how “busy” a process is, but be warned: it’s an average. It’s helpful for something that starts busy, but if a process idles or hardly uses CPU for a week and then goes bananas, you won’t see it.

The top program, because a lot of its statistics are deltas from the last refresh, is a much better program for this sort of information about what is happening right now.

18 January, 2021 09:39AM by Dropbear Blog

January 17, 2021

hackergotchi for Wouter Verhelst

Wouter Verhelst

On Statements, Facts, Hypotheses, Science, Religion, and Opinions

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional volleyball player"; but the statement "Wouter Verhelst is a professional volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future (which at this point gives it a 1 in 24,435,180 chance of being true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely, however, it will instead become a false statement.
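(As a check on that figure: the South African Powerball draws 5 of 45 main numbers plus a powerball from 20, giving C(45,5) × 20 = 1,221,759 × 20 = 24,435,180 equally likely outcomes, so a single guess indeed has a 1 in 24,435,180 chance.)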

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar that they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting) that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. The two do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there are three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a larger cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/

Anyway...


17 January, 2021 02:03PM

Software available through Extrepo

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo and then running extrepo enable <reponame>, where <reponame> is the name of the repository, as in the sketch below.
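For instance, enabling the vscodium repository described below would look something like this (the binary package name codium is my assumption of what that repository ships):

$ sudo apt install extrepo
$ sudo extrepo enable vscodium
$ sudo apt update
$ sudo apt install codium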

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq and bareos, contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more) contain more recent versions of software packaged in Debian already by the same maintainer of that package repository. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., the codium version of Visual Studio Code is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. They must first be enabled before they can be used; this can be accomplished through /etc/extrepo/config.yaml.

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

17 January, 2021 02:03PM

Dear Google

... Why do you have to be so effing difficult about a YouTube API project that is used for a single event per year?

FOSDEM creates 600+ videos on a yearly basis. There is no way I am going to manually upload 600+ videos through your webinterface, so we use the API you provide, using a script written by Stefano Rivera. This script grabs video filenames and metadata from a YAML file, and then uses your APIs to upload said videos with said metadata. It works quite well. I run it from cron, and it uploads files until the quota is exhausted, then waits until the next time the cron job runs. It runs so well, that the first time we used it, we could upload 50+ videos on a daily basis, and so the uploads were done as soon as all the videos were created, which was a few months after the event. Cool!

The second time we used the script, it did not work at all. We asked one of our keynote speakers, who happened to be some hotshot at your company, to help us out. He contacted the YouTube people, and whatever had been broken was quickly fixed, so yay, uploads worked again.

I found out later that this is actually a normal thing if you don't use your API quota for 90 days or more. Because it's happened to us every bloody year.

For the 2020 event, rather than going through back channels (which happened to be unavailable this edition), I tried to use your normal ways of unblocking the API project. This involves creating a screencast of a bloody command line script and describing various things that don't apply to FOSDEM and ghaah shoot me now so meh, I created a new API project instead, and had the uploads go through that. Doing so gives me a limited quota that only allows about 5 or 6 videos per day, but that's fine, it gives people subscribed to our channel the time to actually watch all the videos while they're being uploaded, rather than being presented with a boatload of videos that they can never watch in a day. Also it doesn't overload subscribers, so yay.

About three months ago, I started uploading videos. Since then, every day, the "fosdemtalks" channel on YouTube has published five or six videos.

Given that, imagine my surprise when I found this in my mailbox this morning...

Google lies, claiming that my YouTube API project isn't being used for 90 days and informing me that it will be disabled

This is an outright lie, Google.

The project has been created 90 days ago, yes, that's correct. It has been used every day since then to upload videos.

I guess that means I'll have to deal with your broken automatic content filters to try and get stuff unblocked...

... or I could just give up and not do this anymore. After all, all the FOSDEM content is available on our public video host, too.

17 January, 2021 02:03PM

SReview 0.6

... isn't ready yet, but it's getting there.

I had planned to release a new version of SReview, my online video review and transcoding system that I wrote originally for FOSDEM but is being used for DebConf, too, after it was set up and running properly for FOSDEM 2020. However, things got a bit busy (both in my personal life and in the world at large), so it fell a bit by the wayside.

I've now also been working on things a bit more, in preparation for an improved administrator's interface, and have started implementing a REST API to deal with talks etc through HTTP calls. This seems to be coming along nicely, thanks to OpenAPI and the Mojolicious plugin for parsing that. I can now design the API nicely, and autogenerate client side libraries to call them.

While at it, because libmojolicious-plugin-openapi-perl isn't available in Debian 10 "buster", I moved the docker containers over from stable to testing. This revealed that both bs1770gain and inkscape changed their command line incompatibly, resulting in me having to work around those incompatibilities. The good news is that I managed to do so in a way that keeps running SReview on Debian 10 viable, provided one installs Mojolicious::Plugin::OpenAPI from CPAN rather than from a Debian package. Or installs a backport of that package, of course. Or, heck, uses the Docker containers in a kubernetes environment or some such -- I'd love to see someone use that in production.

Anyway, I'm still finishing the API, and the implementation of that API and the test suite that ensures the API works correctly, but progress is happening; and as soon as things seem to be working properly, I'll do a release of SReview 0.6, and will upload that to Debian.

Hopefully that'll be soon.

17 January, 2021 02:03PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Yesterday was our monthly Debian meeting.

Yesterday was our monthly Debian meeting. We do one every month for Tokyo and Kansai combined, because it's online and there is no reason to split for now. I presented what I learnt about nodejs packaging in Debian. This time, I started using Emacs for the presentation, presenting a PDF file. This week I switched most of my login shells to emacsclient, and am experimenting. It's interesting how many things break, and how much I depended on .bashrc and .profile being loaded. But most things work, and I don't need much outside of emacs...

17 January, 2021 01:39AM by Junichi Uekawa

January 16, 2021

Abhijith PA

Transition from Thunderbird to Mutt

I was doing OK with Thunderbird and enigmail (though it has many problems). Normally I go through changelogs before updating packages and rarely do a complete upgrade of my machine. A couple of days ago I did a complete upgrade of the system, which updated my Thunderbird to the latest version and threw out the enigmail plugin in favour of the native openPGP support. There is a blog post from Mozilla which I should’ve read earlier. Thunderbird’s built-in openPGP functionality is still experimental, at least not ready for my workflow. I could’ve downgraded to version 68, but I chose to move to my secondary MUA, mutt. I was using mutt for emails and newsletters that I check twice a year or so.

So I started configuring mutt to handle my big mailboxes. It took three evenings to configure mutt to my workflow. Though the basic setup can be done in less than an hour, it was the small nitpicks that consumed much of my time. Currently I have isync to pull and keep mails offline, mutt to read, msmtp to send, abook as the email address book, and urlview to see the links in mail. I am still learning notmuch and the virtual mailbox ways of filtering.
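For reference, here is a minimal ~/.mbsyncrc sketch for a single IMAP account; host, user, paths, and the password command are placeholders, and newer isync releases rename Master/Slave to Far/Near:

IMAPAccount personal
Host imap.example.com
User user@example.com
PassCmd "pass show mail/personal"
SSLType IMAPS

IMAPStore personal-remote
Account personal

MaildirStore personal-local
Path ~/mail/personal/
Inbox ~/mail/personal/Inbox
SubFolders Verbatim

Channel personal
Master :personal-remote:
Slave :personal-local:
Patterns *
Create Both
SyncState *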

Mutt

There are a ton of articles out there about configuring mutt and everything related to it. But I found certain configs very hard to get right, so I will write those down.

  • As a long-time Thunderbird user, I still want my mailbox index to look a certain way. I used:

     set date_format="%d/%m/%y %I:%M %p"
     set index_format="%3C | %Z %?X?@& ? %-22.17L %-5.50s %> %-22.20D"
    

    This gives the following order - serial number, various flags (if there is an attachment, it will show ‘@’), sender, subject, and at the extreme right the date and time (12h local) of the mail

  • If you use nano to write mails, you can use
    set editor="nano --syntax=email -r 70 -b"
    

    to get a 70-character line length and mail-related syntax highlighting

  • mutt has a way to trigger a notification on new mail: the new_mail_command setting can be used to execute a custom script upon new mail, such as running notify-send. There are many standalone mail notifiers, such as mailnag, mail notification, etc. They all just felt bulky to me, so I ended up making this.

    #!/bin/sh
    accounts="$(awk '/^Channel/ {print $2}' "$MBSYNCRC")"
    for account in $accounts; do
         acc="$(echo "$account" | sed "s/.*\///")"
         new=$(find "$MAILBOX/$acc/Inbox/new/" -type f -newer "$MUTT/.mailsynclastrun" 2> /dev/null)
         newcount=$(echo "$new" | sed '/^\s*$/d' | wc -l)
         if [ "$newcount" -gt "0" ]; then
                 for file in $new; do
                 # Extract subject and sender from mail.
                 from=$(awk '/^From: / && ++n ==1,/^\<.*\>:/' "$file" | perl -CS -MEncode -ne 'print decode("MIME-Header", $_)' | awk '{ $1=""; if (NF>=3)$NF=""; print $0 }' | sed 's/^[[:blank:]]*[\"'\''\<]*//;s/[\"'\''\>]*[[:blank:]]*$//')
                 subject=$(awk '/^Subject: / && ++n == 1,/^\<.*\>: / && ++i == 2' "$file" | head -n 1 | perl -CS -MEncode -ne 'print decode("MIME-Header", $_)' | sed 's/^Subject: //' | sed 's/^{[[:blank:]]*[\"'\''\<]*//;s/[\"'\''\>]*[[:blank:]]*$//' | tr -d '\n')
                 displays="$(pgrep -a Xorg | grep -wo "[0-9]*:[0-9]\+")"
                         for x in $displays; do
                                 export DISPLAY=$x
                                 notify-send -i $MUTT/mutt.png -t 5000 "$account received new message:" "$from: <i>$subject</i>"
                         done
                 done
         fi
    done
    touch "$MUTT/.mailsynclastrun" 
    

    And hooked it to new_mail_command, as shown below. This code snippet is from Luke Smith’s mutt-wizard, which I barely modified to meet my needs. Currently it only looks into ‘Inbox’; I need to modify it to check other mailbox folders in the future.
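    The muttrc side of the hook is a single line (the script path here is hypothetical):

    set new_mail_command = "~/.config/mutt/newmail.sh"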

So far, everything going okay.

Cons

  • sometimes mbsync throws EOF and ‘secret key not found’ errors.
  • searching is still a pain in mutt
  • nano’s spell checker also checks the quoted text I am replying to.

More to come

Well, for now I have moved the mail part away from Thunderbird. But Thunderbird was more than a MUA to me: it was my RSS reader, calendar and to-do list manager. I will write more about those once I make a complete transition.

16 January, 2021 04:37AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

It's been 20 years since I became a Debian Developer.

It's been 20 years since I became a Debian Developer. Lots of fun things happened, and I think fondly of the team. I am no longer active for the past 10 years due to family reasons, and it's surprising that I have been inactive for that long. I still use Debian, and I still participate in the local Debian meetings.

16 January, 2021 12:22AM by Junichi Uekawa

January 15, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.6: Some Updates

rcpp logo

The Rcpp team is proud to announce release 1.0.6 of Rcpp which arrived at CRAN earlier today, and has been uploaded to Debian too. Windows and macOS builds should appear at CRAN in the next few days. This marks the first release on the new six-months cycle announced with release 1.0.5 in July. As reminder, interim ‘dev’ or ‘rc’ releases will often be available in the Rcpp drat repo; this cycle there were four.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2174 packages on CRAN depend on Rcpp for making analytical code go faster and further (which is an 8.5% increase just since the last release), along with 207 in BioConductor.

This release features six different pull requests from five different contributors, mostly fixing fairly small corner cases, plus some minor polish on documentation and continuous integration. Before releasing we once again made numerous reverse dependency checks none of which revealed any issues. So the passage at CRAN was pretty quick despite the large dependency footprint, and we are once again grateful for all the work the CRAN maintainers do.

Changes in Rcpp patch release version 1.0.6 (2021-01-14)

  • Changes in Rcpp API:

    • Replace remaining few uses of EXTPTR_PTR with R_ExternalPtrAddr (Kevin in #1098 fixing #1097).

    • Add push_back and push_front for DataFrame (Walter Somerville in #1099 fixing #1094).

    • Remove a misleading-to-wrong comment (Mattias Ellert in #1109 cleaning up after #1049).

    • Address a sanitizer report by initializing two private bool variables (Benjamin Christoffersen in #1113).

    • External pointer finalizer toggle default values were corrected to true (Dirk in #1115).

  • Changes in Rcpp Documentation:

    • Several URLs were updated to https and/or new addresses (Dirk).
  • Changes in Rcpp Deployment:

    • Added GitHub Actions CI using the same container-based setup used previously, and also carried code coverage over (Dirk in #1128).
  • Changes in Rcpp support functions:

    • Rcpp.package.skeleton() avoids warning from R. (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bugs reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under rcpp tag at StackOverflow which also allows searching among the (currently) 2616 previous questions.

If you like this or other open-source work I do, you can sponsor me at GitHub. My sincere thanks to my current sponsors for me keeping me caffeinated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 January, 2021 11:24PM

hackergotchi for Michael Prokop

Michael Prokop

Revisiting 2020

*

Mainly to recall what happened last year and to give thoughts and plan for the upcoming year(s) I’m once again revisiting my previous year (previous editions: 2019, 2018, 2017, 2016, 2015, 2014, 2013 + 2012).

Due to the Coronavirus disease (COVID-19) pandemic, 2020 was special™ for several reasons, but overall I consider myself and my family privileged and am very grateful for that.

In terms of IT events, I planned to attend Grazer Linuxdays and DebConf in Haifa/Israel. Sadly Grazer Linuxdays didn’t take place at all, and DebConf took place online instead (which I didn’t really participate in for several reasons). I took part in the well organized DENOG12 + ATNOG 2020/1 online meetings. I still organize our monthly Security Treff Graz (STG) meetups, and for half of the year, those meetings took place online (which worked OK-ish overall IMO).

Only at the beginning of 2020, I managed to play Badminton (still playing in the highest available training class (in german: “Kader”) at the University of Graz / Universitäts-Sportinstitut, USI). For the rest of the year – except for ~2 weeks in October or so – the sessions couldn’t occur.

Plenty of concerts I planned to attend were cancelled for obvious reasons, including the ones I would have played myself. But I managed to attend Jazz Redoute 2020 – Dom im Berg, Martin Grubinger in Musikverein Graz and Emiliano Sampaio’s Mega Mereneu Project at WIST Moserhofgasse (all before the corona situation kicked in). The concert from Tonč Feinig & RTV Slovenia Big Band occurred under strict regulations in Summer. At the beginning of 2020, I also visited Literaturshow “Roboter mit Senf” at Literaturhaus Graz.

The lack of concerts and rehearsals also severely impacted my playing the drums (including at HTU BigBand Graz), which pretty much didn’t take place. :(

Grml-wise we managed to publish release 2020.06, codename Ausgehfuahangl. Regarding jenkins-debian-glue I tried to clarify its state and received some really lovely feedback.

I consider 2020 as the year where I dropped regular usage of Jabber (so far my accounts still exist, but I’m no longer regularly online and am not sure for how much longer I’ll keep my accounts alive as such).

Business-wise it was our seventh year of business with SynPro Solutions GmbH. No big news but steady and ongoing work with my other business duties Grml Solutions and Grml-Forensic.

As usual, I shared childcare with my wife. Due to the corona situation, my wife got a new working schedule, which shuffled around our schedule a bit on Mondays + Tuesdays. Still, we managed to handle the homeschooling/distance learning quite well. Currently we’re sitting in the third lockdown, and yet another round of homeschooling/distance learning is going on those days (let’s see how long…). I counted 112 actual school days in all of 2020 for our older daughter with only 68 school days since our first lockdown on 16th of March, whereas we had 213(!) press conferences by our Austrian government in 2020. (Further rants about the situation in Austria snipped.)

Book reading-wise I managed to complete 60 books (see “Mein Lesejahr 2020“). Once again, I noticed that what felt like good days for me always included reading books, so I’ll try to keep my reading pace for 2021. I’ll also continue with my hobbies “Buying Books” and “Reading Books”, to get worse at Tsundoku.

Hoping for vaccination and a more normal 2021, Schwuppdiwupp!

15 January, 2021 11:07PM by mika

January 14, 2021

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Bullseye freeze

Bullseye is freezing! Yay! (And Trondheim is now below -10.)

It's too late for that kind of change now, but it would have been nice if plocate could have been default for bullseye:

plocate popcon graph

Surprisingly enough, mlocate has gone straight downhill:

mlocate popcon graph

It seems that since buster, there's an override in place to change its priority away from standard, and I haven't been able to find anyone who could tell me why. (It was known that it was request moved away from standard for cloud images, which makes a lot of sense, but not for desktop/server images.)

Perhaps for bookworm, we can get a locate back in the default install? plocate really is a much better user experience, in my (admittely biased) opinion. :-)

14 January, 2021 08:22PM

January 13, 2021

Antoine Beaupré

New phone: Pixel 4a

I'm sorry to announce that I gave up on the Fairphone series and switched to a Google Phone (Pixel 4a) running CalyxOS.

Problems in fairy land

My fairphone2, even if it is less than two years old, is having major problems:

  • from time to time, the screen flickers and loses "touch" until I squeeze it back together
  • the camera similarly disconnects regularly
  • even when it works, the camera is... pretty bad: low light is basically unusable, it's slow and grainy
  • the battery can barely keep up for one day
  • the cellular coverage is very poor, in Canada: I lose signal at the grocery store and in the middle of my house...

Some of those problems are known: the Fairphone 2 is old now. It was probably old even when I got it. But I can't help but feel a little sad to let it go: the entire point of that device was to make it easy to fix. But alas, because it's sold only in Europe, local stores don't carry replacement parts. To be fair, Fairphone did offer to fix the device, but with a 2 weeks turnaround, I had to get another phone anyways.

I did actually try to buy a fairphone3, from Clove. But they did some crazy validation routine. By email, they asked me to provide a photo copy of a driver's license and the credit card, arguing they need to do this to combat fraud. I found that totally unacceptable and asked them to cancel my order. And because I'm not sure the FP3 will fix the coverage issues, I decided to just give up on Fairphone until they officially ship to the Americas.

Do no evil, do not pass go, do not collect 200$

So I got a Google phone, specifically a Pixel 4a. It's a nice device, all small and shiny, but it's "plasticky" - I would have prefered metal, but it seems you need to pay much, much more to get that (in the Pixel 5).

In any case, it's certainly a better form factor than the Fairphone 2: even though the screen is bigger, the device itself is actually smaller and thinner, which feels great. The OLED screen is beautiful, awesome contrast and everything, and preliminary tests show that the camera is much better than the one on the Fairphone 2. (The be fair, again, that is another thing the FP3 improved significantly. And that is with the stock Camera app from CalyxOS/AOSP, so not as good as the Google Camera app, which does AI stuff.)

CalyxOS: success

The Pixel 4a not not supported by LineageOS: it seems every time I pick a device in that list, I manage to miss the right device by one (I bought a Samsung S9 before, which is also unsupported, even though the S8 is). But thankfully, it is supported by CalyxOS.

That install was a breeze: I was hesitant in playing again with installing a custom Android firmware on a phone after fighting with this quite a bit in the past (e.g. htc-one-s, lg-g3-d852). But it turns out their install instructions, mostly using a AOSP alliance device-flasher works absolutely great. It assumes you know about the commandline, and it does require to basically curl | sudo (because you need to download their binary and run it as root), but it Just. Works. It reminded me of how great it was to get the Fairphone with TWRP preinstalled...

Oh, and kudos to the people in #calyxos on Freenode: awesome tech support, super nice folks. An amazing improvement over the ambiance in #lineageos! :)

Migrating data

Unfortunately, migrating the data was the usual pain in the back. This should improve the next time I do this: CalyxOS ships with seedvault, a secure backup system for Android 10 (or 9?) and later which backs up everything (including settings!) with encryption. Apparently it works great, and CalyxOS is also working on a migration system to switch phones.

But, obviously, I couldn't use that on the Fairphone 2 running Android 7... So I had to, again, improvised. The first step was to install Syncthing, to have an easy way to copy data around. That's easily done through F-Droid, already bundled with CalyxOS (including the privileged extension!). Pair the devices and boom, a magic portal to copy stuff over.

The other early step I took was to copy apps over using the F-Droid "find nearby" functionality. It's a bit quirky, but really helps in copying a bunch of APKs over.

Then I setup a temporary keepassxc password vault on the Syncthing share so that I could easily copy-paste passwords into apps. I used to do this in a text file in Syncthing, but copy-pasting in the text file is much harder than in KeePassDX. (I just picked one, maybe KeePassDroid is better? I don't know.) Do keep a copy of the URL of the service to reduce typing as well.

Then the following apps required special tweaks:

  • AntennaPod has an import/export feature: export on one end, into the Syncthing share, then import on the other. then go to the queue and select all episodes and download
  • the Signal "chat backup" does copy the secret key around, so you don't get the "security number change" warning (even if it prompts you to re-register) - external devices need to be relinked though
  • AnkiDroid, DSub, Nextcloud, and Wallabag required copy-pasting passwords

I tried to sync contacts with DAVx5 but that didn't work so well: the account was setup correctly, but contacts didn't show up. There's probably just this one thing I need to do to fix this, but since I don't really need sync'd contact, it was easier to export a VCF file to Syncthing and import again.

Known problems

One problem with CalyxOS I found is that the fragile little microg tweaks didn't seem to work well enough for Signal. That was unexpected so they encouraged me to file that as a bug.

The other "issue" is that the bootloader is locked, which makes it impossible to have "root" on the device. That's rather unfortunate: I often need root to debug things on Android. In particular, it made it difficult to restore data from OSMand (see below). But I guess that most things just work out of the box now, so I don't really need it and appreciate the extra security. Locking the bootloader means full cryptographic verification of the phone, so that's a good feature to have!

OSMand still doesn't have a good import/export story. I ended up sharing the Android/data/net.osmand.plus/files directory and importing waypoints, favorites and tracks by hand. Even though maps are actually in there, it's not possible for Syncthing to write directly to the same directory on the new phone, "thanks" to the new permission system in Android which forbids this kind of inter-app messing around.

Tracks are particularly a problem: my older OSMand setup had all those folders neatly sorting those tracks by month. This makes it really annoying to track every file manually and copy it over. I have mostly given up on that for now, unfortunately. And I'll still need to reconfigure profiles and maps and everything by hand. Sigh. I guess that's a good clearinghouse for my old tracks I never use...

Update: turns out setting storage to "shared" fixed the issue, see comments below!

Conclusion

Overall, CalyxOS seems like a good Android firmware. The install is smooth and the resulting install seems solid. The above problems are mostly annoyances and I'm very happy with the experience so far, although I've only been using it for a few hours so this is very preliminary.

13 January, 2021 10:49PM

hackergotchi for Daniel Pocock

Daniel Pocock

Why is Free Software important in home automation?

There are many serious issues to reflect on after the siege at the US Capitol.

One of those is the importance of genuinely Free Software, with full source code for appliances in our homes and our communications platforms. From Trump Tower to the White House, Free Software like Domoticz is your (only) friend.

Please join my session about Free Communications at FOSDEM 2021, online due to the pandemic. Subscribe here for announcements about major achievements in Free Real-Time Communications technology.

13 January, 2021 03:10PM

Vincent Fourmond

Taking advantage of Ruby in QSoas

First of all, let me all wish you a happy new year, with all my wishes of health and succes. I sincerely hope this year will be simpler for most people as last year !

For the first post of the year, I wanted to show you how to take advantage of Ruby, the programming language embedded in QSoas, to make various things, like:
  • creating a column with the sum of Y values;
  • extending values that are present only in a few lines;
  • renaming datasets using a pattern.

Summing the values in a column

When using commands that take formulas (Ruby code), like apply-formula, the code is run for every single point, for which all the values are updated. In particulier, the state of the previous point is not known. However, it is possible to store values in what is called global variables, whose name start with an $ sign. Using this, we can keep track of the previous values. For instance, to create a new column with the sum of the y values, one can use the following approach:
QSoas> eval $sum=0
QSoas> apply-formula /extra-columns=1 $sum+=y;y2=$sum
The first line initializes the variable to 0, before we start summing, and the code in the second line is run for each dataset row, in order. For the first row, for instance, $sum is initially 0 (from the eval line); after the execution of the code, it is now the first value of y. After the second row, the second value of y is added, and so on. The image below shows the resulting y2 when used on:
QSoas> generate-dataset -1 1 x


Extending values in a column

Another use of the global variables is to add "missing" data. For instance, let's imagine that a files given the variation of current over time as the potential is changed, but the potential is only changed stepwise and only indicated when it changes:
## time	current	potential
0	0.1	0.5
1	0.2
2	0.3
3	0.2
4	1.2	0.6
5	1.3
...
If you need to have the values everywhere, for instance if you need to split on their values, you could also use a global variable, taking advantage of the fact that missing values are represented by QSoas using "Not A Number" values, which can be detected using the Ruby function nan?:
QSoas> apply-formula "if y2.nan?; then y2=$value; else $value=y2;end"
Note the need of quotes because there are spaces in the ruby code. If the value of y2 is NaN, that is it is missing, then it is taken from the global variable $value else $value is set the current value of y2. Hence, the values are propagated down:
## time	current	potential
0	0.1	0.5
1	0.2	0.5
2	0.3	0.5
3	0.2	0.5
4	1.2	0.6
5	1.3	0.6
...
Of course, this doesn't work if the first value of y2 is missing.

Renaming using a pattern

The command save-datasets can be used to save a whole series of datasets to the disk. It can also rename them on the fly, and, using the /mode=rename option, does only the renaming part, without saving. You can make full use of meta-data (see also a first post here)for renaming. The full power is unlocked using the /expression= option. For instance, for renaming the last 5 datasets (so numbers 0 to 4) using a scheme based on the value of their pH meta-data, you can use the following code:
QSoas> save-datasets /mode=rename /expression='"dataset-#{$meta.pH}"' 0..4
The double quotes are cumbersome but necessary, since the outer quotes (') prevent the inner ones (") to be removed and the inner quotes are here to indicate to Ruby that we are dealing with text. The bit inside #{...} is interpreted by Ruby as Ruby code; here it is $meta.pH, the value of the "pH" meta-data. Finally the 0..4 specifies the datasets to work with. So theses datasets will change name to become dataset-7 for pH 7, etc...

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

13 January, 2021 02:45PM by Vincent Fourmond (noreply@blogger.com)

January 12, 2021

hackergotchi for Daniel Pocock

Daniel Pocock

Packaging Domoticz for Debian, Ubuntu, Raspbian and Fedora

Today I published fresh packages for Domoticz and the Domoticz-Zigate. As the instructions have changed for setting up the Domoticz-Zigate, this is an updated blog, verified with v4.12.102 of the Domoticz-Zigate plugin.

Getting started in home automation has never been easier, cheaper and more important

Many countries are now talking about longer lockdowns to restrict new strains of the Coronavirus. When the new US President takes office, many suspect he will introduce more stringent restrictions than his predecessor. Smart lighting can make life more enjoyable when spending time at home.

At the same time, more and more companies are bringing out low-cost Zigbee solutions. A previous blog covered Lidl's new products in December. Ikea's products are also incredibly cheap, they include a wide range of bulbs, buttons, motion sensors, smart sockets and other accessories that work with free solutions like Domoticz.

NOTE: when you use the ZiGate stick, you do not need to buy the hub from Philips, Ikea or any other vendor and you do not need their proprietary apps. Your ZiGate stick becomes the hub.

Packaging details

Federico Ceratto and I have been working on the Debian packaging of Domoticz, one of the leading free and open source home automation / smart home solutions. As there are a large suite of packages involved, I'm keen to find more people willing to collaborate on a parallel packaging for Fedora. Many of the enhancements we've made for Debian, such as the directory structure, are applicable across all GNU/Linux distributions.

As part of that effort, I've also been packaging the plugin for the Zigate USB stick and two of the utilities for updating firmware on the Zigate, the JennicModuleProgrammer and the zigate-flasher. This gives users a complete solution in the form of familiar packages.

These are initially Debian packages, also available for Raspbian, but I also try to share any lessons from this effort with the upstream developers and also provide a foundation for Fedora packaging. Fedora already has the core Domoticz package since Fedora 32. Some of the other related packages described here are fairly straightforward to port.

Raspberry Pi 2.0 with Zigate USB stick

Trying the packages

Raspbian setup (Raspbian users only)

If you have a regular Debian setup, you can skip to the next section.

  1. Download the Raspbian light image from the official Raspbian download page
  2. Write the image to an SD card using
    cat
    or a similar tool
  3. Boot the Raspberry Pi
  4. Login as user pi with the password raspberry
  5. Run
    sudo raspi-config
  6. Set a password for the pi user, the default is raspberry
  7. Set the hostname
  8. In (4) Localization settings, set your Locale, Timezone and Keyboard
  9. In (5) Interfacing, enable SSH
  10. At the command line, run
    timedatectl
    to verify the time and time synchronization is correct
  11. Run
    ip addr
    to see your IP address
  12. (optional) Connect to the Pi from your usual workstation and copy your SSH public key to
    ~pi/.ssh/authorized_keys
  13. (optional) Disable PasswordAuthentication in
    /etc/ssh/sshd_config
    (you don't want the script kiddies next door turning on your heating system in the middle of summer do you?)

Package repository setup (Debian, Raspbian and Ubuntu users)

As these were not included in the recent release of Debian 10 (buster) and as they are evolving more rapidly than packages in the stable distribution, I'm distributing them through Debify.

To use packages from Debify, the first step is to update your apt configuration. This works for regular Debian or Raspbian systems running stretch or buster:

$ wget -O - http://apt.debify.org/add-apt-debify | bash

Package installation

$ sudo apt update
$ sudo apt install domoticz-zigate-plugin

All the necessary packages should be installed automatically. A sample installation log is below for comparison.

Zigate users - check firmware version

You may want to check that you have the latest firmware installed in your Zigate.

  1. Download the firmware image from here
  2. If your apt setup automatically installs Recommended packages then jennic-module-programmer has already been installed for you. If not, you can do
    sudo apt install jennic-module-programmer zigate-flasher
    . You can also try the alternative tool, zigate-flasher
  3. Unplug your Zigate USB stick and open the case. Hold down the button and while holding the button, reconnect it to the USB port. The blue light should now be illuminated but very dimly. This means it is ready for a firmware update.
  4. Run the update, for example, if you copied or downloaded the firmware to /tmp and if you have no other USB serial device attached, you could use the following command:
    sudo JennicModuleProgrammer -s /dev/ttyUSB0 -f /tmp/ZiGate_v3.1d.bin
  5. Wait for the command to complete successfully
  6. Detach the Zigate and put it back into its case
  7. Attach the Zigate again
  8. Restart Domoticz:
    sudo systemctl restart domoticz

If the JennicModuleProgrammer utility doesn't work for you, if it sits there for ten minutes without doing anything you can also try the zigate-flasher. I packaged both of these so you have the choice: please share your feedback in the Domoticz forums. Repeat the steps above, replacing step 4 with:

  1. $ sudo zigate-flasher -p /dev/ttyUSB0 -w /tmp/ZiGate_v3.1d.bin
    Found MAC-address: 00:11:22:33:44:55:66:77
    writing new flash from /tmp/ZiGate_v3.1d.bin
    $
    

Zigate users: quick Domoticz setup

  • Make sure your power supply provides sufficient power for the Raspberry Pi and the Zigate
  • Open a terminal window to monitor the Domoticz logs on the device running Domoticz. Use the command sudo journalctl -f to monitor the logs. You will see various clues about progress when starting up, when adding your ZiGate to Domoticz and when joining devices to it.
  • Connect to the Raspberry Pi using your web browser, Domoticz uses port 8080. Therefore, if the IP address is 192.168.1.105, the URL will be http://192.168.1.105:8080
  • Click the Setup -> Hardware setting in the menu
  • Add your Zigate.
  • Set the option Initialize ZiGate (Erase Memory) to True for your first attempt. After doing this once, you need to come back here and set it False.
  • Click the Add button at the bottom and verify that the ZiGate appears.
  • On the same host, try to access the Domoticz-Zigate plugin on port 9440. It may take a few seconds before it is ready. For example, if the IP address is 192.168.1.105, the URL will be http://192.168.1.105:9440
  • In the Domoticz-Zigate plugin settings, look at the top of the screen and click the button Accept New Hardware. The blue light on the ZiGate stick will start flashing.
  • Try to join a device such as a light bulb or temperature sensor. For example, on a temperature sensor, you might need to hold down the join button for a few seconds until the light flashes.
  • In the Domoticz-Zigate plugin on port 9440, you can go to the Management->Devices page to verify that the device is really there.
  • In Domoticz on port 8080, go to the Setup -> Devices tab and verify the new device is visible in the list

Native Zigbee bindings between the light switches (dimmers) and the bulbs

Zigbee devices can link directly to each other. This ensures that they keep working if the hub / ZiGate is offline.

To do this with Domoticz and/or multiple bulbs, you need to make the connections in a specific order:

  1. For best results, make sure every other device is disconnected from power and only try to join one device at a time. Otherwise, the wrong device may become associated with a switch/button.
  2. Reset each bulb, one at a time, and then join them in Domoticz using the procedure described in the previous section
  3. Reset each light switch/dimmer, one at a time and then join them in Domoticz using the procedure described in the previous section. To reset a Philips Hue dimmer, press the setup button on the back. To reset and Ikea dimmer, press the button on the back 4 times in 5 seconds.
  4. Unplug all the light bulbs, smart sockets and other devices except for the one you want to join first
  5. Hold the light switch/dimmer close to the bulb and complete the native joining procedure, for example, if it is an Ikea dimmer, you hold the button on the back for 10 seconds.
  6. Unplug the bulb, plug in the next bulb and repeat the procedure.
  7. Note that you can join more than one bulb to the same light switch/dimmer in this procedure and you can also join more than one light switch to the same bulb.
  8. You will be able to control these bulbs from Domoticz but if your Domoticz is offline, you can also control them directly with the light switches/dimmers paired in this procedure.

Next steps and troubleshooting

Please share your feedback and questions through the Domoticz forums.

Sample installation log

pi@pi5:~ $ sudo apt install domoticz-zigate-plugin
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  domoticz jennic-module-programmer libopenzwave1.6 libpython3-dev libpython3.5-dev openzwave
The following NEW packages will be installed:
  domoticz domoticz-zigate-plugin jennic-module-programmer libopenzwave1.6 libpython3-dev libpython3.5-dev openzwave
0 upgraded, 7 newly installed, 0 to remove and 74 not upgraded.
Need to get 51.7 MB of archives.
After this operation, 91.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf libopenzwave1.6 armhf 1.6+ds-1~bpo9+1 [406 kB]
Get:2 http://raspbian.raspberrypi.org/raspbian stretch/main armhf libpython3.5-dev armhf 3.5.3-1+deb9u1 [36.9 MB]
Get:3 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf openzwave armhf 1.6+ds-1~bpo9+1 [24.6 kB]
Get:4 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf domoticz armhf 4.11020-2~bpo9+1 [10.8 MB]
Get:5 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf domoticz-zigate-plugin all 4.4.9~beta1-2~bpo9+1 [3,515 kB]
Get:6 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf jennic-module-programmer armhf 0.6-1~bpo9+1 [9,690 B]
Get:7 http://raspbian.raspberrypi.org/raspbian stretch/main armhf libpython3-dev armhf 3.5.3-1 [18.7 kB]                                                                     
Fetched 51.7 MB in 9s (5,717 kB/s)                                                                                                                                           
Selecting previously unselected package libopenzwave1.6.
(Reading database ... 34831 files and directories currently installed.)
Preparing to unpack .../0-libopenzwave1.6_1.6+ds-1~bpo9+1_armhf.deb ...
Unpacking libopenzwave1.6 (1.6+ds-1~bpo9+1) ...
Selecting previously unselected package libpython3.5-dev:armhf.
Preparing to unpack .../1-libpython3.5-dev_3.5.3-1+deb9u1_armhf.deb ...
Unpacking libpython3.5-dev:armhf (3.5.3-1+deb9u1) ...
Selecting previously unselected package libpython3-dev:armhf.
Preparing to unpack .../2-libpython3-dev_3.5.3-1_armhf.deb ...
Unpacking libpython3-dev:armhf (3.5.3-1) ...
Selecting previously unselected package openzwave.
Preparing to unpack .../3-openzwave_1.6+ds-1~bpo9+1_armhf.deb ...
Unpacking openzwave (1.6+ds-1~bpo9+1) ...
Selecting previously unselected package domoticz.
Preparing to unpack .../4-domoticz_4.11020-2~bpo9+1_armhf.deb ...
Unpacking domoticz (4.11020-2~bpo9+1) ...
Selecting previously unselected package domoticz-zigate-plugin.
Preparing to unpack .../5-domoticz-zigate-plugin_4.4.9~beta1-2~bpo9+1_all.deb ...
Unpacking domoticz-zigate-plugin (4.4.9~beta1-2~bpo9+1) ...
Selecting previously unselected package jennic-module-programmer.
Preparing to unpack .../6-jennic-module-programmer_0.6-1~bpo9+1_armhf.deb ...
Unpacking jennic-module-programmer (0.6-1~bpo9+1) ...
Setting up jennic-module-programmer (0.6-1~bpo9+1) ...
Setting up libopenzwave1.6 (1.6+ds-1~bpo9+1) ...
Setting up libpython3.5-dev:armhf (3.5.3-1+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libpython3-dev:armhf (3.5.3-1) ...
Setting up openzwave (1.6+ds-1~bpo9+1) ...
Setting up domoticz (4.11020-2~bpo9+1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/domoticz.service → /lib/systemd/system/domoticz.service.
Setting up domoticz-zigate-plugin (4.4.9~beta1-2~bpo9+1) ...
Adding user `domoticz' to group `dialout' ...
Adding user domoticz to group dialout
Done.
pi@pi5:~ $ 

12 January, 2021 05:55PM

John Goerzen

Remote Directory Tree Comparison, Optionally Asynchronous and Airgapped

Note: this is another article in my series on asynchronous communication in Linux with UUCP and NNCP.

In the previous installment on store-and-forward backups, I mentioned how easy it is to do with ZFS, and some of the tools that can be used to do it without ZFS. A lot of those tools are a bit less robust, so we need some sort of store-and-forward mechanism to verify backups. To be sure, verifying backups is good with ANY scheme, and this could be used with ZFS backups also.

So let’s say you have a shiny new backup scheme in place, and you’d like to verify that it’s working correctly. To do that, you need to compare the source directory tree on machine A with the backed-up directory tree on machine B.

Assuming a conventional setup, here are some ways you might consider to do that:

  • Just copy everything from machine A to machine B and compare locally
  • Or copy everything from machine A to a USB drive, plug that into machine B, and compare locally
  • Use rsync in dry-run mode and see if it complains about anything

The first two options are not particularly practical for large datasets, though I note that the second is compatible with airgapping. Using rsync requires both systems to be online at the same time to perform the comparison.

What would be really nice here is a tool that would write out lots of information about the files on a system: their names, sizes, last modified dates, maybe even sha256sum and other data. This file would be far smaller than the directory tree itself, would compress nicely, and could be easily shipped to an airgapped system via NNCP, UUCP, a USB drive, or something similar.

Tool choices

It turns out there are already quite a few tools in Debian (and other Free operating systems) to do this, and half of them are named mtree (though, of course, not all mtrees are compatible with each other.) We’ll look at some of the options here.

I’ve made a simple test directory for illustration purposes with these commands:

mkdir test
cd test
echo hi > hi
ln -s hi there
ln hi foo
touch empty
mkdir emptydir
mkdir somethingdir
cd somethingdir
ln -s ../there

I then also used touch to set all files to a consistent timestamp for illustration purposes.

Tool option: getfacl (Debian package: acl)

This comes with the acl package, but can be used with other than ACL purposes. Unfortunately, it doesn’t come with a tool to directly compare its output with a filesystem (setfacl, for instance, can apply the permissions listed but won’t compare.) It ignores symlinks and doesn’t show sizes or dates, so is ineffective for our purposes.

Example output:

$ getfacl --numeric -R test
...
# file: test/hi
# owner: 1000
# group: 1000
user::rw-
group::r--
other::r--
...

Tool option: fmtree, the FreeBSD mtree (Debian package: freebsd-buildutils)

fmtree can prepare a “specification” based on a directory tree, and compare a directory tree to that specification. The comparison also is aware of files that exist in a directory tree but not in the specification. The specification format is a bit on the odd side, but works well enough with fmtree. Here’s a sample output with defaults:

$ fmtree -c -p test
...
# .
/set type=file uid=1000 gid=1000 mode=0644 nlink=1
.               type=dir mode=0755 nlink=4 time=1610421833.000000000
    empty       size=0 time=1610421833.000000000
    foo         nlink=2 size=3 time=1610421833.000000000
    hi          nlink=2 size=3 time=1610421833.000000000
    there       type=link mode=0777 time=1610421833.000000000 link=hi

... skipping ...

# ./somethingdir
/set type=file uid=1000 gid=1000 mode=0777 nlink=1
somethingdir    type=dir mode=0755 nlink=2 time=1610421833.000000000
    there       type=link time=1610421833.000000000 link=../there
# ./somethingdir
..

..

You might be wondering here what it does about special characters, and the answer is that it has octal escapes, so it is 8-bit clean.

To compare, you can save the output of fmtree to a file, then run like this:

cd test
fmtree < ../test.fmtree

If there is no output, then the trees are identical. Change something and you get a line of of output explaining each difference. You can also use fmtree -U to change things like modification dates to match the specification.

fmtree also supports quite a few optional keywords you can add with -K. They include things like file flags, user/group names, various tipes of hashes, and so forth. I'll note that none of the options can let you determine which files are hardlinked together.

Here's an excerpt with -K sha256digest added:

    empty       size=0 time=1610421833.000000000 \
                sha256digest=e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    foo         nlink=2 size=3 time=1610421833.000000000 \
                sha256digest=98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4

If you include a sha256digest in the spec, then when you verify it with fmtree, the verification will also include the sha256digest. Obviously fmtree -U can't correct a mismatch there, but of course it will detect and report it.

Tool option: mtree, the NetBSD mtree (Debian package: mtree-netbsd)

mtree produces (by default) output very similar to fmtree. With minor differences (such as the name of the sha256digest in the output), the discussion above about fmtree also applies to mtree.

There are some differences, and the most notable is that mtree adds a -C option which reads a spec and converts it to a "format that's easier to parse with various tools." Here's an example:

$ mtree -c -K sha256digest -p test | mtree -C
. type=dir uid=1000 gid=1000 mode=0755 nlink=4 time=1610421833.0 flags=none 
./empty type=file uid=1000 gid=1000 mode=0644 nlink=1 size=0 time=1610421833.0 flags=none 
./foo type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./hi type=file uid=1000 gid=1000 mode=0644 nlink=2 size=3 time=1610421833.0 flags=none 
./there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=hi time=1610421833.0 flags=none 
./emptydir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir type=dir uid=1000 gid=1000 mode=0755 nlink=2 time=1610421833.0 flags=none 
./somethingdir/there type=link uid=1000 gid=1000 mode=0777 nlink=1 link=../there time=1610421833.0 flags=none 

Most definitely an improvement in both space and convenience, while still retaining the relevant information. Note that if you want the sha256digest in the formatted output, you need to pass the -K to both mtree invocations. I could have done that here, but it is easier to read without it.

mtree can verify a specification in either format. Given what I'm about to show you about bsdtar, this should illustrate why I bothered to package mtree-netbsd for Debian.

Unlike fmtree, the mtree -U command will not adjust modification times based on the spec, but it will report on differences.

Tool option: bsdtar (Debian package: libarchive-tools)

bsdtar is a fascinating program that can work with many formats other than just tar files. Among the formats it supports is is the NetBSD mtree "pleasant" format (mtree -C compatible).

bsdtar can also convert between the formats it supports. So, put this together: bsdtar can convert a tar file to an mtree specification without extracting the tar file. bsdtar can also use an mtree specification to override the permissions on files going into tar -c, so it is a way to prepare a tar file with things owned by root without resorting to tools like fakeroot.

Let's look at how this can work:

$ cd test
$ bsdtar --numeric -cf - --format=mtree .

. time=1610472086.318593729 mode=755 gid=1000 uid=1000 type=dir
./empty time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=0
./foo nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./hi nlink=2 time=1610421833.0 mode=644 gid=1000 uid=1000 type=file size=3
./ormat\075mtree time=1610472086.318593729 mode=644 gid=1000 uid=1000 type=file size=5632
./there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=hi
./emptydir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir time=1610421833.0 mode=755 gid=1000 uid=1000 type=dir
./somethingdir/there time=1610421833.0 mode=777 gid=1000 uid=1000 type=link link=../there

You can use mtree -U to verify that as before. With the --options mtree: set, you can also add hashes and similar to the bsdtar output. Since bsdtar can use input from tar, pax, cpio, zip, iso9660, 7z, etc., this capability can be used to create verification of the files inside quite a few different formats. You can convert with bsdtar -cf output.mtree --format=mtree @input.tar. There are some foibles with directly using these converted files with mtree -U, but usually minor changes will get it there.

Side mention: stat(1) (Debian package: coreutils)

This tool isn't included because it won't operate recursively, but is a tool in the similar toolbox.

Putting It Together

I will still be developing a complete non-ZFS backup system for NNCP (or UUCP) in a future post. But in the meantime, here are some ideas you can reflect on:

  • Let's say your backup scheme involves sending a full backup every night. On the source system, you could pipe the generated tar file through something like tee >(bsdtar -cf bcakup.mtree @-) to generate an mtree file in-band while generating the tar file. This mtree file could be shipped over for verification.
  • Perhaps your backup scheme involves sending incremental backup data via rdup or even ZFS, but you would like to periodically verify that everything is good -- that an incremental didn't miss something. Something like mtree -K sha256 -c -x -p / | mtree -C -K sha256 would let you accomplish that.

I will further develop at least one of these ideas in a future post.

Bonus: cross-tool comparisons

In my mtree-netbsd packaging, I added tests like this to compare between tools:

fmtree -c -K $(MTREE_KEYWORDS) | mtree
mtree -c -K $(MTREE_KEYWORDS) | sed -e 's/\(md5\|sha1\|sha256\|sha384\|sha512\)=/\1digest=/' -e 's/rmd160=/ripemd160digest=/' | fmtree
bsdtar -cf - --options 'mtree:uname,gname,md5,sha1,sha256,sha384,sha512,device,flags,gid,link,mode,nlink,size,time,uid,type,uname' --format mtree . | mtree

12 January, 2021 05:53PM by John Goerzen

Molly de Blanc

1028 Words on Free Software

The promise of free software is a near-future utopia, built on democratized technology. This future is just and it is beautiful, full of opportunity and fulfillment for everyone everywhere. We can create the things we dream about when we let our minds wander into the places they want to. We can be with the people we want and need to be, when we want and need to.

This is currently possible with the technology we have today, but it’s availability is limited by the reality of the world we live in – the injustice, the inequity, the inequality. Technology runs the world, but it does not serve the interests of most of us. In order to create a better world, our technology must be transparent, accountable, trustworthy. It must be just. It must be free.

The job of the free software movement is to demonstrate that this world is possible by living its values now: justice, equity, equality. We build them into our technology, and we build technology that make it possible for these values to exist in the world.

At the Free Software Foundation, we liked to say that we used all free software because it was important to show that we could. You can do anything with free software, so we did everything with it. We demonstrated the importance of unions for tech workers and non-profit workers by having one. We organized collectively and protected our rights for the sake of ourselves and one another. We had non-negotiable salaries, based on responsibility level and position. That didn’t mean we worked in an office free from the systemic problems that plague workplaces everywhere, but we were able to think about them differently.

Things were this way because of Richard Stallman – but I view his influence on these things as negative rather than positive. He was a cause that forced these outcomes, rather than being supportive of the desires and needs of others. Rather than indulge in gossip or stories, I would like to jump to the idea that he was supposed to have been deplatformed in October 2019. In resigning from his position as president of the FSF, he certainly lost some of his ability to reach audiences. However, Richard still gives talks. The FSF continues to use his image and rhetoric in their own messaging and materials. They gave him time to speak at their annual conference in 2020. He maintains leadership in the GNU project and otherwise within the FSF sphere. The people who empowered him for so many years are still in charge.

Richard, and the continued respect and space he is given, is not the only problem. It represents a bigger problem. Sexism and racism (among others) run rampant in the community. This happens because of bad actors and, more significantly, by the complacency of organizations, projects, and individuals afraid of losing contributors, respect, or funding. In a sector that has so much money and so many resources, women are still being paid less than men; we deny people opportunities to learn and grow in the name of immediate results; people who aren’t men, who aren’t white, are abused and harassed; people are mentally and emotionally taken advantage of, and we are coerced into burn out and giving up our lives for these companies and projects and we are paid for tolerating all of this by being told we’re doing a good job or making a difference.

But we’re not making a difference. We’re perpetuating the worst of the status quo that we should be fighting against. We must not continue. We cannot. We need to live our ideals as they are, and take the natural next steps in their evolution. We cannot have a world of just technology when we live in a world of exclusion; we cannot have free software if we continue to allow, tolerate, and laud the worst of us. I’ve been in and around free software for seventeen years. Nearly every part of it I’ve participated in has members and leadership that benefit from allowing and encouraging the continuation of maleficence and systemic oppression.

We must purge ourselves of these things – of sexism, racism, injustice, and the people who continue and enable it. There is no space to argue over whether a comment was transphobic – if it hurt a trans person then it is transphobic and it is unacceptable. Racism is a global problem and we must be anti-racist or we are complicit. Sexism is present and all men benefit from it, even if they don’t want to. These are free software issues. These are things that plague software, and these are things software reinforces within our societies.

If a technology is exclusionary, it does not work. If a community is exclusionary, it must be fixed or thrown away. There is no middle ground here. There is no compromise. Without doing this, without taking the hard, painful steps to actually live the promise of user freedom and everything it requires and entails, our work is pointless and free software will fail.

I don’t think it’s too late for there to be a radical change – the radical change – that allows us to create the utopia we want to see in the world. We must do that by acknowledging that just technology leads to a just society, and that a just society allows us to make just technology. We must do that by living within the principles that guide this future now.

I don’t know what will happen if things don’t change soon. I recently saw someone comment that change doesn’t happens unless one person is willing to sacrifice everything to make that change, to lead and inspire others to play small parts. This is unreasonable to ask of or expect from someone. I’ve been burning myself out to meet other people’s expectations for seventeen years, and I can’t keep doing it. Of course I am not alone, and I am not the only one working on and occupied by these problems. More people must step up, not just for my sake, but for the sake of all of us, the work free software needs to do, and the future I dream about.

12 January, 2021 04:40PM by mollydb

Russell Coker

PSI and Cgroup2

In the comments on my post about Load Average Monitoring [1] an anonymous person recommended that I investigate PSI. As an aside, why do I get so many great comments anonymously? Don’t people want to get credit for having good ideas and learning about new technology before others?

PSI is the Pressure Stall Information subsystem for Linux that is included in kernels 4.20 and above, if you want to use it in Debian then you need a kernel from Testing or Unstable (Bullseye has kernel 4.19). The place to start reading about PSI is the main Facebook page about it, it was originally developed at Facebook [2].

I am a little confused by the actual numbers I get out of PSI, while for the load average I can often see where they come from (EG have 2 processes each taking 100% of a core and the load average will be about 2) it’s difficult to work out where the PSI numbers come from. For my own use I decided to treat them as unscaled numbers that just indicate problems, higher number is worse and not worry too much about what the number really means.

With the cgroup2 interface which is supported by the version of systemd in Testing (and which has been included in Debian backports for Buster) you get PSI files for each cgroup. I’ve just uploaded version 1.3.5-2 of etbemon (package mon) to Debian/Unstable which displays the cgroups with PSI numbers greater than 0.5% when the load average test fails.

System CPU Pressure: avg10=0.87 avg60=0.99 avg300=1.00 total=20556310510
/system.slice avg10=0.86 avg60=0.92 avg300=0.97 total=18238772699
/system.slice/system-tor.slice avg10=0.85 avg60=0.69 avg300=0.60 total=11996599996
/system.slice/system-tor.slice/tor@default.service avg10=0.83 avg60=0.69 avg300=0.59 total=5358485146

System IO Pressure: avg10=18.30 avg60=35.85 avg300=42.85 total=310383148314
 full avg10=13.95 avg60=27.72 avg300=33.60 total=216001337513
/system.slice avg10=2.78 avg60=3.86 avg300=5.74 total=51574347007
/system.slice full avg10=1.87 avg60=2.87 avg300=4.36 total=35513103577
/system.slice/mariadb.service avg10=1.33 avg60=3.07 avg300=3.68 total=2559016514
/system.slice/mariadb.service full avg10=1.29 avg60=3.01 avg300=3.61 total=2508485595
/system.slice/matrix-synapse.service avg10=2.74 avg60=3.92 avg300=4.95 total=20466738903
/system.slice/matrix-synapse.service full avg10=2.74 avg60=3.92 avg300=4.95 total=20435187166

Above is an extract from the output of the loadaverage check. It shows that tor is a major user of CPU time (the VM runs a ToR relay node and has close to 100% of one core devoted to that task). It also shows that Mariadb and Matrix are the main users of disk IO. When I installed Matrix the Debian package told me that using SQLite would give lower performance than MySQL, but that didn’t seem like a big deal as the server only has a few users. Maybe I should move Matrix to the Mariadb instance. to improve overall system performance.

So far I have not written any code to display the memory PSI files. I don’t have a lack of RAM on systems I run at the moment and don’t have a good test case for this. I welcome patches from people who have the ability to test this and get some benefit from it.

We are probably about 6 months away from a new release of Debian and this is probably the last thing I need to do to make etbemon ready for that.

12 January, 2021 06:50AM by etbe

RISC-V and Qemu

RISC-V is the latest RISC architecture that’s become popular. It is the 5th RISC architecture from the University of California Berkeley. It seems to be a competitor to ARM due to not having license fees or restrictions on alterations to the architecture (something you have to pay extra for when using ARM). RISC-V seems the most popular architecture to implement in FPGA.

When I first tried to run RISC-V under QEMU it didn’t work, which was probably due to running Debian/Unstable on my QEMU/KVM system and there being QEMU bugs in Unstable at the time. I have just tried it again and got it working.

The Debian Wiki page about RISC-V is pretty good [1]. The instructions there got it going for me. One thing I wasted some time on before reading that page was trying to get a netinst CD image, which is what I usually do for setting up a VM. Apparently there isn’t RISC-V hardware that boots from a CD/DVD so there isn’t a Debian netinst CD image. But debootstrap can install directly from the Debian web server (something I’ve never wanted to do in the past) and that gave me a successful installation.

Here are the commands I used to setup the base image:

apt-get install debootstrap qemu-user-static binfmt-support debian-ports-archive-keyring

debootstrap --arch=riscv64 --keyring /usr/share/keyrings/debian-ports-archive-keyring.gpg --include=debian-ports-archive-keyring unstable /mnt/tmp http://deb.debian.org/debian-ports

I first tried running RISC-V Qemu on Buster, but even ls didn’t work properly and the installation failed.

chroot /mnt/tmp bin/bash
# ls -ld .
/usr/bin/ls: cannot access '.': Function not implemented

When I ran it on Unstable ls works but strace doesn’t work in a chroot, this gave enough functionality to complete the installation.

chroot /mnt/tmp bin/bash
# strace ls -l
/usr/bin/strace: test_ptrace_get_syscall_info: PTRACE_TRACEME: Function not implemented
/usr/bin/strace: ptrace(PTRACE_TRACEME, ...): Function not implemented
/usr/bin/strace: PTRACE_SETOPTIONS: Function not implemented
/usr/bin/strace: detach: waitpid(1602629): No child processes
/usr/bin/strace: Process 1602629 detached

When running the VM the operation was noticably slower than the emulation of PPC64 and S/390x which both ran at an apparently normal speed. When running on a server with equivalent speed CPU a ssh login was obviously slower due to the CPU time taken for encryption, a ssh connection from a system on the same LAN took 6 seconds to connect. I presume that because RISC-V is a newer architecture there hasn’t been as much effort made on optimising the Qemu emulation and that a future version of Qemu will be faster. But I don’t think that Debian/Bullseye will give good Qemu performance for RISC-V, probably more changes are needed than can happen before the freeze. Maybe a version of Qemu with better RISC-V performance can be uploaded to backports some time after Bullseye is released.

Here’s the Qemu command I use to run RISC-V emulation:

qemu-system-riscv64 -machine virt -device virtio-blk-device,drive=hd0 -drive file=/vmstore/riscv,format=raw,id=hd0 -device virtio-blk-device,drive=hd1 -drive file=/vmswap/riscv,format=raw,id=hd1 -m 1024 -kernel /boot/riscv/vmlinux-5.10.0-1-riscv64 -initrd /boot/riscv/initrd.img-5.10.0-1-riscv64 -nographic -append net.ifnames=0 noresume security=selinux root=/dev/vda ro -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-device,rng=rng0 -device virtio-net-device,netdev=net0,mac=02:02:00:00:01:03 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Currently the program /usr/sbin/sefcontext_compile from the selinux-utils package needs execmem access on RISC-V while it doesn’t on any other architecture I have tested. I don’t know why and support for debugging such things seems to be in early stages of development, for example the execstack program doesn’t work on RISC-V now.

RISC-V emulation in Unstable seems adequate for people who are serious about RISC-V development. But if you want to just try a different architecture then PPC64 and S/390 will work better.

12 January, 2021 01:31AM by etbe

John Goerzen

The Good, Bad, and Scary of the Banning of Donald Trump, and How Decentralization Makes It All Better

It is undeniable that banning Donald Trump from Facebook, Twitter, and similar sites is a benefit for the moment. It may well save lives, perhaps lots of lives. But it raises quite a few troubling issues.

First, as EFF points out, these platforms have privileged speakers with power, especially politicians, over regular users. For years now, it has been obvious to everyone that Donald Trump has been violating policies on both platforms, and yet they did little or nothing about it. The result we saw last week was entirely forseeable — and indeed, WAS forseen, including by elements in those companies themselves. (ACLU also raises some good points)

Contrast that with how others get treated. Facebook, two days after the coup attempt, banned Benjamin Wittes, apparently because he mentioned an Atlantic article opposed to nutcase conspiracy theories. The EFF has also documented many more egregious examples: taking down documentation of war crimes, childbirth images, black activists showing the racist messages they received, women discussing online harassment, etc. The list goes on; YouTube, for instance, has often been promoting far-right violent videos while removing peaceful LGBTQ ones.

In short, have we simply achieved legal censorship by outsourcing it to dominant corporations?

It is worth pausing at this point to recognize two important princples:

First, that we do not see it as right to compel speech.

Secondly, that there exist communications channels and other services that nobody is calling on to suspend Donald Trump.

Let’s dive into those a little bit.

There have been no prominent calls for AT&T, Verizon, Gmail, or whomever provides Trump and his campaign with cell phones or email to suspend their service to him. Moreover, the gas stations that fuel his vehicles and the airports that service his plane continue to provide those services, and nobody has seriously questioned that, either. Even his Apple phone that he uses to post to Twitter remains, as far as I know, fully active.

Secondly, imagine you were starting up a small web forum focused on raising tomato plants. It is, and should be, well within your rights to keep tomato-haters out, as well as people that have no interest in tomatoes but would rather talk about rutabagas, politics, or Mars. If you are going to host a forum about tomatoes, you have the right to keep it a forum about tomatoes; you cannot be forced to distribute someone else’s speech. Likewise in traditional media, a newspaper cannot be forced to print every letter to the editor in full.

In law, there is a notion of a common carrier, that provides services to the general public without discrimination. Phone companies and ISPs fall under this.

Facebook, Twitter, and tomato sites don’t. But consider what happens if Facebook bans you. You might be using Facebook-owned Whatsapp to communicate with family and friends, and suddenly find yourself unable to ask someone to pick you up. Or your treasured family photos might be in Facebook-owned Instagram, lost forever. It’s not just Facebook; similar things happen with Google, locking people out of their phones and laptops, their emails, even their photos.

Is it right that Facebook and Google aren’t regulated as common carriers? Perhaps, or perhaps we need some line of demarcation between their speech-to-the-public services (Facebook timeline posts, YouTube) and private communication (Whatsapp, Gmail). It’s a thorny issue; should government be regulating speech instead? That’s also fraught. So is corporate control.

Decentralization Helps Dramatically

With email, you get to pick your email provider (yes, there are two or three big ones, but still plenty of others). Each email provider will have its own set of things it considers acceptable, and its own set of other servers and accounts it’s willing to exchange mail with. (It is extremely common for mail providers to choose not to accept mail from various other mail servers based on ISP, IP address, reputation, and so forth.)

What if we could do something like that for Twitter and Facebook?

Let you join whatever instance you like. Maybe one instance is all about art and doesn’t talk about politics. Or another is all about Free Software and has no advertising. And then there are plenty of open instances that accept anything that’s respectful. And, like email, people on one server can interact with those on another just as easily as if they were using the same one.

Well, this isn’t hypothetical; it already exists in the Fediverse. The most common option is Mastodon, and it so happens that a month ago I wrote about its benefits for other reasons, and included some links on getting started.

There is no reason that we must all let our online speech be controlled by companies with a profit motive to keep hate speech on their platforms. There is no reason that we must all have a single set of rules, or accept strong corporate or government control, either. The quality of conversation on Mastodon is far higher than either Twitter or Facebook; decentralization works and it’s here today.

12 January, 2021 01:04AM by John Goerzen

January 11, 2021

Dirk Eddelbuettel

BH 1.75.0-0: New upstream release, added Beast

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over 100 individual libraries. The BH package provides a sizeable subset of header-only libraries for use by R.
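
Using BH from a package is deliberately minimal: add BH to the LinkingTo field in DESCRIPTION and include the Boost headers from your C++ sources. As a hedged illustration (my own toy example, not taken from this release; the function name is made up), via Rcpp attributes:

// [[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <boost/integer/common_factor.hpp>  // header-only, shipped by BH

// [[Rcpp::export]]
int boostGcd(const int a, const int b) {
    // boost::integer::gcd needs no linking step, only the headers
    return boost::integer::gcd(a, b);
}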

Version 1.75.0 of Boost was released in December, right on schedule with their April, August and December releases. I now try to follow these releases at a lower (annual) cadence, and prepared BH 1.75.0-0 in mid-December. Extensive reverse-depends checks revealed a need for changes in a handful of packages whose maintainers I contacted then. With one exception, everybody responded promptly and brought updated packages to CRAN, which permitted us to upload the package there two days ago; thanks to this planned and coordinated upload, it became available on CRAN a mere two days later. My thanks to the maintainers of these packages for helping it along; such prompt responses really are appreciated. The version on CRAN is the same as the one announced via drat in this tweet asking for testing help. If you installed that version, you are still current, as no changes were required since December and CRAN now contains the same file.

This release adds one new library: Boost Beast, an HTTP and WebSocket library built on top of Boost Asio. Other changes are highlighted below.

Changes in version 1.75.0-0 (2020-12-12)

  • Removed file NAMESPACE as the package has neither R code, nor a shared library to load

  • The file LICENSE_1_0.txt is now included (as requested in #73)

  • Added new beast library (as requested in #74)

  • Upgraded to Boost 1.75.0 (#75)

Via CRANberries, there is a diffstat report relative to the previous release. Its final line is quite impressive: 3485 files changed, 100838 insertions(+), 84890 deletions(-). Wow.

Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 January, 2021 11:57PM

Bastian Venthur

Dear Apple,

In the light of WhatsApp’s recent move to enforce new Privacy Agreements onto its users, alternative messenger services like Signal are currently gaining some more momentum.

While this sounds good, it is hard to believe that this will be more than a dent in WhatsApp’s user base. WhatsApp is way too ubiquitous, and the whole point of using such a service for most users is to use the one that everyone is using. Unfortunately.

Convincing WhatsApp users to additionally install Signal is hard: they already have SMS for the few people that are not using WhatsApp; expecting them to install a third app for the same purpose seems ridiculous.

Android mitigates this problem a lot by allowing to make other apps — like Signal — the default SMS/MMS app on the phone. Suddenly people are able to use Signal for SMS/MMS and Signal messages transparently. Signal is smart enough to figure out if the conversation partner is using Signal and enables encryption, video calls and other features. If not, it just falls back to plain old SMS. All in the same app, very convenient for the user!

I don’t really get why the same thing is not possible on iOS. Apple is well known for taking things like privacy and security for its users seriously, and this seems like low-hanging fruit. So, dear Apple, wouldn’t now be a good time to team up with WhatsApp alternatives like Signal to help users make the right choice?

11 January, 2021 09:00PM by Bastian Venthur

January 10, 2021

Iustin Pop

Dealing with evil ads

Background

I usually don’t mind ads, as long as they are not very intrusive. I get that the current media model is basically ad-funded, and that unless I want to pay $1/month or so to 50 web sites, I have to accept ads, so I don’t run an ad-blocker.

Sure, sometimes they are annoying (hey YT, mid-roll ads are borderline), but I’ve also seen many good ads, as in interesting or even funny. Well, I don’t think I ever bought anything as a direct result of ads, so I don’t know how useful ads are for the companies, but hey, what do I care.

Except… there are a few ad networks that run what I would call basically revolting ads. Things I don’t ever want to accidentally see while eating, or things that really make you go WTF. Maybe you know them, maybe you don’t, but I guess there are people who don’t know how to clean their ears, or people for whom a fast 7-day weight-loss routine actually works.

Thankfully, most of the time I don’t browse sites which use these networks, but randomly they do “leak” even to sites I do browse. If I’m not very stressed already, I can ignore them; otherwise they really, really annoy me.

Case in point: I was on Slashdot, and because I was logged in and recently had mod points, the right side column had a check-box “disable ads”. That sidebar had some relatively meaningful ads, like a VPN subscription (not that I would use it, but it is a tech thing), or even a book about Kali Linux, etc. So I click “disable ads”, and the right column goes away. I scroll down happily, only to be met, at the bottom, by the “best way to clean your ear”, “the 50 most useless planes ever built” (which had a drawing of something that was for sure never built outside of movies), “you won’t believe how this child actor looks today”, etc.

Solving the problem

The above really, really pissed me off, so I went to search for “how to block an ad network”. To my surprise, the fix is not that simple, at least for standard users.

Method 1: hosts file

The hosts file is a reasonable approach, as it is relatively cross-platform (Linux, Windows and Mac, I think), but how the heck do you edit hosts on your phone?
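
For reference, a hosts-based block is just a list of address overrides; a minimal sketch with hypothetical ad-network names (0.0.0.0 is often preferred over 127.0.0.1 so clients fail fast rather than poke a local server):

# /etc/hosts: one line per blocked host name (hypothetical names)
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net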

And furthermore, it has some significant downsides.

First, /etc/hosts lists individual hosts, so for an entire ad network the example I found ran to two screens of host names. This is really unmaintainable, since rotating host names, or having a gazillion of them, is trivial for the network.

Second, it cannot return negative answers. That is, you have to give each of those hosts a valid IPv4/IPv6 address, and then have something either reply with a 404 or another 4xx response, or not listen on ports 80/443 at all. Too annoying.

And finally, it’s a client-side solution, so one would have to replicate it across all clients in a home, and keep it in sync.

Method 2: ad-blockers

I dislike ad-blockers on principle, since they need wide permissions on all pages, but they are a commonly recommended solution. However, to my surprise, one finds threads saying ad-blocker foo has whitelisted ad network bar, at which point you go WTF? Why do I use an ad-blocker if they get paid by the lowest of the ad networks to show the ads?

And again, it’s a client-side solution, and one would have to deploy it across N clients, and keep them in sync, etc.

Method 3: HTTP proxy blocking

To my surprise, I didn’t find this mentioned in a quick internet search. Well, HTTP proxies have long gone the way of the dodo due to “HTTPS everywhere”, and while one can still use them even with HTTPS, it’s not that convenient:

  • you need to tunnel all traffic through them, which might result in bottlenecks (especially for media playing/maybe video-conference/etc.).
  • or even worse, there might be protocol issues/incompatibilities due to 100% tunneling.
  • running a proxy opens up some potential security issues on the internal network, so you need to harden the proxy as well, and maintain it.
  • you need to configure all clients to know about the proxy (via DHCP or manually), which might or might not work well, since it’s client-dependent.
  • you can only block at CONNECT level (host name), and you have to build up regexes for the host name.

On the good side, the actual blocking configuration is centralised, and the only distributed configuration is pointing the clients through the proxy.

While I used to run a proxy back in HTTP times, and the gains were significant back then (media element caching, download caching, all with a slow pipe, etc.), today it is not worth it, so I’ve stopped and won’t bring a proxy back just for this.

Method 4: DNS resolver filtering

After thinking through all the options, I thought - hey, a caching/recursive DNS resolver is what most people with a local network run, right? How difficult is it to block at the resolver level?

… and oh my, it is so trivial, for some resolvers at least. And yes, I didn’t know about this a week ago 😅

Response Policy Zones

Now, of course, there is a standard for this, called Response Policy Zones, which is supported across multiple resolvers. There are many tutorials on how to use RPZs to configure things, some of them quite detailed - e.g. this one, or a simpler/more straightforward one here.

The upstream BIND documentation also explains things quite well here, so you can go that route as well. It looks a bit hairy to me though, but it works, and since it is a standard, it can be more easily deployed.

There are many discussions on the internet about how to configure RPZs, how to avoid even resolving the names (if you’re going to return something explicitly/statically), etc., so documentation exists, but again it seems a bit overdone.
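
To give a flavour of RPZ in BIND, here is a minimal sketch under my own naming assumptions (the zone name, file path, and blocked domains are all made up); the special "CNAME ." encoding tells the resolver to answer NXDOMAIN:

# named.conf options: activate a local response policy zone
response-policy { zone "rpz.local"; };

zone "rpz.local" {
    type master;
    file "/etc/bind/db.rpz.local";
};

; /etc/bind/db.rpz.local: "CNAME ." means "return NXDOMAIN"
$TTL 60
@               IN SOA localhost. root.localhost. (1 3600 900 604800 60)
                IN NS  localhost.
evilads.com     CNAME  .
*.evilads.com   CNAME  .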

Resolver hooks

There’s another way too, if your resolver allows scripting. For example, the PowerDNS recursor allows Lua scripting, and has a relatively simple API—at least, to me it looks way, way simpler than the RPZ equivalent.

After 20 minutes of reading the docs, I ended up with this, IMO trivial, solution (in a file named e.g. rules.lua):

-- suffix-match set of ad-network domains to block (extend as needed)
ads = newDS()
ads:add({'evilads.com', 'evilads.well-known-cdn.com', 'moreads.net'})

-- called by the recursor for every query, before normal resolution
function preresolve(dq)
  if ads:check(dq.qname) then
    -- answer NXDOMAIN ourselves; nothing is forwarded upstream
    dq.rcode = pdns.NXDOMAIN
    return true;  -- true: we handled the query
  end
  return false;   -- false: not an ad domain, resolve normally
end

… and that’s it. Well, you also have to enable it/load the file in the configuration, but nothing else. The syntax is pretty straightforward, matching by suffix here, and if you need more complex logic you can of course write it; it’s just Lua and a simple API.

I don’t see any immediate equivalent in BIND, so there’s that, but if you can use PowerDNS, the above solution seems simple for simple cases, and could be extended if needed (though I’m not sure in which cases that would be).

The only other thing one needs to do is to hand out the local/custom resolver to all clients, whether desktop or mobile, and that’s it. A DNS server is bread-and-butter in DHCP, so this is better supported than a proxy setting, and once the host name has been (mis)resolved, nothing else sits in the communication path. True, your name server might see higher CPU usage, but for a home network this should not be a problem.
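
With dnsmasq as the DHCP server, for example, advertising the filtering resolver takes a single line (the address below is hypothetical):

# /etc/dnsmasq.d/dhcp.conf: hand the local filtering resolver to all DHCP clients
dhcp-option=option:dns-server,192.168.1.2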

Can this filtering method (either RPZ or hooks) be worked around by ad networks? Sure, like anything. But changing the base domain is not fun. DNSSEC might break it (note that BIND RPZ can be configured to ignore DNSSEC), but I’m more worried about DNS-over-HTTPS, which I initially thought was done for the user, but now I’m not so sure anymore. Not being in control even of your own DNS resolver seems… evil 😈, but what do I know.

Combined authoritative + recursive solution

This solution was provided by Guillem Jover, who uses unbound, a combined authoritative name server and recursive resolver, together with dnsmasq (which is even more things, I think):

For my LANs I use unbound, and then block this kind of thing in /etc/unbound/unbound.conf.d/block.conf, with stuff like:

server:
 local-zone: adsite.example.com refuse

But then for things that are mobile, and might get out of the LAN, such as laptops, I also block with dnsmasq in /etc/dnsmasq.d/block.conf, with stuff like:

 address=/adsite.example.com/

I still use ublock-origin to block stuff at the browser level, though, for yet an extra layer of noise suppression. :)

Thanks for the info!

Happy browsing!

10 lines of Lua, and now for sure I’m going to get even fatter without the “this natural method will melt your belly fat in 7 days” information. Or I will just throw away banana peels without knowing what I could have done with them.

After a few days, I asked myself “but ads are not so bad, why did I…” and then realised that yes, ads are not so bad anymore. And Slashdot actually loads faster 😜

So, happy browsing!

10 January, 2021 09:33PM

Dirk Eddelbuettel

RcppArmadillo 0.10.1.2.2: Minor update

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 802 other packages on CRAN.
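
As a quick illustration of that integration (my own toy example, not from this release), Rcpp attributes let an Armadillo expression be exported to R in a few lines:

// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// [[Rcpp::export]]
arma::mat crossprodArma(const arma::mat& X) {
    // X'X via Armadillo's expression templates, returned to R as a matrix
    return X.t() * X;
}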

This release was needed because we use the Matrix package for some (optional) tests related to sparse matrices, and a small and subtle refinement in the recent 1.3.0 release of Matrix required us to update those tests. Nothing has changed in how we set up, or operate on, sparse matrices. My thanks to Binxiang and Martin Maechler for feedback and suggestions on the initial fixes that Binxiang and I had set up independently. At the same time we upgraded some package internals related to continuous integration (for that, also see my blog post and video from earlier this week). Lastly, Conrad sent in a one-line upstream fix for dealing with NaN in sign().

The full set of changes follows.

Changes in RcppArmadillo version 0.10.1.2.2 (2021-01-08)

  • Correct one unit test for a Matrix 1.3.0-caused change (Binxiang in #319 and Dirk in #322)

  • Suppress one further warning from Matrix (Dirk)

  • Apply an upstream NaN correction (Conrad in #321)

  • Added GitHub Actions CI using run.sh from r-ci (Dirk)

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 January, 2021 08:20PM

January 09, 2021

Jonathan McDowell

Free Software Activities for 2020

As a reader of Planet Debian I see a bunch of updates at the start of each month about what people are up to in terms of their Free Software activities. I’m not generally active enough in the Free Software world to justify a monthly report, but I did a report of my Free Software Activities for 2019 and thought I’d do another for 2020. I ended up not doing as much as last year; I put a lot of that down to fatigue about the state of the world and generally not wanting to spend time on the computer at the end of the working day.

Conferences

2020 was unsurprisingly not a great year for conference attendance. I was fortunate enough to make it to FOSDEM and CopyleftConf 2020 - I didn’t speak at either, but had plenty of interesting hallway track conversations as well as seeing some good talks. I hadn’t been planning to attend DebConf20 due to time constraints, but its move to an entirely online conference meant I was able to attend a few talks at least. I have to say I don’t like virtual conferences as much as the real thing; it’s not as easy to have the casual chats at them, and it’s also harder to carve out the exclusive time when you’re at home. That said I spoke at NIDevConf this year, which was also fully virtual. It’s not a Free Software focussed conference, but there’s a lot of crossover in terms of technologies and I spoke on my experiences with Go, some of which are influenced by my packaging experiences within Debian.

Debian

Most of my contributions to Free software happen within Debian.

As part of the Data Protection Team I responded to various inbound queries to that team. Some of this involved chasing up other project teams who had been slow to respond - folks, if you’re running a service that stores personal data about people then you need to be responsive to requests about it.

The Debian Keyring was possibly my largest single point of contribution. We’re in a roughly 3 month rotation of who handles the keyring updates, and I handled 2020.02.02, 2020.03.24, 2020.06.24, 2020.09.24 + 2020.12.24

For Debian New Members I’m mostly inactive as an application manager - we generally seem to have enough available recently. If that changes I’ll look at stepping in to help, but I don’t see that happening. I continue to be involved in Front Desk, having various conversations throughout the year with the rest of the team, but there’s no doubt Mattia and Pierre-Elliott are the real doers at present.

In terms of package uploads I continued to work on gcc-xtensa-lx106, largely doing uploads to deal with updates to the GCC version or packaging (5, 6 + 7). sigrok had a few minor updates, libsigrok 0.5.2-2 and libsigrokdecode 0.5.3-2, as well as a new upstream release of Pulseview 0.4.2-1 and a fix to cope with a change to Qt (0.4.2-2). Due to the sigrok-firmware requirement on sdcc I also continued to help out there, updating to 4.0.0+dfsg-1 and doing some fixups in 4.0.0+dfsg-2.

Despite still not writing any VHDL these days, I continue to try to make sure ghdl is OK, because I found it a useful tool in the past. In 2020 that meant a new upstream release, 0.37+dfsg-1, along with a couple of more minor updates (0.37+dfsg-2 + 0.37+dfsg-3).

libcli had a new upstream release, 1.10.4-1, and I did a long overdue update to sendip to the latest upstream release, 2.6-1 having been poked about an outstanding bug by the Reproducible Builds folk.

OpenOCD is coming up to 4 years since its last stable release, but I did a snapshot upload to Debian experimental (0.10.0+g20200530-1) and a subsequent one to unstable (0.10.0+g20200819-1). There are also moves to produce a 0.11.0 release and I uploaded 0.11.0~rc1-1 as a result. libjaylink got a bump as a result (0.2.0-1) after some discussion with upstream.

OpenOCD

On the subject of OpenOCD I’ve tried to be a bit more involved upstream. I’m not familiar enough with the intricacies of JTAG/SWD/the various architectures supported to contribute to the core, but I pushed the config for my HIE JTAG adapter upstream and try to review patches that don’t require in-depth hardware knowledge.

Linux

I’ve been contributing to the Linux kernel for a number of years now, mostly just minor bits here and there for issues I hit. This year I spent a lot of time getting support for the MikroTik RB3011 router upstreamed. That included the basic DTS addition, fixing up QCA8K to support SGMII CPU connections, adding proper 802.1q VLAN support to QCA8K and cleaning up an existing QCOM ADM driver that’s required for the NAND. There were a number of associated bugfixes/minor changes found along the way too. It can be a little frustrating at times going round the review loop with submitting things upstream, but I do find it quite satisfying when it all comes together and I have no interest in weird vendor trees that just bitrot over time.

Software in the Public Interest

I haven’t sat on the board of SPI since 2015 but I was still acting as the primary maintainer of the membership website (with Martin Michlmayr as the other active contributor) and hosting it on my own machine. I managed to finally extricate myself from this role in August. I remain a contributing member.

Personal projects

2020 finally saw another release (0.6.0, followed swiftly by 0.6.1 to allow the upload of 0.6.1-1 to Debian) of onak. This release finally adds various improvements to deal with the hostility shown to the OpenPGP keyserver network in recent years, including full signature verification as an option.

I fixed an oversight in my Digoo/1-wire temperature decoder and a bug that turned up on ARM but not MIPS in my mqtt-arp code. I should probably package it for Debian (even if I don’t upload it), as I’m running it on my RB3011 now.

09 January, 2021 06:09PM

Thorsten Alteholz

My Debian Activities in December 2020

FTP master

This month I accepted only 8 packages and, like last month, rejected 0. Despite the holidays, 293 packages got accepted overall.

Debian LTS

This was my seventy-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 26h. During that time I did LTS uploads of:

  • [DLA 2489-1] minidlna security update for two CVEs
  • [DLA 2490-1] x11vnc security update for one CVE
  • [DLA 2501-1] influxdb security update for one CVE
  • [DLA 2511-1] highlight.js security update for one CVE

Unfortunately the package slirp has the same version in Stretch and Buster. So I first had to upload slirp/1:1.0.17-11 to unstable in order to be allowed to fix the CVE in Buster, and to finally upload a new version to Stretch. Meanwhile the fix for Buster has been approved by the Release Team and I am now waiting for the next point release.

I also prepared a debdiff for influxdb, which will result in DSA-4823-1 in January.

As there appeared new CVEs for openjpeg2, I did not do an upload yet. This is planned for January now.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirtieth ELTS month.

During my allocated time I uploaded:

  • ELA-341-1 for highlight.js

As well as for LTS, I did not finish work on all CVEs of openjpeg2, so the upload is postponed to January.

Last but not least I did some days of frontdesk duties.

Unfortunately I also had to give back some hours.

Other stuff

This month I uploaded new upstream versions of:

I fixed one or two bugs in:

I improved packaging of:

Some packages just needed a source upload:

… and there have been even some new packages:

With these uploads I finished the libosmocom- and libctl-transitions.

The Debian Med Advent Calendar was again really successful this year. There was no new record, but with 109 closed bugs it reached the second-highest number ever.

year  bugs closed
2011      63
2012      28
2013      73
2014       5
2015     150
2016      95
2017     105
2018      81
2019     104
2020     109

Well done everybody who participated. It is really nice to see that Andreas is no longer a lone wolf.

09 January, 2021 07:34AM by alteholz

Louis-Philippe Véronneau

puppetserver 6: a Debian packaging post-mortem

I have been a Puppet user for a couple of years now, first at work, and eventually for my personal servers and computers. Although it can have a steep learning curve, I find Puppet both nimble and very powerful. I also prefer it to Ansible for its speed and the agent-server model it uses.

Sadly, Puppet Labs hasn't been the most supportive upstream and tends to move pretty fast. Major versions rarely last for a whole Debian Stable release and the upstream .deb packages are full of vendored libraries.1

Since 2017, Apollon Oikonomopoulos has been the one doing most of the work on Puppet in Debian. Sadly, he's had less time for that lately and with Puppet 5 being deprecated in January 2021, Thomas Goirand, Utkarsh Gupta and I have been trying to package Puppet 6 in Debian for the last 6 months.

With Puppet 6, the old ruby Puppet server using Passenger is not supported anymore and has been replaced by puppetserver, written in Clojure and running on the JVM. That's quite a large change and although puppetserver does reuse some of the Clojure libraries puppetdb (already in Debian) uses, packaging it meant quite a lot of work.

Work in the Clojure team

As part of my efforts to package puppetserver, I had the pleasure to join the Clojure team and learn a lot about the Clojure ecosystem.

As I mentioned earlier, a lot of the Clojure dependencies needed for puppetserver were already in the archive. Unfortunately, when Apollon Oikonomopoulos packaged them, the leiningen build tool hadn't been packaged yet. This meant I had to rebuild a lot of packages, on top of packaging some new ones.

Since then, thanks to the efforts of Elana Hashman, leiningen has been packaged and lets us run the upstream testsuites and create .jar artifacts closer to those upstream releases.

During my work on puppetserver, I worked on the following packages:

List of packages
  • backport9
  • bidi-clojure
  • clj-digest-clojure
  • clj-helper
  • clj-time-clojure
  • clj-yaml-clojure
  • cljx-clojure
  • core-async-clojure
  • core-cache-clojure
  • core-match-clojure
  • cpath-clojure
  • crypto-equality-clojure
  • crypto-random-clojure
  • data-csv-clojure
  • data-json-clojure
  • data-priority-map-clojure
  • java-classpath-clojure
  • jnr-constants
  • jnr-enxio
  • jruby
  • jruby-utils-clojure
  • kitchensink-clojure
  • lazymap-clojure
  • liberator-clojure
  • ordered-clojure
  • pathetic-clojure
  • potemkin-clojure
  • prismatic-plumbing-clojure
  • prismatic-schema-clojure
  • puppetlabs-http-client-clojure
  • puppetlabs-i18n-clojure
  • puppetlabs-ring-middleware-clojure
  • puppetserver
  • raynes-fs-clojure
  • riddley-clojure
  • ring-basic-authentication-clojure
  • ring-clojure
  • ring-codec-clojure
  • shell-utils-clojure
  • ssl-utils-clojure
  • test-check-clojure
  • tools-analyzer-clojure
  • tools-analyzer-jvm-clojure
  • tools-cli-clojure
  • tools-reader-clojure
  • trapperkeeper-authorization-clojure
  • trapperkeeper-clojure
  • trapperkeeper-filesystem-watcher-clojure
  • trapperkeeper-metrics-clojure
  • trapperkeeper-scheduler-clojure
  • trapperkeeper-webserver-jetty9-clojure
  • url-clojure
  • useful-clojure
  • watchtower-clojure

If you want to learn more about packaging Clojure libraries and applications, I rewrote the Debian Clojure packaging tutorial and added a section about the quirks of using leiningen without a dedicated dh_lein tool.

Work left to get puppetserver 6 in the archive

Unfortunately, I was not able to finish the puppetserver 6 packaging work. It is thus unlikely it will make it into Debian Bullseye. If the issues described below are fixed, it would be possible to package puppetserver in bullseye-backports though.

So what's left?

jruby

Although I tried my best (kudos to Utkarsh Gupta and Thomas Goirand for the help), jruby in Debian is still broken. It does build properly, but the testsuite fails with multiple errors:

  • ruby-psych is broken (#959571)
  • there are some random java failures on a few tests (no clue why)
  • tests run by rakelib/rspec.rake fail to run, maybe because the --pattern command line option isn't compatible with our version of rake? Utkarsh seemed to know why this happens.

jruby testsuite failures aside, I have not been able to use the jruby.deb the package currently builds in jruby-utils-clojure (testsuite failure). I had the same exact failure with the (more broken) jruby version that is currently in the archive, which leads me to think this is a LOAD_PATH issue in jruby-utils-clojure. More on that below.

To try to bypass these issues, I tried to vendor jruby into jruby-utils-clojure. At first I understood vendoring meant including upstream pre-built artifacts (jruby-complete.jar) and shipping them directly.

After talking with people on the #debian-mentors and #debian-ftp IRC channels, I now understand why this isn't a good idea (and why it's not permitted in Debian). Many thanks to the people who were patient and kind enough to discuss this with me and give me alternatives.

As far as I now understand it, vendoring in Debian means "to have an embedded copy of the source code in another package". Code shipped that way still needs to be built from source. This means we need to build jruby ourselves, one way or another. Vendoring jruby in another package thus isn't terribly helpful.

If fixing jruby the proper way isn't possible, I would suggest trying to build the package using embedded code copies of the external libraries jruby needs to build, instead of trying to use the Debian libraries.2 This should make it easier to replicate what upstream does and to have a final .jar that can be used.

jruby-utils-clojure

This package is a first-level dependency for puppetserver and is the glue between jruby and puppetserver.

It builds fine, but the testsuite fails when using the Debian jruby package. I think the problem is caused by a jruby LOAD_PATH issue.

The Debian jruby package plays with the LOAD_PATH a little to try to use Debian packages instead of downloading gems from the web, as upstream jruby does. This seems to clash with the gem-home, gem-path, and jruby-load-path variables in the jruby-utils-clojure package. The testsuite plays around with these variables and some Ruby libraries can't be found.

I tried to fix this, but failed. Using the upstream jruby-complete.jar instead of the Debian jruby package, the testsuite passes fine.

This package could clearly be uploaded to NEW right now by ignoring the testsuite failures (we're just packaging static .clj source files in the proper location in a .jar).

puppetserver

jruby issues aside, packaging puppetserver itself is 80% done. Using the upstream jruby-complete.jar artifact, the testsuite fails with a weird Clojure error I'm not sure I understand, but I haven't debugged it for very long.

Upstream uses git submodules to vendor puppet (agent), hiera (3), facter and puppet-resource-api for the testsuite to run properly. I haven't touched that, but I believe we can either:

  • link to the Debian packages
  • fix the Debian packages if they don't include the right files (maybe in a new binary package that just ships part of the source code?)

Without the testsuite actually running, it's hard to know what files are needed in those packages.

What now

Puppet 5 is now deprecated.

If you or your organisation cares about Puppet in Debian,3 puppetserver really isn't far away from making it in the archive.

Very talented Debian Developers are always eager to work on these issues and can be contracted for very reasonable rates. If you're interested in contracting someone to help iron out the last issues, don't hesitate to reach out via one of the following:

As for me, I'm happy to say I got a new contract and will go back to teaching Economics for the Winter 2021 session. I might help out with some general Debian packaging work from time to time, but it'll be as a hobby rather than a job.

Thanks

The work I did during the last 6 weeks would not have been possible without the support of the Wikimedia Foundation, who were gracious enough to contract me. My particular thanks to Faidon Liambotis, Moritz Mühlenhoff and John Bond.

Many, many thanks to Rob Browning, Thomas Goirand, Elana Hashman, Utkarsh Gupta and Apollon Oikonomopoulos for their direct and indirect help, without which all of this wouldn't have been possible.


  1. For example, the upstream package for the Puppet Agent vendors OpenSSL. 

  2. One of the problems of using Ruby libraries already packaged in Debian is that jruby currently only supports Ruby 2.5. Ruby libraries in Debian are currently expected to work with Ruby 2.7, with the transition to Ruby 3.0 planned after the Bullseye release. 

  3. If you run Puppet, you clearly should care: the .deb packages upstream publishes really aren't great and I would not recommend using them. 

09 January, 2021 05:00AM by Louis-Philippe Véronneau

January 08, 2021

Dirk Eddelbuettel

#32: Portable Continuous Integration using r-ci

Welcome to the 32nd post in the rarely raucous R recommendations series, or R4 for short. This post covers continuous integration, a topic near and dear to many of us who have come to recognise its added value.

The popular and widely-used service at Travis is undergoing changes driven by a hard-to-argue-with need for monetization. A fate that, if we’re honest, lies ahead for most “free” services, so who knows, maybe one day we will have to turn away from other currently ubiquitous services. Because one never knows, it can pay off not to get too tied to any one service. Which brings us to today’s post and my r-ci service, which allows me to run CI at Travis, at GitHub, at Azure, and on local Docker containers, as the video demonstrates. It will likely also work at GitLab and other services; I simply haven’t tried any others.

The slides are here. The r-ci website introduces r-ci at a high level. This repo at GitHub contains run.sh, and can be used to raise issues, ask questions, or provide feedback.
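
For a flavour of what this looks like in practice, here is a hedged sketch of a GitHub Actions workflow driving run.sh; the file layout and step names are my assumptions, not an excerpt from the r-ci docs:

# .github/workflows/ci.yaml: hypothetical minimal r-ci setup
# assumes run.sh from the r-ci repo has been copied into the repository
on: [push, pull_request]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Bootstrap
        run: ./run.sh bootstrap      # set up R and system dependencies
      - name: Dependencies
        run: ./run.sh install_all    # install package dependencies
      - name: Test
        run: ./run.sh run_tests      # run the package checks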

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 January, 2021 04:09AM

January 07, 2021

Daniel Pocock

Technology, Citizenship, Democracy and Tyrants

This week, the world saw dramatic scenes in Washington as a mob stormed the US Capitol building. Scenes like this are painful to watch but they also provide fascinating opportunities to learn lessons about leadership and the world we live in.

At the same time, the free software world has been coming to grips with the judicial complaints at least one woman has made against a free software organization, the Free Software Foundation Europe e.V. and its leader, Matthias Kirschner.

This is relevant for all of us in every free software organization. Try to put yourself in the shoes of a female employee or a volunteer, a foreigner in Germany, when they go into a court room and they are confronted by this vile defamation from somebody who claims, as President of an organization using the name Free Software, that he speaks on behalf of all of us.

Now stop for a moment and think of the woman shot during the riots in Washington. As an employee of the armed services, President Trump was not just a political leader, he was also commander in chief and therefore her boss. As both an American and a soldier, even if her conduct looks outrageous to many of us, she believed she belonged there.

The communications of both these presidents, Kirschner's vile campaigns of vilification against multiple women and volunteers and Trump's call to resist, resonated with each of these women in ways that outsiders may never appreciate.

The stories of these "leaders" are intertwined and among other things, begin on my birthday.

9 November

9 November 1989 is the day that a mob began pulling down the Berlin Wall. As a thought exercise, how would censors at Twitter or Facebook decide which side to take in such an event? Would they focus on the short term risk to human life or the long term benefit of democracy in former communist states? Should they make such decisions at all?

On 9 November 2016, I woke up on my birthday to the news that Donald Trump had been elected president of the United States.

As a belated birthday gift, I'm giving the world this White House briefing room scene for OBS Studio so you can all peacefully simulate a visit to the White House during President Trump's absence from social media. OBS Studio is free software, that means it is free for everybody to download and use, whether you are a president or not.

President Trump has shown us the promise of democracy: anyone can be president. Now let free software do the rest.

The censorship question, Silicon Valley against the US constitution

Having mentioned censorship, I can't resist the urge to put Mark Zuckerberg in his place.

Twitter, Google-Youtube and Facebook all censored the US President. Many people will not lose sleep over that decision.

The US constitution provides two means to remove a president. Neither of these procedures puts power in the hands of Silicon Valley. The US constitution requires the action to come from the democratically elected lawmakers and Vice President. Many Americans are furious at Trump for failing to uphold his oath and protect the US constitution against a mob: but who will protect the constitution from Silicon Valley?

It is at exactly this time that Silicon Valley overlords will arrive and pretend to be saviors. They are wolves in sheep's clothing.

If you sincerely care about the loss of human life, contemplate the hundreds of thousands of Americans who died under Trump's Coronavirus response. Silicon Valley did little to restrain him when he advised people not to wear face masks, a far more deadly policy than the riots.

It is a simple fact that the US death toll from Coronavirus far exceeds the combined death toll from Hiroshima and Nagasaki.

Why did Twitter and friends stop procrastinating and finally cut him off on 7 January 2021? Was it because of the loss of life at the US Capitol or was it because on the same day, voters in Georgia had just given the incoming president control of the US Senate?

When Trump advised the US population to drink or inject bleach, nobody took any action against him.

Google, Twitter and Facebook tolerated Trump while hundreds of thousands of people were dying. They feared another four years of his rule. When the run-off vote in Georgia banished the Republicans, Silicon Valley had no more use for Trump and they cut him loose. Had Trump remained in office or had the Republicans won Georgia, Trump would still be tweeting today. Hundreds of thousands of dead Coronavirus patients wouldn't be mentioned. Silicon Valley's intervention was entirely self-serving.

Nominating for the FSFE elections

Shortly after that birthday surprise in 2016, I submitted my nomination for the role of Fellowship Representative in the elections of FSFE.

Ever since then, I've been subjected to something that the former leader of Debian described as a campaign of harassment.

Even before the US election in November 2020, media outlets leaked details about a Trump conspiracy to undermine the vote. While FSFE's fellows were deciding how to vote in 2017, FSFE's management were secretly discussing how to avoid further elections:

florian snow, matthias kirschner, fsfe

Well dressed children

Since losing the US election in November 2020, President Trump has behaved like a giant toddler stamping his foot.

On a daily basis, this reminded me of the state of free software organizations. From the disappearance of my nomination email in the Fedora Council elections, my removal from the Debian keyring days before nominations opened in 2019 and the situation in FSFE.

FSFE stands out. The community clearly voted for me, putting Florian Snow second, as results at Cornell's independent Internet Voting Service clearly show.

Yet just a few months later, the president, Matthias Kirschner, used his executive power to give the same General Assembly voting rights to the loser, Snow, in effect, changing the result of the election.

Subject: Re: [GA] Membership Application: Florian Snow, Feedback until 15 December
Date: Mon, 18 Dec 2017 07:16:04 +0100
From: Matthias Kirschner <mk@fsfe.org>
To: ga@lists.fsfe.org

After all the positive feedback by you, I have now accepted Florian as member to be officially confirmed at the next general assembly.

Jonas will add him to the mailing list.

Regards,
Matthias

The community had voted for me but Kirschner needed to have another white German male. I've never seen anything like this in any other country. What was he looking for with this move, an emotional support animal? If we really care about diversity, we have to consider that this same psychology made it impossible for Kirschner to work with women as equals, so he fired them all.

The way that Snow entered the General Assembly, despite losing the election, is what came to mind when I saw people clambering into the US Senate.

florian snow, matthias kirschner, fsfe

Elections were just an illusion

It turns out that everything FSFE gave us, the Fellowship smart cards with our names on them, like membership cards, the fsfe.org email addresses and even the elections were just a gimmick to make us feel like members without really being members.

In other words, these Fellowships were comparable to investments in funds operated by Bernard Madoff or Allen Stanford. Their ponzi schemes gave investors bank statements showing balances that didn't exist while FSFE voting gave us a feeling of membership that didn't really exist.

Challenging such deceptions was my responsibility as an elected representative of the community. If the representative doesn't call out something like this, they are not doing their job properly. Yet whenever I asked about it, the only reply was Kirschner's fury. When FSFE's female employees wrote about asking for equal pay, I could relate to their words completely in my quest for volunteers to have equal voting rights:

A female colleague and me had dared to discuss wage transparency and gender pay gap in the office. Apparently it is common in Germany that this gap exceeds 20%, but we both felt secure that the free software movement is progressive, and cares about being inclusive and equal opportunities oriented. Unfortunately we miscalculated – our boss Matthias was beyond furious. After that office meeting, he told my colleague “there will be consequences”.

Another round of censorship

The woman in question writes about defamation in particular:

The court process was taxing. The FSFE lawyer made up easily disprovable slanders against me – and I say “easily” because their charges were demonstrably false, but of course finding evidence that is also admissible in the conservative German court system did not do much good for my stress levels over summer. They disrespected the court by not submitting papers on time, and they refused to answer my allegations, opting instead to portray me as a disobedient, sexist, racist, incompetent belligerent. Why did they give such a person a permanent contract after a six month probation period?

People noticed that after I resigned in disgust from the FSFE, my blog was censored from Planet Fedora, Planet Mozilla, Planet Ubuntu and Planet Debian. I receive regular reports of emails being sent behind my back. Several people leaked an email from Kirschner about how he wanted to gather information:

One general wish -- which I agreed with -- from Debian was to better share information about people

What is the purpose of such a communication which is clearly illegal under the GDPR? Did Kirschner hope to obtain information with which to coerce or discredit a representative elected by the community, just as Trump has tried to discredit the US elections?

At the point Kirschner wrote that email, I was no longer a member of FSFE. His methods and motives were completely illegal under the GDPR, just as Trump's attempts to cling on to power are illegal.

Home invasions

We could think of the US Capitol, rather than the White House, as the home of US democracy. How would this woman feel about the president of the Free Software organization going to her home against her will?

A weekend of non-stop calls followed, including from hidden phone numbers. He even texted telling me I should answer my phone, for my own better. Even after my lawyer warned him to terminate all attempts to communicate with me and send someone else to pick up my work laptop, he came in person to my house, and was very irritated that I was not alone.

This is incredible stuff.

Take a moment to put yourself in this woman's shoes, imagine your boss arriving at your home anticipating he would find you alone.

Now imagine if this woman has spent much of her life in the free software world, she volunteers for other groups, maybe Fedora or Mozilla and she attends one of their booths at an event. How is she going to feel if she is standing at a Fedora booth or Mozilla booth and it is right beside the FSFE booth, right beside the man who violated her home in this way?

A few weeks ago, I politely asked on the Fedora Council mailing list whether it was appropriate for Red Hat to give money to Kirschner's group. Kirschner's behavior at this woman's home is no more tolerable than Trump's behavior at the US Capitol.

Choking democracy

After Kirschner used his executive authority to grant Florian Snow voting rights in the FSFE, Kirschner then put Snow in charge of the communications from the representative elected by the Fellows. This was a hideous snub to democracy, matched only by the antics of Trump and his mob:

Subject: Request to mailing list Discussion rejected
Date: Tue, 18 Sep 2018 05:06:09 +0000
From: discussion-owner@lists.fsfe.org
To: daniel@pocock.pro

Your request to the Discussion mailing list

Posting of your message titled "Re: FSFE and censorship - not true?"

has been rejected by the list moderator.

Snow, the interloper, shutting down debate, just as interlopers shut down the US Capitol...

florian snow, matthias kirschner, fsfe

07 January, 2021 10:00PM

Technology, Citizenship, Democracy and Tyrants

This week, the world saw dramatic scenes in Washington as a mob stormed the US Capitol building. Scenes like this are painful to watch but they also provide fascinating opportunities to learn lessons about leadership and the world we live in.

At the same time, the free software world has been coming to grips with the judicial complaints at least one woman has made against a free software organization, the Free Software Foundation Europe e.V. and its leader, Matthias Kirschner.

This is relevant for all of us in every free software organization. Try to put yourself in the shoes of a female employee or a volunteer, a foreigner in Germany, when they go into a court room and they are confronted by this vile defamation from somebody who claims, as President of an organization using the name Free Software, that he speaks on behalf of all of us.

Now stop for a moment and think of the woman shot during the riots in Washington. As an employee of the armed services, President Trump was not just a political leader, he was also commander in chief and therefore her boss. As both an American and a soldier, even if her conduct looks outrageous to many of us, she believed she belonged there.

The communications of both these presidents, Kirschner's vile campaigns of vilification against multiple women and volunteers and Trump's call to resist, resonated with each of these women in ways that outsiders may never appreciate.

The stories of these "leaders" are intertwined and among other things, begin on my birthday.

9 November

9 November 1989 is the day that a mob began pulling down the Berlin Wall. As a thought exercise, how would censors at Twitter or Facebook decide which side to take in such an event? Would they focus on the short term risk to human life or the long term benefit of democracy in former communist states? Should they make such decisions at all?

On 9 November 2016, I woke up on my birthday to the news that Donald Trump had been elected president of the United States.

As a belated birthday gift, I'm giving the world this White House briefing room scene for OBS Studio so you can all peacefully simulate a visit to the White House during President Trump's absence from social media. OBS Studio is free software, that means it is free for everybody to download and use, whether you are a president or not.

President Trump has shown us the promise of democracy: anyone can be president. Now let free software do the rest.

The censorship question, Silicon Valley against the US constitution

Having mentioned censorship, I can't resist the urge to put Mark Zuckerberg in his place.

Twitter, Google-Youtube and Facebook all censored the US President. Many people will not lose sleep over that decision.

The US constitution provides two means to remove a president. Neither of these procedures puts power in the hands of Silicon Valley. The US constitution requires the action to come from the democratically elected lawmakers and Vice President. Many Americans are furious at Trump for failing to uphold his oath and protect the US constitution against a mob: but who will protect the constitution from Silicon Valley?

It is at exactly this time that Silicon Valley overlords will arrive and pretend to be saviors. They are wolves in sheep's clothing.

If you sincerely care about the loss of human life, contemplate the hundreds of thousands of Americans who died under Trump's Coronavirus response. Silicon Valley did little to restrain him when he advised people not to wear face masks, a far more deadly policy than the riots.

It is a simple fact that the US death toll from Coronavirus far exceeds the combined death toll from Hiroshima and Nagasaki.

Why did Twitter and friends stop procrastinating and finally cut him off on 7 January 2021? Was it because of the loss of life at the US Capitol or was it because on the same day, voters in Georgia had just given the incoming president control of the US Senate?

When Trump advised the US population to drink or inject bleach, nobody took any action against him.

Google, Twitter and Facebook tolerated Trump while hundreds of thousands of people were dying. They feared another four years of his rule. When the run-off vote in Georgia banished the Republicans, Silicon Valley had no more use for Trump and they cut him loose. Had Trump remained in office or had the Republicans won Georgia, Trump would still be tweeting today. Hundreds of thousands of dead Coronavirus patients wouldn't be mentioned. Silicon Valley's intervention was entirely self-serving.

Nominating for the FSFE elections

Shortly after that birthday surprise in 2016, I submitted my nomination for the role of Fellowship Representative in the elections of FSFE.

Ever since then, I've been subjected to something that the former leader of Debian described as a campaign of harassment.

Even before the US election in November 2020, media outlets leaked details about a Trump conspiracy to undermine the vote. While FSFE's fellows were deciding how to vote in 2017, FSFE's management were secretly discussing how to avoid further elections:

florian snow, matthias kirschner, fsfe

Well dressed children

Since losing the US election in November 2020, President Trump has behaved like a giant toddler stamping his foot.

On a daily basis, this reminded me of the state of free software organizations. From the disappearance of my nomination email in the Fedora Council elections, my removal from the Debian keyring days before nominations opened in 2019 and the situation in FSFE.

FSFE stands out. The community clearly voted for me, putting Florian Snow second, as results at Cornell's independent Internet Voting Service clearly show.

Yet just a few months later, the president, Matthias Kirschner, used his executive power to give the same General Assembly voting rights to the loser, Snow, in effect, changing the result of the election.

Subject: Re: [GA] Membership Application: Florian Snow, Feedback until 15 December
Date: Mon, 18 Dec 2017 07:16:04 +0100
From: Matthias Kirschner <mk@fsfe.org>
To: ga@lists.fsfe.org

After all the positive feedback by you, I have now accepted Florian as member to be officially confirmed at the next general assembly.

Jonas will add him to the mailing list.

Regards,
Matthias

The community had voted for me but Kirschner needed to have another white German male. I've never seen anything like this in any other country. What was he looking for with this move, an emotional support animal? If we really care about diversity, we have to consider that this same psychology made it impossible for Kirschner to work with women as equals, so he fired them all.

The way that Snow entered the General Assembly, despite losing the election, is what came to mind when I saw people clambering into the US Senate.

florian snow, matthias kirschner, fsfe

Elections were just an illusion

It turns out that everything FSFE gave us, the Fellowship smart cards with our names on them, like membership cards, the fsfe.org email addresses and even the elections were just a gimmick to make us feel like members without really being members.

In other words, these Fellowships were comparable to investments in funds operated by Bernard Madoff or Allen Stanford. Their ponzi schemes gave investors bank statements showing balances that didn't exist while FSFE voting gave us a feeling of membership that didn't really exist.

Challenging such deceptions was my responsibility as an elected respresentative of the community. If the representative doesn't call out something like this, they are not doing their job properly. Yet whenever I asked about it, the only reply was Kirschner's fury. When FSFE's female employees wrote about asking for equal pay, I could relate to their words completely in my quest for volunteers to have equal voting:

A female colleague and me had dared to discuss wage transparency and gender pay gap in the office. Apparently it is common in Germany that this gap exceeds 20%, but we both felt secure that the free software movement is progressive, and cares about being inclusive and equal opportunities oriented. Unfortunately we miscalculated – our boss Matthias was beyond furious. After that office meeting, he told my colleague “there will be consequences”.

Another round of censorship

The woman in question writes about defamation in particular:

The court process was taxing. The FSFE lawyer made up easily disprovable slanders against me – and I say “easily” because their charges were demonstrably false, but of course finding evidence that is also admissible in the conservative German court system did not do much good for my stress levels over summer. They disrespected the court by not submitting papers on time, and they refused to answer my allegations, opting instead to portray me as a disobedient, sexist, racist, incompetent belligerent. Why did they give such a person a permanent contract after a six month probation period?

People noticed that after I resigned in disgust from the FSFE, my blog was censored from Planet Fedora, Planet Mozilla, Planet Ubuntu and Planet Debian. I receive regular reports of emails being sent behind my back. Several people leaked an email from Kirschner about how he wanted to gather information:

One general wish -- which I agreed with -- from Debian was to better share information about people

What is the purpose of such a communication, which is clearly illegal under the GDPR? Did Kirschner hope to obtain information with which to coerce or discredit a representative elected by the community, just as Trump has tried to discredit the US elections?

By the time Kirschner wrote that email, I was no longer a member of FSFE. His methods and motives were completely illegal under the GDPR, just as Trump's attempts to cling to power are illegal.

Home invasions

We could think of the US Capitol, rather than the White House, as the home of US democracy. How would this woman feel about the president of the Free Software organization going to her home against her will?

A weekend of non-stop calls followed, including from hidden phone numbers. He even texted telling me I should answer my phone, for my own better. Even after my lawyer warned him to terminate all attempts to communicate with me and send someone else to pick up my work laptop, he came in person to my house, and was very irritated that I was not alone.

This is incredible stuff.

Take a moment to put yourself in this woman's shoes: imagine your boss arriving at your home, anticipating he would find you alone.

Now imagine that this woman has spent much of her life in the free software world, that she volunteers for other groups, maybe Fedora or Mozilla, and that she attends one of their booths at an event. How is she going to feel standing at a Fedora or Mozilla booth right beside the FSFE booth, right beside the man who violated her home in this way?

A few weeks ago, I politely asked on the Fedora Council mailing list whether it was appropriate for Red Hat to give money to Kirschner's group. Kirschner's behavior at this woman's home is no more tolerable than Trump's behavior at the US Capitol.

Choking democracy

After using his executive authority to grant Florian Snow voting rights in the FSFE, Kirschner put Snow in charge of moderating the communications from the representative elected by the Fellows. This was a hideous snub to democracy, matched only by the antics of Trump and his mob:

Subject: Request to mailing list Discussion rejected
Date: Tue, 18 Sep 2018 05:06:09 +0000
From: discussion-owner@lists.fsfe.org
To: daniel@pocock.pro

Your request to the Discussion mailing list

Posting of your message titled "Re: FSFE and censorship - not true?"

has been rejected by the list moderator.

Snow, the interloper, shutting down debate, just as interlopers shut down the US Capitol...

07 January, 2021 10:00PM

Keith Packard

kgames

Reviving Very Old X Code

I've taken the week between Christmas and New Year's off this year. I didn't really have anything serious planned, just taking a break from the usual routine. As often happens, I got sucked into doing a project when I received this simple bug report, Debian Bug #974011:

I have been researching old terminal and X games recently, and realized
that much of the code from 'xmille' originated from the terminal game
'mille', which is part of bsdgames.

...

[The copyright and license information] has been stripped out of all
code in the xmille distribution.  Also, none of the included materials
give credit to the original author, Ken Arnold.

The reason the 'xmille' source is missing copyright and license information from the 'mille' files is that they were copied in before that information was added upstream. Xmille forked from Mille around 1987 or so. I wrote the UI parts for the system I had at the time, which was running X10R4. A very basic port to X11 was done at some point, and that's what Debian has in the archive today.

At some point in the 90s, I ported Xmille to the Athena widget set, including several custom widgets in an Xaw extension library, Xkw. It's a lot better than the version in Debian, including displaying the cards correctly (the Debian version has some pretty bad color issues).

Here's what the current Debian version looks like:

Fixing The Bug

To fix the missing copyright and license information, I imported the mille source code into the "latest" Xaw-based version. The updated mille code had a number of bug fixes and improvements, along with the copyright information.

That should have been sufficient to resolve the issue: I could have constructed a suitable source package from whatever bits were needed and uploaded that as a replacement 'xmille' package.

However, at some later point, I had actually merged xmille into a larger package, 'kgames', which also included a number of other games, including Reversi, Dominoes, Cribbage and ten Solitaire/Patience variants. (As an aside, those last ten games formed the basis for my Patience Palm Pilot application, which seems to have inspired an Android app of the same name...)

So began my yak shaving holiday.

Building Kgames in 2020

Ok, so getting this old source code running should be easy, right? It's just a bunch of C code designed in the 80s and 90s to work on VAXen and their kin. How hard could it be?

  1. Everything was a 32-bit computer back then; pointers and ints were both 32 bits, so you could cast between them with wild abandon and cause no problems. Today, testing revealed segfaults in some corners of the code.

  2. It's K&R C code. Remember that the first version of ANSI C didn't come out until 1989, and it was years later that we could reliably expect to find an ANSI compiler on a random Unix box.

  3. It's X11 code. Fortunately (?), X11 hasn't changed since these applications were written, so at least that part still works just fine. Imagine trying to build Windows or Mac OS code from the early '90s on a modern OS...

I decided to dig in and add prototypes everywhere; that found a lot of pointer/int casting issues, as well as several lurking bugs where the code was just plain broken.
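
To make that failure mode concrete, here is a minimal sketch of the pre-ANSI pattern involved (illustrative only, not actual kgames code):

    /* node.c -- classic K&R style, no prototype anywhere */
    #include <stdlib.h>

    char *
    make_node()
    {
        return malloc(64);
    }

    /* game.c -- make_node() is called with no declaration in scope, so a
     * pre-ANSI compiler assumes it returns int.  On a 64-bit (LP64)
     * machine the returned pointer is truncated to a 32-bit int and cast
     * back, and the dereference segfaults -- exactly the class of bug
     * that adding prototypes everywhere flushes out. */
    int
    main()
    {
        char *p = (char *) make_node();
        *p = 'x';       /* likely crash on 64-bit */
        return 0;
    }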

After a day or so, I had things building and running and was no longer hitting crashes.

Kgames 1.0 uploaded to Debian New Queue

With that done, I decided I could at least upload the working bits to the Debian archive and close the bug reported above. kgames 1.0-2 may eventually get into unstable, presumably once the Debian FTP team realizes just how important fixing this bug is. Or something.

Here's what xmille looks like in this version:

And here's my favorite solitaire variant too:

But They Look So Old

Yeah, Xaw applications have a rustic appearance which may appeal to some, but for people with higher-resolution monitors and “well seasoned” eyesight, squinting at the tiny images and text makes it difficult to enjoy these games today.

How hard could it be to update them to use larger cards and scalable fonts?

Xkw version 2.0

I decided to dig in and start hacking the code, starting by adding new widgets to the Xkw library that used cairo for drawing instead of core X calls. Fortunately, the needs of the games were pretty limited, so I only needed to implement a handful of widgets:

  • KLabel. Shows a text string. It allows the string to be left, center or right justified. And that's about it.

  • KCommand. A push button, which uses KLabel for the underlying presentation.

  • KToggle. A push-on/push-off button, which uses KCommand for most of the implementation. Also supports 'radio groups' where pushing one on makes the others in the group turn off.

  • KMenuButton. A button for bringing up a menu widget; this is some pretty simple behavior built on top of KCommand.

  • KSimpleMenu, KSmeBSB, KSmeLine. These three create pop-up menus; KSimpleMenu creates a container which can hold any number of KSmeBSB (string) and KSmeLine (separator line) objects.

  • KTextLine. A single line text entry widget.

The other Xkw widgets all had their rendering switched to cairo as well, plus double buffering to make updates look smoother.
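
As an illustration of the switch, here is a rough sketch (my own, under assumed names, not the actual Xkw code) of a cairo-based redraw with double buffering: everything is drawn into an off-screen group first and blitted in a single step:

    /* Sketch: double-buffered cairo rendering for a label-like widget.
     * Illustrative only; assumes a cairo_t for the widget's window. */
    #include <cairo.h>

    static void
    redraw(cairo_t *cr, double width, double height, const char *label)
    {
        cairo_push_group(cr);               /* draw off-screen first */

        cairo_set_source_rgb(cr, 1, 1, 1);  /* clear the background */
        cairo_paint(cr);

        cairo_set_source_rgb(cr, 0, 0, 0);  /* scalable, centered text */
        cairo_select_font_face(cr, "sans", CAIRO_FONT_SLANT_NORMAL,
                               CAIRO_FONT_WEIGHT_NORMAL);
        cairo_set_font_size(cr, height / 3);

        cairo_text_extents_t ext;
        cairo_text_extents(cr, label, &ext);
        cairo_move_to(cr, (width - ext.width) / 2 - ext.x_bearing,
                      (height - ext.height) / 2 - ext.y_bearing);
        cairo_show_text(cr, label);

        cairo_pop_group_to_source(cr);      /* blit in one step */
        cairo_paint(cr);
    }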

SVG Playing Cards

Looking on wikimedia, I found a page referencing a large number of playing cards in SVG form. That led me to Adrian Kennard's playing card web site, which let me customize and download a deck of cards licensed using the CC0 Public Domain license.

With these cards, I set about rewriting the Xkw playing card widget, stripping out three different versions of bitmap playing cards and replacing them with just these new SVG versions.
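
Rendering one of those SVG cards onto a cairo context is pleasantly small; here is a hypothetical sketch using librsvg (my guess at the approach; the function and file names are made up, and this is not the actual widget code):

    /* Sketch: load an SVG card and scale it into a w-by-h viewport on
     * an existing cairo context.  Illustrative only. */
    #include <librsvg/rsvg.h>

    static void
    draw_card(cairo_t *cr, const char *svg_path, double w, double h)
    {
        GError *err = NULL;
        RsvgHandle *card = rsvg_handle_new_from_file(svg_path, &err);

        if (!card) {
            g_warning("loading %s failed: %s", svg_path, err->message);
            g_error_free(err);
            return;
        }

        RsvgRectangle viewport = { 0.0, 0.0, w, h };
        rsvg_handle_render_document(card, cr, &viewport, NULL);
        g_object_unref(card);
    }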

SVG Xmille Cards

Ok, so getting regular playing cards was good, but the original goal was to update Xmille, and that has cards hand-drawn by me. I could have just used those images, importing them into cairo and letting it scale them to suit the screen, but instead I decided to experiment with inkscape's bitmap tracing code to see what it could do with them.

First, I had to get them into a format that inkscape could parse. That turned out to be a bit tricky; the original format is a set of X bitmap layers, each layer painting a single color. I ended up hacking the Xmille source code to generate the images using X, then fetching them with XGetImage and walking them to construct XPM format files, which could then be fed into the portable bitmap tools to create PNG files that inkscape could handle.
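
The XGetImage step is simple enough to sketch (display, window and geometry are assumed to be set up already; this is illustrative, not the actual hacked-up Xmille code):

    /* Sketch: fetch the server-side rendering of a card and walk its
     * pixels, the first step toward emitting an XPM palette and grid. */
    #include <X11/Xlib.h>

    static void
    walk_pixels(Display *dpy, Window win, unsigned width, unsigned height)
    {
        XImage *img = XGetImage(dpy, win, 0, 0, width, height,
                                AllPlanes, ZPixmap);
        if (!img)
            return;

        for (unsigned y = 0; y < height; y++)
            for (unsigned x = 0; x < width; x++) {
                unsigned long pixel = XGetPixel(img, x, y);
                /* map 'pixel' to an XPM color-table entry here */
                (void) pixel;
            }

        XDestroyImage(img);
    }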

The resulting images have a certain charm:

I did replace the text in the images to make it readable; otherwise these are untouched from what inkscape generated.

The Results

Remember that all of these are applications built using the venerable X toolkit; there are still some non-antialiased graphics visible, as the shaped buttons use the X Shape extension. But all rendering is now done with cairo, so it's all anti-aliased and all scalable.

Here's what Xmille looks like after the upgrades:

And here's spider:

Once kgames 1.0 reaches Debian unstable, I'll upload these new versions.

07 January, 2021 08:47PM

Russell Coker

Monopoly the Game

The Smithsonian Mag has an informative article about the history of the game Monopoly [1]. The main point, that Monopoly teaches about the problems of inequality, is one I was already aware of, but there are some aspects of the history that I learned from the article.

Here’s an article about using a modified version of Monopoly to teach Sociology [2].

Maria Paino and Jeffrey Chin wrote an interesting paper about using Monopoly with revised rules to teach Sociology [3]. They publish the rules, which are interesting and seem well suited to a class.

I think it would be good to have some new games which can teach about class differences. Maybe an “Escape From Poverty” game where your choices include drug dealing to try to improve your situation, or a cooperative game where people try to create a small business. While Monopoly can be instructive, it’s based on the economic circumstances of the past. The vast majority of rich people aren’t rich from land ownership.

07 January, 2021 05:05AM by etbe

John Goerzen

This Is How Tyrants Go: Alone

I remember reading an essay a month or so ago — sadly I forget where — talking about how things end for tyrants. If I were to sum it up, it would be with the word “alone.” Their power fading, they find that they had few true friends or believers; just others that were greedy for power or riches and, finding those no longer to be had, depart the sinking ship. The article looked back at examples like Nixon and examples from the 20th century in Europe and around the world.

Today we saw images of a failed coup attempt.

But we also saw hope.

Already, senior staff in the White House are resigning, ones that had been ardent supporters. In the end, just six senators supported the objection to the legitimate electors. Six. Lindsey Graham, Mike Pence, and Mitch McConnell all deserted Trump.

CNN reports that there are serious conversations about invoking the 25th amendment and removing him from office, because even Republicans are to the point of believing that America should not have two more weeks of this man.

Whether those efforts are successful or not, I don’t know. What I do know is that these actions have awakened many people, in a way that nothing else could for four years, to the dangers of Trump and, in the end, have bolstered the cause of democracy.

Hard work will remain, but today Donald Trump is in the White House alone, abandoned by allies and blocked by Twitter. And we know that within two weeks, he won’t be there at all.

We will get through this.

07 January, 2021 04:00AM by John Goerzen

January 06, 2021

Urvika Gola

Dog tails and tales from 2020

There is no denying the fact that 2020 was a challenging year for everybody, including animals.

In India, animals such as dogs, who mostly filled their bellies at street food stalls, were starving because no street eateries were operating during a long, long lockdown. I was in my home town, New Delhi, working from home like most of us.

During the month of July 2020, a dog near the place I live (we fondly called her Brownie) delivered 7 pups in the wilderness.

I will never forget my first sight of them: inside a dirty, garbage-filled plot of land were the cutest, cleanest, tiniest balls of fur! All of them had toppled over because the ground was uneven, so our first instinct was to put them all together on a flat surface. After the search mission was complete, this was the sight…

Brownie and her litter put together in the same land where she gave birth.

The next day, I sought help from an animal lover to build a temporary shed for the puppies! We came and changed sheets, cleaned the surroundings and put out fresh water for Brownie until…

…it started raining heavily one night, and we were worried whether the shed would withstand the heavy rainfall.
The next morning, the first thing was to check on the pups. Luckily, the pups were fine; however, the entire area and their bed were damp.

Without any second thought, the pups were moved to a safe shelter, as the rains were predicted to continue for a few more weeks due to the monsoon. Soon two months went by; from watching the pups crawl, open their eyes and bark for the first time, despite the struggles, it was a beautiful experience.
Brownie weaned the pups, and thus they were ready for adoption! However, my biggest fear was: would anyone come forward to adopt them?

With such thoughts running in parallel in my mind, I started to post about adoption for these 7 pups.
To my biggest surprise, one by one, 5 amazing humans came forward and decided to give these pups a better life than what they would get on the streets of India. I can’t express in words how grateful I am to all five dog parents who decided to adopt an Indian street puppy (Indie/Desi puppy), opening up space in their hearts and homes for the pups!

One of the 5 pups was adopted by a person who hails from the USA but is currently working in India. It is disheartening that in India, despite so much awareness being raised about breeders and their methods, people still prefer a foreign-bred puppy and disregard Indian/Desi dogs; on the other hand, it is heartwarming that there are foreigners who value the life of an Indian/Desi dog :”)

The 5 Adopted Pups who now have a permanent loving family!

The adorable, “Robin”!
“Don” and his new big brother!

The naughty and handsome, “Swayze”!

First Pup who got adopted – “Pluto”
Playful and Beautiful, “Bella”!

If this isn’t perfect, I don’t know what is! God had planned loving families for them, and they found them.
However, it’s been almost six months and we haven’t found a permanent home for 2 of the 7 pups, though they have a perfect foster family taking care of them right now.

UP FOR ADOPTION – Delhi/NCR
Meet Momo and Beesa,
2 of the 7 pups, still waiting for a forever home and currently living with a loving foster family.

Vaccinations and deworming are done.
Female pups, 6 months old.

Now that winter is here, a friend of mine (who is also fostering the two pups) and I arranged gunny-sack beds for the stray dogs on our street. Two NGOs, Lotus Indie Foundation and We Exist Foundation, who work on animal welfare in India, were providing dog beds to ground volunteers like us. We are fortunate that they selected us and helped us make winter less harsh for the stray dogs. However, the cold is such that I also purchased dog coats and put them on a few furries. After hours of running behind dogs and convincing them to wear coats, we managed to put them on a few.

Brownie, the mom dog!

This is a puppy!
She did not let us put a coat on her 😀

Another topic that needs more sensitivity is the sterilization/neutering of dogs, the method cited by the Government to control the dog population and end the suffering of puppies who die under the wheels of cars. However, the implementation is worrisome, as it is not very robust. In a span of 6 months, I managed to get 5 dogs sterilized in my area; the number is not big, but I feel it’s a good start for an individual 😊

When I see them now, healthier, happier, running around with no fear of being attacked by other dogs, I can’t express the contentment I feel. For 2 of the dogs (Brownie and her friend) I arranged it personally through a private vet. For the other 3, I went via the Municipal Corporation, which does it for free: you call them, they come with dog catchers and a van, and they drop the dogs back in the same area. But volunteers like us have to be very vigilant and active during the whole process to follow up with them.

Dogs getting dropped off after sterilization.

My 2020 ended with this. I am not sure why I am even writing this on my blog, where I have mostly focused on my technical work and experiences, but this pandemic was challenging for everybody and what we planned couldn’t happen. Because of 2020, because of the pandemic, I was working from home in my city and was able to help a few dogs in my area have a healthy life ahead! 😊

What I learned during this entire adventure is that there are a lot of sweet, sensitive, caring people whom we are just yet to meet. Along the way, we will also meet insensitive and discouraging people who are unwilling to change or listen; ignore them and continue your good work.

Having one person by your side is so much stronger than having ten against you.

A silver lining! I hope you all had some positive experiences despite the adversity faced by every single one of us.

06 January, 2021 05:52PM by urvikagola

Jonathan Dowland

PaperWM

My PaperWM desktop, as I write this post.

Just before Christmas I decided to try out a GNOME extension I'd read about, PaperWM. It looked promising, but I was a little nervous about breaking my existing workflow, which was heavily reliant on the Put Windows extension.

It's great! I have had to carefully un-train some of my muscle memory, but it seems to be worth it. It strikes a great balance between the rigidity of a tile-based window manager and a more traditional floating-windows one.

I'm always wary of coming to rely upon large extensions or plugins. The parent software is often quite hands-off about supporting their users, or about breaking them with API changes. Certainly those Firefox users who were heavily dependent on plugins prior to the Quantum fire-break are still very, very angry. (I had actually returned to Firefox by that point, so I avoided the pain and now enjoy the advantages of the re-architecture.) PaperWM is hopefully large enough and popular enough to avoid that fate.

06 January, 2021 04:44PM

January 05, 2021

Russell Coker

Planet Linux Australia

Linux Australia have decided to cease running the Planet installation on planet.linux.org.au. I believe that blogging is still useful and a web page with a feed of Australian Linux blogs is a useful service. So I have started running a new Planet Linux Australia on https://planet.luv.asn.au/. There has been discussion about getting some sort of redirection from the old Linux Australia page, but they don’t seem able to do that.

If you have a blog that has a reasonable portion of Linux and FOSS content and is based in or connected to Australia then email me on russell at coker.com.au to get it added.

When I started running this I took the old list of feeds from planet.linux.org.au, deleted all blogs that didn’t have posts for 5 years and all blogs that were broken and had no recent posts. I emailed people who had recently broken blogs so they could fix them. It seems that many people who run personal blogs aren’t bothered by a bit of downtime.

As an aside, I would be happy to set up the monitoring system I use to monitor the personal web site of any Linux person and notify them by Jabber or email of an outage. I could set it to not alert for a specified period (10 mins, 1 hour, whatever you like) so it doesn’t alert needlessly during routine sysadmin work, and I could have it check SSL certificate validity as well as the basic page header.

05 January, 2021 11:45PM by etbe

Ben Hutchings

Debian LTS work, December 2020

I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 9 hours from earlier months. I worked 16.5 hours this month, so I will carry over 8.5 hours to January. (Updated: corrected number of hours worked.)

I updated linux-4.19 to include the changes in the Debian 10.7 point release, uploaded the package, and issued DLA-2483-1 for this.

I picked some regression fixes from the Linux 4.9 stable branch into the linux package, and uploaded the package. This unfortunately failed to build on arm64 due to some upstream changes uncovering an old bug, so I made a second upload fixing that. I issued DLA-2494-1 for this.

I updated the linux packaging branch for stretch to Linux 4.9.249, but haven't made another package upload yet.

05 January, 2021 10:32PM

Reproducible Builds

Reproducible Builds in December 2020

Greetings and welcome to the December 2020 report from the Reproducible Builds project. In these monthly reports, we try to outline the most important things that have happened in and around the Reproducible Builds project.


In mid-December, it was announced that there was a substantial and wide-reaching supply-chain attack that targeted many departments of the United States government including the Treasury, Commerce and Homeland Security (DHS). The attack, informally known as ‘SolarWinds’ after the manufacturer of the network management software that was central to the compromise, was described by the Washington Post as:

The far-reaching Russian hack that sent U.S. government and corporate officials scrambling in recent days appears to have been a quietly sophisticated bit of online spying. Investigators at cybersecurity firm FireEye, which itself was victimized in the operation, marveled that the meticulous tactics involved “some of the best operational security” its investigators had seen, using at least one piece of malicious software never previously detected.

This revelation is extremely relevant to the Reproducible Builds project because, according to the SANS Institute, it appears that the source code and distribution systems were not compromised; instead, the build system was. This is precisely the kind of attack that reproducible builds are designed to prevent. The SolarWinds attack is further evidence that reproducible builds are important and should become a pervasive software engineering principle.

More information on the attack may be found on CNN, CSO, ComputerWeekly, BBC News, etc., and David A. Wheeler started a discussion on our mailing list. Kim Zetter, author of Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon, also posted about the attack on Twitter.


Last month, we reported on a fork of the official German Corona App called ‘Corona Contact Tracing Germany’. Since then, the application has been made available on the F-Droid free software application store. The app does not use the proprietary Google exposure notification framework, but rather a free software reimplementation by the microG project, staying fully compatible with the official app. The version on F-Droid also supports reproducible builds, and instructions on how to rebuild the package are available from the upstream Git repository. (FSFE’s announcement.)

The Reproducible Central project is an attempt to rebuild binaries published to the Maven Central Repository in a reproducibility context. This month, Hervé Boutemy announced that Reproducible Central was able to successfully rebuild the 100th reproducible release of a project published to the Maven Central Repository (and counting…).

We first wrote about the Threema messaging application in September 2020. This month, the Threema developers announced that their applications have been released under the GNU Affero General Public License (AGPL) (announcement) and that they are now reproducible from version 4.5-beta1 onwards. (Spiegel.de announcement.)

Community news

Vagrant Cascadian announced that there will be another Reproducible Builds ‘office hours’ session on Thursday January 7th, where members of the Reproducible Builds project will be available to answer any questions. (More info.)

On our mailing list, Jeremiah Orians sent a brief update on the status of the Bootstrappable project, noting that it is now possible to build a C compiler requiring “nothing outside of a POSIX kernel”. Bernhard M. Wiedemann also published the minutes of a recent debugging-oriented meeting.

Chris Lamb recently took part in an interview with an intern at the Software Freedom Conservancy to talk about the Reproducible Builds project and the importance of reproducibility in software development:

VB: How would you relate the importance of reproducibility to a user who is non-technical?

CL: I sometimes use the analogy of the food ‘supply chain’ to quickly relate our work to non-technical audiences. The multiple stages of how our food reaches our plates today (such as seeding, harvesting, picking, transportation, packaging, etc.) can loosely translate to how software actually ends up on our computers, particularly in the way that if any of the steps in the multi-stage food supply chain has an issue then it quickly becomes a serious problem.

The full interview can be found on the Conservancy webpages.

Distributions

openSUSE

Adrian Schröter added an option to the scripts powering the Open Build Service to enable deterministic filesystem ordering. Whilst this degrades performance slightly, it also enables dozens of packages in openSUSE Tumbleweed to become reproducible. [] Also, Bernhard M. Wiedemann published his monthly Reproducible Builds status update for openSUSE Tumbleweed.
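
To illustrate the class of problem being fixed (a generic sketch, not the Open Build Service code): readdir() returns directory entries in an arbitrary, filesystem-dependent order, so any tool that packs a directory must sort the entries first to produce reproducible output:

    /* Sketch of deterministic filesystem ordering: scandir(3) with
     * alphasort yields a stable order regardless of the underlying
     * filesystem, unlike raw readdir().  Illustrative only. */
    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(int argc, char **argv)
    {
        const char *dir = argc > 1 ? argv[1] : ".";
        struct dirent **entries;
        int n = scandir(dir, &entries, NULL, alphasort);

        if (n < 0) {
            perror("scandir");
            return 1;
        }
        for (int i = 0; i < n; i++) {
            puts(entries[i]->d_name);  /* archive/process in sorted order */
            free(entries[i]);
        }
        free(entries);
        return 0;
    }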

Debian

In Debian, Holger Levsen uploaded 540 packages to the unstable distribution that were missing .buildinfo files for Architecture: all packages. Holger described his rationale and approach in a blog post titled On doing 540 no-source-change source-only uploads in two weeks, and also posted the full list of packages he intends to upload during January 2021 to the debian-devel mailing list:

There are many binary (and source) packages in Debian which were uploaded before 2016 (which is when .buildinfo files were introduced) or were uploaded with binaries until that change in release policy July 2019.

Ivo De Decker scheduled binNMUs for all the affected packages but, due to the way binNMUs work, he couldn’t do anything about arch:all packages, as they currently cannot be rebuilt with binNMUs.

In recent months, Debian Developer Stuart Prescott has been improving python-debian, a Python library used to parse Debian-specific files such as changelogs, .dscs, etc. In particular, Stuart has been working on adding support for the .buildinfo files used for recording reproducibility-related build metadata. This month, Stuart uploaded python-debian version 0.1.39 with many changes, including the addition of a type for .buildinfo files (#875306).

Chris Lamb identified two new issues (timestamps_in_3d_files_created_by_survex & build_path_in_direct_url_json_file_generated_by_flit), and Vagrant Cascadian discovered four ecbuild-related issues (records_build_flags_from_ecbuild, captures_kernel_version_via_ecbuild, captures_build_arch_via_ecbuild & timestamps_in_h_generated_by_ecbuild). 94 reviews of Debian packages were added, 84 were updated and 34 were removed this month, adding to our knowledge about identified issues.

Vagrant Cascadian made a large number of uploads to Debian to fix a number of reproducibility issues in packages that do not have an owner, including a2ps (4.14-6), autoconf (2.69-13 & 2.69-14), calife (3.0.1-6), coinor-symphony (5.6.16+repack1-3), epm (4.2-9 & 4.2-10), grap (1.45-4), hpanel (0.3.2-7), libcommoncpp2 (1.8.1-9 & 1.8.1-10), libdigidoc (3.10.5-2), libnss-ldap (265-6), lprng (3.8.B-5), magicfilter (1.2-66), massif-visualizer (0.7.0-2), milter-greylist (4.6.2-2), minlog (4.0.99.20100221-7), mp3blaster (3.2.6-2), nis (3.17.1-6 & 3.17.1-8), spamassassin-heatu (3.02+20101108-4), webauth (4.7.0-8) & wily (0.13.41-9 & 0.13.41-10).

Similarly, Chris Lamb made two uploads of the sendfile package.

NixOS

NixOS made good progress towards having all packages required to build the minimal installation ISO image reproducible. Remaining work includes the python, isl and gcc9 packages and removing the use of Python 2.x in asciidoc.

Elsewhere in NixOS, Adam Hoese of tweag.io also announced trustix, an NGI Zero PET-funded initiative to provide infrastructure for sharing and enforcing reproducibility results for Nix-based systems.

Finally, the following NixOS-specific changes were made:

  • Arnout Engelen:

    • compress-man-pages (create symlinks deterministically)
    • git (reproducible manual)
    • libseccomp (filesystem dates and ordering)
    • linux (omit build ID)
    • pytest (removed unreproducible test artifacts from the pytest package)
    • rustc (generate deterministic manifest)
    • setuptools (stable file ordering for sdist)
    • talloc (avoid Python 2.x build dependency)
  • Atemu:

    • linux (disable module signing)

Tools

diffoscope is our project’s in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs for many kinds of binary format. This month, Chris Lamb made the following changes, including releasing version 163 on multiple platforms:

  • New features & bug fixes:

    • Normalise ret to retq in objdump output in order to support multiple versions of GNU binutils. (#976760)
    • Don’t show any progress indicators when running zstd. (#226)
    • Correct the grammatical tense in the --debug log output. []
  • Codebase improvements:

    • Update the debian/copyright file to match the copyright notices in the source tree. (#224)
    • Update various years across the codebase in .py copyright headers. []
    • Rewrite the filter routine that post-processes the output from readelf(1). []
    • Remove unnecessary PEP 263 encoding header lines; unnecessary after PEP 3120. []
    • Use minimal instead of basic as a variable name to match the underlying package name. []
    • Use pprint.pformat in the JSON comparator to serialise the differences from jsondiff. []

In addition, Jean-Romain Garnier added tests for OpenJDK 14. [][]

In disorderfs (our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues), Chris Lamb added support for testing on Salsa’s CI system [][][] and added a quick benchmark []. For the GNU Guix distribution, Vagrant Cascadian updated diffoscope to version 162 [].

Homepage/documentation updates

There were a number of updates to the main Reproducible Builds website and documentation this month, including:

  • Calum McConnell fixed a broken link. []

  • Chris Lamb applied a typo fix from Roland Clobus [], fixed the draft detection logic (#28), added more academic articles to our list [] and corrected a number of grammar issues [][].

  • Holger Levsen documented the #reproducible-changes, #debian-reproducible-changes and #archlinux-reproducible IRC channels. []

  • kpcyrd added rebuilderd and archlinux-repro to the list of tools. []

Testing framework

The Reproducible Builds project operates a large Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Update code copy of debrebuild. []
    • Add Debian sid sources.list entry on a node for test rebuilds. []
    • Use focal instead of the (deprecated) eoan release for hosts running Ubuntu. []
  • Jenkins administration:

    • Show update frequency on the Jenkins shell monitor. []
    • In the Jenkins shutdown monitor, force precedence to find only log files. []
    • Update /etc/init.d script from the latest jenkins package. []
  • System health checks & notifications:

    • Detect database locks in the pacman Arch Linux package manager. []
    • Detect hosts running in the ‘wrong future’. [][]
    • Install the apt-utils package on all Debian-based hosts. []
    • Use /citests/ as the landing page. []
    • Update our debrebuild fork. []

In addition, Mattia Rizzolo made some Debian-related changes, including refreshing the GnuPG key of our repository [], dropping some unused code [] and skipping pbuilder/sdist updates on nodes that are not performing Debian-related rebuilds []. Marcus Hoffmann updated his mailing list subscription status too [].

Lastly, build node maintenance was also performed by Holger Levsen [][][], Mattia Rizzolo [] and Vagrant Cascadian [].

Upstream patches

The following patches were created this month:


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website, where you will also find the various ways to get in touch with us.

05 January, 2021 03:41PM