May 16, 2021

Carl Chenet

How to save up to 500€/year switching from Mailchimp to Open Source Mailtrain and AWS SES

My newsletter Le Courrier du hacker (3,800 subscribers, 176 issues) is 3 years old, and the Mailchimp costs were becoming unbearable for a small project with still-limited revenues ($50 a month, $600 a year). Switching to the Open Source Mailtrain, plugged into the AWS Simple Email Service (SES), dramatically reduces the associated costs.

First things first, thanks a lot to Pierre-Gilles Leymarie for his own article about switching to Mailtrain/SES. I owe him (and soon you too) so much. This article is a step-by-step guide to setting up Mailtrain/SES on a dedicated server running Linux.

What’s the purpose of this article?

Mailchimp gets more and more expensive as your newsletter grows, and at some point you need to leave it. You can use Mailtrain, a web app running on your own server, together with the AWS SES service to send emails efficiently and avoid being flagged as a spammer by the other SMTP servers (a very, very common fate for self-hosted SMTP; you can try, but you have been warned 😉).

Prerequisites

You will need the following prerequisites:

  • An AWS account with admin rights
  • Full control over the domain name you use to send your newsletter
  • A bare-metal or virtual Debian Linux server that you have root access to
  • NodeJS installed (versions 8.x and 10.x are OK with Mailtrain)
  • A MySQL/MariaDB instance that you have root access to
  • Redis 3.x (not 5.x) if you want to use Redis (not mandatory)

Steps

This is a fairly straightforward setup if you know what you’re doing. Otherwise, you may need the help of a professional sysadmin.

You will need to complete the following steps to finish your setup:

  • Configure AWS SES
  • Configure your server by:
    • configuring your database
    • installing Mailtrain
    • configuring your web server
  • Configure your Mailtrain setup

Configure AWS SES

Verify your domain

You need to configure DKIM to certify that the emails you send are indeed from your own domain. DKIM is mandatory; it’s the de facto standard in the mail industry.

Ask to verify your domain

Ask AWS SES to verify a domain

Generate the DKIM settings

Generate the DKIM settings

Use the DKIM settings

Now you have your DKIM settings, and Amazon AWS is waiting to find the corresponding TXT record in your DNS zone.

Configure your DNS zone to include DKIM settings

I can’t be too specific for this section because it varies a lot depending on your DNS provider. The key is: as indicated by the previous image, you have to create one TXT record and two CNAME records in your DNS zone. The names, the types and the values are indicated by AWS SES.
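Once the records are created, you can sanity-check them from a shell before waiting on AWS. This is a hedged sketch: the record names below are placeholders, so substitute the exact names AWS SES displayed for your domain:

# record names are hypothetical; copy the real ones from the AWS SES console
dig +short TXT _amazonses.toto.com
dig +short CNAME xxxxxxxx._domainkey.toto.com

If dig returns the values AWS gave you, the records have propagated and verification should follow shortly.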

If you don’t understand what’s going on here, there is a high probability you’ll need a system administrator to apply these modifications and the next ones in this article.

Am I okay for AWS SES?

As long as the word verified does not appear for your domain, as shown in the image below, the setup is not complete. If it stays pending for long, you have a misconfiguration somewhere.

AWS SES pending verification

When your domain is verified, you’ll also receive an email to inform you about the successful verification.

SMTP settings

The last step is generating your credentials to use the AWS SES SMTP server. It is really straightforward: AWS provides the SMTP address to use, the port, and a pair of username/password credentials.

AWS SES SMTP settings and credentials

Just click on Create My SMTP Credentials and follow the instructions. Write the SMTP server address down somewhere and store the file with the credentials on your computer; we’ll need them below.
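If you want to sanity-check the SMTP endpoint before configuring Mailtrain, you can open a TLS session to it by hand. A minimal sketch, assuming the us-east-1 endpoint (use the hostname from your own SMTP settings page):

openssl s_client -connect email-smtp.us-east-1.amazonaws.com:465 -quiet

You should see the server’s 220 banner once the TLS session is established.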

Configure your server

As we said before, we need a bare-metal server or a virtual machine running a recent Linux.

Configure your MySQL/MariaDB database

We create a user mailtrain having all rights on a new database mailtrain.

MariaDB [(none)]> create database mailtrain;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> CREATE USER 'mailtrain' IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.01 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON mailtrain.* TO 'mailtrain'@localhost IDENTIFIED BY 'V3rYD1fF1cUlTP4sSW0rd!';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mailtrain          |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]> Bye
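Before moving on, a quick hedged check that the new user can actually reach the database (you will be prompted for the password set above):

mysql -u mailtrain -p mailtrain -e 'SELECT 1;'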

Configure your web server

I use Nginx and I’ll give you the complete setup for it, including generating the Let’s Encrypt certificate.

Configure Let’s Encrypt

You need to stop Nginx as root:

systemctl stop nginx

Then get the certificate only; I’ll give the Nginx vhost configuration below. Since Nginx is stopped, certbot’s standalone authenticator can bind to port 80:

certbot certonly --standalone -d mailtrain.toto.com
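Renewals will hit the same port-80 requirement later. One hedged way to handle it, using certbot’s hook options to stop and restart Nginx around each renewal:

certbot renew --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"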

Install Mailtrain

On your server, download and extract Mailtrain, then rename the extracted directory so it matches the paths used below:

mkdir -p /var/www/
cd /var/www
wget https://github.com/Mailtrain-org/mailtrain/archive/refs/tags/v1.24.1.tar.gz
tar zxvf v1.24.1.tar.gz
mv mailtrain-1.24.1 mailtrain

Modify the file /var/www/mailtrain/config/production.toml to use the MySQL settings:

[mysql]
host="localhost"
user="mailtrain"
password="V3rYD1fF1cUlTP4sSW0rd!"
database="mailtrain"

Now install the Node dependencies (the source tarball doesn’t ship them) and launch the Mailtrain process in a screen:

screen
cd /var/www/mailtrain
npm install --production
NODE_ENV=production npm start

Now Mailtrain is launched and should be running. Yeah, I know it’s ugly to launch it like this (a root process in a screen, etc.); you can improve security with the following commands:

groupadd mailtrain
useradd -g mailtrain mailtrain
chown -R mailtrain:mailtrain /var/www/mailtrain

Now create the following file at /etc/systemd/system/mailtrain.service:

[Unit]
Description=mailtrain
After=network.target

[Service]
Type=simple
User=mailtrain
WorkingDirectory=/var/www/mailtrain/
Environment="NODE_ENV=production"
Environment="PORT=3000"
ExecStart=/usr/bin/npm run start
TimeoutSec=15
Restart=always

[Install]
WantedBy=multi-user.target

To register the systemd unit above and launch the new Mailtrain daemon, use the following commands (do not forget to kill your screen session if you used it before):

systemctl daemon-reload
systemctl start mailtrain.service
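If you also want Mailtrain to come back after a reboot, enable the unit and check its state (a small, optional addition to the commands above):

systemctl enable mailtrain.service
systemctl status mailtrain.service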

Now Mailtrain is running under the unprivileged mailtrain user of the mailtrain system group.

Configure the Nginx Vhost configuration for your domain

Here is my configuration for the Mailtrain Nginx Vhost:

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80; 
  listen [::]:80;
  server_name mailtrain.toto.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name mailtrain.toto.com;

  access_log /var/log/nginx/mailtrain.toto.com.access.log;
  error_log /var/log/nginx/mailtrain.toto.com.error.log;

  ssl_protocols TLSv1.2;
  ssl_ciphers EECDH+AESGCM:EECDH+AES;
  ssl_ecdh_curve prime256v1;
  ssl_prefer_server_ciphers on; 
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/mailtrain.toto.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mailtrain.toto.com/privkey.pem;

  keepalive_timeout    70; 
  sendfile             on;
  client_max_body_size 0;

  root /var/www/mailtrain;

  location ~ /\.well-known\/acme-challenge {
    allow all;
  }

  gzip on; 
  gzip_disable "msie6";
  gzip_vary on; 
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k; 
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / { 
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass http://127.0.0.1:3000;

  }
}

Now Nginx is ready. Just start it:

systemctl start nginx

This Nginx vhost redirects all HTTP requests to HTTPS and proxies the traffic to the Mailtrain process listening on port 3000. Now it’s time to set up Mailtrain!

Setup Mailtrain

You should be able to access your Mailtrain instance at https://mailtrain.toto.com.
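If the page doesn’t load, a quick hedged check of the whole chain (Nginx, TLS, proxy, Mailtrain) from the server itself:

curl -I https://mailtrain.toto.com

Any HTTP response (a 200 or a redirect to the login page) means the proxying to port 3000 works; a 502 means the Mailtrain process itself isn’t answering.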

Mailtrain is quite simple to configure. Here is my mailer setup. Mailtrain just forwards emails to AWS SES; we only have to plug Mailtrain into AWS SES.

Mailtrain mailer setup

The hostname is provided by AWS SES in the SMTP Settings section. Use port 465 and the Use TLS option. Next, provide the AWS SES username and password you generated above and stored somewhere on your computer.

One of the issues I encountered is the AWS SES rate limit. Sending too many emails too fast will get you flagged as a spammer.

So I had to throttle Mailtrain. Because I’m a lazy man, I asked Pierre-Gilles Leymarie for his setup, which was much easier than working out the right values myself. Here is my setup. It works fine for my soon-to-be 4k subscribers. The idea is: if AWS SES lets you know you send too fast, then… just slow down.

Mailtrain to throttle sending emails to AWS SES

Conclusion

That’s it! You’re ready! Almost. You need an HTML template for your newsletter and a list of subscribers. But if you’re not new to the newsletter field and are fleeing Mailchimp because of its prices, you should already have both.

After sending almost ten issues with this setup, I’m really happy with it. Open/click rates are the same.

When leaving Mailchimp, do not leave any list of subscribers behind, because they’ll charge you $8 even for a list of 0 to 500 contacts. That’s crazy expensive!


16 May, 2021 11:01PM by Carl Chenet

Vincent Fourmond

Tutorial: analyze redox inactivations/reactivations

Redox-dependent inactivations are actually rather common in the field of metalloenzymes, and electrochemistry can be an extremely powerful tool to study them, provided one can analyze the data quantitatively. The point of this post is to teach the reader how to do so using QSoas. For more general information about redox inactivations and how to study them using electrochemical techniques, the reader is invited to read the review del Barrio and Fourmond, ChemElectroChem 2019.

This post is a tutorial to learn the analysis of data coming from the study of the redox-dependent substrate inhibition of the periplasmic nitrate reductase NapAB, which has the advantage of being relatively simple. The whole process is discussed in Jacques et al, BBA, 2014. What you need to know in order to follow this tutorial is the following:
  • the whole inactivation/reactivation process can be modelled by a simple reversible reaction: $$ \mathrm{A} \rightleftharpoons \mathrm{I} $$ A is the active form, I the inactive form;
  • the forward rate constant is \(k_i(E)\) and the backward rate constant is \(k_a(E)\), both dependent on potential (a worked expression for the resulting kinetics is given just after this list);
  • the experiment is done in a series of 5 steps at 3 different potentials: \(E_0\) then \(E_1\) then \(E_2\) then \(E_1\) then, finally, \(E_0\);
  • the enzyme is assumed to be fully active at the beginning of the first step;
  • a single experiment is used to obtain the values of \(k_i\) and \(k_a\) for the three potentials (although not reliably for the value at \(E_0\));
  • the current given by the active species depends on potential (and it is negative because the enzyme catalyzes a reduction), and the inactive species gives no current;
  • in addition to the reversible reaction above, there is an irreversible, potential-dependent loss.
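To make the model concrete: for the reversible A ⇌ I scheme above, at constant potential the active fraction relaxes exponentially. This is a worked expression from standard first-order kinetics with the notations above, not something QSoas prints: $$ a(t) = a_\infty + \left(a_0 - a_\infty\right)\,e^{-(k_i + k_a)\,t} \qquad \text{with } a_\infty = \frac{k_a}{k_i + k_a} $$ where \(a_0\) is the active fraction at the beginning of the step; the measured current is proportional to \(a(t)\), further multiplied by the slow irreversible loss. The fit described below adjusts \(k_i\) and \(k_a\) for each condition so that these exponentials, chained across the 5 steps, reproduce the data.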
You can download the data files from the GitHub repository. Before fitting the data to determine the values of the rate constants at the potentials of the experiment, we will first subtract the background current, assuming that the respective contributions of the faradaic and non-faradaic currents are additive. Start QSoas, go to the directory where you saved the files, and load both the data file and the blank file thus:
QSoas> cd
QSoas> load 27.oxw
QSoas> load 27-blanc.oxw
QSoas> S 1 0
(after the first command, you have to manually select the directory in which you downloaded the data files). The S 1 0 command just subtracts dataset 1 (the first loaded) from dataset 0 (the last loaded); see more there. (blanc is French for blank...)

Then, we remove a bit of the beginning and the end of the data, corresponding to one half of the steps at \(E_0\), which we don't exploit much here (they are essentially only used to make sure that the irreversible loss is taken care of properly). This is done using strip-if:
QSoas> strip-if x<30||x>300
Then, we can fit! The fit used is called fit-linear-kinetic-system, which is used to fit kinetic models with only linear reactions (like here) and steps which change the values of the rate constants but do not instantly change the concentrations. The specific command to fit the data is:
QSoas> fit-linear-kinetic-system /species=2 /steps=0,1,2,1,0
The /species=2 indicates that there are two species (A and I). The /steps=0,1,2,1,0 indicates that there are 5 steps, with three different conditions (0 to 2) in the order 0,1,2,1,0. This fit needs a bit of setup before getting started. The species are numbered 1 and 2, and the conditions (potentials) are indicated by #0, #1 and #2 suffixes.
  • I_1 and I_2 are the currents for species 1 and 2: some (negative) value for 1 (the active form) and 0 for 2 (the inactive form). Moreover, the parameters I_2_#0 (and _#1, _#2) should be fixed and not free (since we don't need to adjust a current for the inactive form).
  • The k_11 and k_22 correspond to species-specific irreversible loss. It is generally best to leave them fixed to 0.
  • k_12 is the formation of 2 (I) from 1 (A), and k_21 is the formation of A from I. Their values will be determined for the three conditions. The default values should work here.
  • The k_loss parameters are the rates of irreversible loss that apply indiscriminately to all species (unlike k_11 and k_22). They are adjusted, and their default values should work too.
  • alpha_1_0 and alpha_2_0 are the initial concentrations of species 1 and 2, so they should be fixed to 1 and 0.
  • Last, the xstart_a and (_b, _c, _d and _e) correspond to the starting times for the steps, here, 0, 60, 120, 210 and 270.
For the sake of simplicity, you can also simply load the starting-parameters.params parameters to have everything set up the correct way. Then, just hit Fit, and enjoy this moment when QSoas works and you don't have to... The screen should now look like this:
Now, it's done! The fit is actually pretty good, and you can read the values of the inactivation and reactivation rate constants from the fit parameters.

You can also train on the 21.oxw and 21-blanc.oxw files. Usually, re-loading the best fit parameters from other potentials as starting parameters works really well. Gathering the results of several fits into a real curve of rate constants as a function of potential is left as an exercise for the reader (or maybe a later post), although you may find these series of posts useful in this context!

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

16 May, 2021 08:01PM by Vincent Fourmond (noreply@blogger.com)

Russ Allbery

Review: A Desolation Called Peace

Review: A Desolation Called Peace, by Arkady Martine

Series: Teixcalaan #2
Publisher: Tor
Copyright: 2021
ISBN: 1-250-18648-X
Format: Kindle
Pages: 496

A Desolation Called Peace is a direct sequel to A Memory Called Empire and picks up shortly after that book's ending. It completely spoils the first book and builds heavily on its events. This is not a series to read out of order.

It's nearly impossible to discuss anything about the plot of this book without at least minor spoilers for the previous book, so beware. If you've not read A Memory Called Empire, I highly recommend it, and you may want to skip this review until you have.

Mahit Dzmare has returned to Lsel Station and escaped, mostly, the pull of the Teixcalaan Empire in all its seductive arrogance. That doesn't mean Lsel Station is happy to see her. The maneuverings of the station council were only a distant part of the complex political situation she was navigating at the Teixcalaanli capital. Now home, it is far harder to ignore powerful councilors who would be appalled by the decisions she made. The ambassador to a hated foreign empire does not have many allies.

Yaotlek Nine Hibiscus, the empire's newest commander of commanders, is the spear the empire has thrust towards a newly-discovered alien threat. The aliens have already slaughtered all the inhabitants of a mining outpost for no obvious reason, and their captured communications are so strange as to provoke nausea in humans. Their cloaking technology makes the outcome of pitched warfare dangerously uncertain. Nine Hibiscus needs someone who can talk to aliens without mouths, and that means the Information Ministry.

The Information Ministry means a newly promoted Three Seagrass, who is suffering from insomnia, desperately bored, and missing Mahit Dzmare. And who sees in Nine Hibiscus's summons an opportunity to address several of those problems at once.

A Memory Called Empire had an SFnal premise and triggering plot machinery, but it was primarily a city political thriller. A Desolation Called Peace moves onto the more familiar SF ground of first contact with a very alien species, but Martine makes the unusual choice of revealing one of the secrets of the aliens to the reader at the start of the book. This keeps the reader's focus more on the political maneuvering than on the mystery, but with a classic first-contact communication problem as the motivating backdrop.

That's only one of the threads of this book, though. Another is the unfinished business between Three Seagrass and Mahit Dzmare, and between Mahit Dzmare and the all-consuming culture of Teixcalaan. A third is the political education of a very exceptional boy, whose mere existence is a spoiler for A Memory Called Empire and therefore not something I will discuss in detail. And then there are the internal politics of Lsel Station, although I thought that was the least effective part of the book and never reached a satisfying conclusion.

This is a lot to balance, and I think that's one of the reasons why A Desolation Called Peace doesn't replicate the magic that made me love A Memory Called Empire so much. Full-steam-ahead pacing with characters who are thinking on their feet and taking desperate risks has a glorious momentum. Here, there's too much going on (not to mention four major viewpoint characters) to maintain the same pace. Once Mahit and Three Seagrass get into the same room, there are moments that are as good as the highlights of A Memory Called Empire, but it's not as sustained as I was hoping for.

This book also spends more time on Mahit and Three Seagrass's relationship, and despite liking both of the characters, this didn't entirely work for me. Martine uses them to make a subtle and powerful point about relationships across power gradients and the hurt that comes from someone trivializing a conflict that is central to your identity. It took me a while to understand the strength of Mahit's reaction, but it eventually felt right. But that fight wasn't what I was looking for in the book, and there was a bit too much of both of them failing (or refusing) to communicate for my taste. I appreciated what Martine was exploring, but personally I wanted a different sort of catharsis.

That said, this is still a highly enjoyable book. Nine Hibiscus is a solid military SF character who is a good counterweight to the more devious approaches of the other characters. I enjoyed the subplot of the kid in the Teixcalaanli capital more than I expected, although it felt more like setup for future novels than critical to the plot of this one. And then there's Three Seagrass.

Three Seagrass always made decisions wholly and entire. All at once. Choosing Information as her aptitudes. Choosing the position of cultural liaison to the Lsel Ambassador. Choosing to trust her. Choosing to come here, to take this assignment — entirely, completely, and without pausing to look to see how deep the water was that she was leaping into.

Every word of this is true, and it's so much fun to read. Three Seagrass was a bit overshadowed in A Memory Called Empire, a supporting character in someone else's story. Here, she has moments where she can take the lead, and she's so delightfully different than Mahit. I loved every moment of her viewpoint.

A Desolation Called Peace isn't as taut or as coherent as A Memory Called Empire. The plot sags in a few places, and I think there was a bit too much hopeless Lsel politics, nebulous alien horror, and injured silence between characters. But the high points are nearly as good as the high points of A Memory Called Empire and I adore these characters. If you liked the first book, I think you'll like this one too.

More, please!

Rating: 8 out of 10

16 May, 2021 04:16AM

Junichi Uekawa

Memo on using a KVM switch with my Debian desktop and ChromeBox.

Memo on using a KVM switch with my Debian desktop and ChromeBox. KVM these days means kernel virtualization things, but before, it used to mean a KVM switch, where PS/2 and a DSUB15 pin connector were the popular interfaces. These days it's USB + HDMI; USB-C with DP alt mode would make connections simpler, including power supply, but such switches don't exist as much. This ends up with 3 cables each for PC and display: 2 USB cables and 1 HDMI cable per endpoint. I bought a cheap manual switcher. An automatic switcher can be annoying, especially in the suspend-resume case, where there's no rational behavior, especially when I want to wake devices through the keyboard. Having a wired remote control was good because, although the device itself is compact, the 9 or so cables running out of it are bulky, and I moved it under my desk to hide it from my view. Sometimes suspend/resume causes issues with the USB hub; it's recognized as SuperTop USB Hub, and from the Linux kernel side it seems like the hub is disconnected and reconnected, however on reinitialization the USB hub sometimes fails to initialize. That results in the keyboard not working, but re-connecting the USB cable fixes the issue. Sound is recorded via a USB mixer and played back via HDMI audio; video is recorded via a USB camera. These things didn't exist back in the old KVM days; maybe they used audio cables.
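For the hub that fails to reinitialize after resume, forcing a re-enumeration from software can stand in for replugging the cable. A hedged sketch; the port ID below is hypothetical, so find the real one with lsusb -t or in dmesg first:

# '1-1' is a placeholder for the hub's port ID (see lsusb -t)
echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/unbind
echo '1-1' | sudo tee /sys/bus/usb/drivers/usb/bind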

16 May, 2021 01:24AM by Junichi Uekawa

May 15, 2021

Waiting for network to be up from a service on Debian.

Waiting for network to be up from a service on Debian. I noticed in journalctl that many services were starting before dhclient started running and configured DHCP. They are waiting for network-online.target; however, network-online.target seems to be triggered before networking is actually available. After a few internet searches: ifupdown is the default network manager for Debian, and there is a specific systemd service for ifupdown, /usr/lib/systemd/system/ifupdown-wait-online.service. So, enabling that service fixes the situation, as shown below. Now, should this have been the default? Filed a bug: 988533.
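Concretely, the fix is just enabling the unit so that network-online.target actually waits for ifupdown to finish (assuming ifupdown is your network manager, as it is by default on Debian):

systemctl enable ifupdown-wait-online.service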

15 May, 2021 06:38AM by Junichi Uekawa

Sean Whitton

pinebookpro

I recently bought a Pinebook Pro. This was mainly out of general interest, but also because I wanted to have a spare portable computer. When I was recently having some difficulty with my laptop not charging, I realised that I am dependent on having access to Emacs, notmuch.el and my usual git repositories in the way that most people are dependent on their smartphones – all the info I need to get things done is in there, and it’s very disabling not to have it. So, good to have a spare.

I decided to get the machine running the hard way, and have been working to add a facility to install the device-specific bootloader to Consfigurator. It has been good to learn about how ARM machines boot. The only really hard part turned out to be coming up with the right abstractions within Consfigurator, thanks to the hard work of the Debian U-Boot maintainers. This left me with a chroot and a corresponding disk image, properly partitioned and with the bootloader installed. It was only then that the difficulties began: getting a kernel and initrd combination which can output to the Pinebook Pro’s screen and take input from its keyboard is not really straightforward yet, but that’s required for inputting disk encryption passwords, which are required on portable devices. I don’t have the right hardware to make a serial connection to the machine, so all this took a lot of trial and error. I’ve ended up using Manjaro’s patched upstream kernel build for now, because that compiles in the right drivers, and debugging an initrd without a serial connection is far too inefficient.

What I keep having to remind myself is that this device isn’t really a laptop in the usual sense – it’s a single board computer that’s powering several pieces of hardware which together roughly constitute a laptop. I think something which epitomises this is how the power light doesn’t come on when you hit the power button, but only when the bootloader or operating system kernel thinks to turn on the LED. You start up this SBC and it loads up some software, and then once it has got itself going – several seconds later – that software starts turning on the screen, keyboard, power LEDs etc. Whereas on an ordinary laptop it’s more that you turn on the keyboard, screen, power LEDs etc. all at once, and then they go off and load some software. Of course this description is nothing like what’s actually going on, but it’s my attempt to capture how it feels as a user, who is installing operating systems, but otherwise treating the laptop’s hardware, including things like boot ROMs, as a black box. There are tangible differences between what it is like to do that with an ordinary laptop and with the Pinebook Pro.

Thanks to Vagrant Cascadian for all the work on U-Boot in Debian and for help on IRC, Cyril Brulebois for help with crossbuilding, and Birger Schacht for a useful blog post.

15 May, 2021 12:25AM

May 14, 2021

Junichi Uekawa

pomodoro timer in elisp.

pomodoro timer in elisp. I put up my current configuration here if you're interested. Not that I recommend this to anybody, but it is tightly integrated with my daily notes, taken in markdown format, so that I have a new entry in my notes for every pomodoro. I don't time my break time, because my water heater takes 5 minutes to boil and that's a good enough timer for me. I tried using notifications-notify to make a sound, but it turned out some environments I use don't forward this correctly; maybe someone has already done the integration with OSC9/OSC777 to make sound work, but I'm not quite sure, so I make sound by sshing to my local Raspberry Pi and making it trigger Chromecast commands in my room to play sound.

14 May, 2021 11:57PM by Junichi Uekawa

Jelmer Vernooij

Ognibuild

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

The FOSS world uses a wide variety of different build tools; given a git repository or tarball, it can be hard to figure out how to build and install a piece of software.

Humans will generally know what build tool a project is using when they check out a project from git, or they can read the README. And even then, the answer may not always be straightforward to everybody. For automation, there is no obvious place to figure out how to build or install a project.

Debian

For Debian packages, Debian maintainers generally will have determined what the appropriate tools to invoke are, and added the appropriate invocations to debian/rules. This is really nice when rebuilding all of Debian - one can just invoke debian/rules - a consistent interface - and it will in turn invoke the right tools to build the package, meeting a long list of requirements.

With newer versions of debhelper and most common build systems, debhelper can figure a lot of this out automatically - the maintainer just has to add the appropriate build and run time dependencies.

However, debhelper needs to be consistent in its behaviour per compat level - otherwise builds might start failing with different versions of debhelper, when the autodetection logic is changed. debhelper can also only do the right thing if all the necessary dependencies are present. debhelper also only functions in the context of a Debian package.

Ognibuild

Ognibuild is a new tool that figures out the build system in use by an upstream project, as well as the other dependencies it needs. This information can then be used to invoke said build system, or to e.g. add missing build dependencies to a Debian package.

Ognibuild uses a variety of techniques to work out what the dependencies for an upstream package are:

  • Extracting dependencies and other requirements declared in build system metadata (e.g. setup.py)
  • Attempting builds and parsing build logs for missing dependencies (repeating until the build succeeds), calling out to buildlog-consultant

Once it is determined which dependencies are missing, they can be resolved in a variety of ways. Apt can be invoked to install missing dependencies on Debian systems (optionally in a chroot) or ecosystem-specific tools can be used to do so (e.g. pypi or cpan). Instead of installing packages, the tool can also simply inform the user about the missing packages and commands to install them, or update a Debian package appropriately (this is what deb-fix-build does).

The target audience of ognibuild are people who need to (possibly from automation) build a variety of projects from different ecosystems or users who are looking to just install a project from source. Developers who are just hacking on e.g. a Python project are better off directly invoking the ecosystem-native tools rather than a wrapper like ognibuild.

Supported ecosystems

(Partially) supported ecosystems currently include:

  • Combinations of make and autoconf, automake or CMake
  • Python, including fetching packages from pypi
  • Perl, including fetching packages from cpan
  • Haskell, including fetching from hackage
  • Ninja/Meson
  • Maven
  • Rust, including fetching packages from crates.io
  • PHP Pear
  • R, including fetching packages from CRAN and Bioconductor

For a full list, see the README.

Usage

Ognibuild provides a couple of top-level subcommands that will seem familiar to anybody who has used a couple of other build systems:

  • ogni clean - remove build artifacts
  • ogni dist - create a dist tarball
  • ogni build - build the project in the current directory
  • ogni test - run the test suite
  • ogni install - install the project somewhere
  • ogni info - display project information including discovered build system and dependencies
  • ogni exec - run an arbitrary command but attempt to resolve issues like missing dependencies

These tools all take a couple of common options:

--resolve=apt|auto|native

Specifies how to resolve any missing dependencies:

  • apt: install the appropriate dependency using apt
  • native: install dependencies using native tools like pip or cpan
  • auto: invoke either apt or native package install, depending on whether the current user is allowed to invoke apt

--schroot=name

Run inside of a schroot.

--explain

Do not make any changes, but tell the user which native or apt packages they could install.
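As a hedged example, combining these global options with subcommands, mirroring the global-option placement used in the dist example below:

% ogni --resolve=apt build
% ogni --explain test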

There are also subcommand-specific options, e.g. to install to a specific directory or restrict which tests are run.

Examples

Creating a dist tarball

% git clone https://github.com/dulwich/dulwich
% cd dulwich
% ogni --schroot=unstable-amd64-sbuild dist

Writing dulwich-0.20.21/setup.cfg
creating dist
Creating tar archive
removing 'dulwich-0.20.21' (and everything under it)
Found new tarball dulwich-0.20.21.tar.gz in /var/run/schroot/mount/unstable-amd64-sbuild-974d32d7-6f10-4e77-8622-b6a091857e85/build/tmpucazj7j7/package/dist.

Installing ldb from source, resolving dependencies using apt

% wget https://download.samba.org/pub/ldb/ldb-2.3.0.tar.gz
% tar xvfz ldb-2.3.0.tar.gz
% cd ldb-2.3.0
% ogni install --prefix=/tmp/ldb

+ install /tmp/ldb/include/ldb.h (from include/ldb.h)

Waf: Leaving directory `/tmp/ldb-2.3.0/bin/default'
'install' finished successfully (11.395s)

Running all tests from XML::LibXML::LazyBuilder

% wget https://cpan.metacpan.org/authors/id/T/TO/TORU/XML-LibXML-LazyBuilder-0.08.tar.gz
% tar xvfz XML-LibXML-LazyBuilder-0.08.tar.gz
% cd XML-LibXML-LazyBuilder-0.08
% ogni test

Current Status

ognibuild is still in its early stages, but works well enough that it can detect and invoke the build system for most of the upstream projects packaged in Debian. If there are build systems that it currently lacks support for, or other issues, then I’d welcome any bug reports.

14 May, 2021 06:00PM by Jelmer Vernooij

May 13, 2021

Bits from Debian

New Debian Developers and Maintainers (March and April 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Jeroen Ploemen (jcfp)
  • Mark Hindley (leepen)
  • Scarlett Moore (sgmoore)
  • Baptiste Beauplat (lyknode)

The following contributors were added as Debian Maintainers in the last two months:

  • Gunnar Ingemar Hjalmarsson
  • Stephan Lachnit

Congratulations!

13 May, 2021 02:00PM by Jean-Pierre Giraud

Shirish Agarwal

Population, Immigration, Vaccines and Mass-Surveilance.

The Population Issue and its many facets

Another couple of weeks have passed. A lot of things are happening; there is lots of anger and depression in folks due to the handling of the pandemic, but instead of blaming the handling, many are willing to blame everybody else, including the population itself. Many of them want forced sterilization like what Sanjay Gandhi did during the Emergency (1975). I had to share ‘So Long, My Son‘, a very moving tale of two families and what happened to them during the one-child policy in China. I was so moved by it and couldn’t believe that the Chinese censors allowed it to be produced, shot, edited, and then shared worldwide. It also won a couple of awards at the 69th Berlin Film Festival, the Silver Bears for the best actor and the best actress. But more than the awards, it was the theme, the concept, as well as the length of the movie which were astonishing. Over its 3-hour-something runtime it paints a moving picture of love, loss, shame, relief, anger, and asking for forgiveness, all of which can be identified with by any rational person with feelings, worldwide.

Girl child

What was also interesting, though, was what it couldn’t or wasn’t able to talk about, and that is the Chinese leftover men. In fact, a similar situation exists here in India, only it has been suppressed. This has been more pronounced in Asia than in other places. One big factor in this is human trafficking, mostly trafficking of women. For Chinese men, that was happening on a large scale from all neighboring countries, including India. This has been shared in the media and everybody knows about it, and yet people are silent. But this is not limited to just the Chinese; even Indians have been doing it. Even yesteryear actress Rupa Ganguly was caught red-handed but was later let off after formal questioning, as she is from the ruling party. So much for justice.

What is and has been surprising, at least for me, is Rwanda, which is in the top 10 of some of the best places for gender equality. It, along with other African countries, has also been in the news for putting quite a significant percentage of GDP into public healthcare (between 10-20%), but that is a story for a bit later. People forget, or want to forget, that it was in Satara, a city in my own state, where 220 girls changed their name from nakusha or ‘unwanted’ to something else, and that had become a piece of global news. One would think that after so many years things would have changed; the only change that has happened is that now we have two ministries, the Ministry of Women and Child Development (MoWCD) and the Ministry of Health and Family Welfare (MoHFW). Sadly, in both cases the ministries have been found wanting, whether it was the high-profile Hathras case or even the routine cries for help from women on the Twitter helpline. Sadly, neither of these ministries talks about the POSH guidelines which came up after the 2012 gangrape case. For both these ministries, it should have been a pinned tweet. There is also the 1994 PCPNDT Act which, although made in 1994, only came into force in 2006, although what happens underground even today nobody knows 😦

On the global stage, about a decade ago, Stephen J. Dubner and Steven Levitt argued in their book ‘Freakonomics‘ that legalized abortion reduced both the coming population explosion and expected crime rates. There was a huge pushback on the same from the conservatives and it has become a matter of debate, perhaps something that the conservatives wanted. Interestingly, it hasn’t made them go back but go forward, as can be seen from the Freakonomics site.

Climate Change

Another topic that came up repeatedly for discussion was climate change, but when I share Shell’s own confidential 1988 report titled ‘The Greenhouse Effect‘, all become strangely silent. The silence here has two parts: there probably is a large swathe of Indians who haven’t read the report, and there may be a minority who have read it and know what has already been shared with the U.S. Congress. The Conservatives’ argument has been ‘jobs’ and a weak ‘we need to research more’. There was a partial debunking of it on the TBD podcast by Matt Ferrell and his brother Sean Ferrell, about how quickly the energy companies are adapting to the coming change.

Health Budget

Before going to Covid stories, I first wanted to talk about health budgets. For the last 7 years, the Center’s allocation for health has been between 0.34 and 0.8% per year. That amount barely covers the salaries of the staff, let alone any money for equipment or anything else. And here by allocation I mean what is actually spent, not what is shared by GOI as part of the budget proposal. In fact, an article in The Wire gives a good breakdown of the numbers. Even those who are on the path of free markets describe India’s health business model as a flawed one; see the Bloomberg Quint story on that. Now let me come to Rwanda. Why did I choose Rwanda? I could have chosen South Africa, where I went for Debconf 2016; I chose Rwanda because its story is that much more inspiring. Here is a country which for decades had one war or another, culminating in the Rwandan Civil War which ended in 1994. And coincidentally, that was on a similar timeline as South Africa ending apartheid in 1994. What does the country do after that? It puts most of its resources into the healthcare sector: the first few years at 20% of GDP, later at 10% of GDP, till everybody has universal medical coverage.

Coming back to the Bloomberg article I shared, the story does not go into the depth of beyond-expiry-date medicines, spurious medicines and whatnot. Sadly, most media in India do not cover the deaths happening in rural areas, and this I am talking about in normal times. Today what is happening in rural areas is just pure madness. For the last couple of days I have been talking with people who are and have been covering rural areas. In many of those communities there is vaccine hesitancy, and why? Because there have been WhatsApp forwards sharing that if you go to a hospital you will die and your kidney or some other part of the body will be taken by the doctor. This does two things: it scares people into not getting vaccinated, and at the same time it prejudices them against science. This is politics of the lowest kind. And they do it so that people will be forced to go to temples or babas and whatnot and ask for solutions. And whether they work or not is immaterial; they get ‘fixed’ and property and money are seized. Sadly, there are not many Indian movies of the North which have tried to show it except for ‘Oh My God‘, but even that doesn’t go the distance. A much more honest approach was taken in ‘Trance‘. I have never understood how the South Indian movies are able to do a more honest job of storytelling than what is done in Bollywood, even though they do it on 1/10th the budget that is needed in Bollywood. Although, I have to say, with OTT some baggage has been shed, but with the whole film certification rearing its ugly head through MEITY orders, it seems two steps backward instead of forward. The idea being simply to infantilize the citizens even more. That is a whole different ball game which probably will require its own space.

Vaccine issues

One piece of good news though is that vaccination has started. But it has been a long story full of greed by none other than the GOI (Government of India) and the ruling party, the BJP. Where should I start? I probably should start with this excellent article by Priyanka Pulla. It is interesting and fascinating to know how vaccines are made, at least one way which she shared. She also shared about the Cutter Incident which happened in the late 50’s. The response was on expected lines: character assassination of her and the newspaper that published her, but not one of the points made by her could be critiqued. Interestingly enough, in January 2021 Bharat Biotech was supposed to share its phase 3 trial data, but it hasn’t been put in the public domain as of May 2021. In fact, there have been a few threads raised by both well-meaning Indians as well as others globally, especially on Twitter, to which GOI/ICMR (Indian Council of Medical Research) are silent. Another interesting point to note is that Russia did say in its press release that it is possible that their vaccine may not be standard (read: inactivation on their vaccines; another way is possible but would take time). Brazil has objected, but India hasn’t till date.

What has also been ‘interesting’ is the homegrown B.1.617 lineage, known as the ‘double mutant’. This was first discovered in my own state, Maharashtra, and then transported around the world. There is also B.1.618, which was found in West Bengal and is the same as or supposed to be similar to the one found in South Africa. This one is known as the ‘triple mutant’. About B.1.618 we don’t know much, other than that it is much more easily transmissible, much more infectious. Most countries have banned flights from India, and I cannot fault them. Hell, when even our diplomats do not care for the procedures to be followed during the pandemic, then how is a common man supposed to? Of course, Mr. Modi was supposed to attend the G7 meeting next month and now will not. Whether it is because he would have to face the press (he is the only Indian Prime Minister who has never faced a free press) or because the Indian delegation has been disinvited, we will never know.

A good article which shares a lot of the lows of how things have been done in India is the one by Arundhati Roy. And while the article in itself is excellent and shares a bit of the bitter truth, it is still incomplete, as so much has been happening. The problem is that the issue manifests in so many ways, it is difficult to hold on to. As Arundhati shared, should we just look at figures and numbers, or should we look at individual stories, e.g. the one shared in Outlook India, or the one shared by Dr. Dipshika Ghosh, who works in a Covid ICU in some hospital –

Dr. Dipika Ghosh sharing an incident in Covid Ward

Interestingly as well: in the vaccine issue, GOI says Brazil’s Anvisa doesn’t know what it is doing or that the regulator just isn’t knowledgeable etc. (statements by various people in GOI), yet when it comes to testing kits, the same regulator is treated as an approver.

ICMR/DGCI approving internationally validated kits, Press release.

Twitter

In the midst of all this, one thing that many people seem to have forgotten is that Twitter and other such tools are used only by the elite. The reason why the whole thing has become serious now, compared to the first phase, is because the elite of India have also fallen sick and are dying, which was not the case so much in the first phase. The population on Twitter is estimated to be around 30-34 million, with daily users around 20 million or so, which is about 2% of the Indian population, estimated to be around 1.34 billion. The other 98% don’t even know that there is something like Twitter on which you can ask for help. Twitter itself is exclusionary in many ways, with the emoticons, the language and all sorts of things. There is a small subset who do use Twitter in regional languages, but they are too small to write anything about. The main language is English, which does become a hindrance to a lot of people.

Censorship

Censorship of Indians critical of the Govt.’s mishandling has been non-stop. Even the U.S., which usually doesn’t interfere in India’s internal politics, was forced to make an exception. But of course, this has fallen on deaf ears. There is and was a good thread on Twitter by Gaurav Sabnis, a friend and fellow Puneite now settled in the U.S. as a professor.

Gaurav on Trump-Biden on vaccination of their own citizens

Now, just to summarize what has been happening in India compared with most countries around the world: most countries have done centralized purchasing of the vaccine, which is then distributed by the states; this is what we understand as co-operative federalism. While last year GOI took a lot of money under the shady ‘PM Cares fund’ for vaccine purchase, with donations from well-meaning Indians as well as industries and trade bodies, later GOI said it would leave the states hanging and it is they who would have to buy vaccines from the manufacturers. This is again cheap politics. The idea behind it is simple: GOI knows that almost all the states are strapped for cash. This is not new news; I shared this a couple of months back. The problem has been that for the last 6-8 months no GST meeting has taken place, as shared by Punjab’s Finance Minister Amarinder Singh. What will happen is that all the states will fight among themselves for the vaccine, and most of them are now non-BJP governments. The idea is to let the states fight and somehow stay on top. So the pandemic, instead of being a public health issue, has become something on which politics has to be played. The news on WhatsApp by RW media is that it’s OK even if a million or two die, as India is heavily populated anyway. That argument vanishes, of course, for those who lose their near and dear ones. But that just isn’t the issue; the issue goes much deeper than that. Here are the GST rates levied on essential medical supplies –

Oxygen: 12%
Remdesivir: 12%
Sanitiser: 12%
Ventilator: 12%
PPE: 18%
Ambulances: 28%

Now, all the products above are essential medical equipment; they should be declared as such and should have price controls, on which GST is then levied. In times of a pandemic, should the center be profiting on those? States want to let go, and even want the center to let go, so that there is some relief for the public, while at the same time classifying them as essential medical equipment with price controls. But GOI doesn’t want to. Leaders of opposition parties wrote open letters, but to no effect. What is sad to me is how ambulances are being taxed at 28%. Are they luxury items or ‘sin goods’? This also reminds me of the recent discovery shared by Mr. Pappu Yadav in Bihar. You can see the color of the ambulances as shared by Mr. Yadav, and the same news being shared by India TV news showing other ambulances. Also, the weak argument being made of not having enough drivers. Ideally, you should have 2-3 people; both 9-1-1 and Chicago Fire show 2 people in an ambulance, though a few times they have also been shown to be flipped over. European ambulances seem to have three people, and they are also much more disciplined as drivers, at least per an opinion shared by an American expat. –

Pappu Yadav, President Jan Adhikar Party, Bihar – May 11, 2021

What is also interesting to note is that GOI plays this game of health being a state subject or a central subject depending on its convenience. Last year, when it invoked the Epidemic Diseases Act and the Disaster Management Act, it was a central subject; now, when bodies are flowing down the Ganges and pyres are being lit everywhere, it becomes a state subject. But when and where money is involved, it again becomes a central subject. The states are also understanding it, but they are fighting on too many fronts.

Snippets from the Karnataka High Court hearing today, 13th May 2021

One of the good things is that most of the High Courts have woken up. Many of the people on the RW think that the Courts are doing ‘judicial activism‘. And while there may be an iota of truth in it, the bitter truth is that many judges, or their relatives or helpers, have been diagnosed with Covid, and some have even died. In the face of the inevitable, what can they do? They are hauling up local governments to make sure they are accountable, while at the same time making sure that they themselves get access to medical facilities. And I as a citizen don’t see anything wrong in that, even if they are doing it for selfish reasons. Because even if justice is being done for selfish reasons, if it improves medical delivery systems for the masses, it is cool. If it means that the poor and everybody else are able to get vaccinations, oxygen and whatever they need, it is cool. Of course, we are still seeing reports of patients spending in the region of INR 50k and more for each day spent in hospital. But as there are no price controls, judges cannot do anything unless they want to make an enemy of the medical lobby in the country. For a good story on medicines and what happens in rural areas, see no further than Laakhon Mein Ek.

Allahabad High Court hauling up the Uttar Pradesh Govt.: lack of oxygen is equal to genocide, May 11, 2021

The censorship is not just related to takedown requests on Twitter but nowadays extends to any articles which are critical of the GOI’s handling. I have been seeing many articles which have shared facts and have been critical of GOI being taken down. Previously, we used to see 404 errors happen 7-10 years down the line, and that was reasonable. Now we see that happen in days, weeks or months. India seems to be turning more into China and North Korea and becoming more anti-science day by day 😦

Fake websites

Before going into fake websites, let me start with a fake newspaper which was started by none other than the Gujarat CM, Mr. Modi, in 2005.

Gujarat – Satya Samachar – 2005 launched by Mr. Modi.

And if this wasn’t enough, then on Feb 8, 2005, he invoked the Official Secrets Act –

Mr. Modi invoking Official Secrets Act, Feb 8 2005 Gujarat Samachar

The headlines were ‘In Modi’s regime press freedom is in peril – Down with Modi’s dictatorship.’ So this was a tried and tested technique. The above information was shared by Mr. Urvish Kothari, who incidentally also has his own YouTube channel.

Now cut to 2021, and we have a slew of fake websites being run by the same party. In fact, it seems they started this right from 2011. A good article on the BBC itself tells the story. Hell, Disinfo.eu, which combats disinformation in the EU, has a whole PDF chronicling how the BJP has been doing it. Some of the sites it shared are –

Times of New York
Manchester Times
Times of Los Angeles
Manhattan Post
Washington Herald
and many more.

The idea being: take any site name which sounds similar to a brand name recognized by Indians and fool them. Of course, those of us who use whois and other such tools can easily know what is happening. Two more were added to the list yesterday, Daily Guardian and Australia Today. There are, of course, many features which tell them apart from genuine websites. Most of these are on shared hosting rather than dedicated hosting, and most of these are bought either from GoDaddy or Bluehost. While Bluehost used to be a class act once upon a time, both of the above will do anything as long as they get money; they don’t care whether it’s a fake website or a true one. Capitalism at its finest or worst, depending upon how you look at it. But most of these details are lost on people who do not know web servers at all, and who instead think: see, it is from an exotic site, a foreign site, and it chooses to have the same ideas as me. Those who are corrupt or see politics as a tool to win at any cost will not see it as evil. And as a gentleman named Raghav shared with me, it is so easy to fool us. An example he shared which I had forgotten: Peter England, which used to be an Irish brand, was bought by the Aditya Birla group way back in 2000. But even today, when you go for Peter England, the way the packaging is done and the way the prices are, more often than not people believe they are buying the Irish brand. While sharing this, there is so much of Noam Chomsky which comes to my mind again and again 🙂

Caste Issues

I had written about caste issues a few times on this blog. This again came to the fore as news came that a Hindu sect used forced labor from the Dalit community to build a temple. This was also shared by The Hill. In both, Mr. Joshi doesn’t explain why, if they were volunteers, their passports were taken away forcibly. I also looked at both the minimum wage prevailing in New Jersey as a state as well as the wages given to those who are in the construction industry. Even by minimum wage, they were being given $1 when the prevailing minimum wage for unskilled work is $12.00, and since, as Mr. Joshi shared, they are specialized artisans, they should be paid between $23-$30 per hour. If this isn’t exploitation, then I don’t know what is.

And this is not the first instance; the first instance was perhaps the case against Cisco which was filed by a John Doe. While I had been busy with other things, it seems Cisco had put up both a demurrer petition and a petition to strike, which the Court stayed. This seemed all over again to be a type of apartheid practice, only this time applied to caste. The good thing is that the court stayed the petitions.

Dr. Ambedkar’s statement “if Hindus migrate to other regions on earth, Indian caste would become a world problem”, given at Columbia University in 1916, seems to be proven right in today’s time and sadly has aged well. But this is not just something which is there only in the U.S.; it is there in India even today. Just a couple of days back, a popular actress, Munmun Dutta, used a casteist slur and then later apologized, giving the excuse that she didn’t know Hindi. And this is patently false, as she has been in the Bollywood industry for almost 16-17 years now. This again was not an isolated incident. Seema Singh, a lecturer in IIT-Kharagpur, abused students from SC, ST backgrounds and was later suspended. There is an SC/ST Atrocities Act, but that has been diluted by this Govt. A bit of the background of Dr. Ambedkar can be found in a blog on the Columbia website. As I have shared and asked before, how do we think, and for what reason, the Age of Enlightenment or the Age of Reason happened? If I were a fat monk or a priest who was privileged, would I have let the Age of Enlightenment happen? It broke religion, or rather the Church, from being the most powerful to not so powerful, and that power was distributed among all sorts of thinkers, philosophers, tinkerers, inventors and so on and so forth.

Situation going forward

I believe things are going to be far more complex and deadly before they get better. I have to share another term, ‘comorbidities‘, which fortunately or unfortunately has also become part of the Twitter lexicon. While I have shared what it means, it simply means that you have an existing ailment or condition and then coronavirus attacks you. The virus will weaken you. The vaccine, in the best case, just stops the damage, but the damage already done can’t be reversed. There are people who advise and people who are taking steroids, but that again has its own side effects. And this is now, when we are in summer. I am afraid for those who have recovered; what will happen to them during the monsoons? We know that the virus attacks the lungs the most, and their quality of life will be affected. Even the immune system may have issues. We also know about the inflammation. And the grant that has been given to the University of Dundee also has signs of worry, both for people like me (obese) as well as those who already have heart issues. In other news, my city, which has been under partial lockdown for a month, has had it extended for another couple of weeks. There are rumors that the same may continue till the year-end, even if it means the economy goes out of the window. There is a possibility that in the next few months something like 2 million odd Indians could die –

The above is a conversation between Karan Thapar and an Oxford mathematician, Dr. Murad Banaji, who has shared that the under-counting of cases in India is huge. Even the BBC published an article on the scope of the under-counting. Of course, those on the RW call all of the evidence, including the deaths and obituaries in newspapers, a ‘narrative’. Deaths that used to be in the 20s or 30s have jumped to 200-300 a day, and that is just among the middle class and above. The poor don’t have the money to buy wood, and that is the reason you are seeing bodies in the Ganges, whether in Buxar, Bihar or Ghazipur, Uttar Pradesh. The sights and visuals make for sorry reading 😦

Pandit Ranjan Mishra’s son on his father’s death due to unavailability of oxygen, Varanasi, Uttar Pradesh, 11th May 2021.

For those who don’t know, Pandit Ranjan Mishra was a renowned classical singer. More importantly, he was the first person to suggest Mr. Modi’s name as a Prime Ministerial candidate. If they couldn’t fulfil his oxygen needs, then what can the general public expect?

Conclusion

Sadly, this time I have no humorous piece to share. I can, however, share a documentary on Feluda. I have written about Feluda, or Prodosh Chandra Mitter, a few times on this blog. He is India’s answer to James Bond. I have previously shared ‘The Golden Fortress’, an amazing piece of art by Satyajit Ray. I watched that documentary two or three times. I had thought, mistakenly, that I was the only fool or fan of Feluda in Pune, only to find out that there are people who are even bigger fans than me. There were so many facets of both Feluda and the master craftsman Satyajit Ray that I was unaware of. I was simply amazed. I even shared a few of the tidbits with mum, although she is now truly hooked on Korean dramas, the only solace from all the surrounding madness. So, if you have nothing to do, you can look up his books, read them and then see the movies. My first recommendation would be The Golden Fortress. The only thing I would say is: do not have too-high hopes. The movie is beautiful. It starts slow and then picks up speed, just like a train. So, till later.

Update – The mass surveillance part I could not do justice to, hence I removed it at the last moment. It actually needs its own space, a whole article. There is so much that the Govt. is doing under the guise of the pandemic that it is difficult to share it all in one article. As it is, the article is big 😦

13 May, 2021 10:43AM by shirishag75

May 11, 2021

Jamie McClelland

From openbox to sway

I've been running the Openbox window manager since 2005. That's longer than I've lived in any one apartment in my entire life!

However, over the years I've been bracing for a change.

It seems clear that Wayland is the future, although when that future is supposed to begin is much more hazy.

Really, I've felt a bit like a ping pong ball, from panicking over whether Xorg is abandoned (with a follow-up from an X server maintainer) to anxiously wondering if literally everything will break the moment I switch to Wayland.

In fact, I started this blog post over a year ago when I first decided to switch from the Openbox to Sway.

This is my third major attempt to make the change and I think it will finally stick this time.

In retrospect, it would have been more sensible to first switch from openbox to i3 (which is a huge transition) and then from i3 to sway, but I decided to dive into the deep end with both changes. Note: thanks to a helpful comment on this blog, I learned that there is waybox, an openbox clone for wayland, which would have been another version of a less drastic change.

So... I'm on debian bullseye so I installed sway and friends (from sid).

Then I copied /etc/sway/config to ~/.config/sway/config.

I used to start openbox after logging in via exec startx, so after rebooting, I ran exec sway and, to my astonishment, sway started. Hooray!

However, I found that ssh-agent wasn't running so I couldn't ssh into any servers. That's kinda a problem.

Launching ssh-agent under openbox was buried deep in /etc/X11/Xsession.d/90x11-common_ssh-agent and clearly was not going to happen via wayland.

Since programs using ssh-agent depend on the environment variables SSH_AUTH_SOCK and SSH_AGENT_PID being globally available, I thought I could simply run eval $(ssh-agent) in my tty before running exec sway.

And, that would have worked. Except... I like to add my keys via ssh-add -c so that every time my key is used I get an ssh-askpass prompt to confirm the use.

It seems that since ssh-add is started before a window manager is running, it can't run the prompt.
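In other words, the tty-level approach would have been just this sketch, and it works right up until the confirmation prompt is needed:

eval $(ssh-agent)
ssh-add -c    # the -c confirmation is exactly what can't prompt without a window manager
exec sway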

Ok, we can fix this. After searching the web, I came upon a solution of running ssh-agent via systemctl --user:

# This service must be started manually after sway
# starts.
[Unit]
Description=OpenSSH private key agent
IgnoreOnIsolate=true

[Service]
Type=forking
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -a $SSH_AUTH_SOCK
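Since nothing enables this unit, it has to be started by hand once sway is up; assuming the file is saved as ~/.config/systemd/user/ssh-agent.service, that would be something like:

systemctl --user daemon-reload
systemctl --user start ssh-agent.service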

Then, in my ~/.bashrc file I have:

if [ -n "$WAYLAND_DISPLAY" ]; then
  export SSH_AUTH_SOCK=/run/user/1000/ssh-agent.socket
fi

I think $SSH_AGENT_PID is only used by ssh-agent to kill itself. Now that it is running via systemd, killing it should be doable without a global environment variable. Done? Hardly.

I've been using impass (nee assword) happily for years but alas it is tightly integrated with xdo and xclip.

So... I've switched to keepassxc which works out of the box with wayland.

My next challenge was the status bar. Farewell faithful tint2. One of the reasons I failed on my first two attempts to switch to Sway was the difficulty of getting the swaybar to work how I wanted, particularly with nm-applet. Two things allowed me to move forward:

  • waybar was added to Debian. Thank you Debian waybar maintainers!
  • I gave up on having nm-applet work the way I'm used to working and resigned myself to using nmtui. Sigh.

Next up: the waybar clock module doesn't work, but that is easy enough to work around.

Replacing my uses of xclip with wl-clipboard was a little tedious but really not that difficult.
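The substitutions mostly look like this (a sketch of the common cases):

wl-copy < file.txt    # instead of: xclip -selection clipboard < file.txt
wl-paste              # instead of: xclip -o -selection clipboard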

Getting my screenshot and screen recorder functionality back was a bit harder. I did a lot of searching before I finally found and compiled both swappy (for screenshots) and wf-recorder (for screen recordings).

In the course of all my adventures, I came across the following helpful tips:


Updates

  1. I've installed libreoffice-gtk3 to ensure libre office runs under wayland
  2. I've installed the latest Gimp via flatpak to get proper wayland support
  3. I've exported MOZ_ENABLE_WAYLAND to ensure firefox works properly.
  4. I've found that passing -c to my ssh-add command to ensure I am prompted for each use of my key seems to cause sway to crash intermittently.
  5. I learned about a work around to get screen sharing to work in Zoom. Somehow this actually works. Amazing. Unfortunately, though, sharing your screen in the context of a Zoom meeting pins your screen for all participants, so sharing your desktop through your camera really doesn't cut it. I finally landed on an obvious work around: install chromium (which runs under X11); install the chrome zoom redirector extension; open zoom links in chromium; you can now share other chromium windows. Not the full desktop or any wayland window, but if you only need to share a web browser window, you are set. For the record, Zoom links normally are in the format: https://us04web.zoom.us/j/123456. If you want to force the use of the web client, just change the "j" to "wc": https://us04web.zoom.us/wc/123456.
  6. Speaking of screen sharing - when using Firefox, I can only share Xwayland screens. Firefox is running under wayland so I can't share it. Chromium is running under xwayland, so I have to use Chromium when screen sharing.
  7. Wait, scratch that about screen sharing in Firefox. I've installed xdg-desktop-portal-wlr, added export XDG_CURRENT_DESKTOP=sway and export XDG_SESSION_TYPE=wayland to my .bashrc, and, after hours of frustration, realized that I needed to configure firejail to allow it so that I can share my entire screen in Firefox. It doesn't yet support sharing a specific window, so I still have to keep chromium around for that (and Chromium can only share xwayland windows). Sigh. Oh, one more thing about Firefox: the option to choose what to share doesn't have "Entire Screen" as an option; you are just supposed to know that you should choose "Use operating system settings".
  8. I still am getting weekly crashes. Some of them I've fixed by switching to wayland friendly versions (e.g. Libre Office and Gimp) but others I haven't yet tracked down.
  9. My keyboard does not have an altgr key, so even though I have selected the "English (US) - English (intl., with AltGr dead keys)" I can't get accent marks. I went down a rabbit hole of trying to re-map the Alt key to the right of my space bar but it all seemed too complicated. So - I found a way easier approach. In my ~/.config/sway/config file I have: bindsym Mod4+e exec wtype "é". I have repeated that line for the main accent marks I need.
  10. Due to a Firefox Bug, when I share my desktop or mic or camera, the sharing indicator expands like a normal tiling window instead of remaining a tiny little box on each desktop reminding me that I'm sharing something. I'd prefer to have it be a tiny little box, but since I can't figure that out, I've disabled it by typing about:config in the Firefox location window, searching for privacy.webrtc.legacyGlobalIndicator and setting it to False. The reddit thread also suggested finding privacy.webrtc.hideGlobalIndicator and setting it to True, but that setting doesn't seem to be available and setting the first one alone seems to do the trick.
  11. Oh, one more environment variable to set: GDK_BACKEND=wayland,x11. First I just set it to wayland to get gtk3 apps to use wayland (like gajim). But that broke electron apps (like signal) which notice that variable but don't have a way to display via wayland (at least not yet). Setting it to "wayland,x11" shows the priority. Thank you ubuntu community.
  12. I've also finally consolidated where my environment variables go. I've added them all to ~/.config/sway/env. That seems like an official sway place to put them, but sway doesn't pay any attention to them. So I start sway via my own bash script, which sources that file via [ -f "$HOME/.config/sway/env" ] && . "$HOME/.config/sway/env" before exec'ing sway. A sketch of the full wrapper follows below.
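Putting the last few items together, the wrapper script amounts to something like this sketch (the env file just contains the export lines mentioned above):

#!/bin/bash
# ~/.config/sway/env holds lines such as:
#   export MOZ_ENABLE_WAYLAND=1
#   export GDK_BACKEND=wayland,x11
#   export XDG_CURRENT_DESKTOP=sway
#   export XDG_SESSION_TYPE=wayland
[ -f "$HOME/.config/sway/env" ] && . "$HOME/.config/sway/env"
exec sway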

11 May, 2021 12:33PM

May 10, 2021

Sandro Tosi

Empire State Building Lights iCalendar

I'm very lucky to be able to see the Empire State Building from my apartment windows, and at night the lights are fantastic! But I'm also curious to know what today's lights are going to be, and tomorrow's, etc.

I thought I'd easily find a calendar to add to gCal showing that, but I wasn't able to find any, so I made it myself: https://sandrotosi.github.io/esb-lights-calendar/

10 May, 2021 09:07PM by Sandro Tosi (noreply@blogger.com)

hackergotchi for Jonathan Carter

Jonathan Carter

Free software activities for 2021-04

Here are some uploads for April.

2021-04-06: Upload package bundlewrap (4.7.1-1) to Debian unstable.

2021-04-06: Upload package calamares (3.2.39.2-1) to Debian experimental.

2021-04-06: Upload package flask-caching (1.10.1-1) to Debian unstable.

2021-04-06: Upload package xabacus (8.3.5-1) to Debian unstable.

2021-04-06: Upload package python-aniso8601 (9.0.1-1) to Debian experimental.

2021-04-07: Upload package gdisk (1.0.7-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-disconnect-wifi (28-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-draw-on-your-screen (11-1) to Debian unstable.

2021-04-12: Upload package s-tui (1.1.1-1) to Debian experimental.

2021-04-12: Upload package speedtest-cli (2.1.3-1) to Debian unstable.

2021-04-19: Sponsor package bitwise (0.42-1) to Debian unstable (E-mail request).

2021-04-19: Upload package speedtest-cli (2.1.3-2) to Debian unstable.

2021-04-23: Upload package speedtest-cli (2.0.2-1+deb10u2) to Debian buster (Closes: #986637)

10 May, 2021 03:01PM by jonathan

Russell Coker

Minikube and Debian

I just started looking at the Kubernetes documentation and interactive tutorial [1], which incidentally is really good. Everyone who is developing a complex system should look at this to get some ideas for online training. Here are some notes on setting it up on Debian.

Add Kubernetes Apt Repository

deb https://apt.kubernetes.io/ kubernetes-xenial main

First add the above to your apt sources configuration (/etc/apt/sources.list or some file under /etc/apt/sources.list.d) for the kubectl package. Ubuntu Xenial is near enough to Debian/Buster and Debian/Unstable that it should work well for both of them. Then install the GPG key “6A030B21BA07F4FB” for use by apt:

gpg --recv-key 6A030B21BA07F4FB
gpg --list-sigs 6A030B21BA07F4FB
gpg --export 6A030B21BA07F4FB | apt-key add -

The Google key in question is not signed.

Install Packages for the Tutorial

The online training is based on “minikube” which uses libvirt to set up a KVM virtual machine to do stuff. To get this running you need a system that is capable of running KVM (IE the BIOS is set to allow hardware virtualisation). It MIGHT work on QEMU software emulation without KVM support (technically it’s possible, but it would be slow and require some code to handle that); I didn’t test if it does. Run the following command to install libvirt, kvm, and dnsmasq (which minikube requires) and kubectl on Debian/Buster:

apt install libvirt-clients libvirt-daemon-system qemu-kvm dnsmasq kubectl

For Debian/Unstable run the following command:

apt install libvirt-clients libvirt-daemon-system qemu-system-x86 dnsmasq kubectl

To run libvirt as non-root without needing a password for everything you need to add the user in question to the libvirt group. I recommend running things as non-root whenever possible. In this case entering a password for everything will probably be more pain than you want. The Debian Wiki page about KVM [2] is worth reading.

Install Minikube Test Environment

Here is the documentation for installing Minikube [3]. Basically just download a single executable from the net, put it in your $PATH, and run it. Best to use non-root for that. Also you need at least 3G of temporary storage space in the home directory of the user that runs it.
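The download step looks something like this (a sketch; the URL is the one from the upstream documentation at the time of writing):

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
mv minikube-linux-amd64 ~/bin/minikube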

After installing minikube run “minikube start” which will download container image data and start it up. Then you can run commands like the following to see what it has done.

# get overview of virsh commands
virsh help
# list domains
virsh --connect qemu:///system list
# list block devices a domain uses
virsh --connect qemu:///system domblklist minikube
# show stats on block device usage
virsh --connect qemu:///system domblkstat minikube hda
# list virtual networks
virsh --connect qemu:///system net-list
# list dhcp leases on a virtual network
virsh --connect qemu:///system net-dhcp-leases minikube-net
# list network filters
virsh --connect qemu:///system nwfilter-list
# list real network interfaces
virsh --connect qemu:///system iface-list

10 May, 2021 08:50AM by etbe

Echo Chambers vs Epistemic Bubbles

C Thi Nguyen wrote an interesting article about the difficulty of escaping from Echo Chambers and also mentions Epistemic Bubbles [1].

An Echo Chamber is a group of people who reinforce the same ideas and who often preemptively strike against opposing ideas (for example the right wing denigrating “mainstream media” to prevent their followers from straying from their approved message). An Epistemic Bubble is a group of people who just happen to not have contact with certain different ideas.

When reading that article I wondered about what bubbles I and the people I associate with may be in. One obvious issue is that I have little communication with people who don’t write in English and also little communication with people who are poor. So people who are poor and who can’t write in English (which means significant portions of the population of India and Africa) are out of communication range for me. There are many situations that are claimed to be bubbles, such as white people who are claimed to be innocent of racial issues because they only associate with other white people, and men in the IT industry who are claimed to be innocent of sexism because they don’t associate with women in the IT industry. But I think those are more of an echo chamber issue: if a white American doesn’t access any of the variety of English-language media produced by Afro-Americans and realise that there’s a racial problem, it’s because they don’t want to see it and deliberately avoid looking at evidence. If a man in the IT industry doesn’t access any of the media produced by women in tech and realise there are problems with sexism, then it’s because they don’t want to see it.

When is it OK to Reject a Group?

The Ad Hominem Wikipedia page has a good analysis of different types of Ad Hominem arguments [2]. But the criteria for refuting a point in a debate are very different to the criteria used to determine which sources you should trust when learning about a topic.

For example it’s theoretically possible for someone to be good at computer science while also thinking the world is flat. In a debate about some aspect of computer programming it would be a fallacious Ad Hominem argument to say “you think the Earth is flat therefore you can’t program a computer”. But if you do a Google search for information on computer programming and one of the results is from earthisflat.com then it would probably save time to skip reading that one. If only one person supports an idea then it’s quite likely to be wrong. Good ideas tend to be supported by multiple people and for any good idea you will find a supporter who doesn’t appear to have any ideas that are obviously wrong.

One of the problems we have as a society now is determining the quality of data (ideas, claims about facts, opinions, communication/spam, etc). When humans have to do that it takes time and energy. Shortcuts can make things easier. Some shortcuts I use are that mainstream media articles are usually more reliable than social media posts (even posts by my friends) and that certain media outlets are untrustworthy (like Breitbart). The next step is that anyone who cites a bad site like Breitbart as factual (rather than just an indication of what some extremists believe) is unreliable. For most questions that you might search for on the Internet there is a virtually endless supply of answers, the challenge is not finding an answer but finding a correct answer. So eliminating answers that are unlikely to be correct is an important part of the search.

If someone is citing references to support their argument and they can only cite fringe or extremist sites then I won’t be convinced. Now someone could turn that argument around and claim that a site I reference such as the New York Times is wrong. If I find that my ideas were based on a claim that can only be found on the NYT then I will reconsider the issue. While I think that the NYT is generally accurate they are capable of making mistakes and if they are the sole source for claims that go against other claims then I will be hesitant to accept such claims. Newspapers often have exclusive articles based on their own research, but such articles always inspire investigation from other newspapers so other articles appear either supporting or questioning the claims in the exclusive.

Saving Time When Interacting With Members of Echo Chambers

Convincing a member of a cult or echo chamber of anything is not likely. When in discussions with them the focus should be on the audience and on avoiding wasting much time while also not giving them the impression that you agree with them.

A common thing that members of echo chambers say is “I don’t have time to read about that” when you ask if they have read a research paper or a news article. I don’t have time to listen to people who can’t or won’t learn before speaking, there just isn’t any value in that. Also if someone has a list of memes that takes more than 15 minutes to recite then they have obviously got time for reading things, just not reading outside their echo chamber.

Conversations with members of echo chambers seem to be state free. They make a claim and you reject it, but regardless of the logical flaws you point out or the counter evidence you cite they make the same claim again the next time you speak to them. This seems to be evidence supporting the claim that evangelism is not about converting other people but alienating cult members from the wider society [3] (the original Quora text seems unavailable so I’ve linked to a Reddit copy). Pointing out that they had made a claim previously and didn’t address the issues you had with it seems effective, such discussions seem to be more about performance so you want to perform your part quickly and efficiently.

Be aware of false claims about etiquette. It’s generally regarded as polite not to disagree much with someone who invites you to your home or who has done some favour for you, but that is no reason for tolerating an unwanted lecture about their echo chamber. Anyone who tries to create a situation where it seems rude of you not to listen to them saying things that they know will offend you is being rude, much ruder than telling them you are sick of it.

Look for specific claims that can be disproven easily. The claim that the “Roman Salute” is different from the “Hitler Salute” is one example that is easy to disprove. Then they have to deal with the issue of their echo chamber being wrong about something.

10 May, 2021 08:47AM by etbe

More EVM

This is another post about EVM/IMA whose main purpose is to provide useful web search results for problems. However, if reading it on a planet feed inspires someone to play with EVM/IMA then that’s good too, it’s interesting technology.

When using EVM/IMA in the Linux kernel if dmesg has errors like “op=appraise_data cause=missing-HMAC” the “missing-HMAC” means that the error code in the kernel source is INTEGRITY_NOLABEL which has a comment “No security.evm xattr“. You can check for the xattr on a file with the following command (this example has the security.evm xattr):

# getfattr -d -m - /etc/fstab 
getfattr: Removing leading '/' from absolute path names
# file: etc/fstab
security.evm=0sAwICqGOsfwCAvgE9y9OP74QxJ/I+3eOSF2n2dM51St98z/7LYHFd9rfGTvssvhTSYL9G8cTdRAH8ozggJu7VCzggW1REoTjnLcPeuMJsrMbW3DwVrB6ldDmJzyenLMjnIHmRDDeK309aRbLVn2ueJZ07aMDcSr+sxhOOAQ/GIW4SW8L1AKpKn4g=
security.ima=0sAT+Eivfxl+7FYI+Hr9K4sE6IieZ+
security.selinux="system_u:object_r:etc_t:s0"

If dmesg has errors like “op=appraise_data cause=invalid-HMAC” the “invalid-HMAC” means that the error code in the kernel source is INTEGRITY_FAIL which has a comment “Invalid HMAC/signature“.

These errors are from the evm_verifyxattr() function in Linux kernel 5.11.14.

The error “evm: HMAC key is not set” means that the EVM key is not initialised; the key needs to be loaded into the kernel and EVM initialised by the command “echo 1 > /sys/kernel/security/evm” (or possibly some equivalent from a utility like evmctl). When the key is loaded the kernel gives the message “evm: key initialized” and after that /sys/kernel/security/evm is read-only. If there is something wrong with the key the kernel gives the message “evm: key initialization failed”; it seems that the way to determine whether your key is good is to try writing 1 to /sys/kernel/security/evm and see what happens. After that the command “cat /sys/kernel/security/evm” should return “3”.
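In other words, a successful initialisation should look something like this (a sketch of the expected transcript):

# echo 1 > /sys/kernel/security/evm
# dmesg | tail -n 1
evm: key initialized
# cat /sys/kernel/security/evm
3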

The Gentoo wiki has good documentation on how to create and load the keys which has to be done before initialising EVM [1]. I’ll write more about that in another post.

10 May, 2021 02:39AM by etbe

May 09, 2021

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, April 2021

In April I was assigned 16 hours of work by Freexian's Debian LTS initiative and carried over 2.5 hours from earlier months. I worked 14 hours and will carry over the remainder.

I spent a long time trying to verify that the futex issue was now properly fixed in Linux 4.9, and reviewing the merge of these changes with the real-time (PREEMPT_RT) kernel patchset. Unfortunately this work is not complete and I did not make another upload this month.

09 May, 2021 10:27PM

hackergotchi for Norbert Preining

Norbert Preining

bash: passing around arguments with quotes

Update: I just learned on IRC that checking with shellcheck would have told me everything, including the usage of arrays, and the linked page even mentions ffmpeg … how stupid to reinvent the wheel again and again … Thanks to pabs for telling me!

It has hit me several times, and searching the internet gives lots of suggestions: how to pass an argument containing quotes to another program from a bash script.

My case was a script that automated video processing using ffmpeg, and I needed to call ffmpeg with arguments like

ffmpeg ... -filter_complex "some arg with spaces" ...

My first (failed) shot at doing this in the shell script was:

filterarg='-filter_complex "some arg with spaces"'
...
ffmpeg $someargs $filterarg ...

which, as most bash gurus will know, will fail. The usual suggested solutions involve eval and are complicated.

Recently I realized a much simpler method using bash arrays: Assume the program show_arguments just prints the arguments passed in, maybe something like (again in bash):

#!/bin/bash
for i in "$@" ; do
  echo "=== $i"
done

The program under consideration would be – in the broken version:

#!/bin/bash
# Variant 1 - NOT WORKING
myargs='-filter_complex "arg with space"'
show_arguments $myargs

This would give

=== -filter_complex
=== "arg
=== with
=== space"

which is not what we want.

The solution is to stuff the arguments into an array, and expand that as arguments.

#!/bin/bash
# Variant 2 - WORKING
declare -a myargs
myargs=(-filter_complex "arg with space")
show_arguments "${myargs[@]}"

which, correctly, would give

=== -filter_complex
=== arg with space

Now I only need to remember that trick for the next time it bites me!
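Applied back to the original ffmpeg case, the fix looks something like this (a sketch; the input/output names and the filter string are just placeholders):

#!/bin/bash
# build the tricky arguments as an array so the quoted filter survives intact
filterargs=(-filter_complex "some arg with spaces")
ffmpeg -i input.mp4 "${filterargs[@]}" output.mp4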

09 May, 2021 09:32PM by Norbert Preining

Reproducible Builds

Reproducible Builds in April 2021

Welcome to the April 2021 report from the Reproducible Builds project!

In these reports we try to summarise the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.


A preprint of a paper by Chris Lamb and Stefano Zacchiroli (which will shortly appear in IEEE Software) has been made available on the arXiv.org service. Titled Reproducible Builds: Increasing the Integrity of Software Supply Chains (PDF), the abstract of the paper contains the following:

We first define the problem, and then provide insight into the challenges of making real-world software build in a “reproducible” manner, that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA). []


Elsewhere on the internet, Igor Golovin on the Kaspersky security blog reported that APKPure, an alternative app store for Android apps, began to distribute “an advertising SDK from an unverified source [that] turned out to be malicious” []. Elaborating further, Igor wrote that the malicious code had “much in common with the notorious Triada malware and can perform a range of actions: from displaying and clicking ads to signing up for paid subscriptions and downloading other malware”.


Closer to home, Jeremiah Orians wrote to our mailing list reporting that it is now possible to bootstrap the GCC compiler without using the pre-generated Bison grammar files, part of a broader attempt to provide a “reproducible, automatic [and] complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system” []. In addition, Richard Clobus started a thread on potential problems with the -Wl,--build-id=sha1 linker flag, which can later be used when analysing core dumps and tracebacks. According to the Red Hat Customer Portal:

Each executable or shared library built with Red Hat Enterprise Linux Server 6 or later is assigned a unique identification 160-bit SHA-1 string, generated as a checksum of selected parts of the binary. This allows two builds of the same program on the same host to always produce consistent build-ids and binary content. (emphasis added)


Lastly, Felix C. Stegerman reported on the latest release of apksigcopier. apksigcopier is a tool to copy, extract and patch .apk signatures that is needed to facilitate reproducible builds on the F-Droid Android application store and elsewhere. Holger Levsen subsequently sponsored an upload to Debian.


Software development

Distribution work

An issue was discovered in Arch Linux regarding packages that were previously considered reproducible. After some investigation, it was determined that the build’s CFLAGS could vary between two previously ‘reproducible’ builds.

The cause was attributed to the fact that in Arch Linux, the devtools package determines the build configuration, but in the development branch it had been inadvertently copying the makepkg.conf file from the pacman package; the devtools version has been fixed in the recent release. This meant that when Arch Linux releases a devtools package with updated CFLAGS (or similar), old packages could fail to build reproducibly, as they would be reproduced in a different build environment.

To address this problem, Levente Polyak sent a patch to the pacman mailing list to include the version of devtools in the relevant BUILDINFO file. This means that the repro tool can now install the corresponding makepkg.conf file when attempting to validate a reproducible build.


In Debian, Frédéric Pierret continued working on debian.notset.fr, a partial copy of the snapshot.debian.org “wayback machine” service for the Debian archive that is limited to the packages needed to rebuild the bullseye distribution on the amd64 architecture. This is to work around some perceived limitations of snapshot.debian.org. Since last month, the service covers from mid-2020 onwards, and a request was made to the Debian sysadmin team for better access to snapshot.debian.org in order to further accelerate the initial seeding. In addition, the service now supports more endpoints in the API (full documentation), including a timestamp endpoint to track the sync in a machine-readable way.

Twenty-one reviews of Debian packages were performed, nine were updated and sixteen were removed this month, adding to our large taxonomy of identified issues. A number of issue types have been updated too, including the removal of the random_order_in_javahelper_substvars issue type [], but also the addition of a new timestamps_in_pdf_generated_by_libreoffice toolchain issue by Chris Lamb [].


Lastly, Bernhard M. Wiedemann posted his monthly reproducible builds status report for the openSUSE distribution.

Upstream patches

diffoscope

diffoscope is the Reproducible Builds project in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it provides human-readable diffs from many kinds of binary formats. This month, Chris Lamb made a number of changes including releasing version 172 and version 173:

  • Add support for showing annotations in PDF files. (#249)
  • Move to the assert_diff helper in test_pdf.py. []

In addition, Mattia Rizzolo attempted to make the testsuite pass with file(1) version 5.40 [] and Zachary T. Welch updated the __init__ magic method of the Difference class to demote the unified_diff argument to a Python ‘kwarg’ [].

Website and documentation

Quite a few changes were made to the main Reproducible Builds website and documentation this month, including:

  • Chris Lamb:

    • Highlight our mailing list on the Contribute page. []
    • Add a noun (and drop an unnecessary full-stop) on the landing page. [][]
    • Correct a reference to the date metadata attribute on reports, restoring the display of months on the homepage. []
    • Correct a typo of “instalment” within a previous news entry. []
    • Added a conspicuous “draft” banner to unpublished blog posts in order to match the report draft banner. []
  • Mattia Rizzolo:

    • Various improvements to the sponsors pages. [][][]
    • Add the project’s platinum-level sponsors to the homepage. []
    • Use a CSS class instead of specifying an inline style HTML attribute. []

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:

    • Debian:

      • Update README to reflect that Debian buster is now the ‘stable’ distribution. []
      • Support fully-qualified domain names in the powercycle script for the armhf architecture. []
      • Improve the handling of node names (etc.) for armhf nodes. [][]
      • Improve the detection and classification of packages maintained by the Debian accessibility team. [][]
      • Count the number of configured armhf nodes correctly by ignoring comments at the end of lines. []
    • Health checks

      • Improve checks for broken OpenSSH ports. [][]
      • Detect failures of NetBSD’s make release. []
      • Catch another log message variant that specifies a host is running an outdated kernel. []
      • Automatically restart failed systemd-journal-flush systemd services. []
    • Other:

      • Update FreeBSD to 13.0. []
      • Be less picky about “too many” installed kernels on hosts which have large enough /boot partition. [][][]
      • Unify some IRC output. []
  • Mattia Rizzolo:

    • Fix a regular expression in automatic log-parsing routines. []
  • Vagrant Cascadian:

    • Add some new armhf architecture build nodes, virt32b and virt64b. [][][]
    • Rearrange armhf build jobs to only use active nodes. []
    • Add a health check for broken OpenSSH ports on virt32a. []
    • Mark which armhf architecture jobs are not systematically varying 32-bit and 64-bit kernels. []
    • Disable creation of Debian stretch build tarballs and update the README file to mention bullseye instead. []

Finally, build node maintenance was performed by Holger Levsen [][][][], Mattia Rizzolo [] and Vagrant Cascadian [] [][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

09 May, 2021 03:12PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Python HTTP server IPv6

“By default, the Python http.server module listens on IPv4 only, so to make it listen on IPv6, you should subclass it, and …”

WHYYYYYY
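If memory serves, the command-line entry point at least gained IPv6 support in Python 3.8, so something like this sketch listens on v6 (and, on typical Linux configurations, v4-mapped) addresses without any subclassing:

python3 -m http.server --bind ::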

09 May, 2021 11:06AM

A ball-throwing problem

I recently solved a physics problem that seems to be as old as the dawn of time, and solvable using high school maths only, yet it took me hours to get everything right even with a computer, so I don't know if I'm stupid or if everybody else would be as surprised by the solution. :-)

The basic setup is: You throw a ball starting at (0,0), and you want to hit a target at (x,y) with as little energy (starting velocity, v0) as possible. You can choose any angle θ (unlike xkcd, we assume you can throw equally efficiently at any angle), but some will require a lower v0 than others. What is the optimal θ, and its corresponding v0? We assume point masses, no air resistance, and generally Newtonian physics. (Actually, what I was interested in was jumping, but that's much harder since your body isn't rigid, so we'll approximate it with a ball throw.)

The throw equations are simple and well-known:

  • x(t) = v0 t cos θ
  • y(t) = v0 t sin θ - 1/2 gt²

I initially assumed that the goal had to be at the apex of the throw curve; why waste energy on throwing higher than you need, right? This makes things fairly easy; you differentiate y(t) to find at what t it is highest (t = v0 sin θ / g), insert that into both equations, insert the target x and y, divide the second equation by the first, and out pops… θ = arctan(2y / x). (You also get v0 = sqrt(2g y) / sin θ.)

Curve, at apex

(The curve is drawn with a bezier curve in Inkscape, so it's not necessarily exact.)

To me, this was very surprising; it seems you should aim for a point exactly twice as high as the goal, no matter the gravity, and let the gravity do the rest. Of course, gravity determines how hard you need to throw, but not the angle. And again, in an actual throw, of course, your brain does all of this stuff automatically for you, just based on experience.

But after making a small interactive calculator, it turned out this didn't make much sense. If I moved the goal downwards, sometimes it would require more energy; in the extreme case, if y=0, everybody who's ever taken a physics course knows the throw angle should be 45 degrees, and this ended up with 0.

So I had to drop the apex assumption. I'll spare you the derivations and all the false starts (how do you even start thinking about a five-variable equation sets where some things are to be minimized and some are to be solved for?), because it ended up with me fighting the computer algebra system a lot; it seems that it's sometimes easier to withhold some information from it if you don't want it to be bogged down in the details (e.g. I invented new “cosx” and “sinx” variables instead of actually telling it it was cos and sin of something), or I'm just incompetent using them. Also, I'm seemingly better at simplifying by hand than telling them how to simplify formulas. But the end result came out in terms of v0 first:

v0 = x sqrt(1/2 g / (cos θ (x sin θ - y cos θ)))

I initially solved for the optimal θ numerically by golden-section search, but it turns out there is a closed-form solution, readily given by the CAS:

θ = arctan((y + sqrt(x² + y²)) / x)
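For what it's worth, the closed form can be checked by hand; here is my own sketch of the minimization (not a full derivation). Minimizing v0 means maximizing the denominator f(θ) = cos θ (x sin θ - y cos θ). With double-angle identities this is f(θ) = (x/2) sin 2θ - (y/2)(1 + cos 2θ), so setting f'(θ) = x cos 2θ + y sin 2θ = 0 gives tan 2θ = -x/y. Substituting tan 2θ = 2t/(1 - t²) with t = tan θ yields x t² - 2y t - x = 0, whose positive root is t = (y + sqrt(x² + y²))/x, exactly the arctan argument above.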

Whoa. This is even stranger:

Most efficient curve

So essentially, you need to move the goal upwards by the distance to the goal. The difference looks dramatic, but the actual angle change isn't so big for typical upwards angles; usually just a few degrees. For the case of y=0, though, it gives exactly 45 degrees, which is what we wanted.

I'm not sure if there's some deep reason for this, but I am glad that I'm not expected to do these things in five minutes anymore. :-)

09 May, 2021 10:03AM

May 08, 2021

Thorsten Alteholz

My Debian Activities in April 2021

FTP master

This month I accepted 103 and rejected 10 packages, which is again an increase compared to last month. The overall number of packages that got accepted was only 107.

Debian LTS

This was my eighty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 30h. During that time I did LTS and normal security uploads of:

  • [DLA 2629-1] libebml security update for one CVE
  • debdiff for libebml/buster
  • [DLA 2636-1] pjproject security update for one CVE
  • NMU leptonlib/unstable for four CVEs
  • PU bug #987376 leptonlib/buster for four CVEs
  • debdiff for ring/unstable which resulted in upload of version 20210112.2.b757bac~ds1-1 that fixed two CVEs
  • PU bug #987246 tnef/buster for one CVE

I also created debdiffs of tnef and ring for other suites, which have not resulted in any upload yet. Further, I started to work on gpac and am struggling with dozens of issues there.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-fourth ELTS month.

Unfortunately my work on python2.7 and python3.4 did not result in an upload before the end of the month.

Last but not least I did some days of frontdesk duties.

Other stuff

On my neverending golang challenge I again uploaded lots of packages either for NEW or as source upload.

Last but not least I voted.

08 May, 2021 12:50PM by alteholz

May 06, 2021

Abhijith PA

Part-2 Transition from Thunderbird to Mutt

If you read my last blog post, you might know that I moved my email away from Thunderbird to Mutt. I thought I would miss Thunderbird, but no, not even for a bit. The transition was very smooth. The only things left in Thunderbird were my calendar and RSS reader.

A couple of weeks ago I switched to calcurse for handling my calendar and to-do list. As its name suggests, it has a curses TUI. It has its own to-do list and native support for syncing with CalDAV servers. The to-do list seems buggy when synced with the OpenTasks Android app. And newsboat is now my RSS reader.

06 May, 2021 07:25PM

hackergotchi for Matthew Garrett

Matthew Garrett

More doorbell adventures

Back in my last post on this topic, I'd got shell on my doorbell but hadn't figured out why the HTTP callbacks weren't always firing. I still haven't, but I have learned some more things.

Doorbird sell a chime, a network connected device that is signalled by the doorbell when someone pushes a button. It costs about $150, which seems excessive, but would solve my problem (ie, that if someone pushes the doorbell and I'm not paying attention to my phone, I miss it entirely). But given a shell on the doorbell, how hard could it be to figure out how to mimic the behaviour of one?

Configuration for the doorbell is all stored under /mnt/flash, and there's a bunch of files prefixed 1000eyes that contain config (1000eyes is the German company that seems to be behind Doorbird). One of these was called 1000eyes.peripherals, which seemed like a good starting point. The initial contents were {"Peripherals":[]}, so it seemed likely that it was intended to be JSON. Unfortunately, since I had no access to any of the peripherals, I had no idea what the format was. I threw the main application into Ghidra and found a function that had debug statements referencing "initPeripherals" and read a bunch of JSON keys out of the file, so I could simply look at the keys it referenced and write out a file based on that. I did so, and it didn't work - the app stubbornly refused to believe that there were any defined peripherals. The check that was failing was pcVar4 = strstr(local_50[0],PTR_s_"type":"_0007c980);, which made no sense, since I very definitely had a type key in there. And then I read it more closely. strstr() wasn't being asked to look for "type":, it was being asked to look for "type":". I'd left a space between the : and the opening " in the value, which meant it wasn't matching. The rest of the function seems to call an actual JSON parser, so I have no idea why it doesn't just use that for this part as well, but deleting the space and restarting the service meant it now believed I had a peripheral attached.

The mobile app that's used for configuring the doorbell now showed a device in the peripherals tab, but it had a weird corrupted name. Tapping it resulted in an error telling me that the device was unavailable, and on the doorbell itself generated a log message showing it was trying to reach a device with the hostname bha-04f0212c5cca and (unsurprisingly) failing. The hostname was being generated from the MAC address field in the peripherals file and was presumably supposed to be resolved using mDNS, but for now I just threw a static entry in /etc/hosts pointing at my Home Assistant device. That was enough to show that when I opened the app the doorbell was trying to call a CGI script called peripherals.cgi on my fake chime. When that failed, it called out to the cloud API to ask it to ask the chime[1] instead. Since the cloud was completely unaware of my fake device, this didn't work either. I hacked together a simple server using Python's HTTPServer and was able to return data (another block of JSON). This got me to the point where the app would now let me get to the chime config, but would then immediately exit. adb logcat showed a traceback in the app caused by a failed assertion due to a missing key in the JSON, so I ran the app through jadx, found the assertion and from there figured out what keys I needed. Once that was done, the app opened the config page just fine.

Unfortunately, though, I couldn't edit the config. Whenever I hit "save" the app would tell me that the peripheral wasn't responding. This was strange, since the doorbell wasn't even trying to hit my fake chime. It turned out that the app was making a CGI call to the doorbell, and the thread handling that call was segfaulting just after reading the peripheral config file. This suggested that the format of my JSON was probably wrong and that the doorbell was not handling that gracefully, but trying to figure out what the format should actually be didn't seem easy and none of my attempts improved things.

So, new approach. Rather than writing the config myself, why not let the doorbell do it? I should be able to use the genuine pairing process if I could mimic the chime sufficiently well. Hitting the "add" button in the app asked me for the username and password for the chime, so I typed in something random in the expected format (six characters followed by four zeroes) and a sufficiently long password and hit ok. A few seconds later it told me it couldn't find the device, which wasn't unexpected. What was a little more unexpected was that the log on the doorbell was showing it trying to hit another bha-prefixed hostname (and, obviously, failing). The hostname contains the MAC address, but I hadn't told the doorbell the MAC address of the chime, just its username. Some more digging showed that the doorbell was calling out to the cloud API, giving it the 6 character prefix from the username and getting a MAC address back. Doing the same myself revealed that there was a straightforward mapping from the prefix to the mac address - changing the final character from "a" to "b" incremented the MAC by one. It's actually just a base 26 encoding of the MAC, with aaaaaa translating to 00408C000000.

That explained how the hostname was being generated, and in return I was able to work backwards to figure out which username I should use to generate the hostname I was already using. Attempting to add it now resulted in the doorbell making another CGI call to my fake chime in order to query its feature set, and by mocking that up as well I was able to send back a file containing X-Intercom-Type, X-Intercom-TypeId and X-Intercom-Class fields that made the doorbell happy. I now had a valid JSON file, which cleared up a couple of mysteries. The corrupt name was because the name field isn't supposed to be ASCII - it's base64 encoded UTF16-BE. And the reason I hadn't been able to figure out the JSON format correctly was because it looked something like this:

{"Peripherals":[]{"prefix":{"type":"DoorChime","name":"AEQAbwBvAHIAYwBoAGkAbQBlACAAVABlAHMAdA==","mac":"04f0212c5cca","user":"username","password":"password"}}]}


Note that there's a total of one [ in this file, but two ]s? Awesome. Anyway, I could now modify the config in the app and hit save, and the doorbell would then call out to my fake chime to push config to it. Weirdly, the association between the chime and a specific button on the doorbell is only stored on the chime, not on the doorbell. Further, hitting the doorbell didn't result in any more HTTP traffic to my fake chime. However, it did result in some broadcast UDP traffic being generated. Searching for the port number led me to the Doorbird LAN API and a complete description of the format and encryption mechanism in use. Argon2I is used to turn the first five characters of the chime's password (which is also stored on the doorbell itself) into a 256-bit key, and this is used with ChaCha20 to decrypt the payload. The payload then contains a six character field describing the device sending the event, and then another field describing the event itself. Some more scrappy Python and I could pick up these packets and decrypt them, which showed that they were being sent whenever any event occurred on the doorbell. This explained why there was no storage of the button/chime association on the doorbell itself - the doorbell sends packets for all events, and the chime is responsible for deciding whether to act on them or not.

On closer examination, it turns out that these packets aren't just sent if there's a configured chime. One is sent for each configured user, avoiding the need for a cloud round trip if your phone is on the same network as the doorbell at the time. There was literally no need for me to mimic the chime at all, suitable events were already being sent.

Still. There's a fair amount of WTFery here, ranging from the strstr() based JSON parsing, the invalid JSON, the symmetric encryption that uses device passwords as the key (requiring the doorbell to be aware of the chime's password) and the use of only the first five characters of the password as input to the KDF. It doesn't give me a great deal of confidence in the rest of the device's security, so I'm going to keep playing.

[1] This seems to be to handle the case where the chime isn't on the same network as the doorbell

comment count unavailable comments

06 May, 2021 06:26AM

May 05, 2021

Antoine Beaupré

Building a status page service with Hugo

The Tor Project now has a status page which shows the state of our major services.

You can check status.torproject.org for news about major outages in Tor services, including v3 and v2 onion services, directory authorities, our website (torproject.org), and the check.torproject.org tool. The status page also displays outages related to Tor internal services, like our GitLab instance.

This post documents why we launched status.torproject.org, how the service was built, and how it works.

Why a status page

The first step in setting up a service page was to realize we needed one in the first place. I surveyed internal users at the end of 2020 to see what could be improved, and one of the suggestions that came up was to "document downtimes of one hour or longer" and generally improve communications around monitoring. The latter is still on the sysadmin roadmap, but a status page seemed like a good solution for the former.

We already have two monitoring tools in the sysadmin team: Icinga (a fork of Nagios) and Prometheus, with Grafana dashboards. But those are hard to understand for users. Worse, they also tend to generate false positives, and don't clearly show users which issues are critical.

In the end, a manually curated dashboard provides important usability benefits over an automated system, and all major organisations have one.

Picking the right tool

It wasn't my first foray in status page design. In another life, I had setup a status page using a tool called Cachet. That was already a great improvement over the previous solutions, which were to use first a wiki and then a blog to post updates. But Cachet is a complex Laravel app, which also requires a web browser to update. It generally requires more maintenance than what we'd like, needing silly things like a SQL database and PHP web server.

So when I found cstate, I was pretty excited. It's basically a theme for the Hugo static site generator, which means that it's a set of HTML, CSS, and a sprinkle of Javascript. And being based on Hugo means that the site is generated from a set of Markdown files and the result is just plain HTML that can be hosted on any web server on the planet.

Deployment

At first, I wanted to deploy the site through GitLab CI, but at that time we didn't have GitLab pages set up. Even though we do have GitLab pages set up now, it's not (yet) integrated with our mirroring infrastructure. So, for now, the source is hosted and built in our legacy git and Jenkins services.

It is nice to have the content hosted in a git repository: sysadmins can just edit Markdown in the git repository and push to deploy changes, no web browser required. And it's trivial to setup a local environment to preview changes:

hugo serve --baseURL=http://localhost/
firefox http://localhost:1313/
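Publishing an update is then just editing Markdown and pushing; a sketch of the workflow (the repository URL here is hypothetical, and the issues path follows cstate's conventions):

git clone git@git.example.org:status-site.git
cd status-site
$EDITOR content/issues/2021-05-05-example-outage.md   # a Markdown file with a short front-matter block
git commit -am "Document example outage"
git push    # Jenkins rebuilds and deploys the static site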

Only the sysadmin team and gitolite administrators have access to the repository, at this stage, but that could be improved if necessary. Merge requests can also be issued on the GitLab repository and then pushed by authorized personnel later on, naturally.

Availability

One of the concerns I have is that the site is hosted inside our normal mirror infrastructure. Naturally, if an outage occurs there, the site goes down. But I figured it's a bridge we'll cross when we get there. Because it's so easy to build the site from scratch, it's actually trivial to host a copy of the site on any GitLab server, thanks to the .gitlab-ci.yml file shipped (but not currently used) in the repository. If push comes to shove, we can just publish the site elsewhere and point DNS there.

And, of course, if DNS fails us, then we're in trouble, but that's the situation anyway: we can always register a new domain name for the status page when we need to. It doesn't seem like a priority at the moment.

Comments and feedback are welcome!


This article was first published on the Tor Project Blog.

05 May, 2021 05:54PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Wrote a pomodoro timer in elisp.

Wrote a pomodoro timer in elisp. Why? Because I try to keep my workflow simple, and to keep the simplicity I sometimes need to re-implement stuff. No, this is a lame excuse; I have been living in emacs for the past week and felt like it. However, writing elisp has been challenging, maybe because I haven't done it for a while. I noticed there's lexical-binding, but I didn't quite get it; my lambda isn't getting the function parameter in scope.

05 May, 2021 12:29AM by Junichi Uekawa

May 04, 2021

hackergotchi for Daniel Pocock

Daniel Pocock

Adapting and localizing Tryton and other free, open source accounting software for your country

I previously wrote a comparison of free, open source accounting software. Most of these applications only come with a generic Chart of Accounts and no support for tax reporting. This makes them suitable for personal finances and small volunteer groups. For many freelance workers, consultants and small businesses, it is now essential to have some basic tax reporting.

Tryton is one of the few packages that is now addressing these needs. The Tryton modules directory includes five official localizations with a business-ready chart of accounts and tax codes. A discussion in the forum reveals more countries coming soon.

I had a look at how to add more, starting with Switzerland and I'm sharing my observations here for anybody else who wants to try this. The procedure described here is valid for any accounting software but I give examples with Tryton.

Share your progress, find collaborators

To avoid duplication, it is a really good idea to search the forum and search the bug tracker to see if anybody else is working on the same country. If you are first, you can publish an announcement in each of these places so other people can find you.

Many hands make light work.

Translation of the user interface

Translation of the user interface into your local language is independent from the localization effort. For example, if you have a team of five people sharing the workload, three people could work on translation while two people work on the chart of accounts and tax codes.

People who want to translate Tryton can find details here. Please open a bug asking for your language to be added to the list for contributions. Here is an example of how to file the bug report to begin adding your language.

Research your localization, share the links

Before starting, it is a good idea to find all of the following and create links to them from the forum topic or bug tracker for your country:

  1. Example chart of accounts for your country, preferably in CSV format, see the examples I found for Switzerland
  2. Any key document explaining your chart of accounts, notes are included in the Swiss PDFs above
  3. If your country has VAT, GST or a similar consumption tax, obtain a copy of the VAT return form, Swiss example here.
  4. Any guidance notes that help explain the VAT return form, in the same directory as the last example
  5. if your country has automatic, electronic submission of tax reports, obtain a copy of the specification, for example, an XML XSD schema file and supporting documentation. The Swiss eCH-0217 documents are copied here.

Create Chart of Accounts in a CSV, add columns for Tryton

I created a short script that converts a CSV file into a Tryton XML file. This makes it easy to edit the initial draft Chart of Accounts in a spreadsheet. You need to edit the script to include your CSV filename and list of languages.

Please see all the columns used in the Swiss Chart of Accounts spreadsheet here. The script needs all these columns. You can add the account names in multiple languages; simply add an extra column for each language.

Save or export your spreadsheet as a CSV. Run the script to process it.
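
As an illustration of the approach, here is a minimal sketch of such a conversion (this is not the actual script; the filename, the column layout and the record model are assumptions made for the sketch):

    #!/bin/sh
    # Convert a chart-of-accounts CSV into Tryton XML records.
    CSV=chart_of_accounts.csv    # hypothetical filename

    echo '<?xml version="1.0"?>'
    echo '<tryton><data>'
    # NR > 1 skips the header row; a real script would also handle quoted
    # commas and emit one translated name per language column.
    awk -F, 'NR > 1 {
        printf "  <record model=\"account.account.template\" id=\"account_%s\">\n", $1
        printf "    <field name=\"code\">%s</field>\n", $1
        printf "    <field name=\"name\">%s</field>\n", $2
        print  "  </record>"
    }' "$CSV"
    echo '</data></tryton>'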

Explore the tax code tree for one of the supported countries

It is a good idea to create an empty database using one of the supported countries, for example France, so that you can see what the finished result looks like. Look at the tax code tree on-screen and compare it to a copy of the French VAT return form. Try to understand how the tree hierarchy relates to the form structure.

Here is the Swiss example:

Tryton tax code tree, Switzerland

Draw a tax code tree for your country on paper

This helps you visualize the result you are aiming for. You can see the Swiss VAT form here and you can see the tree drawing here. Compare the tree drawing to the screenshot above.

Notice the Swiss VAT form has three sections and each one is a different subtree. They don't easily combine into a single tree. The circled numbers (1), (2) and (3) in the drawing correspond to the sections of the form.

It is a good idea to share this diagram so other people can see how you designed the tax codes.

Begin creating the tax.xml file

If you look at one of the existing countries, for example, the XML file for French taxes, it may appear quite intimidating. It is 3,629 lines.

If you go about it in the right order then it becomes manageable.

  1. Create the account.tax.group records first. France, Switzerland and the other existing examples have four of those; you can cut and paste them.
  2. Create the account.tax.template records. These describe the rates, usually percentages, for each tax level. Create them in groups like the French and Swiss examples. Looking at the examples, you will see some old tax rates are there with the date for changing the rate. You don't need to create records for past tax rates if more than a year has passed since they changed. You do need to create entries for any new rates that will commence in the near future.
  3. Create the tree structure using the account.tax.code.template records. In the examples, you will see each account.tax.code.template has some account.tax.code.line.template. Don't create those yet, just create account.tax.code.template to describe the tree structure.
  4. At this point, you could load the XML into Tryton to see how the tree structure looks on screen.
  5. Finally, add in the account.tax.code.line.template records for each account.tax.code.template. This is the most tedious part of the process. You can use <field name="amount">base</field> to sum the pre-tax values of sales and use <field name="amount">tax</field> to sum the tax amounts from invoices. A sketch follows this list.
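
As a rough illustration of steps 3 and 5, a single tax code with one line might look something like the following. This is a sketch only: apart from the amount values quoted above, the field names, ids and references are illustrative rather than copied from the real files.

    <record model="account.tax.code.template" id="tax_code_sales_std">
        <field name="name">Sales at standard rate (example)</field>
        <field name="parent" ref="tax_code_root"/>
    </record>
    <record model="account.tax.code.line.template" id="tax_code_sales_std_line">
        <field name="code" ref="tax_code_sales_std"/>
        <field name="tax" ref="tax_standard_rate"/>
        <field name="operator">+</field>
        <field name="amount">tax</field>
    </record>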

That is the full progression: from the original document, through the tree drawing and the XML, to the final on-screen view.

04 May, 2021 10:30PM

hackergotchi for Steve Kemp

Steve Kemp

Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner.

I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years.

The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support.

That then became a pass-plugin:

  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/steve@steve.org.uk.OLD.gpg
  ..
  4 years, 8 months ago Domains/Domain.fi.gpg
  4 years, 7 months ago Mobile/dna.fi.gpg
  ..
  1 year, 3 months ago Websites/netlify.com.gpg
  1 year ago Financial/ukko.fi.gpg
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg
  ..

Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form; each entry looks like this:

   username: steve
   password: secrit
   site: http://example.com/login/blah/
   # Extra data

The keys vary: sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way.

Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:

     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"

That's ideal, because now I can source that from within a shell:

   $ source <(pass env internal/cli/tool-name)
   $ echo $username
   steve

Or I could directly execute the tool I want:

   $ pass env --exec=$HOME/ldap/ldap.py internal/cli/tool-name
   you are steve
   ..

TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ldap.py --username=steve --password=secrit")
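
A minimal sketch of that transformation (this is not the actual plugin code, just the "key: value" to export idea, with the entry name as the first argument):

    pass show "$1" | while IFS= read -r line; do
        case $line in
            \#*) ;;                               # skip comment lines
            [A-Za-z_]*:\ *)
                # note: values containing a double quote would need escaping
                printf 'export %s="%s"\n' "${line%%:*}" "${line#*: }" ;;
        esac
    done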

04 May, 2021 03:27PM

hackergotchi for Erich Schubert

Erich Schubert

Machine Learning Lecture Recordings

I have uploaded most of my “Machine Learning” lecture to YouTube.

The slides are in English, but the audio is in German.

Some very basic contents (e.g., a demo of standard k-means clustering) were left out of this advanced class, and instead only a link to recordings from an earlier class was given. In this class, I wanted to focus on the improved (accelerated) algorithms instead. These are not included here (yet). I believe there are some contents covered in this class that you will find nowhere else (yet).

The first unit is pretty long (I did not split it further yet). The later units are shorter recordings.

ML F1: Principles in Machine Learning

ML F2/F3: Correlation does not Imply Causation & Multiple Testing Problem

ML F4: Overfitting – Überanpassung

ML F5: Fluch der Dimensionalität – Curse of Dimensionality

ML F6: Intrinsische Dimensionalität – Intrinsic Dimensionality

ML F7: Distanzfunktionen und Ähnlichkeitsfunktionen

ML L1: Einführung in die Klassifikation

ML L2: Evaluation und Wahl von Klassifikatoren

ML L3: Bayes-Klassifikatoren

ML L4: Nächste-Nachbarn Klassifikation

ML L5: Nächste Nachbarn und Kerndichteschätzung

ML L6: Lernen von Entscheidungsbäumen

ML L7: Splitkriterien bei Entscheidungsbäumen

ML L8: Ensembles und Meta-Learning: Random Forests und Gradient Boosting

ML L9: Support Vector Machinen - Motivation

ML L10: Affine Hyperebenen und Skalarprodukte – Geometrie für SVMs

ML L11: Maximum Margin Hyperplane – die “breitest mögliche Straße”

ML L12: Training Support Vector Machines

ML L13: Non-linear SVM and the Kernel Trick

ML L14: SVM – Extensions and Conclusions

ML L15: Motivation of Neural Networks

ML L16: Threshold Logic Units

ML L17: General Artificial Neural Networks

ML L18: Learning Neural Networks with Backpropagation

ML L19: Deep Neural Networks

ML L20: Convolutional Neural Networks

ML L21: Recurrent Neural Networks and LSTM

ML L22: Conclusion Classification

ML U1: Einleitung Clusteranalyse

ML U2: Hierarchisches Clustering

ML U3: Accelerating HAC mit Anderberg’s Algorithmus

ML U4: k-Means Clustering

ML U5: Accelerating k-Means Clustering

ML U6: Limitations of k-Means Clustering

ML U7: Extensions of k-Means Clustering

ML U8: Partitioning Around Medoids (k-Medoids)

ML U9: Gaussian Mixture Modeling (EM Clustering)

ML U10: Gaussian Mixture Modeling Demo

ML U11: BIRCH and BETULA Clustering

ML U12: Motivation Density-Based Clustering (DBSCAN)

ML U13: Density-reachable and density-connected (DBSCAN Clustering)

ML U14: DBSCAN Clustering

ML U15: Parameterization of DBSCAN

ML U16: Extensions and Variations of DBSCAN Clustering

ML U17: OPTICS Clustering

ML U18: Cluster Extraction from OPTICS Plots

ML U19: Understanding the OPTICS Cluster Order

ML U20: Spectral Clustering

ML U21: Biclustering and Subspace Clustering

ML U22: Further Clustering Approaches

04 May, 2021 01:18PM by Erich Schubert

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

NSF CAREER Award

In exciting professional news, it was recently announced that I got a National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research to describe how peer production communities tend to follow common patterns of growth and decline, as well as studies that show that many open communities become increasingly closed in ways that deter lots of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the work on the CAREER, I’ll be continuing that project with Aaron and I’ll also be working to test that explanation empirically and to develop new strategies about what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others in the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our work in ways that are useful, as well as doing a better job of listening to and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is a direct result of the work I did at the Center for Advanced Studies in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office, talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.

04 May, 2021 02:29AM by Benjamin Mako Hill

May 03, 2021

Russell Coker

DNS, Lots of IPs, and Postal

I decided to start work on repeating the tests for my 2006 OSDC paper on Benchmarking Mail Relays [1] and discovering how the last 15 years of hardware developments have changed things. There have been software changes in that time too, but nothing that compares with going from single core 32bit systems with less than 1G of RAM and 60G IDE disks to multi-core 64bit systems with 128G of RAM and SSDs. As an aside, the hardware I used in 2006 wasn’t cutting edge and the hardware I’m using now isn’t either; in both cases it’s systems I bought second hand for under $1000. Pedants can think of this as comparing 2004 and 2018 hardware.

BIND

I decided to make some changes to reflect the increased hardware capacity and use 2560 domains and IP addresses, which gave the following errors as well as a startup time of a minute on a system with two E5-2620 CPUs.

May  2 16:38:37 server named[7372]: listening on IPv4 interface lo, 127.0.0.1#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.2.45#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.1#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.2#53
May  2 16:38:37 server named[7372]: listening on IPv4 interface eno4, 10.0.40.3#53
[...]
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.47.0#53
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.48.0#53
May  2 16:39:33 server named[7372]: listening on IPv4 interface eno4, 10.0.49.0#53
May  2 16:39:33 server named[7372]: listening on IPv6 interface lo, ::1#53
[...]
May  2 16:39:36 server named[7372]: zone localhost/IN: loaded serial 2
May  2 16:39:36 server named[7372]: all zones loaded
May  2 16:39:36 server named[7372]: running
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)
May  2 16:39:36 server named[7372]: managed-keys-zone: Unable to fetch DNSKEY set '.': not enough free resources
May  2 16:39:36 server named[7372]: socket: file descriptor exceeds limit (123273/21000)

The first thing I noticed is that a default configuration of BIND with 2560 local IPs (when just running in the default recursive mode) takes a minute to start and needs to open over 100,000 file handles. BIND also had some errors in that configuration which led to it not accepting shutdown requests. I filed Debian bug report #987927 [2] about this. One way of dealing with the errors in this situation on Debian is to edit /etc/default/named and put in the following line to allow BIND access to many more file handles:

OPTIONS="-u bind -S 150000"

But the best thing to do for BIND when there are many IP addresses that aren’t going to be used for DNS service is to put a directive like the following in the BIND configuration to specify the IP address or addresses that are used for the DNS service:

listen-on { 10.0.2.45; };

I have just added the listen-on and listen-on-v6 directives to one of my servers with about a dozen IP addresses. While 2560 IP addresses is an unusual corner case it’s not uncommon to have dozens of addresses on one system.
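
For reference, on Debian these directives live in /etc/bind/named.conf.options; a minimal sketch (the IPv4 address is the one from the logs above, the IPv6 entry is a placeholder):

    options {
            // only provide DNS service on the addresses meant for it
            listen-on { 10.0.2.45; };
            listen-on-v6 { ::1; };
            // ... rest of the existing options ...
    };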

dig

When doing tests of Postfix for relaying mail I noticed that mail was being deferred with DNS problems (the error was “Host or domain name not found. Name service error for name=a838.example.com type=MX: Host not found, try again”). I tested the DNS lookups with dig, which failed with errors like the following:

dig -t mx a704.example.com
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument
socket.c:1740: internal_send: 10.0.2.45#53: Invalid argument

; <<>> DiG 9.16.13-Debian <<>> -t mx a704.example.com
;; global options: +cmd
;; connection timed out; no servers could be reached

Here is a sample of the strace output from tracing dig:

bind(20, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("0.0.0.0")}, 16) = 0
recvmsg(20, {msg_namelen=128}, 0)       = -1 EAGAIN (Resource temporarily unavailable)
write(4, "\24\0\0\0\375\377\377\377", 8) = 8
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.2.45")}, msg_namelen=16, msg_iov=[{iov_base="86\1 \0\1\0\0\0\0\0\1\4a704\7example\3com\0\0\17\0\1\0\0)\20\0\0\0\0\0\0\f\0\n\0\10's\367\265\16bx\354", iov_len=57}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: 10.0.2.45#53: Invalid argument", 45) = 45
write(2, "\n", 1)                       = 1
futex(0x7f5a80696084, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x7f5a80696010, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f5a8069809c, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f5a80698020, FUTEX_WAKE_PRIVATE, 1) = 1
sendmsg(20, {msg_name={sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("10.0.2.45")}, msg_namelen=16, msg_iov=[{iov_base="86\1 \0\1\0\0\0\0\0\1\4a704\7example\3com\0\0\17\0\1\0\0)\20\0\0\0\0\0\0\f\0\n\0\10's\367\265\16bx\354", iov_len=57}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = -1 EINVAL (Invalid argument)
write(2, "socket.c:1740: ", 15)         = 15
write(2, "internal_send: 10.0.2.45#53: Invalid argument", 45) = 45
write(2, "\n", 1)

Ubuntu bug #1702726 claims that an insufficient ARP cache was the cause of dig problems [3]. At the time I encountered the dig problems I was seeing lots of kernel error messages “neighbour: arp_cache: neighbor table overflow” which I solved by putting the following in /etc/sysctl.d/mine.conf:

net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024
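
Incidentally, settings like these can be applied immediately, without a reboot:

    sysctl -p /etc/sysctl.d/mine.conf    # or "sysctl --system" to re-read all locations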

Making that change (and having rebooted because I didn’t need to run the server overnight) didn’t entirely solve the problems. I have seen some DNS errors from Postfix since then but they are less common than before. When they happened I didn’t have that error from dig. At this stage I’m not certain that the ARP change fixed the dig problem although it seems likely (it’s always difficult to be certain that you have solved a race condition instead of made it less common or just accidentally changed something else to conceal it). But it is clearly a good thing to have a large enough ARP cache so the above change is probably the right thing for most people (with the possibility of changing the numbers according to the required scale). Also people having that dig error should probably check their kernel message log, if the ARP cache isn’t the cause then some other kernel networking issue might be related.

Preliminary Results

With Postfix I’m seeing around 24,000 messages relayed per minute with more than 60% CPU time idle. I’m not sure exactly how to count idle time when there are 12 CPU cores and 24 hyper-threads, as having only 1 process scheduled for each pair of hyper-threads on a core is very different to having half the CPU cores unused. I ran my script to disable hyper-threads by telling the Linux kernel to disable each processor core that has the same core ID as another; it was buggy and disabled the second CPU altogether (better than finding this out on a production server). Going from 24 hyper-threads of 2 CPUs to 6 non-HT cores of a single CPU didn’t change the throughput and the idle time went to about 30%, so I have possibly halved the CPU capacity for these tasks by disabling all hyper-threads and one entire CPU, which is surprising given that I theoretically reduced the CPU power by 75%. I think my focus now has to be on hyper-threading optimisation.
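
For the curious, here is a minimal sketch of the idea (not the buggy script mentioned above); keying on the physical package as well as the core ID is what avoids disabling an entire CPU:

    # run as root: take offline every CPU thread whose (package, core)
    # pair has already been seen, i.e. the second hyper-thread of each core
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        [ -f "$cpu/online" ] || continue    # cpu0 cannot be taken offline
        key="$(cat "$cpu/topology/physical_package_id")-$(cat "$cpu/topology/core_id")"
        case " $seen " in
            *" $key "*) echo 0 > "$cpu/online" ;;
            *) seen="$seen $key" ;;
        esac
    done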

Since 2006 the performance has gone from ~20 messages per minute on relatively commodity hardware to 24,000 messages per minute on server equipment that is uncommon for home use but within range of home desktop PCs. I think that a typical desktop PC with a similar speed CPU, 32G of RAM and SSD storage would give the same performance. Moore’s Law (that transistor count doubles approximately every 2 years) is often misquoted as performance doubling every 2 years. In this case, more than 1024* the performance over 15 years means performance doubled roughly every 18 months. Probably most of that is due to SATA SSDs massively outperforming IDE hard drives, but it’s still impressive.

Notes

I’ve been using example.com for test purposes for a long time, but RFC2606 specifies .test, .example, and .invalid as reserved top level domains for such things. On the next iteration I’ll change my scripts to use .test.

My current test setup has a KVM virtual machine running my bhm program to receive mail, which is taking between 20% and 50% of a CPU core in my tests so far. While that is happening the kvm process is reported as taking between 60% and 200% of a CPU core, so kvm takes as much as 4* the CPU of the guest due to the virtual networking overhead – even though I’m using the virtio-net-pci driver (the most efficient form of KVM networking for emulating a regular ethernet card). I’ve also seen this in production with a virtual machine running a Tor relay node.

I’ve fixed a bug where Postal would try to send the SMTP quit command after encountering a TCP error which would cause an infinite loop and SEGV.

03 May, 2021 04:54AM by etbe

Russ Allbery

Review: The Voyage of the Dawn Treader

Review: The Voyage of the Dawn Treader, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #3
Publisher: Collier Books
Copyright: 1952
Printing: 1978
ISBN: 0-02-044260-2
Format: Mass market
Pages: 216

There was a boy named Eustace Clarence Scrubb and he almost deserved it.

The Voyage of the Dawn Treader is the third Narnia book in original publication order (see my review of The Lion, the Witch and the Wardrobe for more about reading order). You could arguably start reading here; there are a lot of references to the previous books, but mostly as background material, and I don't think any of it is vital. If you wanted to sample a single Narnia book to see if you'd get along with the series, this is the one I'd recommend.

Since I was a kid, The Voyage of the Dawn Treader has held the spot of my favorite of the series. I'm happy to report that it still holds up. Apart from one bit that didn't age well (more on that below), this is the book where the story and the world-building come together, in part because Lewis picks a plot shape that works with what he wants to write about.

The younger two Pevensie children, Edmund and Lucy, are spending the summer with Uncle Harold and Aunt Alberta because their parents are in America. That means spending the summer with their cousin Eustace. C.S. Lewis had strong opinions about child-raising that crop up here and there in his books, and Harold and Alberta are his example of everything he dislikes: caricatured progressive, "scientific" parents who don't believe in fiction or mess or vices. Eustace therefore starts the book as a terror, a whiny bully who has only read boring practical books and is constantly scoffing at the Pevensies and making fun of their stories of Narnia. He is therefore entirely unprepared when the painting of a ship in the guest bedroom turns into a portal to Narnia and dumps the three children into the middle of the ocean.

Thankfully, they're in the middle of the ocean near the ship in the painting. That ship is the Dawn Treader, and onboard is Caspian from the previous book, now king of Narnia. He has (improbably) sorted things out in his kingdom and is now on a sea voyage to find seven honorable Telmarine lords who left Narnia while his uncle was usurping the throne. They're already days away from land, headed towards the Lone Islands and, beyond that, into uncharted seas.

MAJOR SPOILERS BELOW.

Obviously, Eustace gets a redemption arc, which is roughly the first half of this book. It's not a bad arc, but I am always happy when it's over. Lewis tries so hard to make Eustace insufferable that it becomes tedious. As an indoor kid who would not consider being dumped on a primitive sailing ship to be a grand adventure, I wanted to have more sympathy for him than the book would allow.

The other problem with Eustace's initial character is that Lewis wants it to stem from "modern" parenting and not reading the right sort of books, but I don't buy it. I've known kids whose parents didn't believe in fiction, and they didn't act anything like this (and kids pick up a lot more via osmosis regardless of parenting than Lewis seems to realize). What Eustace acts like instead is an entitled, arrogant rich kid who is used to the world revolving around him, and it's fascinating to me how Lewis ignores class to focus on educational philosophy.

The best part of Eustace's story is Reepicheep, which is just setup for Reepicheep becoming the best part of The Voyage of the Dawn Treader.

Reepicheep, the leader of Narnia's talking mice, first appears in Prince Caspian, but there he's mostly played for laughs: the absurdly brave and dashing mouse who rushes into every fight he sees. In this book, he comes into his own as the courage and occasionally the moral conscience of the party. Caspian wants to explore and to find the lords of his past, the Pevensie kids want to have a sea adventure, and Eustace is in this book to have a redemption arc, but Reepicheep is the driving force at the heart of the voyage. He's going to Aslan's country beyond the sea, armed with a nursemaid's song about his destiny and a determination to be his best and most honorable self every step of the way, and nothing is going to stop him.

Eustace, of course, takes an immediate dislike to a talking rodent. Reepicheep, in return, is the least interested of anyone on the ship in tolerating Eustace's obnoxious behavior and would be quite happy to duel him. But when Eustace is turned into a dragon, Reepicheep is the one who spends hours with him, telling him stories and ensuring he's not alone. It's beautifully handled, and my only complaint is that Lewis doesn't do enough with the Eustace and Reepicheep friendship (or indeed with Eustace at all) for the rest of the book.

After Eustace's restoration and a few other relatively short incidents comes the second long section of the book and the part that didn't age well: the island of the Dufflepuds. It's a shame because the setup is wonderful: a cultivated island in the middle of nowhere with no one in sight, mysterious pounding sounds and voices, the fun of trying to figure out just what these invisible creatures could possibly be, and of course Lucy's foray into the second floor of a house, braving the lair of a magician to find and read one of the best books of magic in fantasy.

Everything about how Lewis sets this scene is so well done. The kids are coming from an encounter with a sea serpent and a horrifically dangerous magic island and land on this scene of eerily normal domesticity. The most dangerous excursion is Lucy going upstairs in a brightly lit house with soft carpet in the middle of the day. And yet it's incredibly tense because Lewis knows exactly how to put you in Lucy's head, right down to having to stand with her back to an open door to read the book.

And that book! The pages only turn forward, the spells are beautifully illustrated, and the sense of temptation is palpable. Lucy reading the eavesdropping spell is one of the more memorable bits in this series, at least for me, and makes a surprisingly subtle moral point about the practical reasons why invading other people's privacy is unwise and can just make you miserable. And then, when Lucy reads the visibility spell that was her goal, there's this exchange, which is pure C.S. Lewis:

"Oh Aslan," said she, "it was kind of you to come."

"I have been here all the time," said he, "but you have just made me visible."

"Aslan!" said Lucy almost a little reproachfully. "Don't make fun of me. As if anything I could do would make you visible!"

"It did," said Aslan. "Did you think I wouldn't obey my own rules?"

I love the subtlety of what's happening here: the way that Lucy is much more powerful than she thinks she is, but only because Aslan decided to make the rules that way and chooses to follow his own rules, making himself vulnerable in a fascinating way. The best part is that Lewis never belabors points like this; the characters immediately move on to talk about other things, and no one feels obligated to explain.

But, unfortunately, along with the explanation of the thumping and the magician, we learn that the Dufflepuds are (remarkably dim-witted) dwarfs, the magician is their guardian (put there by Aslan, no less!), he transformed them into rather absurd shapes that they hate, and all of this is played for laughs. Once you notice that these are sentient creatures being treated essentially like pets (and physically transformed against their will), the level of paternalistic colonialism going on here is very off-putting. It's even worse that the Dufflepuds are memorably funny (washing dishes before dinner to save time afterwards!) and are arguably too dim to manage on their own, because Lewis made the authorial choice to write them that way. The "white man's burden" feeling is very strong.

And Lewis could have made other choices! Coriakin the magician is a fascinating and somewhat morally ambiguous character. We learn later in the book that he's a star and his presence on the island is a punishment of sorts, leading to one of my other favorite bits of theology in this book:

"My son," said Ramandu, "it is not for you, a son of Adam, to know what faults a star can commit."

Lewis could have kept most of the setup, kept the delightfully silly things the Dufflepuds believe, changed who was responsible for their transformation, and given Coriakin a less authoritarian role, and the story would have been so much stronger for it.

After this, the story gets stranger and wilder, and it's in the last part that I think the true magic of this book lies. The entirety of The Voyage of the Dawn Treader is a progression from a relatively mundane sea voyage to something more awe-inspiring. The last few chapters are a tour de force of wonder: rejuvenating stars, sunbirds, the Witch's stone knife, undersea kingdoms, a sea of lilies, a wall of water, the cliffs of Aslan's country, and the literal end of the world. Lewis does it without much conflict, with sparse description in a very few pages, and with beautifully memorable touches like the quality of the light and the hush that falls over the ship.

This is the part of Narnia that I point to and wonder why I don't see more emulation (although I should note that it is arguably an immram). Tolkien-style fantasy, with dwarfs and elves and magic rings and great battles, is everywhere, but I can't think of many examples of this sense of awe and discovery without great battles and detailed explanations. Or of characters like Reepicheep, who gets one of the best lines of the series:

"My own plans are made. While I can, I sail east in the Dawn Treader. When she fails me, I paddle east in my coracle. When she sinks, I shall swim east with my four paws. And when I can swim no longer, if I have not reached Aslan's country, or shot over the edge of the world in some vast cataract, I shall sink with my nose to the sunrise and Peepiceek shall be the head of the talking mice in Narnia."

The last section of The Voyage of the Dawn Treader is one of my favorite endings of any book precisely because it's so different than the typical ending of a novel. The final return to England is always a bit disappointing in this series, but it's very short and is preceded by so much wonder that I don't mind. Aslan does appear to the kids as a lamb at the very end of the world, making Lewis's intended Christian context a bit more obvious, but even that isn't belabored, just left there for those who recognize the symbolism to notice.

I was curious during this re-read to understand why The Voyage of the Dawn Treader is so much better than the first two books in the series. I think it's primarily due to two things: pacing, and a story structure that's better aligned with what Lewis wants to write about.

For pacing, both The Lion, the Witch and the Wardrobe and Prince Caspian have surprisingly long setups for short books. In The Voyage of the Dawn Treader, by contrast, it takes only 35 pages to get the kids in Narnia, introduce all the characters, tour the ship, learn why Caspian is off on a sea voyage, establish where this book fits in the Narnian timeline, and have the kids be captured by slavers. None of the Narnia books are exactly slow, but Dawn Treader is the first book of the series that feels like it knows exactly where it's going and isn't wasting time getting there.

The other structural success of this book is that it's a semi-episodic adventure, which means Lewis can stop trying to write about battles and political changes whose details he's clearly not interested in and instead focus wholeheartedly on sense-of-wonder exploration. The island-hopping structure lets Lewis play with ideas and drop them before they wear out their welcome. And the lack of major historical events also means that Aslan doesn't have to come in to resolve everything and instead can play the role of guardian angel.

I think The Voyage of the Dawn Treader has the most compelling portrayal of Aslan in the series. He doesn't make decisions for the kids or tell them directly what to do the way he did in the previous two books. Instead, he shows up whenever they're about to make a dreadful mistake and does just enough to get them to make a better decision. Some readers may find this takes too much of the tension out of the book, but I have always appreciated it. It lets nervous child readers enjoy the adventures while knowing that Aslan will keep anything too bad from happening. He plays the role of a protective but non-interfering parent in a genre that usually doesn't have parents because they would intervene to prevent adventures.

I enjoyed this book just as much as I remembered enjoying it during my childhood re-reads. Still the best book of the series.

This, as with both The Lion, the Witch and the Wardrobe and Prince Caspian, was originally intended to be the last book of the series. That, of course, turned out to not be the case, and The Voyage of the Dawn Treader is followed (in both chronological and original publication order) by The Silver Chair.

Rating: 9 out of 10

03 May, 2021 03:03AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

First email from my new machine.

First email from my new machine. I didn't have my desktop Debian machine for a long time and now I have one set up. I rewrote my procmail/formail recipe, especially the part where I had written a complex shell script to generate my folder rules. I rewrote those 4 lines of shell script as 200 lines of Go, with unit tests. The part that took the longest was finding out how to write unit tests in Go, and how to properly use go.mod to be able to import packages from subdirectories. I guess that's part of the fun.
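
For reference, a minimal sketch of the go.mod part (the module path and package name here are made up for illustration): initialise the module once at the repository root, after which packages in subdirectories can be imported by path and all their tests run at once.

    go mod init example.org/mailfilter   # once, at the repository root
    go test ./...                        # run the unit tests of every subdirectory

A package in a rules/ subdirectory is then imported as example.org/mailfilter/rules.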

03 May, 2021 12:07AM by Junichi Uekawa

May 02, 2021

hackergotchi for Santiago García Mantiñán

Santiago García Mantiñán

Windows and Linux software Raid dual boot BIOS machine

One could think that nowadays having a machine with software raid doing dual boot should be easy, but... my experience showed that it is not that easy.

Having a Windows machine do software raid is easy (I still don't understand why it doesn't really work like it should, but that is because I'm used to Linux software raid), and having software raid on Linux is also really easy. But doing so on a BIOS booted machine, on mbr disks (as Windows doesn't allow GPT on BIOS) is quite a pain.

The problem is how Windows does all this, with its dynamic disks. What happens with this is that you go from a partitioning like this:

/dev/sda1 *       2048    206847    204800   100M  7 HPFS/NTFS/exFAT
/dev/sda2       206848 312580095 312373248   149G  7 HPFS/NTFS/exFAT
/dev/sda3    312580096 313165823    585728   286M 83 Linux
/dev/sda4    313165824 957698047 644532224 307,3G fd Linux raid autodetect

To something like this:

/dev/sda1           63      2047      1985 992,5K 42 SFS
/dev/sda2 *       2048    206847    204800   100M 42 SFS
/dev/sda3       206848 312580095 312373248   149G 42 SFS
/dev/sda4    312580096 976769006 664188911 316,7G 42 SFS

These are the physical partitions as seen by fdisk; the logical partitions are still like before, of course, so there is no problem accessing them under Linux or Windows. What happens here is that Windows uses the first sectors for its dynamic disks stuff, so... you cannot use those to write grub info there :-(

So... the solution I found here was to install Debian's mbr and make it boot grub, but then... where do I store grub's info? To do this I'm using a btrfs /boot which is on partition 3, as btrfs has room for embedding grub's info, and I set up the software raid with ext4 on partition 4, like you can see in my first partition dump. Of course, you can have just btrfs with its own software raid; then you don't need the fourth partition at all.

There are however some caveats to doing all this. What I found was that I had to install grub manually using grub-install --no-floppy on /dev/sda3 and /dev/sdb3, as Debian's grub refused to give me the option to install there. Several warnings came up as a result, but things work ok anyway.
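
Spelled out, the manual installation was along these lines (same device names as in the partition dumps above):

    grub-install --no-floppy /dev/sda3
    grub-install --no-floppy /dev/sdb3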

One more warning: I did all this on Buster, but it looks like Grub 2.04, which is included in Bullseye, has gotten a bit bigger, so at least on my partitions there was no room for it and I had to leave the old Buster grub around for now. If anybody has any ideas on how to solve this... they are welcome.

02 May, 2021 10:51PM by Santiago García Mantiñán (noreply@blogger.com)

May 01, 2021

Ingo Juergensmann

The Fediverse – What About Resources?

Today is May 1st. In about two weeks, on May 15th, WhatsApp will put its changed Terms of Service into action, and if you don’t accept the new rules you won’t be able to use WhatsApp any longer.

Earlier this year there was already a strong movement away from WhatsApp towards other solutions: mainly to Signal, but some other services like the Fediverse also gained new users, and XMPP got its fair share of new users as well.

So, what to do about the WhatsApp ToS change then? Shall we all go to Signal? Surely not. Signal is another vendor lock-in silo. It’s centralistic, and recent development plans include a crypto payment system. Even Bruce Schneier thinks that this is a bad idea.

Other alternatives often named include Matrix/Element or XMPP. Today, Don di Dislessia in the (German) Fediverse asked about the power consumption of the Fediverse, incl. Matrix and XMPP, and how much renewable energy is being used. Of course there is no easy answer to this question, but I tried my best – at least for my own server.

Here are my findings and conclusions…

Power

screenshot showing power consumption of server

Currently my server in the colocation is using about 93W on average, with a 6c Xeon E5-2630L, 128 GB RAM, 4x 2 TB WD Red + 1 Samsung 960pro NVMe. The server is 7 years old. When I started with that server the power consumption was about 75W, but back then there were far fewer users on the server. So, 20W more over the years…

Users

I’ve been running my Friendica node on Nerdica.net since 2013. Over the years it became one of the largest Friendica servers in the Fediverse; for some time it was the largest one. It currently has around 700 total users and 180 monthly active users. My Mastodon instance on Nerdculture.de has about 1000 total users and about 300 monthly active users.

Since last year I have also been running a Matrix-Synapse server. Although I invited my family, I’m in fact the only active user on that server, and I have joined some channels.

My XMPP server is even older than my Friendica node. For a long time I had maybe 20 users. Since I set up a new website and added some domains like hookipa.net and xmpp.social, the user count increased; currently I have about 130 users on those two domains and maybe 50 monthly active users. Also note that all my Friendica and Mastodon users can use XMPP with their accounts, but they aren’t counted the same way as “native” users on ejabberd, because the auth backend is different.

So, let’s assume I do have like 2000 total users and 500 monthly active users.

CPU, Database Sizes and Disk I/O

Let’s have a look at how many resources are being used by those users.

Database Sizes:

  • Friendica (MariaDB): 31 GB for 700 users
  • Mastodon (PostgreSQL): 15 GB for 1000 users
  • Matrix-Synapse (PostgreSQL): 5 GB for 1 user
  • XMPP (PostgreSQL): 0.4 GB for 200 users

CPU times according to xentop:

  • Webserver VM (Matrix, Friendica & Mastodon): 13410130 s / 130%
  • XMPP VM: 944275 s / 5.4%

Friendica uses the largest database and causes the most disk I/O on the NVMe, but it’s difficult to separate the load between the web apps on the webserver. So, let’s have a quick look at a simple metric (see the sketch after the list):

Number of lines in webserver logfile:

  • Friendica: 11575 lines
  • Matrix: 8174 lines
  • Mastodon: 3212 lines
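
For reference, counts like these can be produced with wc -l, assuming one access logfile per vhost (the paths here are made up for illustration):

    wc -l /var/log/nginx/friendica.access.log \
          /var/log/nginx/matrix.access.log \
          /var/log/nginx/mastodon.access.log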

These metrics correlate to some degree with the database I/O load, at least for Friendica. If you take into account the number of users, things look quite different.

Conclusion

Overall, my personal impression is that Matrix is really bad in regard to resource usage. Given that I’m the only active user, it uses an exceptional amount of resources. When you also consider that Matrix uses a distributed database for its chat rooms, you can assume that the resource usage is multiplied across the network, making things even worse.

Friendica is using a large database and many disk accesses, but has a fairly large user base, so it seems ok, but of course should be improved.

Mastodon seems to be quite good, considering the database size, the number of log lines and the user count.

XMPP turns out to be the most efficient contestant in this comparison: it uses far fewer CPU cycles and causes much less database disk I/O.

Of course, Mastodon/Friendica are different services than XMPP or Matrix. So, coming back to the initial question about alternatives to WhatsApp, the answer for me is: you should prefer XMPP over Matrix, if only for reasons of saving resources and thus reducing power consumption. Less power consumption also means a smaller ecological footprint and fewer CO2 emissions for your communication with your family and friends.

XMPP is surely not the perfect replacement for WhatsApp, but I think it is the best thing to recommend. As said above, I don’t think that Signal is a viable option. It’s just another proprietary silo with all the problems that come with it. Matrix is a resource hog and not a messenger but an MS Teams replacement. Element as the main Matrix client is laggy and not multi-account/multi-server capable. Other Matrix clients do support multiple accounts but are not as feature-complete as Element. In the end the Matrix ecosystem will suffer from the same issues XMPP already suffered from a decade ago; XMPP, though, has learned to deal with them.

XMPP has also progressed quickly in recent years and has solved many of the problems people are still complaining about. Sure, there are still some open issues. The situation on iOS is still not as good as on Android with Conversations, but it is fairly close to it.

There are many efforts to improve XMPP. There is Quicksy IM, a service that uses your phone number as Jabber ID/JID and is thus comparable to Signal, which also uses phone numbers as unique identifiers; but Quicksy is compatible with XMPP standards. Snikket is another new XMPP ecosystem aiming at smaller groups hosting their own server by simply installing a Docker container and setting up some basic SRV records in the DNS. Or there is Mailcow, a Docker-based mailserver setup that added an XMPP server as well, so you can have the same mail and XMPP address. Snikket even got EU-based funding for implementing XMPP Account Portability, which will improve the decentralization even further. Additionally, XMPP helps vaccination in Canada and the USA with the vaxbot by Monal.

Be smart and use ecofriendly infrastructure.

01 May, 2021 03:50PM by ij

April 30, 2021

Paul Wise

FLOSS Activities April 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian: restart service killed by OOM killer, revert mirror redirection
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The flower/sptag work was sponsored by my employer. All other work was done on a volunteer basis.

30 April, 2021 08:51PM

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in April 2021

Here is my monthly update covering what I have been doing in the free software world during April 2021 (previous month):

§

Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Updated the main Reproducible Builds website and documentation:

    • Highlight our mailing list on the Contribute page. [...]
    • Add a noun (and drop an unnecessary full-stop) on the landing page. [...][...]
    • Correct a reference to the date metadata attribute on reports, restoring the display of months on the homepage. [...]
    • Correct a typo of "instalment" within a previous news entry. [...]
    • Added a conspicuous "draft" banner to unpublished blog posts in order to match the report draft banner. [...]
  • I also made the following changes to diffoscope, including uploading versions 172 and 173 to Debian:

    • Add support for showing annotations in PDF files. (#249)
    • Move to the assert_diff helper in test_pdf.py. [...]


§

Debian

  • redis (5:6.2.2-1) (to experimental) — New upstream release.

  • python-django:

    • 2.2.20-1 — New upstream security release.
    • 3.2-1 (to experimental) — New major upstream release (release notes).
  • hiredis (1.0.0-2) — Build with SSL/TLS support (#987114), and overhaul various aspects of the packaging.

  • mtools (4.0.27-1) — New upstream release.


Debian Long Term Support (LTS)

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project:

  • Investigated and triaged avahi (CVE-2021-3468), exiv2 (CVE-2021-3482), file-roller (CVE-2020-36314), fluidsynth (CVE-2021-28421), gnuchess (CVE-2021-30184), gpac (CVE-2021-28300), imagemagick (CVE-2021-20309, CVE-2021-20243), ircii (CVE-2021-29376), jetty9 (CVE-2021-28163), libcaca (CVE-2021-30498, CVE-2021-30499), libjs-handlebars, libpano13, libpodofo (CVE-2021-30469, CVE-2021-30470, CVE-2021-30471, CVE-2021-30472), mediawiki, mpv (CVE-2021-30145), nettle (CVE-2021-20305), nginx (CVE-2020-36309), nim (CVE-2021-21372, CVE-2021-21373, CVE-2021-21374), node-glob-parent (CVE-2020-28469), openexr (CVE-2021-3474), python-django-registration (CVE-2021-21416), qt4-x11 (CVE-2021-3481), qtsvg-opensource-src (CVE-2021-3481), ruby-kramdown, scrollz (CVE-2021-29376), syncthing (CVE-2021-21404), thunderbird (CVE-2021-23991, CVE-2021-23992, CVE-2021-23993) & wordpress (CVE-2021-29447).

  • Issued DLA 2620-1 to address a cross-site scripting (XSS) vulnerability in python-bleach, a whitelist-based HTML sanitisation library.

  • Issued DLA 2622-1 and ELA 402-1 as it was discovered that there was a potential directory traversal issue in Django, the popular Python-based web development framework. The vulnerability could have been exploited by maliciously crafted filenames. However, the upload handlers built into Django itself were not affected. (#986447)

  • Jan-Niklas Sohn discovered that there was an input validation failure in the X.Org display server. Insufficient checks on the lengths of the XInput extension's ChangeFeedbackControl request could have led to out of bounds memory accesses in the X server. These issues could have led to privilege escalation for authorised clients, particularly on systems where the X server is running as a privileged user. I, therefore, issued both DLA 2627-1 and ELA 405-1 to address this problem.

  • Frontdesk duties, reviewing others' packages, participating in mailing list discussions, etc., as well as attending our monthly meeting.

You can find out more about the project via the following video:

30 April, 2021 05:24PM

April 29, 2021

hackergotchi for Norbert Preining

Norbert Preining

In memoriam of Areeb Jamal

We lost one of our friends and core developers of the FOSSASIA community. An extremely sad day.

We miss you.

29 April, 2021 01:13AM by Norbert Preining

April 28, 2021

hackergotchi for Jonathan McDowell

Jonathan McDowell

DeskPi Pro update

I wrote previously about my DeskPi Pro + 8GB Pi 4 setup. My main complaint at the time was the fact one of the forward facing USB ports broke off early on in my testing. For day to day use that hasn’t been a problem, but it did mar the whole experience. Last week I received an unexpected email telling me “The new updated PCB Board for your DeskPi order was shipped.”. Apparently this was due to problems with identifying SSDs and WiFi/HDMI issues. I wasn’t quite sure how much of the internals they’d be replacing, so I was pleasantly surprised when it turned out to be most of them; including the PCB with the broken USB port on my device.

DeskPi Pro replacement PCB

They also provided a set of feet allowing for vertical mounting of the device, which was a nice touch.

The USB/SATA bridge chip in use has changed; the original was:

usb 2-1: New USB device found, idVendor=152d, idProduct=0562, bcdDevice= 1.09
usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1: Product: RPi_SSD
usb 2-1: Manufacturer: 52Pi
usb 2-1: SerialNumber: DD5641988389F

and the new one is:

usb 2-1: New USB device found, idVendor=174c, idProduct=1153, bcdDevice= 0.01
usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
usb 2-1: Product: AS2115
usb 2-1: Manufacturer: ASMedia
usb 2-1: SerialNumber: 00000000000000000000

That’s a move from a JMicron 6Gb/s bridge to an ASMedia 3Gb/s bridge. It seems there are compatibility issues with the JMicron that mean the downgrade is the preferred choice. I haven’t retried the original SSD I wanted to use (that wasn’t detected), but I did wonder if this might have resolved that issue too.

Replacing the PCB was easier than the original install; everything was provided pre-assembled and I just had to unscrew the Pi4 and slot it out, then screw it into the new PCB assembly. Everything booted fine without the need for any configuration tweaks. Nice and dull. I’ve tried plugging things into the new USB ports and they seem ok so far as well.

However I also then ended up pulling in a new backports kernel from Debian (upgrading from 5.9 to 5.10) which resulted in a failure to boot. The kernel and initramfs were loaded fine, but no login prompt ever appeared. Some digging led to the discovery that a change in boot ordering meant USB was not being enabled. The solution is to add reset_raspberrypi to the /etc/initramfs-tools/modules file - that way this module is available in the initramfs, the appropriate pre-USB reset can happen and everything works just fine again.
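
For anyone hitting the same problem, a minimal sketch of that fix (the update-initramfs step is an assumption; the post only names the modules file):

    echo reset_raspberrypi | sudo tee -a /etc/initramfs-tools/modules
    sudo update-initramfs -u    # rebuild the initramfs so the module is included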

The other niggle with the new kernel was a regular set of errors in the kernel log:

mmc1: Timeout waiting for hardware cmd interrupt.
mmc1: sdhci: ============ SDHCI REGISTER DUMP ===========

and a set of registers afterwards, roughly every 10s or so. This seems to be fallout from an increase in the core clock due to the VC4 driver now being enabled, the fact I have no SD card in the device, and a lack of a working card-detect line for the MicroSD slot. There’s a GitHub issue but I solved it by removing the sdhci_iproc module for now - I’m not using the wifi so loss of MMC isn’t a problem.
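
One way to remove a module like that persistently is a modprobe blacklist entry; this is a guess at the mechanism, as the post doesn’t say exactly how it was done:

    echo 'blacklist sdhci_iproc' | sudo tee /etc/modprobe.d/no-sdhci-iproc.conf
    sudo update-initramfs -u    # in case the module is loaded from the initramfs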

Credit to DeskPi for how they handled this. I didn’t have to do anything and didn’t even realise anything was happening until I got the email with my tracking number and a description of what they were sending out in it. Delivery took less than a week. This is a great example of how to handle a product issue - no effort required on the part of the customer.

28 April, 2021 07:27PM

Free Software Fellowship

Google, IBM and Microsoft shared psychotic episode with bipolar Molly de Blanc

We don't like a lot of the things Molly de Blanc says and does but we believe it is important to give credit where it is due. It is either very courageous or stupid for her to publicly disclose that she suffers from bipolar. Out of respect for all those who suffer from mental illness, we will assume it is courage and thank her for sharing her experience. You can see her statement in this video from FrOSCon 2019, we extract the key part below:

de Blanc's statement is an amazing revelation. She comments on the interaction of her bipolar and open source. In open source today, it is hard to separate the technology from the egos.

Back in his youth, one of Britain's royal princes would threaten to have fellow schoolchildren locked in the tower of London. Molly de Blanc's open letter to the FSF appears to have similar grandiose delusions when she writes We are calling for the removal of the entire Board of the Free Software Foundation... It is time for RMS to step back from the free software, tech ethics, digital rights, and tech communities, for he cannot provide the leadership we need.. She is talking about locking RMS in the tower. Molly is not a developer, she never wrote one line of code. She was employed to work for the developers and now she is giving us orders. Imagine the same thing in an airline: the wife of the pilot shouting orders at the cabin crew. Molly de Blanc was girlfriend of the Debian Project Leader, Chris Lamb. Thanks to her illness, she is now acting like she is the Queen of England.

As the medical experts tell us, this is all part of Molly's condition so we are going to give her the benefit of the doubt. We forgive her. We ask everybody to forgive her and focus on those around her.

Today is World Day for Safety and Health at Work

Today is the United Nations World Day for Safety and Health at Work.

Molly de Blanc's boss, Neil McGovern of the GNOME Foundation, is aware of de Blanc's health condition. Github records show he is the top contributor in her RMS Open Letter repository. Any credible employer would encourage Molly to take her meds or even have some time in medical care on full pay. McGovern, on the other hand, is actively engaging in what appears to be an employee going through a period of hypomania.

Neil McGovern, Molly de Blanc, Hypomania, grandiose delusions, bipolar

McGovern is not alone, the list published elsewhere shows thousands of employees of Google, IBM, Red Hat, Microsoft and even MIT came along for the ride too. Elana Hashman, the second highest in that ranking of cyberbullies, is a Red Hat / IBM employee.

The more to do still comment in the log suggests Molly is a dopamine junkie too. This shows that Github is actively aware of and cultivating dopamine addictions in the same way that Facebook does.

Molly de Blanc, Hypomania, grandiose delusions, bipolar, dopamine, more

Last but not least, we also see evidence of dyslexia in the logs:

Molly de Blanc, Hypomania, grandiose delusions, bipolar, dopamine, more

As with bipolar, we don't want to criticize anybody who suffers from dyslexia. We are too busy laughing at all the employees of Google, IBM, Red Hat, Microsoft and even MIT who are sharing this woman's mental illness and cultivating it as a cyberweapon. Molly's own words in the video above appear to confirm the engagement of these egos in her disorder.

Then again, maybe that isn't something to laugh at.

Molly de Blanc, Chris Lamb, Microsoft, Google, Debian, OSI, SPI Molly de Blanc, Chris Lamb, Microsoft, Google, Debian, OSI, SPI

28 April, 2021 07:20PM

hackergotchi for Martin Michlmayr

Martin Michlmayr

Research on FOSS foundations

I worked on research on FOSS foundations and published two reports:

Growing Open Source Projects with a Stable Foundation

This primer covers non-technical aspects that the majority of projects will have to consider at some point. It also explains how FOSS foundations can help projects grow and succeed.

This primer explains:

  • What issues and areas to consider
  • How other projects and foundations have approached these topics
  • What FOSS foundations bring to the table
  • How to choose a FOSS foundation

You can download Growing Open Source Projects with a Stable Foundation.

Research report

The research report describes the findings of the research and aims to help understand the operations and challenges FOSS foundations face.

This report covers topics such as:

  • Role and activities of foundations
  • Challenges faced and gaps in the service offerings
  • Operational aspects, including reasons for starting an org and choice of jurisdiction
  • Trends, such as the "foundation in a foundation" model
  • Recommendations for different stakeholders

You can download the research report.

Acknowledgments

This research was sponsored by the Ford Foundation and the Alfred P. Sloan Foundation. The research was part of their Critical Digital Infrastructure Research initiative, which investigates the role of open source in digital infrastructure.

28 April, 2021 08:29AM by Martin Michlmayr


Russ Allbery

Review: Beyond Shame

Review: Beyond Shame, by Kit Rocha

Series: Beyond #1
Publisher: Kit Rocha
Copyright: December 2013
ASIN: B00GIA4GN8
Format: Kindle
Pages: 270

I read this book as part of the Beyond Series Bundle (Books 1-3), which is what the sidebar information is for.

Noelle is a child of Eden, the rich and technologically powerful city of a post-apocalyptic world. As the daughter of a councilman, she had everything she wanted except the opportunity to feel. Eden's religious elite embrace a doctrine of strict Puritanism: Even hugging one's children was frowned upon, let alone anything related to sex. Noelle was too rebellious to settle for that, which is why this book opens with her banished from Eden, ejected into Sector Four. The sectors are the city slums, full of gangs and degenerates and violence, only a slight step up from the horrific farming communes. Luckily for her, she literally stumbles into one of the lieutenants of the O'Kane gang, who are just as violent as their reputations but who have surprising sympathy for a helpless city girl.

My shorthand distinction between romance and erotica is that romance mixes some sex into the plot and erotica mixes some plot into the sex. Beyond Shame is erotica, specifically BDSM erotica. The forbidden sensations that Noelle got kicked out of Eden for pursuing run strongly towards humiliation, which is tangled up in the shame she was taught to feel about anything sexual. There is a bit of a plot surrounding the O'Kanes who take her in, their leader, some political skulduggery that eventually involves people she knows, and some inter-sector gang warfare, but it's quite forgettable (and indeed I've already forgotten most of it). The point of the story is Noelle navigating a relationship with Jasper (among others) that involves a lot of very graphic sex.

I was of two minds about reviewing this. Erotica is tricky to review, since to an extent it's not trying to do what most books are doing. The point is less to tell a coherent story (although that can be a bonus) than it is to turn the reader on, and what turns the reader on is absurdly personal and unpredictable. Erotica is arguably more usefully marked with story codes (which in this case would be something like MF, MMFF, FF, Mdom, Fdom, bd, ds, rom, cons, exhib, humil, tattoos) so that the reader has an idea whether the scenarios in the story are the sort of thing they find hot.

This is particularly true of BDSM erotica, since the point is arousal from situations that wouldn't work or might be downright horrifying in a different sort of book. Often the forbidden or taboo nature of the scene is why it's erotic. For example, in another genre I would complain about the exaggerated and quite sexist gender roles, where all the men are hulking cage fighters who want to control the women, but in male-dominant BDSM erotica that's literally the point.

As you can tell, I wrote a review anyway, primarily because of how I came to read this book. Kit Rocha (which is a pseudonym for the writing team of Donna Herren and Bree Bridges) recently published Deal with the Devil, a book about mercenary librarians in a post-apocalyptic future. Like every right-thinking person, I immediately wanted to read a book about mercenary librarians, but discovered that it was set in an existing universe. I hate not starting at the beginning of things, so even though there was probably no need to read the earlier books first, I figured out Beyond Shame was the first in this universe and the bundle of the first three books was only $2.

If any of you are immediately hooked by mercenary librarians but are back-story completionists, now you know what you'll be getting into.

That said, there are a few notable things about this book other than it has a lot of sex. The pivot of the romantic relationship was more interesting and subtle than most erotica. Noelle desperately wants a man to do all sorts of forbidden things to her, but she starts the book unable to explain or analyze why she wants what she wants, and both Jasper and the story are uncomfortable with that and unwilling to leave it alone. Noelle builds up a more coherent theory of herself over the course of the book, and while it's one that's obviously designed to enable lots of erotic scenes, it's not a bad bit of character development.

Even better is Lex, the partner (sort of) of the leader of the O'Kane gang and by far the best character in the book. She takes Noelle under her wing from the start, and while that relationship is sexualized like nearly everything in this book, it also turns into an interesting female friendship that I would have also enjoyed in a different genre. I liked Lex a lot, and the fact she's the protagonist of the next book might keep me reading.

Beyond Shame also has a lot more female gaze descriptions of the men than is often the case in male-dominant BDSM. The eye candy is fairly evenly distributed, although the gender roles are very much not. It even passes the Bechdel test, although it is still erotica and nearly all the conversations end up being about sex partners or sex eventually.

I was less fond of the fact that the men are all dangerous and violent and the O'Kane leader frequently acts like a controlling, abusive psychopath. A lot of that was probably the BDSM setup, but it was not my thing. Be warned that this is the sort of book in which one of the (arguably) good guys tortures someone to death (albeit off camera).

Recommendations are next to impossible for erotica, so I won't try to give one. If you want to read the mercenary librarian novel and are dubious about this one, it sounds like (although I can't confirm) it's a bit more on the romance end of things and involves a lot fewer group orgies. Having read this book, I suspect it was entirely unnecessary to have done so for back-story. If you are looking for male-dominant BDSM, Beyond Shame is competently written, has a more thoughtful story than most, and has a female friendship that I fully enjoyed, which may raise it above the pack.

Rating: 6 out of 10

28 April, 2021 03:10AM

April 26, 2021


Steve Kemp

Writing a text-based adventure game for CP/M

In my previous post I wrote about how I'd been running CP/M on a Z80-based single-board computer.

I've been slowly working my way through a bunch of text-based adventure games:

  • The Hitchhiker's Guide To The Galaxy
  • Zork 1
  • Zork 2
  • Zork 3

Along the way I remembered how much fun I used to have doing this in my early teens, and decided to write my own text-based adventure.

Since I'm not a masochist, I figured I'd write something with only three or four locations, and solicited Facebook for ideas. Shortly afterwards a "plot" was created and I started work.

I figured that the very last thing I wanted to be doing was to be parsing text-input with Z80 assembly language, so I hacked up a simple adventure game in C. I figured if I could get the design right that would ease the eventual port to assembly.

I realized pretty early that a table-driven approach would be the best way: using structures to contain the name, description, and the function-pointers appropriate to each object, for example. In my C implementation I have things that look like this:

{ .name        = "generator",
  .desc        = "A small generator.",
  .use         = use_generator,
  .use_carried = use_generator_carried,
  .get_fn      = get_generator,
  .drop_fn     = drop_generator },

A bit noisy, but simple enough. If an object cannot be picked up or dropped, the corresponding entries are blank:

{ .name  = "desk",
  .desc  = "",
  .edesc = "The desk looks solid, but old." },

Here we see something special: there's no description, so the item isn't displayed when you enter a room or LOOK. Instead, the edesc (extended description) is shown when you type EXAMINE DESK.

Anyway, over a couple of days I hacked up the C game, then started porting it to Z80 assembly. The implementation changed, the easter eggs were different, but on the whole the two things are the same.

Certainly 99% of the text was recycled across the two implementations.

Anyway in the unlikely event you've got a craving for a text-based adventure game I present to you:

26 April, 2021 06:00PM

Vishal Gupta

Ramblings // On Sikkim and Backpacking

What I loved the most about Sikkim can’t be captured on cameras. It can’t be taped since it would be intrusive and it can’t be replicated because it’s unique and impromptu. It could be described, as I attempt to, but more importantly, it’s something that you simply have to experience to know.

Now I first heard about this from a friend who claimed he'd been offered free rides and Tropicanas by locals after finishing the Ladakh Marathon. And then I found Ronnie's song, whose chorus goes: “Dil hai pahadi, thoda anadi. Par duniya ke maya mein phasta nahi” (My heart belongs to the mountains. Although a little childish, it doesn't get hindered by materialism). While the song refers to his life in Manali, I think this holds true for most Himalayan states.

Maybe it’s the pleasant weather, the proximity to nature, the sense of safety from Indian Army being round the corner, independence from material pleasures that aren’t available in remote areas or the absence of the pollution, commercialisation, & cutthroat-ness of cities, I don’t know, there’s just something that makes people in the mountains a lot kinder, more generous, more open and just more alive.

Sikkimese people are honestly some of the nicest people I've ever met. The blend of Lepchas and Bhutias, and the humility and truthfulness Buddhism ingrains in its disciples, is one that'll make you fall in love with Sikkim (assuming the views, the snow, and the fab weather and food leave you pining for more).

As a product of Indian parenting, I've always been taught to be wary of the unknown and to stick to the safer, more-travelled path, but to be honest, I enjoy bonding with strangers. To me, each person is a storybook waiting to be flipped open with the right questions, and the further I get from home, the wilder the stories get. Besides, there's something oddly magical about two arbitrary curvilinear lines briefly running parallel until they diverge to move on to their respective paths. And I think our society has been so busy drawing lines and spreading hate that we forget that in the end, we're all just lines on the universe's infinite canvas. So the next time you travel, and you're in a taxi, a hostel, a bar, a supermarket, or on a long walk to a monastery (that you're secretly wishing is open despite a lockdown), strike up a conversation with a stranger. Small talk can go a long way.


Header icon made by Freepik from www.flaticon.com

26 April, 2021 11:27AM by Vishal Gupta

April 25, 2021

Dominique Dumont

An improved GUI for cme and Config::Model

I’ve finally found the time to improve the GUI of my pet project: cme (aka Config::Model).

Several years ago, I stumbled on a usability problem in the GUI. Some configurations (like OpenSsh or Systemd) feature a lot of parameters, which means the GUI displays all of them, so finding a specific parameter can be challenging:

To work around this problem, I added a Filter widget in 2018. It did the job, more or less, but it suffered from several bugs that made its behavior confusing.

This is now fixed. The Filter widget is now working in a more consistent way.

In the example below, I’ve typed “IdentityFile” (1) in the Filter widget to show the identityFile used for various hosts (2):

This is quite good, but some hosts use the default identity file, so no value shows up in the GUI. You can then tick the “hide empty value” checkbox to show only the hosts that use a specific identity file:

I hope that this new behavior of the Filter box will make this project more useful.

The improved GUI was released with Config::Model::TkUI 1.374. This new version is available on CPAN and in Debian/experimental. It will be released in Debian/unstable once the next Debian version is out.
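
(If you want to try the GUI, it comes up when cme is run in edit mode, for instance cme edit ssh; this assumes Config::Model::TkUI and the relevant model, here Config::Model::OpenSsh, are installed.)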

All the best

25 April, 2021 03:15PM by dod


Steinar H. Gunderson

JavaScript madness

Yesterday, I had the problem that while socket.io from the browser would work just fine against a given server endpoint (which I do not control), talking to the same server from Node.js would just give hangs and/or inscrutable “7:::1” messages (which I later learned meant “handshake missing”).

To skip six hours of debugging: the server set a cookie in the initial HTTP handshake, and expected to get it back when opening a WebSocket, presumably to steer the connection to the same backend that got the handshake. (Chrome didn't show the cookie in the WS debugging, but Firefox did.) So we need to keep track of those cookies, while still remaining on socket.io 0.9.5 (for stupid reasons). No fear, we add this incredibly elegant bit of code:

var io = require('socket.io-client');
// Hook into XHR to pick out the cookie when we receive it.
var my_cookie;
io.util.request = function() {
        var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;
        var xhr = new XMLHttpRequest();
        xhr.setDisableHeaderCheck(true);
        const old_send = xhr.send;
        xhr.send = function() {
                // Add our own readyStateChange hook in front, to get the cookie if we don't have it.
                xhr.old_onreadystatechange = xhr.onreadystatechange;
                xhr.onreadystatechange = function() {
                        if (xhr.readyState == xhr.HEADERS_RECEIVED) {
                                const cookie = xhr.getResponseHeader('set-cookie');
                                if (cookie) {
                                        my_cookie = cookie[0].split(';')[0];
                                }
                        }
                        xhr.old_onreadystatechange.apply(xhr, arguments);
                };
                // Set the cookie if we have it.
                if (my_cookie) {
                        xhr.setRequestHeader("Cookie", my_cookie);
                }
                return old_send.apply(this, arguments);
        };
        return xhr;
};
// Now override the socket.io WebSockets transport to include our header.
io.Transport['websocket'].prototype.open = function() {
        const query = io.util.query(this.socket.options.query);
        const WebSocket = require('ws');
        // Include our cookie.
        let options = {};
        if (my_cookie) {
                options['headers'] = { 'Cookie': my_cookie };
        }
        this.websocket = new WebSocket(this.prepareUrl() + query, options);
        // The rest is just repeated from the existing function.
        const self = this;
        this.websocket.onopen = function () {
                self.onOpen();
                self.socket.setBuffer(false);
        };
        this.websocket.onmessage = function (ev) {
                self.onData(ev.data);
        };
        this.websocket.onclose = function () {
                self.onClose();
                self.socket.setBuffer(true);
        };
        this.websocket.onerror = function (e) {
                self.onError(e);
        };
        return this;
};
// And now, finally!
var socket = io.connect('https://example.com', { transports: ['websocket'] });

It's a reminder that talking HTTP and executing JavaScript does not make you into a (headless) browser. And that you shouldn't let me write JavaScript. :-)

(Apologies for the lack of blank lines; evidently, they confuse Markdown.)

25 April, 2021 10:13AM

Russ Allbery

Review: Learning React

Review: Learning React, by Alex Banks & Eve Porcello

Publisher: O'Reilly
Copyright: June 2020
ISBN: 1-4920-5172-1
Format: Trade paperback
Pages: 287

My first JavaScript project was a React frontend to a REST service. As part of that project, I read two books: JavaScript: The Definitive Guide to learn the language foundation and this book to learn the framework on top of it. This was an unintentional experiment in the ways programming books can approach the topic.

I commented in my review of JavaScript: the Definitive Guide that it takes the reference manual approach to the language. Learning React is the exact opposite. It's goal-driven, example-heavy, and has a problem and solution structure. The authors present a sample application, describe some desired new feature or deficiency in it, and then introduce the specific React technique that solves that problem. There is some rewriting of previous examples using more sophisticated techniques, but most chapters introduce new toy applications along with new parts of the React framework.

The best part of this book is its narrative momentum, so I think the authors were successful at their primary goal. The first eight chapters of the book (more on the rest of the book in a moment) feel like a whirlwind tour where one concept flows naturally into the next and one's questions while reading about one technique are often answered in the next section. I thought the authors tried too hard in places and overdid the enthusiasm, but it's very readable in a way that I think may appeal to people who normally find programming books dry. Learning React is also firm and definitive about the correct way to use React, which may appeal to readers who only want to learn the preferred way of using the framework. (For example, React class components are mentioned briefly, mostly to tell the reader not to use them, and the rest of the book only uses functional components.)

I had two major problems with this book, however. The first is that this breezy, narrative style turns out to be awful when one tries to use it as a reference. I read through most of this book with both enjoyment and curiosity, sat down to write a React component, and immediately struggled to locate the information I needed. Everything felt logically connected when I was focusing on the problems the authors introduced, but as soon as I started from my own problem, the structure of the book fell apart. I had to page through chapters to locate some nugget buried in the text, or re-read sections of the book to piece together which motivating problem my code was most similar to. It was a frustrating experience.

This may be a matter of learning style, since this is why I prefer programming books with a reference structure. But be warned that I can't recommend this book as a reference while you're programming, nor does it prepare you to use the official React documentation as a reference.

The second problem is less explicable and less defensible. I don't know what happened with O'Reilly's copy-editing for this book, but the code snippets are a train wreck. The Amazon reviews are full of people complaining about typos, syntax errors, omitted code, and glaring logical flaws, and they are entirely correct. It's so bad that I was left wondering if a very early, untested draft of the examples was somehow substituted into the book at the last minute by mistake.

I'm not the sort of person who normally types code in from a book, so I don't care about a few typos or obvious misprints as long as the general shape is correct. The general shape was not correct. In a few places, the code is so completely wrong and incomplete that even combined with the surrounding text I was unable to figure out what it was supposed to be. It's possible this is fixed in a later printing (I read the June 2020 printing of the second edition), but otherwise beware. The authors do include a link to a GitHub repository of the code samples, which are significantly different than what's printed in the book, but that repository is incomplete; many of the later chapter examples are only links to JavaScript web sandboxes, which bodes poorly for the longevity of the example code.

And then there's chapter nine of this book, which I found entirely baffling. This is a direct quote from the start of the chapter:

This is the least important chapter in this book. At least, that's what we've been told by the React team. They didn't specifically say, "this is the least important chapter, don't write it." They've only issued a series of tweets warning educators and evangelists that much of their work in this area will very soon be outdated. All of this will change.

This chapter is on suspense and error boundaries, with a brief mention of Fiber. I have no idea what I'm supposed to do with this material as a reader who is new to React (and thus presumably the target audience). Should I use this feature? When? Why is this material in the book at all when it's so laden with weird stream-of-consciousness disclaimers? It's a thoroughly odd editorial choice.

The testing chapter was similarly disappointing in that it didn't answer any of my concrete questions about testing. My instinct with testing UIs is to break out Selenium and do integration testing with its backend, but the authors are huge fans of unit testing of React applications. Great, I thought, this should be interesting; unit testing seems like a poor fit for UI code because of how artificial the test construction is, but maybe I'm missing some subtlety. Convince me! And then the authors... didn't even attempt to convince me. They just asserted unit testing is great and explained how to write trivial unit tests that serve no useful purpose in a real application. End of chapter. Sigh.

I'm not sure what to say about this book. I feel like it has so many serious problems that I should warn everyone away from it, and yet the narrative introduction to React was truly fun to read and got me excited about writing React code. Even though the book largely fell apart as a reference, I still managed to write a working application using it as my primary reference, so it's not all bad. If you like the problem and solution style and want a highly conversational and informal tone (that errs on the side of weird breeziness), this may still be the book for you. Just be aware that the code examples are a trash fire, so if you learn from examples, you're going to have to chase them down via the GitHub repository and hope that they still exist (or get a later edition of the book where this problem has hopefully been corrected).

Rating: 6 out of 10

25 April, 2021 05:15AM

Antoine Beaupré

Lost article ideas

I wrote for LWN for about two years. During that time, I wrote (what seems to me an impressive) 34 articles, but I always had a pile of ideas in the back of my mind. Those are ideas, notes, and scribbles lying around. Some were just completely abandoned because they didn't seem a good fit for LWN.

Concretely, I stored those in branches in a git repository, and used the branch name (and, naively, the last commit log) as indicators of the topic.
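
(For the record, a listing like the one below can be regenerated with git for-each-ref --format='%(refname:short) %(objectname:short) %(contents:subject)' refs/remotes/private/, assuming the remote is named private.)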

This was the state of affairs when I left:

remotes/private/attic/novena                    822ca2bb add letter i sent to novena, never published
remotes/private/attic/secureboot                de09d82b quick review, add note and graph
remotes/private/attic/wireguard                 5c5340d1 wireguard review, tutorial and comparison with alternatives
remotes/private/backlog/dat                     914c5edf Merge branch 'master' into backlog/dat
remotes/private/backlog/packet                  9b2c6d1a ham radio packet innovations and primer
remotes/private/backlog/performance-tweaks      dcf02676 config notes for http2
remotes/private/backlog/serverless              9fce6484 postponed until kubecon europe
remotes/private/fin/cost-of-hosting             00d8e499 cost-of-hosting article online
remotes/private/fin/kubecon                     f4fd7df2 remove published or spun off articles
remotes/private/fin/kubecon-overview            21fae984 publish kubecon overview article
remotes/private/fin/kubecon2018                 1edc5ec8 add series
remotes/private/fin/netconf                     3f4b7ece publish the netconf articles
remotes/private/fin/netdev                      6ee66559 publish articles from netdev 2.2
remotes/private/fin/pgp-offline                 f841deed pgp offline branch ready for publication
remotes/private/fin/primes                      c7e5b912 publish the ROCA paper
remotes/private/fin/runtimes                    4bee1d70 prepare publication of runtimes articles
remotes/private/fin/token-benchmarks            5a363992 regenerate timestamp automatically
remotes/private/ideas/astropy                   95d53152 astropy or python in astronomy
remotes/private/ideas/avaneya                   20a6d149 crowdfunded blade-runner-themed GPLv3 simcity-like simulator
remotes/private/ideas/backups-benchmarks        fe2f1f13 review of backup software through performance and features
remotes/private/ideas/cumin                     7bed3945 review of the cumin automation tool from WM foundation
remotes/private/ideas/future-of-distros         d086ca0d modern packaging problems and complex apps
remotes/private/ideas/on-dying                  a92ad23f another dying thing
remotes/private/ideas/openpgp-discovery         8f2782f0 openpgp discovery mechanisms (WKD, etc), thanks to jonas meurer
remotes/private/ideas/password-bench            451602c0 bruteforce estimates for various password patterns compared with RSA key sizes
remotes/private/ideas/prometheus-openmetrics    2568dbd6 openmetrics standardizing prom metrics enpoints
remotes/private/ideas/telling-time              f3c24a53 another way of telling time
remotes/private/ideas/wallabako                 4f44c5da talk about wallabako, read-it-later + kobo hacking
remotes/private/stalled/bench-bench-bench       8cef0504 benchmarking http benchmarking tools
remotes/private/stalled/debian-survey-democracy 909bdc98 free software surveys and debian democracy, volunteer vs paid work

Wow, what a mess! Let's see if I can make sense of this:

Attic

Those are articles that I thought about, then finally rejected, either because it didn't seem worth it, or my editors rejected it, or I just moved on:

  • novena: the project is ooold now, didn't seem to fit a LWN article. it was basically "how can i build my novena now" and "you guys rock!" it seems like the MNT Reform is the brainchild of the Novena now, and I dare say it's even cooler!
  • secureboot: my LWN editors were critical of my approach, and probably rightly so - it's a really complex subject and I was probably out of my depth... it's also out of date now, we did manage secureboot in Debian
  • wireguard: LWN ended up writing extensive coverage, and I was biased against Donenfeld because of conflicts in a previous project

Backlog

Those were articles I was planning to write about next.

  • dat: I already had written Sharing and archiving data sets with Dat, but it seems I had more to say... mostly performance issues, beaker, no streaming, limited adoption... to be investigated, I guess?
  • packet: a primer on data communications over ham radio, and the cool new tech that has emerged in the free software world. those are mainly notes about Pat, Direwolf, APRS and so on... just never got around to making sense of it or really using the tech...
  • performance-tweaks: "optimizing websites at the age of http2", the unwritten story of the optimization of this website with HTTP/2 and friends
  • serverless: god. one of the leftover topics at Kubecon, my notes on this were thin, and the actual subject, possibly even thinner... the only lie worse than the cloud is that there's no server at all! concretely, that's a pile of notes about Kubecon which I wanted to sort through. Probably belongs in the attic now.

Fin

Those are finished articles, they were published on my website and LWN, but the branches were kept because previous drafts had private notes that should not be published.

Ideas

A lot of those branches were actually just a single empty commit, with the commit log being the "pitch", more or less. I'd send that list to my editors, sometimes with a few more links (basically the above), and they would nudge me one way or the other.

Sometimes they would actively discourage me from writing about something, and I would do it anyway, send them a draft, and they would patiently make me rewrite it until it was a decent article. This was especially hard with the terminal emulator series, which took forever to write and even got my editors upset when they realized I had never installed Fedora (I ended up installing it, and I was proven wrong!).

Stalled

Oh, and then there are those: either "ideas" or "backlog" entries that got so far behind that I just moved them out of the way because I was tired of seeing them in my list.

  • stalled/bench-bench-bench: benchmarking HTTP benchmarking tools, a horrible mess of links, copy-paste from terminals, and ideas about benchmarking... some of this trickled out into this benchmarking guide at Tor, but not much more than the list of tools
  • stalled/debian-survey-democracy: "free software surveys and Debian democracy, volunteer vs paid work"... A long-standing concern of mine is that all Debian work is supposed to be volunteer, and paying explicitly for work inside Debian has traditionally been frowned upon, even leading to serious drama and dissent (remember Dunc-Tank?). Back when I was writing for LWN, I was also doing paid work for Debian LTS. I also learned that a lot of (most?) Debian Developers were actually being paid by their employers to work on Debian. So I was confused by this apparent contradiction, especially given how the LTS project has been mostly accepted, while Dunc-Tank was not... See also this talk at Debconf 16. I had hopes that this study would confirm the "hunch" people have offered (that most DDs are paid to work on Debian) but it seems to show the reverse (only 36% of DDs, and 18% of all respondents, are paid). So I am still confused and worried about the sustainability of Debian.

What do you think?

So that's all I got. As people might have noticed here, I have much less time to write these days, but if there's any subject in there I should pick, what is the one that you would find most interesting?

Oh! and I should mention that you can write to LWN! If you think people should know more about some Linux thing, you can get paid to write for it! Pitch it to the editors, they won't bite. The worst that can happen is that they say "yes" and there goes two years of your life learning to write. Because no, you don't know how to write, no one does. You need an editor to write.

That's why this article looks like crap and has a smiley. :)

25 April, 2021 01:12AM

April 24, 2021


Gunnar Wolf

FLISOL • Talking about Jitsi

Every year since 2005 there has been a very good, big and interesting Latin American gathering of free-software-minded people. Of course, Latin America is a big, big, big place, and it's not like we are the most economically buoyant region, able to meet in something comparable to FOSDEM.

What we have is a distributed free software conference — originally a distributed Linux install-fest (which I never liked; I am against install-fests), but it gradually morphed into a proper conference: the Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Installation Festival).

This FLISOL was hosted by the always great and always interesting Rancho Electrónico, our favorite local hacklab, and had many other interesting talks.

I like talking about projects I am involved in as a developer… but this time I decided to do otherwise: I presented a talk on the Jitsi videoconferencing server. Why? Because of the relevance videoconferencing has had over the last year.

So, without further ado — here is a video I recorded locally of the talk I gave (MKV), as well as the slides (PDF).

24 April, 2021 11:24PM

Antoine Beaupré

A dead game clock

Time flies. Back in 2008, I wrote a game clock. Since then, what was first called "chess clock" was renamed to pychessclock and then Gameclock (2008). It shipped with Debian 6 squeeze (2011), 7 wheezy (4.0, 2013, with a new UI), 8 jessie (5.0, 2015, with a code cleanup, translation, go timers), 9 stretch (2017), and 10 buster (2019), phew! Eight years in Debian over 4 releases, not bad!

But alas, Debian 11 bullseye (2021) won't ship with Gameclock because both Python 2 and GTK 2 were removed from Debian. I lack the time, interest, and energy to port this program. Even if I could find the time, everyone is on their phone nowadays.

So finding the right toolkit would require some serious thinking about how to make a portable program that can run on Linux and Android. I care less about Mac, iOS, and Windows, but, interestingly, it feels like it wouldn't be much harder to cover those as well if I hit both Linux and Android (which is already hard enough, paradoxically).

(And before you ask, no, Java is not an option for me, thanks. If I switch to anything other than Python, it would be Golang or Rust. And I did look at some toolkit options a few years ago, and was excited by none.)

So there you have it: that is how software dies, I guess. Alternatives include:

  • Chessclock - the really old Ruby app whose name clash prompted the Gameclock rename
  • Ghronos - also a really old Java app
  • Lichess - has a chess clock built into the app
  • Otter - if you squint a little

PS: Monkeysign also suffered the same fate, for what that's worth. Alternatives include caff, GNOME Keysign, and pius. Note that this does not affect the larger Monkeysphere project, which will ship with Debian bullseye.

24 April, 2021 06:04PM


Joey Hess

here's your shot

The nurse releases my shoulder and drops the needle in a sharps bin, slaps on a smiley bandaid. "And we're done!" Her cheeriness seems genuine but a little strained. There was a long line. "You're all boosted, and here's your vaccine card."

Waiting out the 15 minutes in observation, I look at the card.

Moderna COVID-19/22 vaccine booster
3/21/2025              lot #5829126

  🇺🇸 NOT A VACCINE PASSPORT 🇺🇸

(Tear at perforated line.)
- - - - - - - - - - - - - - - - - -

Here's your shot at
$$ ONE HUNDRED MILLION $$

       Scratch
       and win

I bite my nails, when I'm not wearing this mask. So I scrub ineffectively at the grainy silver box. Not like the woman across from me, three kids in tow, who's zipping through her sheaf of scratchers.

The message on mine becomes clear: 1 month free Amazon Prime

Ah well.

24 April, 2021 12:34AM

April 23, 2021


Thomas Goirand

Puppet and OS detection

As you may know, Puppet uses “facter” to get facts about the machine it is about to configure. That's fine, and a nice concept. One can later use variables in a Puppet manifest to do different things depending on what facter reports. For example, the operating system name … oh no! This thing is really stupid … Here's the code one has to write to stay compatible with Puppet from version 3 up to 5:

if $::lsbdistcodename == undef {
  # This works around differences between facter versions
  if $facts['os']['lsb'] != undef {
    $distro_codename = $facts['os']['lsb']['distcodename']
  } else {
    $distro_codename = $facts['os']['distro']['codename']
  }
} else {
  $distro_codename = downcase($::lsbdistcodename)
}

Indeed, the global variable $::lsbdistcodename still existed up to Stretch (it is gone in Buster). And the global $::facts wasn't a hash or array before, so on Jessie the code breaks with the error message “facts is not a hash or array when accessing it with os”. So one needs the full code above to make this work.
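
(As a quick sanity check, the facter CLI can show what a given version actually provides: running facter os on the target machine prints the whole os fact hash, so one can see whether the lsb sub-hash exists there.)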

It’s ok to improve things. It is NOT OK to break os detection. To me it is a very bad practice from upstream Puppet authors. I’m publishing this in the hope to avoid others to fall in the same trap as I did.

23 April, 2021 12:56PM by Goirand Thomas


Matthew Garrett

An accidental bootsplash

Back in 2005 we had Debconf in Helsinki. Earlier in the year I'd ended up invited to Canonical's Ubuntu Down Under event in Sydney, and one of the things we'd tried to design was a reasonable graphical boot environment that could also display status messages. The design constraints were awkward - we wanted it to be entirely in userland (so we didn't need to carry kernel patches), and we didn't want to rely on vesafb[1] (because at the time we needed to reinitialise graphics hardware from userland on suspend/resume[2], and vesa was not super compatible with that). Nothing currently met our requirements, but by the time we'd got to Helsinki there was a general understanding that Paul Sladen was going to implement this.

The Helsinki Debconf ended up being an extremely strange event, involving me having to explain to Mark Shuttleworth what the physics of a bomb exploding on a bus were, many people being traumatised by the whole sauna situation, and the whole unfortunate water balloon incident, but it also involved Sladen spending a bunch of time trying to produce an SVG of a London bus as a D-Bus logo and not really writing our hypothetical userland bootsplash program, so on the last night, fueled by Koff that we'd bought by just collecting all the discarded empty bottles and returning them for the deposits, I started writing one.

I knew that Debian was already using graphics mode for installation despite having a textual installer, because they needed to deal with more complex fonts than VGA could manage. Digging into the code, I found that it used BOGL - a graphics library that made use of the VGA framebuffer to draw things. VGA had a pre-allocated memory range for the framebuffer[3], which meant the firmware probably wouldn't map anything else there, and hitting those addresses probably wouldn't break anything. This seemed safe.

A few hours later, I had some code that could use BOGL to print status messages to the screen of a machine booted with vga16fb. I woke up some time later, somehow found myself in an airport, and while sitting at the departure gate[4] I spent a while staring at VGA documentation and worked out which magical calls I needed to make to have it behave roughly like a linear framebuffer. Shortly before I got on my flight back to the UK, I had something that could also draw a graphical picture.

Usplash shipped shortly afterwards. We hit various issues - vga16fb produced a 640x480 mode, and some laptops were not inclined to do that without a BIOS call first. 640x400 worked basically everywhere, but meant we had to redraw the art because circles don't work the same way if you change the resolution. My brief "UBUNTU BETA" artwork that was me literally writing "UBUNTU BETA" on an HP TC1100 shortly after I'd got the Wacom screen working did not go down well, and thankfully we had better artwork before release.

But 16 colours is somewhat limiting. SVGALib offered a way to get more colours and better resolution in userland, retaining our prerequisites. Unfortunately it relied on VM86, which doesn't exist in 64-bit mode on Intel systems. I ended up hacking the X.org x86emu into a thunk library that exposed the same API as LRMI, so we could run it without needing VM86. Shockingly, it worked - we had support for 256 colour bootsplashes in any supported resolution on 64 bit systems as well as 32 bit ones.

But by now it was obvious that the future was having the kernel manage graphics support, both in terms of native programming and in supporting suspend/resume. Plymouth is much more fully featured than Usplash ever was, but relies on functionality that simply didn't exist when we started this adventure. There's certainly an argument that we'd have been better off making reasonable kernel modesetting support happen faster, but at this point I had literally no idea how to write decent kernel code and everyone should be happy I kept this to userland.

Anyway. The moral of all of this is that sometimes history works out such that you write some software that a huge number of people run without any idea of who you are, and also that this can happen without you having any fucking idea what you're doing.

Write code. Do crimes.

[1] vesafb relied on either the bootloader or the early stage kernel performing a VBE call to set a mode, and then just drawing directly into that framebuffer. When we were doing GPU reinitialisation in userland we couldn't guarantee that we'd run before the kernel tried to draw stuff into that framebuffer, and there was a risk that that was mapped to something dangerous if the GPU hadn't been reprogrammed into the same state. It turns out that having GPU modesetting in the kernel is a Good Thing.

[2] ACPI didn't guarantee that the firmware would reinitialise the graphics hardware, and as a result most machines didn't. At this point Linux didn't have native support for initialising most graphics hardware, so we fell back to doing it from userland. VBEtool was a terrible hack I wrote to try to reinitialise the system's graphics hardware through a range of mechanisms, and it worked in a surprising number of cases.

[3] As long as you were willing to deal with 640x480 in 16 colours

[4] Helsinki-Vantaan had astonishingly comfortable seating for the time


23 April, 2021 11:21AM

April 22, 2021


Dirk Eddelbuettel

drat 0.2.0: Now with ‘docs/’


A new release of drat arrived on CRAN today. This is the first release in a few months (with the last release in July of last year) and it (finally) makes the leap to supporting docs/ in the main branch, as we are all so tired of the gh-pages branch. We also have new vignettes, new (and very shiny) documentation, and refreshed existing vignettes!

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don't let friends install random git commit snapshots. Or as we may now add: stay away from semi-random universe snapshots too.

Properly rolled-up releases it is. Just as CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by (now) six vignettes, and just works.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.0 (2021-04-21)

  • A documentation website for the package was added at https://eddelbuettel.github.io/drat/ (Dirk)

  • The continuous integration was switched to using ‘r-ci’ (Dirk)

  • The docs/ directory of the main repository branch can now be used instead of gh-pages branch (Dirk in #112)

  • A new repository https://github.com/drat-base/drat can now be used to fork an initial drat repository (Dirk)

  • A new vignette “Drat Step-by-Step” was added (Roman Hornung and Dirk in #117 fixing #115 and #113)

  • The test suite was refactored for docs/ use (Felix Ernst in #118)

  • The minimum R version is now ‘R (>= 3.6)’ (Dirk fixing #119)

  • The vignettes were switched to minidown (Dirk fixing #116)

  • A new test file was added to ensure ‘NEWS.Rd’ is always at the current release version.

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 April, 2021 11:50PM


Shirish Agarwal

The Great Train Robbery

I had a Twitter fight a few days back with a gentleman, and this article is a result of that fight. Sadly, I do not know the name of the gentleman, as he goes by a pseudonym, and in any case I have not taken his permission to quote him. So I will just state the observations I was able to make from the conversations we had. As people who read this blog regularly would know, I am and have been against the Railway Privatization which is happening in India, and I will be sharing some case studies from other countries showing how it panned out for them.

UK Railways


How Privatization Fails: Railways

The above video is by a gentleman called Shaun, who basically shares that privatization, as far as the UK is concerned, has produced nothing but monopolies; while there are complex reasons for this, the design of railways is such that they will always be a monopoly structure. At the most, you can have several monopolies, but that is all; the idea of competition just cannot arise. Even the claims that subsidies would fall and/or trains would run on time are far from fact. Both of these points have been fact-checked by fullfact.org. It is argued that the UK is small and perhaps doesn't have the right conditions. That is probably true, but we still deserve a glance at the UK railway map.

UK railway map with operators

The above map is copyrighted to Map Marketing, where you can see it today. As can be seen, most companies had their own specified areas. If you look at the facts, you will see that UK fares have been higher; an oldish article from Metro (a UK publication) shares the same. In fact, the UK effectively nationalized parts of its railways, as many large rail operators were running in the red. Even Scotland is set to nationalise its railways in March 2022. Remember, this is a country which hasn't seen inflation go above 5% in nearly a decade; the only outlier was 2011, when it indeed breached the 5% mark. So from this, the phrase 'private gains, public losses' perhaps seems fit. But then maybe we didn't use the right example. Perhaps Japan would be better; they have bullet trains while the UK is still thinking about them (HS2).

Japanese Railway

Below is a map of the Japanese railway network.

Railway map of Japan with ‘private ownership’ – courtesy Wikimedia commons

Japan started privatizing its railways in 1987 and, to date, they have not been fully privatized. On top of that, as much as ¥24 trillion of the long-term JNR debt was shouldered by the government at the expense of Japanese taxpayers, while almost a quarter of the workforce was shed. To add to it, while some parts of Japanese Railways did make profits, many made them through large-scale non-railway business, mostly real estate on land adjacent to railway stations; in many cases, it seems, this went up to 60% of revenue. The most profitable part has been the Shinkansen, though. And while it has been profitable, it has not been without safety scandals over the years, the biggest in recent times being the 2005 Amagasaki derailment. What was interesting to me was the aftermath: while the Wikipedia page doesn't share much, I had read at the time how a lot of ordinary people stood up to the companies, in a country where it is a known fact that most companies are owned by the Yakuza. And this in a country where people are loyal to their corporation or company no matter what. It is a strange culture to the West and also here in India, where people change jobs at the drop of a hat, although nowadays we have record unemployment. So perhaps Japan too does not meet our standard, as its operators do not compete with each other; each is a set monopoly in its region. Also, how much subsidy there is or isn't is not really transparent.

U.S. Railways

Last but not least, I share the U.S. railway map, provided by a Mr. Tom Alison on the reddit channel maporn. As the thread itself is archived, and I neither know the gentleman concerned nor have taken his permission for the map, I am sharing the compressed version –


U.S. Railway lines with the different owners

Now, U.S. railways are and always have been peculiar: unlike the above two, the U.S. has always been more of a freight network. Probably much of it has to do with the fact that in the 1960s, when oil was cheap, the U.S. built zillions of roadways, romanticized the 'road trip', and has been doing so ever since. The creation of low-cost airlines definitely didn't help the railways grow passenger services either; in fact, the opposite.

There have also been smaller services and attempts at privatization in both New Zealand and Australia, and both have been failures; please see the papers in that regard. My simple point is this: as can be seen above, there have been various attempts at privatizing railways, and most of them have been a mixed bag. The only one which comes close to what we would consider good is the Japanese one, but that also relied on a lot of public debt, and we don't know what will happen with it next. Also, for higher-speed train services like a bullet train, you need direct routes with no hairpin bends. A good talk on the topic is the TBD podcast: while it talks about hyperloop, the same questions would be asked if it were to be done in India. Another thing to keep in mind is that the Japanese have been exceptional builders, because they have been forced to be: they live in a seismically active zone, which made the Fukushima disaster a reality, but at the same time their buildings are earthquake-resistant.

Standard Disclaimer – The above is a simplified version of things. I could have added in financial accounts, but that again has no set pattern: for e.g., some railways use accrual accounting, some use cash, and some use a hybrid. I could also have gone into gauge or electrification, but all have slightly different standards, although unigauge is something all railways aspire to, and electrification is again something all railways want, although in many cases it just isn't economically feasible.

Indian Railways

Indian Railways itself moved from cash to accrual accounting a couple of years back; in between, for a couple of years, it was hybrid. The sad part is that you can no longer measure against past performance in the old way, because the system is so different. Hence, whether the Railways is making a loss or a profit, we will come to know only much later. Also, most accountants don't know the new system well, so it is going to take more time; how much is unknown. Sadly, what the GOI did a few years back is merge the Railway budget into the Union Budget. Of course, the excuse they gave is the pressure of too many new trains, while the truth is that by doing this they decreased transparency about the whole thing. For e.g., for the last few years, the only state which has had significant work done is U.P. (Uttar Pradesh), and a bit in Goa, although that has been protested time and again. I am from the neighbouring state of Maharashtra and have been there several times. Now going to Goa feels all like a dream :(.

Covid news

Now, before I jump to the news, I should share the movie 'Virus' (2019), which was made by the talented Aashiq Abu. Even though I am not a Malayalee, I have enjoyed many of his movies, simply because he is a terrific director and most Malayalam movies have English subtitles and a lot of original content. I first saw it a couple of years back, and the first time I couldn't sleep a wink for a week; even the next time, it was heavy. I shared the movie with mum, and even she couldn't watch it in one go. It is and was that powerful, maybe all the more now that we are headlong in the pandemic and the madness is all around us. There are two terms that helped me understand a great deal of what is happening in the movie: the first was 'altered sensorium', which has been defined here; the other is saturation, or to be more precise 'oxygen saturation'. This term has also entered the Indian Twitter lexicon quite a bit, as India has started running out of oxygen. Just today the Delhi High Court held an emergency hearing on the subject late at night. Although there is much to share about the mismanagement of the centre, the best piece on the subject has been by Miss Priya Ramani. Yup, the same lady who won against M.J. Akbar, and this when Mr. Akbar had 100 lawyers for that specific case. It would be interesting to see what happens ahead.

There are, however, a few things even she forgot in her piece. For e.g., reverse migration, i.e. from urban to rural areas, started again; two articles from different entities share a similar outlook. Sadly, the right have no empathy or feeling for either the poor or the sick. Even the labor minister Santosh Gangwar's statement claims that only around 1.04 crore people walked back home. While there is not much data, some work/research done on migration to cities suggests the number could easily be 10 times as much. And this was in the lockdown of last year. This year the same issue has re-surfaced, and migrants, having learned their lesson, started leaving cities. And I'm ashamed to say I think they are doing the right thing. Most state governments have not learned lessons, nor have they done any work to earn the trust of migrants; this is true of almost all state governments. Last year, just before the lockdown was announced, my friend and I spent almost 30k getting a cab all the way from Chennai to Pune, between what we paid for the cab and what we bribed various people just so we could cross the state borders to return home to our anxious families. Thankfully, unlike the migrants, we were better off, although we did make a loss. I probably wouldn't be alive if I were in their situation, as many weren't. That number is still in the air: 'undocumented deaths' 😦

Vaccine issues

Currently, though, the issue has been the vaccine and its pricing. A good article summing up the issues has been shared on the Economist. Another article that goes to the heart of the issue is at Scroll. To buttress the argument, the SII chairman shared this a few weeks back –

Adar Poonawala talking to Vishnu Som on Left, right center, 7th April 2021.

So, a licensee manufacturer wants to make super-profits during the pandemic. And now, as shared above, they can very easily do it. Even the quotes given to nearby countries are smaller than those given to Indian states –

Prices of AstraZeneca among various states and countries.

The situation around beds, vaccines, oxygen, anything, is so dire that people will go to any lengths to save their loved ones, even if they know a certain medicine doesn't work. Take Remdesivir: 5 WHO trials have concluded that it doesn't reduce mortality; heck, even the AIIMS chief said the same. But the desperation of both doctors and relatives to cling to hope has made Remdesivir a black-market drug, with unofficial prices hovering anywhere between INR 14k/- and INR 30k/- per vial. One of the executives of a top firm was also arrested in Gujarat. In Maharashtra, an opposition M.P. came to the 'rescue' of the officials of Bruick pharms in Mumbai.

Sadly, this strange attachment to the party at the centre is also there in my extended family. At one end they heap praise on Mr. Modi; at the same time they can't wait to get out of India fast. Many of them have settled in, horror of horrors, Dubai, as it is the best place to do business and get international schools for the young ones at decent prices, cheaper or maybe a tad more than what they paid in Delhi or elsewhere. Being an Agarwal or a Gupta makes it easier to compartmentalize both things. Ease of doing business: 5 days flat to get a business registered, up and running. And the paranoia is still there: they won't talk about him on the phone, because they are afraid they may say something which comes back to bite them. As for their decision to migrate, I can't really blame them. If I were 20-25 years younger and my mum were in better shape than she is, we probably would have migrated as well, although we would have preferred Europe to anywhere else.

Internet Freedom and Aarogya Setu App.


Internet Freedom has shared the chilling effects of the Aarogya Setu App. This had also been shared by FSCI in the past, who recently had their handle banned on Twitter. The chilling effect was also apparent in a bail order a high court judge gave. While I won't go into the merits and demerits of the bail order, it is astounding for the judge to say that the accused, even though he would be on bail, must install an app so he can be surveilled. And this is a high court judge; such a sad state of affairs. We seem to be plumbing new lows every day when it comes to judicial jurisprudence. One interesting aspect of the whole case was shared by Aishwarya Iyer. She shared a story that she and her team worked on at The Quint, which raises questions about the quality of the work done by Delhi Police. It is, of course, up to Delhi Police to ascertain the truth of the matter, because unless and until they are able to tie in the PMO's office, or POTUS's office, for a leak, it hardly seems possible. For e.g., the dates when two heads of state meet would be decided by the secretaries of the two. Once the date is known, it would be shared with the press, while at the same time some sort of security apparatus would kick into place. It is incumbent, especially on the host, to take as much care of the guest as possible. We all remember that World War 1 (the war to end all wars) started with the murder of Archduke Ferdinand.

As nobody wants that, the best way is to make sure that a political murder doesn't happen on your watch. While I won't comment on what the arrangements would be, it is safe to assume Z+ security along with heightened readiness, especially for somebody as important as POTUS. Now, it would be quite a reach for the Delhi Police to connect the two dates. They will either have to get creative with the dates or find some other way; otherwise, with practically no knowledge in the public domain, they can't work in limbo. In either case, I do hope the case comes up for hearing soon and we see what the Delhi Police says and contends in the High Court. At the very least, it would be awkward for them to talk of the dates unless they can contend some mass conspiracy which involves the PMO (and that would bring into question the constant vetting done by the Intelligence dept. of all those who work in the PMO). And this whole case seems to be a kind of shelter for the Delhi riots, in which mostly Muslims died, and whose deaths remain unaccounted for to date 😦

Conclusion

In conclusion, I would like to share a bit of humor, because right now the atmosphere is humorless, what with the authoritarian tendencies of the Central Govt. and the mass mismanagement of public health, which they have now left to the states to handle as they see fit. The piece I am sharing is from Arre, one of my go-to sites whenever I feel low.

22 April, 2021 04:09AM by shirishag75

April 21, 2021

Enrico Zini

Python output buffering

Here's a little toy program that displays a message like a split-flap display:

#!/usr/bin/python3

import sys
import time

def display(line: str):
    # Start with a row of zeros, one per character of the target line
    cur = '0' * len(line)
    while True:
        # The carriage return (no newline) redraws over the same terminal line
        print(cur, end="\r")
        if cur == line:
            break
        time.sleep(0.09)
        # Step each character one code point closer to its target character
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
    print()

message = " ".join(sys.argv[1:])
display(message.upper())

This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation.

What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient.

I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.
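
To make those three terms concrete, here is a minimal sketch (the file names are made up for this example) using the buffering parameter of Python's built-in open():

# A quick sketch of the three buffering modes, selected via open()'s
# buffering parameter; the file names are made up for this example.

raw = open("demo.bin", "wb", buffering=0)             # unbuffered: binary mode only, every write goes straight out
line = open("demo-line.txt", "w", buffering=1)        # line-buffered: flushed whenever a newline is written
block = open("demo-block.txt", "w", buffering=65536)  # block-buffered: flushed when the 64 KiB buffer fills

for f in (raw, line, block):
    f.close()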

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file object, like sys.stdout.flush(), to make sure pending buffered output gets sent.

Python's print() function also supports flush=True as an optional argument:

    print(cur, end="\r", flush=True)

If one wants to change the default buffering for a file descriptor, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:

sys.stdout.reconfigure(line_buffering=True)

Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):

import sys
import io
# Python 3: reopen stdout's file descriptor in binary, unbuffered,
# then wrap it in a TextIOWrapper with write-through enabled.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
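
Outside the program, a similar effect can be had by running the interpreter as python3 -u, or by setting PYTHONUNBUFFERED=1 in the environment; both force the stdout and stderr streams to be unbuffered. Inside the program, one middle-ground sketch (my own variation, not from the toy program above) is to enable line buffering only when stdout is not a terminal, so that piped output still streams:

# A sketch: terminals are usually line-buffered already, so only
# reconfigure stdout when it points at a pipe or a file.
import sys

if not sys.stdout.isatty():
    # reconfigure() with line_buffering is available since Python 3.7
    sys.stdout.reconfigure(line_buffering=True)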

If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.

21 April, 2021 06:00PM

Sven Hoexter

bullseye: doveadm as unprivileged user with dovecot ssl config

The dovecot version which will be released with bullseye seems to require some subtle config adjustment if you

  • use ssl (ok that should be almost everyone)
  • and you would like to execute doveadm as a user who cannot read the ssl cert and key (quite likely).

I guess one of the common cases is executing doveadm pw, e.g. if you use postfixadmin. For myself that manifested in the nginx error log, which I use in combination with php-fpm, as:

2021/04/19 20:22:59 [error] 307467#307467: *13 FastCGI sent in stderr: "PHP message:
Failed to read password from /usr/bin/doveadm pw ... stderr: doveconf: Fatal: 
Error in configuration file /etc/dovecot/conf.d/10-ssl.conf line 12: ssl_cert:
Can't open file /etc/dovecot/private/dovecot.pem: Permission denied

You can easily see the same error message if you just execute something like doveadm pw -p test123. The workaround is to move your ssl configuration to a new file which is only readable by root, and to create a dummy 10-ssl.conf which disables ssl and has a !include_try on the real one. It's maybe best explained by showing the modification:

cd /etc/dovecot/conf.d
# keep the real ssl config in a file only root can read
cp 10-ssl.conf 10-ssl_server
chmod 600 10-ssl_server
# replace the world-readable config with a dummy that disables ssl,
# then pull in the real config wherever it is readable (i.e. for root)
echo 'ssl = no' > 10-ssl.conf
echo '!include_try 10-ssl_server' >> 10-ssl.conf

Afterwards doveadm pw works again for unprivileged users, while dovecot itself, which starts as root, still picks up the real ssl configuration through the !include_try. Discussed upstream here.

21 April, 2021 09:45AM

April 20, 2021

Dirk Eddelbuettel

Rblpapi 0.3.11: Several Updates

A new version 0.3.11 of Rblpapi is now arriving at CRAN. It comes two years after the release of version 0.3.10 and brings a few updates and extensions.

Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the eleventh release since the package first appeared on CRAN in 2016. Changes are detailed below. Special thanks to James, Maxime and Michael for sending us pull requests.

Changes in Rblpapi version 0.3.11 (2021-04-20)

  • Support blpAutoAuthenticate and B-PIPE access, refactor and generalise authentication (James Bell in #285)

  • Deprecate excludeterm (John in #306)

  • Correct example in README.md (Maxime Legrand in #314)

  • Correct bds man page (and code) (Michael Kerber, and John, in #320)

  • Add GitHub Actions continuous integration (Dirk in #323)

  • Remove bashisms detected by R CMD check (Dirk #324)

  • Switch vignette to minidown (Dirk in #331)

  • Switch unit tests framework to tinytest (Dirk in #332)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

20 April, 2021 11:55PM

April 19, 2021

Ritesh Raj Sarraf

Catching Up Your Sources

I’ve mostly had the preference of controlling my data rather than depending on someone else. That’s one reason why I still believe email to be my most reliable medium for data storage, one that is not plagued/locked in by a single entity. If I had the resources, I’d prefer all digital data to be broken down to its simplest form for storage, like the email format, and to empower the user with it, i.e. their data.

Yes, there are free services that are indirectly forced upon common users, and many of us get attracted to them. Many of us do not think that the information shared in return for the free service is of much importance. Which may be fair, depending on the individual, given that they get certain services without paying any direct dime.

New age communication

So first, we had email and usenet. As I mentioned above, email was designed with fine intentions. Intentions that make it stand even today, independently.

But not everything back then was great either. Instant messaging, for example, was very closed and centralized then too: ICQ, MSN, Yahoo Messenger; all were centralized. I wonder if people still have access to their ICQ logs.

Not much has changed in the current day either. We now have domination by Facebook Messenger, Google (whatever new marketing term they introduce), WhatsApp, Telegram, Signal etc. To my knowledge, they are all centralized.

Over all this time, I’m yet to see a product come up with good (business) intentions that really empower the end user. In this information age, the most invaluable data is user activity; that’s the one piece of data everyone is after. And mind you, if you declined to share that bit of data in exchange for the free services, none of those free services, Facebook, Google, Instagram, WhatsApp, Truecaller, Twitter, would come to you at all. Try it out.

So the reality is that while you may not be valuing the data you offer in exchange correctly, there’s a lot that is reaped from it. But still, I think each user has (and should have) the freedom to opt in with these tech giants and give them their personal bit in return for free services. That is a decent barter deal. And it is a choice that one is free to make.

Retaining my data

I’m fond of keeping an archive folder in my mailbox; a folder that holds significant events, usually documented in the form of an email. Over the years, I chose the email format because I felt it was more reliable in the long term than any other format.

The next best would be plain text.

In my lifetime, I have learnt a lot from the internet, so it is natural that my preference has been with it. Mailing lists, IRC, HOWTOs, guides, blog posts; all have helped. And over the years, I’ve come across hundreds of such pieces of content that I’d like to preserve.

Now there are multiple ways of preserving data; with big tech giants, for example. In most usual cases, your data should be fine with a tech giant for your lifetime. In some odd scenarios, you may be unlucky if you relied on a service provider that went bankrupt. But seriously, I think users should be fine if they host their data with Microsoft, Google etc., as long as they abide by their policies.

There’s also the catch of alignment. As the user, you should ensure you align (and transition) with the product offerings of your service provider. Otherwise, what may look constant and always reliable will vanish in the blink of an eye. I guess Google Plus would be a good example. There was some Google feed service too. Maybe Google Photos in the coming decade, just like Google Picasa in the previous (or current) one.

History what is

On the topic of retaining information, let’s take a small drift. I still admire our ancestors. I don’t know what went on in their minds when they were documenting events in the form of scriptures, carvings, temples, churches, mosques etc.; but one thing’s for sure: they were able to leave behind a fine means of communication. They are all gone, but a large number of those events are evident through the creations they left. Some of those events have been strong enough that later rulers/invaders have had a tough time trying to wipe them out of history. Remember, history is usually not the truth, but the statement to be believed as told by the teller. And the teller is usually the survivor, or the winner, as you may call them.

But still, the information retention techniques were better.

I haven’t visited it, but I admire whosoever built the Kailasa Temple at Ellora, without which we’d be made to believe who knows what by all the invaders and rulers of the region. The majestic standing of the temple is a fine example of the history and the events that occurred in the past.

Ellora Temple - the majestic carving, believed to be carved out of a single stone

Dominance has the power to rewrite history; unfortunately that’s true, and it has done its part. It is just that, in a mere human’s lifetime, it is not possible to witness the transition from current to history and say: I was there then, I’m here now, and this is not the reality.

And if not dominance, there’s always the other bit: hearsay. With it, you can always put anything up for dispute, because there’s no way one can go back in time and produce firm evidence.

There’s also a part about religion. Religion can be highly sentimental, and religion can be a solid way to get an agenda going. For example, in India, a country which today is constitutionally secular, there have been multiple attempts to push the belief that the thing called the Ramayana never existed; that the Rama Setu, nicely reworded as Adam’s Bridge by whosoever, is a mere result of science. Now Rama, or Hanumana, or Ravana, or Valmiki aren’t going to come over and prove whether that is true or false. So such subjects serve as a solid base for getting an agenda going. And probably we’ve even succeeded in proving, and believing, that there was never an event like the Ramayana or the Mahabharata; nor was there ever any empire other than the Mughal or the British Empire.

But yes, whosoever made the Ellora Temple, and the many more such creations, did a fine job of leaving a mark for the future, so that it may know what history could possibly also be.

Enough of the drift

So, in my opinion, having events documented is important. It’d be nice to have skills documented too, so that they can be passed over generations, but that’s a debatable topic. Events, though, I believe should be documented, and documented in the best possible ways so that their existence is not diminished.

Documentation in the form of carvings on a rock is far better than links and posts shared on Facebook, Twitter, Reddit etc. For one, these are all corporate entities with vested interests that can make excuses in the name of compliance and conformance.

So, for the helpless state and generation I am in, I felt email was the best possible independent form of data retention in today’s age. If I really had the resources, I’d not rely on the digital age at all. This age has no guarantee of retaining and recording information in any reliable manner; instead, it is mostly junk, manipulable and changeable, conditionally.

Email and RSS

So for my communication, I prefer email over any other means. That doesn’t mean I don’t use the current trends; I do. But this blog is mostly about penning my desires, and the desire is to have communication in the email format.

Such is the case that for information I find useful on the internet, I crave to have it formatted as email for archival.

RSS feeds are my most common mode of keeping track of information I care about. Not everything I care for is available as an RSS feed, but hey, such is life. And adaptability is okay.

But my preference is still RSS.

So I consume RSS feeds through a fine piece of software called feed2imap, a tool that fits my bill fairly well.

feed2imap is:

  • An RSS feed news aggregator
  • Pulls and extracts news feeds in the form of an email
  • Can deliver the converted email over IMAP (or into a local maildir)
  • Can convert image content into email MIME attachments

In a gist, it makes online content available to me offline, in the email format I admire most.
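
For illustration, a minimal configuration sketch along the lines of feed2imap's ~/.feed2imaprc YAML format (the feed, credentials and folder below are made up; check feed2imap's documentation for the exact syntax):

# ~/.feed2imaprc - a made-up sketch, not my actual configuration
feeds:
  - name: planet-debian
    url: https://planet.debian.org/rss20.xml
    # '%40' encodes the '@' of the IMAP user name inside the target URL
    target: imap://user%40example.org:secret@imap.example.org/INBOX.Feeds.Planet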

In my mailbox, these days, my preferred email client is Evolution. It does a good job of dealing with such emails (RSS feed items). An example of accessing an RSS feed item through it is below.

RSS News Item through Evolution

The good part is that my actual data is always independent of such MUAs. Tomorrow, as technology, trends and economics evolve, something new will come as a replacement, but my data will still be mine.

Trends have shown that data mobility is now a commodity expectation. As such, I wanted something to fill that gap for me, so that I could easily access my information, kept in my preferred format, through today’s trendy options.

I tried multiple options on Android, my current platform of choice for mobiles. Finally I came across Aqua Mail, which fits most of my requirements.

Aqua Mail can:

  • Connect to my laptop over IMAP
  • Sync the requested folders
  • Sync the requested data for offline accessibility
  • Present the synchronized data in a flexible and customizable manner, to match my taste
  • Offer a very extensible user interface, allowing me to customize it to my taste

Pictures can do a better job than my English words.

Aqua Mail Message List Interface

Aqua Mail Message View Normal

Aqua Mail Message View Zoomed - Fit to Screen

All of this is done with no dependence on network connectivity after the sync, and all information is stored in the simplest possible format.

19 April, 2021 07:50AM by Ritesh Raj Sarraf (rrs@researchut.com)