June 14, 2021

hackergotchi for Jonathan Dowland

Jonathan Dowland

Opinionated IkiWiki v1

It's been more than a year since I wrote about Opinionated IkiWiki, a pre-configured, containerized deployment of Ikiwiki with opinions. My intention was to make something that is easy to get up and running if you are more experienced with containers than IkiWiki.

I haven't yet switched to Opinionated IkiWiki for this site, but that was my goal, and I think it's mature enough now that I can migrate over at some point, so it seems a good time to call it Version 1.0. I have been using it for my own private PIM systems for a while now.

You can pull built images from quay.io here: https://quay.io/repository/jdowland/opinionated-ikiwiki
The source lives here: https://github.com/jmtd/opinionated-ikiwiki
A description of some of the changes made to the IkiWiki version lives here: https://github.com/jmtd/ikiwiki/blob/opinionated-doc/README.md

14 June, 2021 08:58PM

Enrico Zini

Pipelining

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Running actions on a server is nice, but a network round trip for each action is not very efficient. If I need to run a linear sequence of actions, I can stream them all to the server, and then read replies streamed from the server as they get executed.

This technique is called pipelining and one can see it used, for example, in Redis or Mitogen.
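
To make the difference concrete, here is a minimal sketch, assuming a hypothetical connection object with send/recv/request methods (nothing Transilience-specific):

def run_sequential(conn, actions):
    # One network round trip per action: send it, wait for the reply, repeat.
    return [conn.request(action) for action in actions]

def run_pipelined(conn, actions):
    # Stream all the actions to the server first...
    for action in actions:
        conn.send(action)
    # ...then read the replies back as the server works through them.
    return [conn.recv() for _ in actions]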

Roles

Ansible has the concept of "Roles" as a series of related tasks: I'll play with that. Here's an example role to install and set up fail2ban:

class Role(role.Role):
    def main(self):
        self.add(builtin.apt(
            name=["fail2ban"],
            state="present",
        ))

        self.add(builtin.copy(
            content=inline("""
                [postfix]
                enabled = true
                [dovecot]
                enabled = true
            """),
            dest="/etc/fail2ban/jail.local",
            owner="root",
            group="root",
            mode=0o644,
        ), name="configure fail2ban")

I prototyped roles as classes, with methods that push actions down the pipeline. If an action fails, all further actions for the same role won't be executed, and will be marked as skipped.
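
A rough sketch of that rule, with made-up names rather than the actual runner code, assuming each action knows which role it belongs to:

failed_roles = set()

def execute(action):
    # Actions belonging to a role that has already failed are skipped, not run.
    if action.role in failed_roles:
        action.result.state = "skipped"
        return
    try:
        action.run()
    except Exception:
        action.result.state = "failed"
        failed_roles.add(action.role)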

Since skipping is applied per-role, it means that I can blissfully stream actions for multiple roles to the server down the same pipe, and errors in one role will stop executing that role and not others. Potentially I can get multiple roles going with a single network round-trip:

#!/usr/bin/python3

import sys
from transilience.system import Mitogen
from transilience.runner import Runner


@Runner.cli
def main():
    system = Mitogen("my server", "ssh", hostname="server.example.org", username="root")

    runner = Runner(system)

    # Send roles to the server
    runner.add_role("general")
    runner.add_role("fail2ban")
    runner.add_role("prosody")

    # Run until all roles are done
    runner.main()

if __name__ == "__main__":
    sys.exit(main())

That looks like a playbook, using Python as glue rather than YAML.

Decision making in roles

Besides filing a series of actions, a role may need to take decisions based on the results of previous actions, or on facts discovered from the server. In that case, we need to wait until the results we need come back from the server, and then decide if we're done or if we want to send more actions down the pipe.

Here's an example role that installs and configures Prosody:

from transilience import actions, role
from transilience.actions import builtin
from .handlers import RestartProsody


class Role(role.Role):
    """
    Set up prosody XMPP server
    """
    def main(self):
        self.add(actions.facts.Platform(), then=self.have_facts)

        self.add(builtin.apt(
            name=["certbot", "python-certbot-apache"],
            state="present",
        ), name="install support packages")

        self.add(builtin.apt(
            name=["prosody", "prosody-modules", "lua-sec", "lua-event", "lua-dbi-sqlite3"],
            state="present",
        ), name="install prosody packages")

    def have_facts(self, facts):
        facts = facts.facts  # Malkovich Malkovich Malkovich!

        domain = facts["domain"]
        ctx = {
            "ansible_domain": domain
        }

        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"chat.{domain}", "-n", "--apache"],
            creates=f"/etc/letsencrypt/live/chat.{domain}/fullchain.pem"
        ), name="obtain chat certificate")

        with self.notify(RestartProsody):
            self.add(builtin.copy(
                content=self.template_engine.render_file("roles/prosody/templates/prosody.cfg.lua", ctx),
                dest="/etc/prosody/prosody.cfg.lua",
            ), name="write prosody configuration")

            self.add(builtin.copy(
                src="roles/prosody/templates/firewall-ruleset.pfw",
                dest="/etc/prosody/firewall-ruleset.pfw",
            ), name="write prosody firewall")

    # ...

This files some general actions down the pipe, with a hook that says: when the results of this action come back, run self.have_facts().

At that point, the role can use the results to build certbot command lines, render prosody's configuration from Jinja2 templates, and file further actions down the pipe.

Note that this way, while the server is potentially still busy installing prosody, we're already streaming prosody's configuration to it.

If anything goes wrong with the installation of prosody's package, the role will be marked as failed and all further actions of the same role, even those filed by have_facts(), will be skipped.
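
The then= hook itself could be dispatched with something as simple as this (a hypothetical sketch with made-up names and a generic pipeline object, not the actual Transilience code):

callbacks = {}   # action uuid -> callable registered via then=

def add(action, then=None):
    if then is not None:
        callbacks[action.uuid] = then
    pipeline.send(action)

def on_result(action):
    # When a result comes back, give the registered callback a chance to
    # file more actions, unless the action failed or was skipped.
    cb = callbacks.pop(action.uuid, None)
    if cb is not None and action.result.state not in ("failed", "skipped"):
        cb(action)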

Notify and handlers

In the previous example self.notify() also appears: that's my attempt to model the equivalent of Ansible's handlers. If any of the actions inside the with block produce changes, then the RestartProsody role will be executed, potentially filing more actions at the end of the playbook.

The runner will take care of collecting all the triggered role classes in a set, which discards duplicates, and then running the main() method of all resulting roles, which will cause more actions to be filed down the pipe.
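
Roughly, the mechanism could be pictured like this (an illustrative sketch with made-up helper names, not the real runner):

triggered = set()

def on_action_result(action):
    # Actions that changed something add their notified role classes to a set,
    # so each handler role ends up in the set at most once.
    if action.result.state == ResultState.CHANGED:
        triggered.update(action.notified_roles)

def run_handlers(runner):
    # Once the main run is done, each distinct handler role runs exactly once,
    # filing its own actions down the pipe.
    for role_cls in triggered:
        runner.add_role_instance(role_cls())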

Action conditions

Some actions are only meaningful as consequences of other actions. Let's take, for example, enabling buster-backports as an extra apt source:

        a = self.add(builtin.copy(
            owner="root",
            group="root",
            mode=0o644,
            dest="/etc/apt/sources.list.d/debian-buster-backports.list",
            content="deb [arch=amd64] https://mirrors.gandi.net/debian/ buster-backports main contrib",
        ), name="enable backports")

        self.add(builtin.apt(
            update_cache=True
        ), name="update after enabling backports",
           # Run only if the previous copy changed anything
           when={a: ResultState.CHANGED},
        )

Here we want to update Apt's cache, which is a slow operation, only after we actually write /etc/apt/sources.list.d/debian-buster-backports.list. If the file was already there from a previous run, we can skip downloading the new package lists.

The when= argument adds an annotation to the action that is sent down the pipeline, saying that it should only be run if the state of a previous action matches the given one.

In this case, when it is the turn of "update after enabling backports" on the remote, it gets skipped unless the state of the previous "enable backports" action is CHANGED.
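
A minimal sketch of how such a condition could be checked on the remote end, assuming the annotation maps the uuid of a previous action to the required result state (names are illustrative, not the actual implementation):

def should_run(action, results):
    # 'results' records the state of actions already executed in this pipeline.
    for prev_uuid, required_state in action.when.items():
        if results.get(prev_uuid) != required_state:
            return False
    return True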

Effects of pipelining

I ported enough of Ansible's modules to be able to run the provisioning scripts of my VPS entirely via Transilience.

This is the playbook run as plain Ansible:

$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    2m10.072s
user    0m33.149s
sys 0m10.379s

This is the same playbook run with Ansible sped up via the Mitogen backend, which makes Ansible more bearable:

$ export ANSIBLE_STRATEGY=mitogen_linear
$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

real    0m24.428s
user    0m8.479s
sys 0m1.894s

This is the same playbook ported to Transilience:

$ time ./provision
[...]
real    0m2.585s
user    0m0.659s
sys 0m0.034s

Doing nothing went from 2 minutes down to 3 seconds!

That's the kind of running time that finally makes me comfortable with maintaining my VPS by editing the playbook only, and never logging in to mess with the system configuration by hand!

Next steps

I'm quite happy with what I have: I can now maintain my VPS with a simple script with quick iterative cycles.

I might use it to develop new playbooks, and port them to ansible only when they're tested and need to be shared with infrastructure that must rely on something more solid and battle-tested than a prototype provisioning system.

I might also keep working on it as I have more interesting ideas that I'd like to try. I feel like Ansible reached some architectural limits that are hard to overcome without a major redesign, and that are in many ways hardcoded in its playbook configuration. It's nice to be able to try out new designs without that baggage.

I'd love it if even just the library of Transilience actions could grow, and gain widespread use. Ansible modules standardized a set of management operations that, I think, became the way people think about system management, and they should really be broadly available outside of Ansible.

If you are interested in playing with Transilience, such as:

  • polishing the packaging, adding a setup.py, publishing to PyPI, packaging in Debian
  • adding example playbooks
  • porting more Ansible modules to Transilience actions
  • improving the command line interface
  • testing other ways to feed actions to pipelines
  • testing other pipeline primitives
  • adding backends besides Local and Mitogen
  • prototyping a parser to turn a subset of YAML playbook syntax into Transilience actions
  • adopting it into your multinational organization's infrastructure to speed up provisioning times by orders of magnitude, at the cost of the development time that it takes to turn this prototype into something solid and road tested
  • creating a startup and getting millions in venture capital to disrupt the provisioning ecosystem

do get in touch or send a pull request! :)

14 June, 2021 03:40PM

Use ansible actions in a script

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

I like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

This doesn't look easy to do with Ansible code as it is. Also, the code quality of various Ansible modules isn't at the level I'd want in a standard library of cross-platform provisioning functions.

Modeling Actions

I want to keep the declarative, idempotent aspect of describing actions on a system. A good place to start could be a hierarchy of dataclasses that hold the same parameters as ansible modules, plus a run() method that performs the action:

@dataclass
class Action:
    """
    Base class for all action implementations.

    An Action is the equivalent of an ansible module: a declarative
    representation of an idempotent operation on a system.

    An Action can be run immediately, or serialized, sent to a remote system,
    run, and sent back with its results.
    """
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    result: Result = field(default_factory=Result)

    def summary(self):
        """
        Return a short text description of this action
        """
        return self.__class__.__name__

    def run(self, system: transilience.system.System):
        """
        Perform the action
        """
        self.result.state = ResultState.NOOP

I like that Ansible tasks have names, and I hate having to give names to trivial tasks like "Create directory /foo/bar", so I added a summary() method so that trivial tasks like that can take care of naming themselves.

Dataclasses allow introspecting fields and annotating them with extra metadata, and together with docstrings, I can make actions reasonably self-documenting.
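
As an illustration of both points, here is what a trivial, self-naming action could look like, reusing the Action base class and ResultState from above (a made-up example, not one of the ported modules):

import os
from dataclasses import dataclass, field

@dataclass
class MakeDirectory(Action):
    path: str = field(default="", metadata={"doc": "directory to create"})

    def summary(self):
        # Self-naming: no need to give this task an explicit name.
        return f"Create directory {self.path}"

    def run(self, system):
        if os.path.isdir(self.path):
            self.result.state = ResultState.NOOP
        else:
            os.makedirs(self.path)
            self.result.state = ResultState.CHANGED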

I ported some of Ansible's modules over: see the complete list in the git repository.

Running Actions in a script

With a bit of glue code I can now run Ansible-style functions from a plain Python script:

#!/usr/bin/python3

from transilience.runner import Script

script = Script()

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")

Running Actions remotely

Dataclasses have an asdict function that makes them trivially serializable. If their members stick to data types that can be serialized with Mitogen and the run implementation doesn't use non-pure, non-stdlib Python modules, then I can trivially run actions on all sorts of remote systems using Mitogen:

#!/usr/bin/python3

from transilience.runner import Script
from transilience.system import Mitogen

script = Script(system=Mitogen("my server", "ssh", hostname="machine.example.org", username="user"))

for i in range(10):
    script.builtin.file(state="touch", path=f"/tmp/test{i}")
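
The serialization step mentioned above is nothing exotic; roughly, and in simplified form (a sketch, not Transilience's actual wire format):

from dataclasses import asdict
from transilience.actions import builtin

action = builtin.file(state="touch", path="/tmp/test0")
payload = asdict(action)   # plain dicts and strings, easy for Mitogen to ship
# On the remote side, the same dataclass is rebuilt from this payload and run.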

How fast would that be, compared to Ansible?

$ time ansible-playbook test.yaml
[...]
real    0m15.232s
user    0m4.033s
sys 0m1.336s

$ time ./test_script

real    0m4.934s
user    0m0.547s
sys 0m0.049s

With a network round-trip for each single operation I'm already 3x faster than Ansible, and it can run on nspawn containers, too!

I always wanted to have a library of ansible modules usable in normal scripts, and I've always been angry with Ansible for not bundling their backend code in a generic library. Well, now there's the beginning of one!

Sweet! Next step, pipelining.

14 June, 2021 02:35PM

My gripes with Ansible

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience.

Musing about Ansible

I like infrastructure as code.

I like to be able to represent an entire system as text files in a git repository, and to be able to use that to recreate the system, from my Virtual Private Server, to my print server and my stereo, to build machines, to other kinds of systems I might end up setting up.

I like that the provisioning work I do on a machine can be self-documenting and replicable at will.

The good

For that I quite like Ansible, in principle: simple (in theory) YAML files describe a system in (reasonably) high-level steps, and it can be run on (almost) any machine that happens to have a simple Python interpreter installed.

I also like many of the modules provided with Ansible: they are convenient, platform-independent implementations of common provisioning steps. They'd be fantastic to have in a library that I could use in normal programs.

The bad

Unfortunately, Ansible is slow. Running the playbook on my VPS takes about 3 whole minutes even if I'm just changing a line in a configuration file.

This means that most of the time, instead of changing that line in the playbook and running it, to then figure out after 3 minutes that it was the wrong line, or that I made a spelling mistake in the playbook, I end up logging into the server and editing in place.

That defeats the whole purpose, but that level of latency between iterations is just unacceptable to me.

The ugly

I also think that Ansible has outgrown its original design, and the supposedly declarative, idempotent YAML has become a full declarative scripting language in disguise, whose syntax is extremely awkward and verbose.

If I'm writing declarative descriptions, YAML is great. If I'm writing loops and conditionals, I want to write code, not templated YAML.

I also keep struggling trying to use Ansible to provision chroots and nspawn containers.

A personal experiment: Transilience

There's another thing I like in Ansible: it's written in Python, which is a language I'm comfortable with. Compared to other platforms, it's one that I'm more likely to be able to control beyond being a simple user.

What if I can port Ansible modules into a library of high-level provisioning functions, that I can just run via normal Python scripts?

What if I can find a way to execute those scripts remotely and not just locally?

I've started writing some prototype code, and the biggest problem is, of course, finding a name.

Ansible comes from Ursula K. Le Guin's Hainish Cycle novels, where it is a device that allows its users to communicate near-instantaneously over interstellar distances. Traveling, however, is still constrained by the speed of light.

Later in the same universe, the novels A Fisherman of the Inland Sea and The Shobies' Story talk about experiments with instantaneous interstellar travel, as a science Ursula Le Guin called transilience:

Transilience: n. A leap across or from one thing to another [1913 Webster]

Transilience. I like everything about this name.

Now that the hardest problem is solved, the rest is just a simple matter of implementation details.

14 June, 2021 02:30PM

François Marier

Self-hosting an Ikiwiki blog

8.5 years ago, I moved my blog to Ikiwiki and Branchable. It's now time for me to take the next step and host my blog on my own server. This is how I migrated from Branchable to my own Apache server.

Installing Ikiwiki dependencies

Here are all of the extra Debian packages I had to install on my server:

apt install ikiwiki ikiwiki-hosting-common gcc libauthen-passphrase-perl libcgi-formbuilder-perl libcrypt-ssleay-perl libjson-xs-perl librpc-xml-perl python-docutils libxml-feed-perl libsearch-xapian-perl libmailtools-perl highlight-common xapian-omega
apt install --no-install-recommends ikiwiki-hosting-web libgravatar-url-perl libmail-sendmail-perl libcgi-session-perl
apt purge libnet-openid-consumer-perl

Then I enabled the CGI module in Apache:

a2enmod cgi

and un-commented the following in /etc/apache2/mods-available/mime.conf:

AddHandler cgi-script .cgi

Creating a separate user account

Since Ikiwiki needs to regenerate my blog whenever a new article is pushed to the git repo or a comment is accepted, I created a restricted user account for it:

adduser blog
adduser blog sshuser
chsh -s /usr/bin/git-shell blog

git setup

Thanks to Branchable storing blogs in git repositories, I was able to import my blog using a simple git clone in /home/blog (the srcdir):

git clone --bare git://feedingthecloud.branchable.com/ source.git

Note that the name of the directory (source.git) is important for the ikiwikihosting plugin to work.

Then I pulled the .setup file out of the setup branch in that repo and put it in /home/blog/.ikiwiki/FeedingTheCloud.setup. After that, I deleted the setup branch and the origin remote from that clone:

git branch -d setup
git remote rm origin

Following the recommended git configuration, I created a working directory (the repository) for the blog user to modify the blog as needed:

cd /home/blog/
git clone /home/blog/source.git FeedingTheCloud

I added my own ssh public key to /home/blog/.ssh/authorized_keys so that I could push to the srcdir from my laptop.

Finally, I generated a new ssh key without a passphrase:

ssh-keygen -t ed25519

and added it as deploy key to the GitHub repo which acts as a read-only mirror of my blog.

Ikiwiki config

While I started with the Branchable setup file, I changed the following things in it:

adminemail: webmaster@fmarier.org
srcdir: /home/blog/FeedingTheCloud
destdir: /var/www/blog
url: https://feeding.cloud.geek.nz
cgiurl: https://feeding.cloud.geek.nz/blog.cgi
cgi_wrapper: /var/www/blog/blog.cgi
cgi_wrappermode: 675
add_plugins:
- goodstuff
- lockedit
- comments
- blogspam
- sidebar
- recentchangesdiff
- attachment
- remove
- rename
- favicon
- format
- highlight
- search
- theme
- moderatedcomments
- flattr
- calendar
- headinganchors
- notifyemail
- anonok
- autoindex
- date
- recentchanges
- relativedate
- htmlbalance
- pagestats
- sortnaturally
- ikiwikihosting
- gitpush
- emailauth
disable_plugins:
- brokenlinks
- fortune
- more
- openid
- orphans
- passwordauth
- progress
- repolist
- toggle
- txt
sslcookie: 1
cookiejar:
  file: /home/blog/.ikiwiki/cookies
useragent: ikiwiki
git_wrapper: /home/blog/source.git/hooks/post-update
urlalias:
- http://feeds.cloud.geek.nz/
- http://www.feeding.cloud.geek.nz/
owner: francois@fmarier.org
hostname: feeding.cloud.geek.nz
emailauth_sender: login@fmarier.org
allowed_attachments: admin()

Then I created the destdir:

mkdir /var/www/blog
chown blog:blog /var/www/blog

and generated the initial copy of the blog as the blog user:

ikiwiki --setup .ikiwiki/FeedingTheCloud.setup --wrappers --rebuild

One thing that failed to generate properly was the tag cloud (from the pagestats plugin). I have not been able to figure out why it fails to generate any output when run this way, but if I push to the repo and let the git hook handle the rebuilding of the wiki, the tag cloud is generated correctly. Consequently, fixing this is not high on my list of priorities, but if you happen to know what the problem is, please reach out.

Apache config

Here's the Apache config I put in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Header set Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"

    Include /etc/fmarier-org/blog-common
</VirtualHost>

<VirtualHost *:443>
    ServerName www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/feeding.cloud.geek.nz/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/feeding.cloud.geek.nz/privkey.pem

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

<VirtualHost *:80>
    ServerName feeding.cloud.geek.nz
    ServerAlias www.feeding.cloud.geek.nz
    ServerAlias feeds.cloud.geek.nz

    Redirect permanent / https://feeding.cloud.geek.nz/
</VirtualHost>

and the common config I put in /etc/fmarier-org/blog-common:

ServerAdmin webmaster@fmarier.org

DocumentRoot /var/www/blog

LogLevel core:info
CustomLog ${APACHE_LOG_DIR}/blog-access.log combined
ErrorLog ${APACHE_LOG_DIR}/blog-error.log

AddType application/rss+xml .rss

<Location /blog.cgi>
        Options +ExecCGI
</Location>

before enabling all of this using:

a2ensite blog
apache2ctl configtest
systemctl restart apache2.service

The feeds.cloud.geek.nz domain used to point to Feedburner, so I need to maintain it in order to avoid breaking RSS feeds for folks who added my blog to their reader a long time ago.

Server-side improvements

Since I'm now in control of the server configuration, I was able to make several improvements to how my blog is served.

First of all, I enabled the HTTP/2 and Brotli modules:

a2enmod http2
a2enmod brotli

and enabled Brotli compression by putting the following in /etc/apache2/conf-available/francois.conf:

<IfModule mod_brotli.c>
    AddOutputFilterByType BROTLI_COMPRESS text/html text/plain text/xml text/css text/javascript application/javascript
    BrotliCompressionQuality 4
</IfModule>

Next, I made my blog available as a Tor onion service by putting the following in /etc/apache2/sites-available/blog.conf:

<VirtualHost *:443>
    ServerName feeding.cloud.geek.nz
    ServerAlias xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion

    Header set Onion-Location "http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion%{REQUEST_URI}s"
    Header set alt-svc 'h2="xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion:443"; ma=315360000; persist=1'
    ... 

<VirtualHost *:80>
    ServerName xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion
    Include /etc/fmarier-org/blog-common
</VirtualHost>

Then I followed the Mozilla Observatory recommendations and enabled the following security headers:

Header set Content-Security-Policy: "default-src 'none'; report-uri https://fmarier.report-uri.com/r/d/csp/enforce ; style-src 'self' 'unsafe-inline' ; img-src 'self' https://seccdn.libravatar.org/ ; script-src https://feeding.cloud.geek.nz/ikiwiki/ https://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ http://xfdug5vmfi6oh42fp6ahhrqdjcf7ysqat6fkp5dhvde4d7vlkqixrsad.onion/ikiwiki/ 'unsafe-inline' 'sha256-pA8FbKo4pYLWPDH2YMPqcPMBzbjH/RYj0HlNAHYoYT0=' 'sha256-Kn5E/7OLXYSq+EKMhEBGJMyU6bREA9E8Av9FjqbpGKk=' 'sha256-/BTNlczeBxXOoPvhwvE1ftmxwg9z+WIBJtpk3qe7Pqo=' ; base-uri 'self'; form-action 'self' ; frame-ancestors 'self'"
Header set X-Frame-Options: "SAMEORIGIN"
Header set Referrer-Policy: "same-origin"
Header set X-Content-Type-Options: "nosniff"

Note that the Mozilla Observatory is mistakenly identifying HTTP onion services as insecure, so you can ignore that failure.

I also used the Mozilla TLS config generator to improve the TLS config for my server.

Then I added security.txt and gpc.json to the root of my git repo and then added the following aliases to put these files in the right place:

Alias /.well-known/gpc.json /var/www/blog/gpc.json
Alias /.well-known/security.txt /var/www/blog/security.txt

I also followed these instructions to create a sitemap for my blog with the following alias:

Alias /sitemap.xml /var/www/blog/sitemap/index.rss

Finally, I simplified a few error pages to save bandwidth:

ErrorDocument 301 " "
ErrorDocument 302 " "
ErrorDocument 404 "Not Found"

Monitoring 404s

Another advantage of running my own web server is that I can monitor the 404s easily using logcheck by putting the following in /etc/logcheck/logcheck.logfiles:

/var/log/apache2/blog-error.log 

Based on that, I added a few redirects to point bots and users to the location of my RSS feed:

Redirect permanent /atom /index.atom
Redirect permanent /comments.rss /comments/index.rss
Redirect permanent /comments.atom /comments/index.atom
Redirect permanent /FeedingTheCloud /index.rss
Redirect permanent /feed /index.rss
Redirect permanent /feed/ /index.rss
Redirect permanent /feeds/posts/default /index.rss
Redirect permanent /rss /index.rss
Redirect permanent /rss/ /index.rss

and to tell them to stop trying to fetch obsolete resources:

Redirect gone /~ff/FeedingTheCloud
Redirect gone /gittip_button.png
Redirect gone /ikiwiki.cgi

I also used these 404s to discover a few old Feedburner URLs that I could redirect to the right place using archive.org:

Redirect permanent /feeds/1572545745827565861/comments/default /posts/watch-all-of-your-logs-using-monkeytail/comments.atom
Redirect permanent /feeds/1582328597404141220/comments/default /posts/news-feeds-rssatom-for-mythtvorg-and/comments.atom
...
Redirect permanent /feeds/8490436852808833136/comments/default /posts/recovering-lost-git-commits/comments.atom
Redirect permanent /feeds/963415010433858516/comments/default /posts/debugging-openwrt-routers-by-shipping/comments.atom

I also put the following robots.txt in the git repo in order to stop a bunch of authentication errors coming from crawlers:

User-agent: *
Disallow: /blog.cgi
Disallow: /ikiwiki.cgi

Future improvements

There are a few things I'd like to improve on my current setup.

The first one is to remove the ikiwikihosting and gitpush plugins and replace them with a small script which would simply git push to the read-only GitHub mirror. Then I could uninstall the ikiwiki-hosting-common and ikiwiki-hosting-web packages since that's all I use them for.

Next, I would like to have proper support for signed git pushes. At the moment, I have the following in /home/blog/source.git/config:

[receive]
    advertisePushOptions = true
    certNonceSeed = "(random string)"

but I'd like to also reject unsigned pushes.

While my blog now has a CSP policy which doesn't rely on unsafe-inline for scripts, it does rely on unsafe-inline for stylesheets. I tried to remove this but the actual calls to allow seemed to be located deep within jQuery and so I gave up. Patches for this would be very welcome of course.

Finally, I'd like to figure out a good way to deal with articles which don't currently have comments. At the moment, if you try to subscribe to their comment feed, it returns a 404. For example:

[Sun Jun 06 17:43:12.336350 2021] [core:info] [pid 30591:tid 140253834704640] [client 66.249.66.70:57381] AH00128: File does not exist: /var/www/blog/posts/using-iptables-with-network-manager/comments.atom

This is obviously not ideal since many feed readers will refuse to add a feed which is currently not found even though it could become real in the future. If you know of a way to fix this, please let me know.

14 June, 2021 06:18AM

June 13, 2021

Vincent Fourmond

Solution for QSoas quiz #2: averaging several Y values for the same X value

This post describes two similar solutions to Quiz #2, using the data files found there. The two solutions described here rely on split-on-values. The first solution is the one that came naturally to me, and is by far the most general and extensible, but the second one is shorter and doesn't require external script files.

Solution #1

The key to both solutions is to separate the original data into a series of datasets that only contain data at a fixed value of x (which corresponds here to a fixed pH), and then to process each dataset one by one to extract the average and standard deviation. This first step is done thus:
QSoas> load kcat-vs-ph.dat
QSoas> split-on-values pH x /flags=data
After these commands, the stack contains a series of datasets bearing the data flag, each containing a single column of data, as can be seen from the beginning of the output of the show-stack command:
QSoas> k
Normal stack:
	 F  C	Rows	Segs	Name	
#0	(*) 1	43	1	'kcat-vs-ph_subset_22.dat'
#1	(*) 1	44	1	'kcat-vs-ph_subset_21.dat'
#2	(*) 1	43	1	'kcat-vs-ph_subset_20.dat'
...
Each of these datasets has a meta-data named pH whose value is the original x value from kcat-vs-ph.dat. Now, the idea is to run a stats command on the resulting datasets, extracting the average value of x and its standard deviation, together with the value of the meta pH. The most natural and general way to do this is to use run-for-datasets, using the following script file (named process-one.cmds):
stats /meta=pH /output=true /stats=x_average,x_stddev
So the command looks like:
QSoas> run-for-datasets process-one.cmds flagged:data
This command produces an output file containing, for each flagged dataset, a line containing x_average, x_stddev, and pH. Then, it is just a matter of loading the output file and shuffling the columns into the right order to get the data in the form asked for. Overall, this looks like this:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
output result.dat /overwrite=true
run-for-datasets process-one.cmds flagged:data
l result.dat
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2
The slight improvement over what is described above is the use of the output command to write the output to a dedicated file (here result.dat) instead of out.dat, and to ensure it is overwritten, so that no data remains from previous runs.

Solution #2

The second solution is almost the same as the first one, with two improvements:
  • the stats command can work with datasets other than the current one, by supplying them to the /buffers= option, so that it is not necessary to use run-for-datasets;
  • the use of the output file can be replaced by the use of the accumulator.
This yields the following, smaller, solution:
l kcat-vs-ph.dat
split-on-values pH x /flags=data
stats /meta=pH /accumulate=* /stats=x_average,x_stddev /buffers=flagged:data
pop
apply-formula tmp=y2;y2=y;y=x;x=tmp
dataset-options /yerrors=y2


About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

13 June, 2021 09:14PM by Vincent Fourmond (noreply@blogger.com)

June 12, 2021

hackergotchi for Norbert Preining

Norbert Preining

Future of Cinnamon in Debian

OK, this is not an easy post. I have been maintaining Cinnamon in Debian for quite some time, since around the time version 4 came out. The soon (hahaha) to be released Bullseye will carry the last release of the version 4 track, but version 5 is already waiting. After Bullseye, the future of Cinnamon in Debian currently looks bleak.

Since my switch to KDE/Plasma, I haven't used Cinnamon in months. Only occasionally have I tested new releases, but I never gave them a real-world test. Having left Gnome3 for its complete lack of usability for pro-users, I escaped to Cinnamon and found a good home there for quite some time, using modern technology but keeping user interface changes conservative. For a long time I hadn't even contemplated using KDE, having been burned during the bad days of KDE3/4, when bloat-as-bloat-can-be was the best description.

What a revelation it was that KDE/Plasma was more lightweight, faster, more responsive, better integrated, more customizable, all in all simply great. Since my switch to KDE/Plasma I don't think I have missed anything from the Gnome3 or Cinnamon world for a second.

And that means I will most probably NOT be packaging Cinnamon 5, nor doing any real packaging work on Cinnamon for Debian in the future. Of course, I will try to keep maintaining the current set of packages for Bullseye, but for the next release, I think it is time that someone new steps in. Cinnamon packaging taught me a lot about how to deal with multiple related packages, which is of great use in the KDE packaging world.

If someone steps forward, I will surely be around for support and help, but as long as nobody takes up the banner, it will mean the end of Cinnamon in Debian.

Please contact me if you are interested!

12 June, 2021 01:19PM by Norbert Preining

hackergotchi for Junichi Uekawa

Junichi Uekawa

Wrote a quick hack to open chroot in emacs tramp.

Wrote a quick hack to open a chroot in emacs tramp. I wrote a mode for cros_sdk and it was relatively simple, so I figured that chroot must be easier: I could write one in about 30 minutes. I need to mount proc and home inside the chroot to make it useful, but here it is: chroot-tramp.el.

12 June, 2021 08:32AM by Junichi Uekawa

June 11, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

New: The Debian BBB Packaging Team (and: Kurento Media Server goes Debian)

Today, Fre(i)e Software GmbH has been contracted for packaging Kurento Media Server for Debian. This packaging project will be funded by GUUG e.V. (the German Unix User Group e.V.). A big thanks to the people from GUUG e.V. for making this packaging project possible.

About Kurento Media Server

Kurento is an open source software project providing a platform suitable for creating modular applications with advanced real-time communication capabilities. For knowing more about Kurento, please visit the Kurento project website: https://www.kurento.org.

Kurento is part of FIWARE. For further information on the relationship of FIWARE and Kurento check the Kurento FIWARE Catalog Entry. Kurento is also part of the NUBOMEDIA research initiative.

Kurento Media Server is a WebRTC-compatible server that processes audio and video streams, doing composable pipeline-based processing of media.

About BigBlueButton

As some of you may know, Kurento Media Server is one of the core components of the BigBlueButton software, an "Open Source Virtual Classroom Software".

The context of the KMS funding is - after several other steps - getting the complete software component stack of BigBlueButton (aka BBB) into Debian some day, so that we can provide BBB as native Debian packages. On Debian. (Currently, one needs to use an always slightly outdated version of Ubuntu.)

Due to this greater context, I just created the Debian BBB Packaging Team on salsa.debian.org.

Outlook and Appreciation

The current project (uploading Kurento Media Server to Debian) will very likely be extended to one year of package maintenance for all Kurento Media Server components in Debian. Extending this maintenance funding to a second year has also been discussed, and seems a possible option.

Probably most Debian Developer colleagues will agree with me when I say that Debian packaging is not a one-time shot that ends once the first uploads of software packages have landed and settled. Debian package maintenance is a long term responsibility and requires long term commitment. I am very glad that the people at GUUG e.V. are on the same page with me (with us) regarding this. This is much and dearly appreciated. Thank you!!!

What else?

Well, we have also talked about another BigBlueButton component that is not yet in Debian: FreeSwitch. But more of that, when time has come.

How to Join the Debian BBB Packaging Team?

Please ping me via IRC (sunweaver on OFTC IRC) or [matrix] (@sunweaver:matrix.org).

How to Support the Debian BBB Packaging Team?

If you, your organization, your company, your municipality, your university, etc. feels like supporting the effort of packaging BigBlueButton for Debian, please get in touch with: mike.gabriel@freiesoftware.gmbh

And yes, the company homepage is not online yet, but it is in the making...

light+love
Mike (aka sunweaver)

11 June, 2021 09:35PM by sunweaver

François Marier

Upgrading an ext4 filesystem for the year 2038

If you see a message like this in your logs:

ext4 filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)

it's an indication that your filesystem is not Y2k38-safe.

You can also check this manually using:

$ tune2fs -l /dev/sda1 | grep "Inode size:"
Inode size:           128

where an inode size of 128 is insufficient beyond 2038 and an inode size of 256 is what you want.

The safest way to change this is to copy the contents of your partition to another ext4 partition:

cp -a /boot /mnt/backup/

and then reformat with the correct inode size:

umount /boot
mkfs.ext4 -I 256 /dev/sda1

before copying everything back:

mount /boot
cp -a /mnt/backup/boot/* /boot/

11 June, 2021 08:43PM

Deleting non-decryptable restic snapshots

Due to what I suspect is disk corruption caused by a faulty RAM module or network interface on my GnuBee, my restic backup failed with the following error:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-854484247
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-854484247
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
error for tree 4645312b:
  decrypting blob 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c failed: ciphertext verification failed
error for tree 2c3248ce:
  decrypting blob 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6 failed: ciphertext verification failed
Fatal: repository contains errors

I started by locating the snapshots which make use of these corrupt trees:

$ restic find --tree 4645312b
repository b0b0516c opened successfully, password is correct
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 4645312b443338d57295550f2f4c135c34bda7b17865c4153c9b99d634ae641c
 ... path /usr/include/boost/spirit/home/support/auxiliary
 ... in snapshot e75876ed (2021-02-28 08:35:29)

$ restic find --tree 2c3248ce
repository b0b0516c opened successfully, password is correct
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot 41e138c8 (2021-01-31 08:35:16)
Found tree 2c3248ce5dc7a4bc77f03f7475936041b6b03e0202439154a249cd28ef4018b6
 ... path /usr/include/boost/spirit/home/support/char_encoding
 ... in snapshot e75876ed (2021-02-28 08:35:29)

and then deleted them:

$ restic forget 41e138c8 e75876ed
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  2 / 2 files deleted

$ restic prune 
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:23] 100.00%  58964 / 58964 packs
repository contains 58964 packs (1417910 blobs) with 278.913 GiB
processed 1417910 blobs: 0 duplicate blobs, 0 B duplicate
load all snapshots
find data that is still in use for 20 snapshots
[1:15] 100.00%  20 / 20 snapshots
found 1364852 of 1417910 data blobs still in use, removing 53058 blobs
will remove 0 invalid files
will delete 942 packs and rewrite 1358 packs, this frees 6.741 GiB
[10:50] 31.96%  434 / 1358 packs rewritten
hash does not match id: want 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57, got 95d90aa48ffb18e6d149731a8542acd6eb0e4c26449a4d4c8266009697fd1904
github.com/restic/restic/internal/repository.Repack
    github.com/restic/restic/internal/repository/repack.go:37
main.pruneRepository
    github.com/restic/restic/cmd/restic/cmd_prune.go:242
main.runPrune
    github.com/restic/restic/cmd/restic/cmd_prune.go:62
main.glob..func19
    github.com/restic/restic/cmd/restic/cmd_prune.go:27
github.com/spf13/cobra.(*Command).execute
    github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/cobra/command.go:960
github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/cobra/command.go:897
main.main
    github.com/restic/restic/cmd/restic/main.go:98
runtime.main
    runtime/proc.go:204
runtime.goexit
    runtime/asm_amd64.s:1374

As you can see above, the prune command failed due to a corrupt pack and so I followed the process I previously wrote about and identified the affected snapshots using:

$ restic find --pack 9ec955794534be06356655cfee6abe73cb181f88bb86b0cd769cf8699f9f9e57

before deleting them with:

$ restic forget 031ab8f1 1672a9e1 1f23fb5b 2c58ea3a 331c7231 5e0e1936 735c6744 94f74bdb b11df023 dfa17ba8 e3f78133 eefbd0b0 fe88aeb5 
repository b0b0516c opened successfully, password is correct
[0:00] 100.00%  13 / 13 files deleted

$ restic prune
repository b0b0516c opened successfully, password is correct
counting files in repo
building new index for repo
[13:37] 100.00%  60020 / 60020 packs
repository contains 60020 packs (1548315 blobs) with 283.466 GiB
processed 1548315 blobs: 129812 duplicate blobs, 4.331 GiB duplicate
load all snapshots
find data that is still in use for 8 snapshots
[0:53] 100.00%  8 / 8 snapshots
found 1219895 of 1548315 data blobs still in use, removing 328420 blobs
will remove 0 invalid files
will delete 6232 packs and rewrite 1275 packs, this frees 36.302 GiB
[23:37] 100.00%  1275 / 1275 packs rewritten
counting files in repo
[11:45] 100.00%  52822 / 52822 packs
finding old index files
saved new indexes as [a31b0fc3 9f5aa9b5 db19be6f 4fd9f1d8 941e710b 528489d9 fb46b04a 6662cd78 4b3f5aad 0f6f3e07 26ae96b2 2de7b89f 78222bea 47e1a063 5abf5c2d d4b1d1c3 f8616415 3b0ebbaa]
remove 23 old index files
[0:00] 100.00%  23 / 23 files deleted
remove 7507 old packs
[0:08] 100.00%  7507 / 7507 files deleted
done

And with 13 of my 21 snapshots deleted, the checks now pass:

$ restic check
using temporary cache in /var/tmp/restic-tmp/restic-check-cache-407999210
repository b0b0516c opened successfully, password is correct
created new cache in /var/tmp/restic-tmp/restic-check-cache-407999210
create exclusive lock for repository
load indexes
check all packs
check snapshots, trees and blobs
no errors were found

This represents a significant amount of lost backup history, but at least it's not all of it.

11 June, 2021 08:43PM

hackergotchi for Colin Watson

Colin Watson

SSH quoting

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"
/home/user

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp
pwd

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"
/home/user
/tmp

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"
/tmp

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'

Shell parsing is hard.

11 June, 2021 10:22AM by Colin Watson

hackergotchi for Mike Gabriel

Mike Gabriel

Linux on Acer Spin 3

Recently, I bought an Acer Spin 3 Convertible Notebook for the company and provided it to Robert Tari for his daily work on Ayatana Indicators (which currently is funded by the UBports Foundation via my company Fre(i)e Software GmbH).

Some days ago Robert reported back about a sleepless night he spent with that machine... He got stuck with a tricky issue regarding the installation of Manjaro GNU/Linux on that machine, which could, in the end, be resolved by a not so well documented trick.

Before anyone else spends another sleepless night on this, we thought we'd better share Robert's solution.

So, the below applies to the Acer Spin 3 series (and probably to other Spin models, perhaps even some other Acer laptops):

Acer Spin 3 Pre-Inst Cheat Codes

Before you even plug in the USB install media:

  1. Go to UEFI settings (i.e. BIOS for us elderly people) [F2]
  2. Security -> Set Supervisor Password [Enabled]
  3. Enter the password you'll use
  4. Boot -> Secure Boot -> [Disabled] (you can't disable it without a set supervisor password)
  5. Exit -> Exit Saving Changes
  6. Restart and go to UEFI settings again [F2]
  7. Main -> [Now press CTRL + S] -> VMD Controller -> [Disabled]
  8. Exit -> Exit Saving Changes
  9. Now plug in the install USB and restart

Especially the disabling of the VMD Controller is essential. Otherwise, GRUB won't find any partition nor any EFI-registered boot items after the installation and drops into the EFI recovery shell.

Robert hasn't tested the Wacom pen that comes with the device, nor the fingerprint reader, yet.

Everything else works out-of-the-box.

light+love
Mike Gabriel (aka sunweaver)

11 June, 2021 06:31AM by sunweaver

June 10, 2021

Vincent Bernat

Serving WebP & AVIF images with Nginx

WebP and AVIF are two image formats for the web. They aim to produce smaller files than JPEG and PNG. They both support lossy and lossless compression, as well as alpha transparency. WebP was developed by Google and is a derivative of the VP8 video format.1 It is supported on most browsers. AVIF uses the newer AV1 video format to achieve better results. It is supported by Chromium-based browsers and has experimental support in Firefox.2


Without JavaScript, I can’t tell what your browser supports.

Converting and optimizing images

For this blog, I am using the following shell snippets to convert and optimize JPEG and PNG images. Skip to the next section if you are only interested in the Nginx setup.

JPEG images

JPEG images are converted to WebP using cwebp.

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -q 84 -af '{}' -o '{}'.webp

They are converted to AVIF using avifenc from libavif:

find media/images -type f -name '*.jpg' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      avifenc --codec aom --yuv 420 --min 20 --max 25 '{}' '{}'.avif

Then, they are optimized using jpegoptim built with Mozilla’s improved JPEG encoder, via Nix. This is one reason I love Nix.

jpegoptim=$(nix-build --no-out-link \
      -E 'with (import <nixpkgs>{}); jpegoptim.override { libjpeg = mozjpeg; }')
find media/images -type f -name '*.jpg' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      ${jpegoptim}/bin/jpegoptim --max=84 --all-progressive --strip-all

PNG images

PNG images are down-sampled to 8-bit RGBA-palette using pngquant. The conversion reduces file sizes significantly while being mostly invisible.

find media/images -type f -name '*.png' -print0 \
  | sort -z \
  | xargs -0n10 -P$(nproc) \
      pngquant --skip-if-larger --strip \
               --quiet --ext .png --force

Then, they are converted to WebP with cwebp in lossless mode:

find media/images -type f -name '*.png' -print0 \
  | xargs -0n1 -P$(nproc) -i \
      cwebp -z 8 '{}' -o '{}'.webp

No conversion is done to AVIF: lossless compression is not as efficient as pngquant and lossy compression is only marginally better than what I get with WebP.

Keeping only the smallest files

I am only keeping WebP and AVIF images if they are at least 10% smaller than the original format: decoding is usually faster for JPEG and PNG; and JPEG images can be decoded progressively.3

for f in media/images/**/*.{webp,avif}; do
  orig=$(stat --format %s ${f%.*})
  new=$(stat --format %s $f)
  (( orig*0.90 > new )) || rm $f
done

I only keep AVIF images if they are smaller than WebP.

for f in media/images/**/*.avif; do
  [[ -f ${f%.*}.webp ]] || continue
  orig=$(stat --format %s ${f%.*}.webp)
  new=$(stat --format %s $f)
  (( $orig > $new )) || rm $f
done

We can compare how many images are kept when converted to WebP or AVIF:

printf "     %10s %10s %10s\n" Original WebP AVIF
for format in png jpg; do
  printf " ${format:u} %10s %10s %10s\n" \
    $(find media/images -name "*.$format" | wc -l) \
    $(find media/images -name "*.$format.webp" | wc -l) \
    $(find media/images -name "*.$format.avif" | wc -l)
done

AVIF is better than MozJPEG for most JPEG files while WebP beats MozJPEG only for one file out of two:

       Original       WebP       AVIF
 PNG         64         47          0
 JPG         83         40         74

Further reading

I didn’t detail my choices for quality parameters and there is not much science in it. Here are two resources providing more insight on AVIF:

Serving WebP & AVIF with Nginx

To serve WebP and AVIF images, there are two possibilities:

  1. use <picture> to let the browser pick the format it supports, or
  2. use content negotiation to let the server send the best-supported format.

I use the second approach. It relies on inspecting the Accept HTTP header in the request. For Chrome, it looks like this:

Accept: image/avif,image/webp,image/apng,image/*,*/*;q=0.8

I configure Nginx to serve the AVIF image, then the WebP image, and to fall back to the original JPEG/PNG image depending on what the browser advertises:4

http {
  map $http_accept $webp_suffix {
    default        "";
    "~image/webp"  ".webp";
  }
  map $http_accept $avif_suffix {
    default        "";
    "~image/avif"  ".avif";
  }
}
server {
  # […]
  location ~ ^/images/.*\.(png|jpe?g)$ {
    add_header Vary Accept;
    try_files $uri$avif_suffix$webp_suffix $uri$avif_suffix $uri$webp_suffix $uri =404;
  }
}

For example, let’s suppose the browser requests /images/ont-box-orange@2x.jpg. If it supports WebP but not AVIF, $webp_suffix is set to .webp while $avif_suffix is set to the empty string. The server tries to serve the first existing file in this list:

  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

If the browser supports both AVIF and WebP, Nginx walks the following list:

  • /images/ont-box-orange@2x.jpg.webp.avif (it never exists)
  • /images/ont-box-orange@2x.jpg.avif
  • /images/ont-box-orange@2x.jpg.webp
  • /images/ont-box-orange@2x.jpg

Eugene Lazutkin explains in more detail how this works. I have only presented a variation of his setup supporting both WebP and AVIF.


  1. VP8 is only used for lossy compression. Lossless compression uses an unrelated format. ↩︎

  2. Firefox support was scheduled for Firefox 86 but because of the lack of proper color space support, it is still not enabled by default. ↩︎

  3. Progressive decoding is not planned for WebP but could be implemented using low-quality thumbnail images for AVIF. See this issue for a discussion. ↩︎

  4. The Vary header ensures an intermediary cache (a proxy or a CDN) checks the Accept header before using a cached response. Internet Explorer has trouble with this header and may not be able to cache the resource properly. There is a workaround but Internet Explorer’s market share is now so small that it is pointless to implement it. ↩︎

10 June, 2021 01:11PM by Vincent Bernat

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#33: Collaborative Editing and Execution in Shared Byobu Sessions

Welcome to the 33rd post in the rigorously raconteuring R recommendations series, or R4 for short. This post is also a post in the T4 series of tips, tricks, tools, and toys as it picks up and extends earlier posts on byobu. And it fits nicely in the more recent ESS-Intro series as we show some Emacs. You can find earlier R4 posts here, and the T4 posts here; the ESS-Intro series is here.

The focus of this short video (and slides) is on collaboration using files, but also entire sessions, execution and all aspects of joint exploration, development or debugging. Anything you can do in a terminal you can also do, shared, in a terminal. The video contains a brief lightning talk, and a shared session jointly with Grant McDermott and Vincent Arel-Bundock. My big big thanks to both of them for prodding and encouragement, as well as fearless participation in the joint section of the video:

The corresponding pdf slides are here.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 June, 2021 12:49AM

June 09, 2021

hackergotchi for Michael Prokop

Michael Prokop

efivars is gone with Debian/bullseye #newinbullseye

Continuing with #newinbullseye, it’s worth being aware that efivars is gone with the kernel version shipped as of Debian/bullseye.

Quoting from wiki.debian.org/UEFI:

The Linux kernel gives access to the UEFI configuration variables via a set of files under /sys, using two different interfaces.

The older interface was showing files under /sys/firmware/efi/vars, and this is what was used by default in both Wheezy and Jessie.

The new interface is efivarfs, which will expose things in a slightly different format under /sys/firmware/efi/efivars.
This is the new preferred way of using UEFI configuration variables, and Debian switched to it by default from Stretch onwards.

Now, CONFIG_EFI_VARS is no longer enabled in Debian due to commit 20146398c4 (shipped as such with Debian kernel package versions >=5.10.1-1~exp1).

As a result, the kernel module efivars is no longer available on systems running Debian kernels >=5.10 (which includes Debian/bullseye). Now, when running such a system in EFI mode, chroot-ing into it and executing e.g. efibootmgr, it might fail with:

# efibootmgr
EFI variables are not supported on this system.

This is caused by /sys/firmware/efi/vars no longer being available, because of the disabled CONFIG_EFI_VARS. To get this working again, you need to make efivarfs available via:

# mount -t efivarfs efivarfs /sys/firmware/efi/efivars

Then efibootmgr and further tools relying on efivars should work again.

FYI: if you’re a user of Grml’s grml-chroot tool, this is going to be handled out of the box for you.

09 June, 2021 05:16PM by mika

Thorsten Alteholz

My Debian Activities in May 2021

FTP master

This month I accepted 85 and rejected 6 packages. The overall number of packages that got accepted was only 88. Yeah, Debian is frozen but hopefully will unfreeze soon.

Debian LTS

This was my eighty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 29.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2650-1] exim4 security update for 17 CVEs
  • [DLA 2665-1] ring security update for one CVE
  • [DLA 2669-1] libxml2 security update for one CVE
  • the fix for tnef/CVE-2019-18849 had been approved and I could do the PU-upload

I also made some progress with gpac, but am struggling with dozens of issues there.

Last but not least I did some days of frontdesk duties, which for whatever reason was rather time-consuming this month.

Debian ELTS

This month was the thirty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-420-1 for exim4
  • ELA-435-1 for python2.7
  • ELA-436-1 for libxml2

I also made some progress with python3.4.

Last but not least I did some days of frontdesk duties.

Other stuff

On my never-ending golang challenge I again uploaded some packages, either for NEW or as source uploads.

Last but not least I adopted gnucobol.

09 June, 2021 03:33PM by bladmin

Enrico Zini

Ansible recurse and follow quirks

I'm reading Ansible's builtin.file sources for, uhm, reasons, and the use of follow stood out to my eyes. Reading on, not only that. I feel like the ansible codebase needs a serious review, at least in essential core modules like this one.

In the file module documentation it says:

This flag indicates that filesystem links, if they exist, should be followed.

In the recursive_set_attributes implementation instead, follow means "follow symlinks to directories", but if a symlink to a file is found, it does not get followed, kind of.

What happens is that ansible will try to change the mode of the symlink, which makes sense on some operating systems. And it does try to use lchmod if present. But if not, this happens:

# Attempt to set the perms of the symlink but be
# careful not to change the perms of the underlying
# file while trying
underlying_stat = os.stat(b_path)
os.chmod(b_path, mode)
new_underlying_stat = os.stat(b_path)
if underlying_stat.st_mode != new_underlying_stat.st_mode:
    os.chmod(b_path, stat.S_IMODE(underlying_stat.st_mode))

So it tries doing chmod on the symlink, and if that changed the mode of the actual file, switch it back.

I would have appreciated a comment documenting on which systems a hack like this makes sense. As it is, it opens a very short time window in which a symlink attack can make a system file vulnerable, and an exception thrown by the second stat will make it vulnerable permanently.
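
For comparison, here is a minimal sketch (not the actual Ansible code) of how the symlink case could be handled without the chmod-and-revert dance, by only touching the symlink's own mode where the platform supports it:

import os

def set_link_mode(path, mode):
    # Only change the mode of the symlink itself if the platform has an
    # lchmod equivalent (os.chmod with follow_symlinks=False).
    if os.chmod in os.supports_follow_symlinks:
        os.chmod(path, mode, follow_symlinks=False)
    # Otherwise leave the symlink alone: on Linux the mode of a symlink
    # is ignored anyway, and chmod-ing through it is exactly the race
    # described above.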

What about follow following links during recursion: how does it avoid loops? I don't see a cache of (device, inode) pairs visited. Let's try:

fatal: [localhost]: FAILED! => {"changed": false, "details": "maximum recursion depth exceeded", "gid": 1000, "group": "enrico", "mode": "0755", "msg": "mode must be in octal or symbolic form", "owner": "enrico", "path": "/tmp/test/test1", "size": 0, "state": "directory", "uid": 1000}

Ok, it, uhm, delegates handling that to the Python stack size. I guess it means that a ln -s .. foo in a directory that gets recursed will always fail the task. Fun!
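
A minimal sketch of what loop-safe recursion could look like, keeping the cache of visited (device, inode) pairs that seems to be missing:

import os

def walk_following_symlinks(path, visited=None):
    # Remember (st_dev, st_ino) for everything we visit, so a symlink
    # pointing back at an ancestor is skipped instead of recursing until
    # the interpreter hits its recursion limit.
    if visited is None:
        visited = set()
    st = os.stat(path)  # follows symlinks
    key = (st.st_dev, st.st_ino)
    if key in visited:
        return
    visited.add(key)
    yield path
    if os.path.isdir(path):
        for entry in os.listdir(path):
            yield from walk_following_symlinks(os.path.join(path, entry), visited)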

More quirks

Turning a symlink into a hardlink is considered a noop if the symlink points to the same file:

---
- hosts: localhost
  tasks:
   - name: create test file
     file:
        path: /tmp/testfile
        state: touch
   - name: create test link
     file:
        path: /tmp/testlink
        state: link
        src: /tmp/testfile
   - name: turn it into a hard link
     file:
        path: /tmp/testlink
        state: hard
        src: /tmp/testfile

gives:

$ ansible-playbook test3.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [create test file] *****************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [create test link] *****************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [turn it into a hard link] *********************************************************************************************************************************************************************************************
ok: [localhost]

PLAY RECAP ******************************************************************************************************************************************************************************************************************
localhost                  : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

More quirks

Converting a directory into a hardlink should work, but it doesn't because unlink is used instead of rmdir:

---
- hosts: localhost
  tasks:
   - name: create test dir
     file:
        path: /tmp/testdir
        state: directory
   - name: turn it into a symlink
     file:
        path: /tmp/testdir
        state: hard
        src: /tmp/
        force: yes

gives:

$ ansible-playbook test4.yaml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [create test dir] ******************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [turn it into a symlink] ***********************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "gid": 1000, "group": "enrico", "mode": "0755", "msg": "Error while replacing: [Errno 21] Is a directory: b'/tmp/testdir'", "owner": "enrico", "path": "/tmp/testdir", "size": 0, "state": "directory", "uid": 1000}

PLAY RECAP ******************************************************************************************************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
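
A sketch of what the replacement code could do instead (not the actual module code): look at what is being removed and pick the matching removal primitive, rmdir for directories and unlink for everything else:

import os, stat

def remove_existing(path):
    st = os.lstat(path)
    if stat.S_ISDIR(st.st_mode):
        os.rmdir(path)   # fails if the directory is not empty
    else:
        os.unlink(path)  # files, symlinks, ...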

More quirks

This is hard to test, but it looks like if source and destination are hardlinks to the same inode numbers, but on different filesystems, the operation is considered a successful noop: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/file.py#L821

It should probably be something like:

if (st1.st_dev, st1.st_ino) == (st2.st_dev, st2.st_ino):

09 June, 2021 12:18PM

June 08, 2021

Mock syscalls with C++

I wrote and maintain some C++ code to stream high quantities of data as fast as possible, and I try to use splice and sendfile when available.

The availability of those system calls varies at runtime according to a number of factors, and the code needs to be written to fall back to read/write loops depending on what the splice and sendfile syscalls say.

The tricky issue is unit testing: since the code path chosen depends on the kernel, the test suite will test one path or the other depending on the machine and filesystems where the tests are run.

It would be nice to be able to mock the syscalls, and replace them during tests, and it looks like I managed.

First I made catalogues of the mockable syscalls I want to be able to mock. One with function pointers, for performance, and one with std::function, for flexibility:

/**
 * Linux versions of syscalls to use for concrete implementations.
 */
struct ConcreteLinuxBackend
{
    static ssize_t (*read)(int fd, void *buf, size_t count);
    static ssize_t (*write)(int fd, const void *buf, size_t count);
    static ssize_t (*writev)(int fd, const struct iovec *iov, int iovcnt);
    static ssize_t (*sendfile)(int out_fd, int in_fd, off_t *offset, size_t count);
    static ssize_t (*splice)(int fd_in, loff_t *off_in, int fd_out,
                             loff_t *off_out, size_t len, unsigned int flags);
    static int (*poll)(struct pollfd *fds, nfds_t nfds, int timeout);
    static ssize_t (*pread)(int fd, void *buf, size_t count, off_t offset);
};

/**
 * Mockable versions of syscalls to use for testing concrete implementations.
 */
struct ConcreteTestingBackend
{
    static std::function<ssize_t(int fd, void *buf, size_t count)> read;
    static std::function<ssize_t(int fd, const void *buf, size_t count)> write;
    static std::function<ssize_t(int fd, const struct iovec *iov, int iovcnt)> writev;
    static std::function<ssize_t(int out_fd, int in_fd, off_t *offset, size_t count)> sendfile;
    static std::function<ssize_t(int fd_in, loff_t *off_in, int fd_out,
                                 loff_t *off_out, size_t len, unsigned int flags)> splice;
    static std::function<int(struct pollfd *fds, nfds_t nfds, int timeout)> poll;
    static std::function<ssize_t(int fd, void *buf, size_t count, off_t offset)> pread;

    static void reset();
};

Then I converted the code to templates, parameterized on the catalogue class.

Explicit template instantiation helps in making sure that one doesn't need to include template code in all sorts of places.

Finally, I can have a RAII class for mocking:

/**
 * RAII mocking of syscalls for concrete stream implementations
 */
struct MockConcreteSyscalls
{
    std::function<ssize_t(int fd, void *buf, size_t count)> orig_read;
    std::function<ssize_t(int fd, const void *buf, size_t count)> orig_write;
    std::function<ssize_t(int fd, const struct iovec *iov, int iovcnt)> orig_writev;
    std::function<ssize_t(int out_fd, int in_fd, off_t *offset, size_t count)> orig_sendfile;
    std::function<ssize_t(int fd_in, loff_t *off_in, int fd_out,
                                 loff_t *off_out, size_t len, unsigned int flags)> orig_splice;
    std::function<int(struct pollfd *fds, nfds_t nfds, int timeout)> orig_poll;
    std::function<ssize_t(int fd, void *buf, size_t count, off_t offset)> orig_pread;

    MockConcreteSyscalls();
    ~MockConcreteSyscalls();
};

MockConcreteSyscalls::MockConcreteSyscalls()
    : orig_read(ConcreteTestingBackend::read),
      orig_write(ConcreteTestingBackend::write),
      orig_writev(ConcreteTestingBackend::writev),
      orig_sendfile(ConcreteTestingBackend::sendfile),
      orig_splice(ConcreteTestingBackend::splice),
      orig_poll(ConcreteTestingBackend::poll),
      orig_pread(ConcreteTestingBackend::pread)
{
}

MockConcreteSyscalls::~MockConcreteSyscalls()
{
    ConcreteTestingBackend::read = orig_read;
    ConcreteTestingBackend::write = orig_write;
    ConcreteTestingBackend::writev = orig_writev;
    ConcreteTestingBackend::sendfile = orig_sendfile;
    ConcreteTestingBackend::splice = orig_splice;
    ConcreteTestingBackend::poll = orig_poll;
    ConcreteTestingBackend::pread = orig_pread;
}

And here's the specialization to pretend sendfile and splice aren't available:

/**
 * Mock sendfile and splice as if they weren't available on this system
 */
struct DisableSendfileSplice : public MockConcreteSyscalls
{
    DisableSendfileSplice();
};

DisableSendfileSplice::DisableSendfileSplice()
{
    ConcreteTestingBackend::sendfile = [](int out_fd, int in_fd, off_t *offset, size_t count) -> ssize_t {
        errno = EINVAL;
        return -1;
    };
    ConcreteTestingBackend::splice = [](int fd_in, loff_t *off_in, int fd_out,
                                        loff_t *off_out, size_t len, unsigned int flags) -> ssize_t {
        errno = EINVAL;
        return -1;
    };
}

It's now also possible to reproduce in the test suite all sorts of system-related issues we might observe in production over time.

08 June, 2021 02:29PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

LaTeX draft documents

I'm writing up a PhD deliverable (which will show up here eventually) using LaTeX, which is my preferred tool for such things, since I also use it for papers, and will eventually be using it for my thesis itself. For this last document, I experimented with a few packages and techniques for organising the document which I found useful, so I thought I'd share them.

What version is this anyway?

I habitually store and develop my documents in git repositories. From time to time I generate a PDF and copy it off elsewhere to review (e.g., an iPad). Later on it can be useful to be able to figure out exactly what source built the PDF. I achieve this using

\newcommand{\version}{\input|"git describe --always --dirty"}

And \version\ somewhere in the header of the document.

Draft mode

The common document classes all accept a draft argument, to enable Draft Mode.

\documentclass[12pt,draft]{article}

Various other packages behave differently if Draft Mode is enabled. The graphicx package, for example, doesn't actually draw pictures in draft mode, which I don't find useful. So for that package, I force it to behave as if we were in "Final Mode" at all times:

\usepackage[final]{graphicx}

I want to also include some different bits and pieces in Draft Mode. Although the final version won't need it, I find having a Table of Contents very helpful during the writing process. The ifdraft package adds a convenience macro to query whether we are in draft or not. I use it like so:

\ifdraft{
This page will be cut from the final report.
\tableofcontents
\newpage
}{}

For this document, I have been given the section headings I must use and the number of pages each section must run to. When drafting, I want to include the page budget in the section names (e.g. Background (2 pages)). I also force new pages at the beginning of each Section, to make it easier to see how close I am to each section's page budget.

\ifdraft{\newpage}{}
\section{Work completed\ifdraft{ (1 page)}{}} % 1 Page

todonotes

Two TODO items in the margin

Collated TODOs in a list

The todonotes package is one of many that offer macros to make managing in-line TODO notes easier. Within the source of my document, I can add a TODO right next to the relevant text with \todo{something to do}. In the document, by default, this is rendered in the right-hand margin. With the right argument, the package will only render the notes in draft mode.

\usepackage[textsize=small,obeyDraft]{todonotes}

todonotes can also collate all the TODOs together into a single list. The list items are hyperlinked back to the page where the relevant item appears.

\ifdraft{
\newpage
\section{TODO}
  This page will be cut from the final report.
  \listoftodos
}{}

08 June, 2021 12:35PM

hackergotchi for Norbert Preining

Norbert Preining

KDE/Plasma 5.22 for Debian

Today, KDE released version 5.22 of the Plasma desktop with the usual long list of updates and improvements. And packages for Debian are ready for consumption! Ah and yes, KDE Gear 21.04 is also ready!

As usual, I am providing packages via my OBS builds. If you have used my packages till now, then you only need to change the plasma521 line to read plasma522. Just for your convenience, if you want the full set of stuff, here are the apt-source entries:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma522/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2104/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/Debian_Unstable/ ./

and for testing the same with Debian_Unstable replaced with Debian_Testing. As usual, don’t forget that you need to import my OBS gpg key to make these repos work!

The sharp eye might have detected also the apps2104 line, yes the newly renamed KDE Gear suite of packages is also available in my OBS builds (and in Debian/experimental).

Uploads to Debian

Currently, the frameworks and most of the KDE Gear (Apps) 21.04 are in Debian/experimental. I will upload Plasma 5.22 to experimental as soon as NEW processing of two packages is finished (which might take another few months).

After the release of bullseye, all the current versions will be uploaded to unstable as usual.

Enjoy the new Plasma!

08 June, 2021 11:47AM by Norbert Preining

hackergotchi for Bits from Debian

Bits from Debian

Registration for DebConf21 Online is Open

DebConf21 banner

The DebConf team is glad to announce that registration for DebConf21 Online is now open.

The 21st Debian Conference is being held Online, due to COVID-19, from August 22 to August 29, 2021. It will also sport a DebCamp from August 15 to August 21, 2021 (preceding the DebConf).

To register for DebConf21, please visit the DebConf website at https://debconf21.debconf.org/register

Reminder: Creating an account on the site does not register you for the conference, there's a conference registration form to complete after signing in.

Participation in DebConf21 is conditional on your respect of our Code of Conduct. We require you to read, understand and abide by this code.

A few notes about the registration process:

  • We need to know attendees' locations to better plan the schedule around timezones. Please make sure you fill in the "Country I call home" field in the registration form accordingly. It's especially important to have this data for people who submitted talks, but also for other attendees.

  • We are offering limited amounts of financial support for those who require it in order to attend. Please refer to the corresponding page on the website for more information.

Any questions about registration should be addressed to registration@debconf.org.

See you online!

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak, and our Gold Sponsor Matanel Foundation.

08 June, 2021 08:00AM by Stefano Rivera

June 07, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

UBports: Packaging of Lomiri Operating Environment for Debian (part 05)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has now become Lomiri.

Recent Uploads to Debian related to Lomiri (Feb - May 2021)

Over the past 4 months I attended 14 of the weekly scheduled UBports development sync sessions and worked on the following bits and pieces regarding Lomiri in Debian:

  • Bundle upload to Debian of all Ayatana Indicators packages, including lomiri-url-dispatcher
  • Upload to Debian unstable: ayatana-indicator-session 0.8.2-1 (fix parallel build problem)
  • lomiri-ui-toolkit: Consulting on non-free font usage and other issues with upstream side of the Ubuntu Touch's UI Toolkit
  • Upload to Debian unstable: wlcs 1.2.1-1
  • qtmir update: 0.6.1-6 (fixing qml-demo-shell)
  • Upload to Debian unstable: qtmir 0.6.1-7
  • Upload to Debian unstable as NEW: deviceinfo 0.1.0-1
  • Upload to Debian unstable: mir 1.8.0+dfsg1-16 (fixing #985503)
  • Discuss with upstream how to handle hard-coded /opt/click.ubuntu.com path in src:pkg click
  • File upstream MR: https://gitlab.com/ubports/core/click/-/issues/2
  • Upload to Debian experimental as NEW: click 0.5.0-1
  • Upload to Debian experimental as NEW: libusermetrics 1.2.0-1
  • Port libusermetrics over to pkgkde-symbolshelper
  • Upload to Debian experimental as NEW: hfd-service 0.1.0-1 (This included upstream development: upstart to systemd conversion, a D-Bus security fix, etc.)
  • Upload to Debian experimental as NEW: libayatana-common 0.9.1-1
  • Upload to Debian experimental: deviceinfo (0.1.0-2): Update .symbols file for non-amd64 Debian architectures
  • Upload to Debian experimental: ayatana-indicator-keyboard 0.7.901-1
  • Test-Build lomiri-ui-toolkit (after several Qt5.15 fixes on upstream's side); exclude broken unit tests from building lomiri-ui-toolkit as recommended by upstream
  • Upstream MR (libusermetrics): Amend URL in Lomiri upstream path (https://gitlab.com/ubports/core/libusermetrics/-/merge_requests/3)
  • Upstream bug hunting (lomiri-ui-toolkit, unit test failures in unit/components/tst_haptics.qml)
  • Prepare content-hub packaging for initial build tests. However, this requires lomiri-ui-toolkit to be packaged first
  • Upstream MRs related to lomiri-ui-toolkit:
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/24 (grammar fixes)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/25 (fix misspelled property name)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/26 (fix misspelled word in CONTEXT_TRACE() call)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/27 (drop license test)
    https://gitlab.com/ubports/core/lomiri-ui-toolkit/-/merge_requests/28 (app-launch-profile: Use lomiri namespace in binary file names)
  • Upload to Debian experimental: lomiri-ui-toolkit 1.3.3000+dfsg1-1
  • Upload to Debian unstable: lomiri-download-manager 0.1.0-8 (Fix wrong package dependency, #988808)
  • Upload to Debian unstable: lomiri-app-launch 0.0.90-8 (Update .symbols file for alpha and hppa archs)
  • Upload to Debian experimental: hfd-service 0.1.0-2 (systemd build-requirement fix)
  • Upload to Debian experimental: deviceinfo 0.1.0-3 (Update .symbols for s390x arch)
  • Prepare for upload to Debian: content-hub 1.0.0
  • For content-hub to be buildable, we needed another packaging fix for lomiri-ui-toolkit which received an...
  • Upload to Debian experimental AS NEW (2): lomiri-ui-toolkit 1.3.3000+dfsg-2 (font package dependency fix, but also a d/copyright update with correct artwork license)

The largest amount of work (and time) went into getting lomiri-ui-toolkit ready for upload. That code component is an absolutely massive beast, deeply intertwined with Qt5 (and unit tests fail with every new warning a new Qt5.x introduces). This bit of work I couldn't do alone (see below in the "Credits" section).

The next projects / packages ahead are some smaller packages (content-hub, gmenuharness, etc.) before we finally come to lomiri (i.e. the main bit of the Lomiri Operating Environment) itself.

Credits

Many big thanks go to everyone on the UBports project, but especially to Ratchanan Srirattanamet who lived inside of lomiri-ui-toolkit for more than two weeks, it seemed.

Also, thanks to Florian Leeber for being my point of contact for topics regarding my cooperation with the UBports Foundation.

Packaging Status

The current packaging status of Lomiri related packages in Debian can be viewed at:
https://qa.debian.org/developer.php?login=team%2Bubports%40tracker.debia...

light+love
Mike Gabriel (aka sunweaver)

07 June, 2021 12:23PM by sunweaver

Russell Coker

Dell PowerEdge T320 and Linux

I recently bought a couple of PowerEdge T320 servers, so now to learn about setting them up. They are a little newer than the R710 I recently setup (which had iDRAC version 6), they have iDRAC version 7.

RAM Speed

One system has a E5-2440 CPU with 2*16G DDR3 DIMMs and a Memtest86+ speed of 13,043MB/s, the other is essentially identical but with a E5-2430 CPU and 4*16G DDR3 DIMMs and a Memtest86+ speed of 8,270MB/s. I had expected that more DIMMs means better RAM performance but this isn’t what happened. I firstly upgraded the BIOS, as I expected it didn’t make a difference but it’s a good thing to try first.

On the E5-2430 I tried removing a DIMM after it was pointed out on Facebook that the CPU has 3 memory channels (here’s a link to a great site with information on that CPU and many others [1]). When I did that I was prompted to disable advanced ECC (which treats pairs of DIMMs as a single unit for ECC allowing correcting more than 1 bit errors) and I had to move the 3 remaining DIMMS to different slots. That improved the performance to 13,497MB/s. I then put the spare DIMM into the E5-2440 system and the performance increased to 13,793MB/s, when I installed 4 DIMMs in the E5-2440 system the performance remained at 13,793MB/s and the E5-2430 went down to 12,643MB/s.

This is a good result for me, I now have the most RAM and fastest RAM configuration in the system with the fastest CPU. I’ll sell the other one to someone who doesn’t need so much RAM or performance (it will be really good for a small office mail server and NAS).

Firmware Update

BIOS

The first issue is updating the BIOS, unfortunately the first link I found to the Dell web site didn’t have a link to download the Linux installer. It offered a Windows binary, an EFI program, and a DOS binary. I’m not about to install Windows if there is any other option and EFI is somewhat annoying, so that leaves DOS. The first Google result for installing FreeDOS advised using “unetbootin”, that didn’t work at all for me (created a USB image that the Dell BIOS didn’t recognise as bootable) and even if it did it wouldn’t have been a good solution.

I went to the FreeDOS download page [2] and got the “Lite USB” zip file. That contained “FD12LITE.img” which I could just dd to a USB stick. I then used fdisk to create a second 32MB partition, used mkfs.fat to format it, and then copied the BIOS image file to it. I booted the USB stick and then ran the BIOS update program from drive D:. After the BIOS update this became the first system I’ve seen get a totally green result from “spectre-meltdown-checker“!

I found the link to the Linux installer for the new Dell BIOS afterwards, but it was still good to play with FreeDOS.

PERC Driver

I probably didn’t really need to update the PERC (PowerEdge Raid Controller) firmware as I’m just going to run it in JBOD mode. But it was easy to do, a simple bash shell script to update it.

Here are the perccli commands needed to access disks, it’s all hot-plug so you can insert disks and do all this without a reboot:

# show overview
perccli show
# show controller 0 details
perccli /c0 show all
# show controller 0 info with less detail
perccli /c0 show
# clear all "foreign" RAID members
perccli /c0 /fall delete
# add a vd (RAID) of level RAID0 (r0) with the drive 32:0 (enclosure:slot from above command)
perccli /c0 add vd r0 drives=32:0

The “perccli /c0 show” command gives the following summary of disk (“PD” in perccli terminology) information amongst other information. The EID is the enclosure, Slt is the “slot” (IE the bay you plug the disk into) and the DID is the disk identifier (not sure what happens if you have multiple enclosures). The allocation of device names (sda, sdb, etc) will be in order of EID:Slt or DID at boot time, and any drives added at run time will get the next letters available.

----------------------------------------------------------------------------------
EID:Slt DID State DG       Size Intf Med SED PI SeSz Model                     Sp 
----------------------------------------------------------------------------------
32:0      0 Onln   0  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:1      1 Onln   1  465.25 GB SATA SSD Y   N  512B Samsung SSD 850 EVO 500GB U  
32:3      3 Onln   2   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
32:4      4 Onln   3   3.637 TB SATA HDD N   N  512B WDC WD40EURX-64WRWY0      U  
32:5      5 Onln   5 278.875 GB SAS  HDD Y   N  512B ST300MM0026               U  
32:6      6 Onln   6 558.375 GB SAS  HDD N   N  512B AL13SXL600N               U  
32:7      7 Onln   4   3.637 TB SATA HDD N   N  512B ST4000DM000-1F2168        U  
----------------------------------------------------------------------------------

The PERC controller is a MegaRAID with possibly some minor changes, there are reports of Linux MegaRAID management utilities working on it for similar functionality to perccli. The version of MegaRAID utilities I tried didn’t work on my PERC hardware. The smartctl utility works on those disks if you tell it you have a MegaRAID controller (so obviously there’s enough similarity that some MegaRAID utilities will work). Here are example smartctl commands for the first and last disks on my system. Note that the disk device node doesn’t matter as all device nodes associated with the PERC/MegaRAID are equal for smartctl.

# get model number etc on DID 0 (Samsung SSD)
smartctl -d megaraid,0 -i /dev/sda
# get all the basic information on DID 0
smartctl -d megaraid,0 -a /dev/sda
# get model number etc on DID 7 (Seagate 4TB disk)
smartctl -d megaraid,7 -i /dev/sda
# exactly the same output as the previous command
smartctl -d megaraid,7 -i /dev/sdc

I have uploaded etbemon version 1.3.5-6 to Debian which has support for monitoring smartctl status of MegaRAID devices and NVMe devices.

IDRAC

To update IDRAC on Linux there’s a bash script with the firmware in the same file (binary stuff at the end of a shell script). To make things a little more exciting the script insists that rpm be available (running “apt install rpm” fixes that for a Debian system). It also creates and runs other shell scripts which start with “#!/bin/sh” but depend on bash syntax. So I had to make /bin/sh a symlink to /bin/bash. You know you need this if you see errors like “typeset: not found” and “[: -eq: unexpected operator” and then the system reboots. Dell people, please test your scripts on dash (the Debian /bin/sh) or just specify #!/bin/bash.

If the IDRAC update works it will take about 8 minutes.

Lifecycle Controller

The Lifecycle Controller is apparently for installing OS and firmware updates. I use Linux tools to update Linux and I generally don’t plan to update the firmware after deployment (although I could do so from Linux if needed). So it doesn’t seem to offer anything useful to me.

Setting Up IDRAC

For extra excitement I decided to try to setup IDRAC from the Linux command-line. To install the RAC setup tool you run “apt install srvadmin-idracadm7 libargtable2-0” (because srvadmin-idracadm7 doesn’t have the right dependencies).

# srvadmin-idracadm7 is missing a dependency
apt install srvadmin-idracadm7 libargtable2-0
# set the IP address, netmask, and gateway for IDRAC
idracadm7 setniccfg -s 192.168.0.2 255.255.255.0 192.168.0.1
# put my name on the front panel LCD
idracadm7 set System.LCD.UserDefinedString "Russell Coker"

Conclusion

This is a very nice deskside workstation/server. It’s extremely quiet with hardly any fan noise and the case is strong enough to contain the noise of hard drives. When running with 3* 3.5″ SATA disks and 2*10k 2.5″ SAS disks on a wooden floor it wasn’t annoyingly loud. Without the SAS disks it was as quiet as you can expect any PC to be, definitely not the volume you expect from a serious server! I bought the T320 systems loaded with SAS disks which made them quite loud, I immediately put the disks on ebay and installed SATA SSDs and hard drives which gives me more performance and more space than the SAS disks with less cost and almost no noise.

8*3.5″ drive bays gives room for expansion. I currently have 2*SATA SSDs and 3*SATA disks, the SSDs are for the root filesystem (including /home) and the disks are for a separate filesystem for large files.

07 June, 2021 07:18AM by etbe

Russ Allbery

Review: Stoneskin

Review: Stoneskin, by K.B. Spangler

Series: Deep Witches #0
Publisher: A Girl and Her Fed Books
Copyright: September 2017
ASIN: B075PHK498
Format: Kindle
Pages: 226

Stoneskin is a prequel to the Deep Witches Trilogy, which is why I have it marked as book 0 of the series. Unlike most prequels, it was written and published before the series and there doesn't seem to be any reason not to read it first.

Tembi Moon is an eight-year-old girl from the poor Marumaru area on the planet of Adhama. Humanity has spread to the stars and first terraformed the worlds and then bioformed themselves to live there. The differences are subtle, but Tembi's skin becomes thicker and less sensitive when damaged (either physically or emotionally) and she can close her ears against dust storms. One day, she wakes up in an unknown alley and finds herself on the world of Miha'ana, sixteen thousand light-years away, where she is rescued and brought home by a Witch named Matindi.

In this science fiction future, nearly all interstellar travel is done through the Deep. The Deep is not like the typical hand-waved science fiction subspace, most notably in that it's alive. No one is entirely sure where it came from or what sort of creature it is. It sometimes manages to communicate in words, but abstract patterns with feelings attached are more common, and it only communicates with specific people. Those people are Witches, who are chosen by the Deep via some criteria no one understands. Witches can use the Deep to move themselves or anything else around the galaxy. All interstellar logistics rely on them.

The basics of Tembi's story are not that unusual; she's been chosen by the Deep to be a Witch. What is remarkable is that she's young and she's poor, completely disconnected from the power structures of the galaxy. But, once chosen, her path as far as the rest of the galaxy is concerned is fixed: she will go to Lancaster to be trained as a Witch. Matindi is able to postpone this for a time by keeping an eye on her, but not forever.

I bought this book because of the idea of the Deep, and that is indeed the best part of the book. There is a lot of mystery about its exact nature, most of which is not resolved in this book, but it mostly behaves like a giant and extremely strange dog, and it's awesome. Replacing the various pseudo-scientific explanations for faster than light travel with interactions with a dream-weird giant St. Bernard with too many paws that talks in swirls of colored bubbles and is very eager to please its friends is brilliant.

This book covers a lot of years of Tembi's life and is, as advertised, a prelude to a story that is not resolved here. It's a coming of age story in which she does eventually end up at Lancaster, learns and chafes at the elaborate and very conservative structures humans have put in place to try to make interactions with the Deep predictable and reliable, and eventually gets drawn into the politics of war and the question of when people have a responsibility to intervene. Tembi, and the reader, also have many opportunities to get extremely upset at how the Deep is treated and how much entitlement the Witches have about their access and control, although how the Deep feels about it is left for a future book.

Not all of this story is as good as the premise. There are some standard coming of age tropes that I'm not fond of, such as Tembi's predictable temporary falling out with the Deep (although the Deep's reaction is entertaining). It's also not at all a complete story, although that's clearly signaled by the subtitle. But as an introduction to the story universe and an extended bit of scene-setting, it accomplishes what it sets out to do. It's also satisfyingly thoughtful about the moral trade-offs around stability and the value of preserving institutions. I know which side I'm on within the universe, but I appreciated how much nuance and thoughtfulness Spangler puts into the contrary opinion.

I'm hooked on the universe and want to learn more about the Deep, enough so that I've already bought the first book of the main trilogy.

Followed by The Blackwing War.

Rating: 7 out of 10

07 June, 2021 04:56AM

June 06, 2021

Iustin Pop

Goodbye Travis, hello GitHub Actions

My very cyclical open-source work

For some reason, I only manage to do coding at home every few months - mostly 6 months apart, so twice a year (even worse than my blogging frequency :P). As such, I missed the whole discussion about travis-ci (the .org version) going away, etc.

So when I finally did some work on corydalis a few weeks ago, and had a travis build failure (restoring a large cache simply timed out, either travis or S3 had some hiccup - it cleared by itself a day later), I opened the travis-ci interface to see the scary banner (“travis-ci.org is shutting down”), and asked myself what’s happening. The deadline was in less than a week even 😧…

Long story short, Travis was a good home for many years, but they were bought and are doing significant changes to their OSS support, so it’s time to move on. I anyway wanted to learn GitHub Actions for a while (ahem, free time intervened), so this was a good (forced) “opportunity”.

Proper composable CI

The advantage of Travis’ infrastructure was that the build configuration was really simple. It had very few primitives: pre-steps (before_install), install steps, the actual test steps, post-steps (after_success) and a few other small helpers (caching, apt packages, etc.). This made it really easy to just pick up and write a config, plus it had the advantage of allowing you to test configs from the web UI without needing to push.

This simplicity was unfortunately also its significant limiter: the way to do complex things in steps was simply to add more shell commands.

GitHub actions, together with its marketplace, changes this entirely. There are no “built-in” actions, the language just defines the build/job/step hierarchy, and one glues together whatever steps they want. This has the disadvantage that even checking out the code needs to be explicitly written in all workflows (so boilerplate, if you don’t need customisation), but it opens up a huge opportunity for composition, since it allows people to publish actions (steps) that you just import, encapsulating all the work.

So, after learning how to write a moderately complicated workflow (complicated as in multiple Python versions, some of them needing a different OS version, and multi-OS), it was straightforward to port this to all my projects - just somewhat tedious. I’ve now shut down all builds on Travis, I just can’t find a way to delete my account 😅

Better multi-OS, worse (missing) multi-arch

In theory, Travis supports Linux, MacOS, FreeBSD and Windows, but I’ve found that support for non-Linux is not quite as good. Maybe I missed things, but multi-version Python builds on MacOS were not as nicely supported as Linux; Windows is quite early, and very limited; and I haven’t tested FreeBSD.

GitHub is more restrictive - Linux, MacOS and Windows - but I found support for MacOS and Windows better for my use cases. If your use case is testing multiple MacOS versions, Travis wins, if it’s more varied languages/etc. on the single available MacOS version, GitHub works better.

On the multi-arch side, Travis wins hands-down. Four different native architectures, and enabling one more is just adding another key to the arch list. With GitHub, if I understand right, you either have to use docker+emulation, or use self-hosted runners.

So here it really matters what is more important to you. Maybe in the future GitHub will support more arches, but right now, Travis wins for this use-case.

Summary

For my specific use-case, GitHub Actions is a better fit right now. The marketplace has advantages (I’ll explain better in a future post), the actions are a very nice way to encapsulate functionality, and it’s still available/free (up to a limit) for open source projects. I don’t know what the future of Travis for OSS will be, but all I heard so far is very concerning.

However, I’ll still miss a few things.

For example, an overall dashboard for all my projects, like this one:

Travis dashboard

I couldn’t find any such thing on GitHub, so I just use my set of badges.

Then cache management. Travis allows you to clear the cache, and it does auto-update the cache. GitHub caches are immutable once built, so you have to:

  • watch if changed dependencies/dependency chains result in things that are no longer cached;
  • if so, need to manually bump the cache key, resulting in a commit for purely administrative purposes.

For languages where you have a clean full chain of dependencies recorded (e.g. node’s package-lock.json, stack’s stack.yaml.lock), this is trivial to achieve, but gets complicated if you add OS dependencies, languages which don’t record all this, etc.

Hmm, maybe I should just embed the year/month in the cache name - cheating, but automated cheating.

With all said and done, I think GHA is much less refined, but with more potential. Plus, the pace of innovation on Travis side was quite slow (likely money problems, hence them being bought, etc.)…

So one TODO done: “learn GitHub Actions”, even if a bit unplanned 😂

06 June, 2021 09:00PM

Russell Coker

Netflix and IPv6

It seems that Netflix has an ongoing issue of not working well with IPv6, apparently they have some sort of region checking code that doesn’t correctly identify IPv6 prefixes. To fix this I wrote the following script to make a small zone file with only A records for Netflix and no AAAA records. The $OUT.header file just has the SOA record for my fake netflix.com domain.

#!/bin/bash

OUT=/etc/bind/data/netflix.com
HEAD=$OUT.header

cp $HEAD $OUT
dig -t a www.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/www IN/p"|grep [0-9]$ >> $OUT
dig -t a android.prod.cloud.netflix.com @8.8.8.8|sed -n -e "s/^.*IN/android.prod.cloud IN/p"|grep [0-9]$ >> $OUT
/usr/sbin/rndc reload > /dev/null

Update

I updated this post to add a line for android.prod.cloud.netflix.com which is the address used by Android devices.
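
If one prefers not to parse dig output, the same zone fragment could be generated with the dnspython library; this is just a sketch of the idea (dnspython is an assumption here, it is not part of the setup above):

# Sketch only: requires the dnspython package.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8"]

def a_records(name, label):
    # Return zone-file A record lines for the given DNS name.
    return ["%s IN A %s" % (label, r.address)
            for r in resolver.resolve(name, "A")]

lines = a_records("www.netflix.com", "www")
lines += a_records("android.prod.cloud.netflix.com", "android.prod.cloud")
print("\n".join(lines))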

06 June, 2021 11:36AM by etbe

hackergotchi for Evgeni Golov

Evgeni Golov

Controlling Somfy roller shutters using an ESP32 and ESPHome

Our house has solar powered, remote controllable roller shutters on the roof windows, built by the German company named HEIM & HAUS. However, when you look closely at the remote control or the shutter motor, you'll see another brand name: SIMU. As the shutters don't have any wiring inside the house, the only way to control them is via the remote interface. So let's go on the Internet and see how one can do that, shall we? ;)

First thing we learn is that SIMU remote stuff is just re-branded Somfy. Great, another name! Looking further we find that Somfy uses some obscure protocol to prevent (replay) attacks (spoiler: it doesn't!) and there are tools for RTL-SDR and Arduino available. That's perfect!

Always sniff with RTL-SDR first!

Given the two re-brandings in the supply chain, I wasn't 100% sure our shutters really use the same protocol. So the first "hack" was to listen and decrypt the communication using RTL-SDR:

$ git clone https://github.com/dimhoff/radio_stuff
$ cd radio_stuff
$ make -C converters am_to_ook
$ make -C decoders decode_somfy
$ rtl_fm -M am -f 433.42M -s 270K | ./am_to_ook -d 10 -t 2000 -  | ./decode_somfy
<press some buttons on the remote>

The output contains the buttons I pressed, but also the id of the remote and the command counter (which is supposed to prevent replay attacks). At this point I could just use the id and the counter to send own commands, but if I'd do that too often, the real remote would stop working, as its counter won't increase and the receiver will drop the commands when the counters differ too much.

But that's good enough for now. I know I'm looking for the right protocol at the right frequency. As the end result should be an ESP32, let's move on!

Acquiring the right hardware

Contrary to an RTL-SDR, one usually does not have a spare ESP32 with 433MHz radio at home, so I went shopping: a NodeMCU-32S clone and a CC1101. The CC1101 is important as most 433MHz chips for Arduino/ESP only work at 433.92MHz, but Somfy uses 433.42MHz and using the wrong frequency would result in really bad reception. The CC1101 is essentially an SDR, as you can tune it to a huge spectrum of frequencies.

Oh and we need some cables, a bread board, the usual stuff ;)

The wiring is rather simple:

ESP32 wiring for a CC1101

And the end result isn't too beautiful either, but it works:

ESP32 and CC1101 in a simple case

Acquiring the right software

In my initial research I found an Arduino sketch and was totally prepared to port it to ESP32, but luckily somebody already did that for me! Even better, it's explicitly using the CC1101. Okay, okay, I cheated, I actually ordered the hardware after I found this port and the reference to CC1101. ;)

As I am using ESPHome for my ESPs, the idea was to add a "Cover" that's controlling the shutters to it. Writing some C++, how hard can it be?

Turns out, not that hard. You can see the code in my GitHub repo. It consists of two (relevant) files: somfy_cover.h and somfy.yaml.

somfy_cover.h essentially wraps the communication with the Somfy_Remote_Lib library into an almost boilerplate Custom Cover for ESPHome. There is nothing too fancy in there. The only real difference to the "Custom Cover" example from the documentation is the split into SomfyESPRemote (which inherits from Component) and SomfyESPCover (which inherits from Cover) -- this is taken from the Custom Sensor documentation and allows me to define one "remote" that controls multiple "covers" using the add_cover function. The first two params of the function are the NVS name and key (think database table and row), and the third is the rolling code of the remote (stored in somfy_secrets.h, which is not in Git).

In ESPHome a Cover shall define its properties as CoverTraits. Here we call set_is_assumed_state(true), as we don't know the state of the shutters - they could have been controlled using the other (real) remote - and setting this to true allows issuing open/close commands at all times. We also call set_supports_position(false) as we can't tell the shutters to move to a specific position.

The one additional feature compared to a normal Cover interface is the program function, which allows to call the "program" command so that the shutters can learn a new remote.

somfy.yaml is the ESPHome "configuration", which contains information about the used hardware, WiFi credentials etc. Again, mostly boilerplate. The interesting parts are the loading of the additional libraries and attaching the custom component with multiple covers and the additional PROG switches:

esphome:
  name: somfy
  platform: ESP32
  board: nodemcu-32s
  libraries:
    - SmartRC-CC1101-Driver-Lib@2.5.6
    - Somfy_Remote_Lib@0.4.0
    - EEPROM
  includes:
    - somfy_secrets.h
    - somfy_cover.h


cover:
  - platform: custom
    lambda: |-
      auto somfy_remote = new SomfyESPRemote();
      somfy_remote->add_cover("somfy", "badezimmer", SOMFY_REMOTE_BADEZIMMER);
      somfy_remote->add_cover("somfy", "kinderzimmer", SOMFY_REMOTE_KINDERZIMMER);
      App.register_component(somfy_remote);
      return somfy_remote->covers;

    covers:
      - id: "somfy"
        name: "Somfy Cover"
      - id: "somfy2"
        name: "Somfy Cover2"

switch:
  - platform: template
    name: "PROG"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy))->program();
  - platform: template
    name: "PROG2"
    turn_on_action:
      - lambda: |-
          ((SomfyESPCover*)id(somfy2))->program();

The switch to trigger the program mode took me a bit. As the Cover interface of ESPHome does not offer any additional functions besides movement control, I first wrote code to trigger "program" when "stop" was pressed three times in a row, but that felt really cumbersome and also had the side effect that the remote would send more than needed, sometimes confusing the shutters. I then decided to have a separate button (well, switch) for that, but the compiler yelled at me I can't call program on a Cover as it does not have such a function. Turns out, you need to explicitly cast to SomfyESPCover and then it works, even if the code becomes really readable, NOT. Oh and as the switch does not have any code to actually change/report state, it effectively acts as a button that can be pressed.

At this point we can just take an existing remote, press PROG for 5 seconds, see the blinds move shortly up and down a bit and press PROG on our new ESP32 remote and the shutters will learn the new remote.

And thanks to the awesome integration of ESPHome into HomeAssistant, this instantly shows up as a new controllable cover there too.

Future Additional Work

I started writing this post about a year ago… And the initial implementation had some room for improvement…

More than one remote

The initial code only created one remote and one cover element. Sure, we could attach that to all shutters (there are 4 of them), but we really want to be able to control them separately.

Thankfully I managed to read enough ESPHome docs, and learned how to operate std::vector to make the code dynamically accept new shutters.

Using ESP32's NVS

The ESP32 has a non-volatile key-value storage which is much nicer than throwing bits at an emulated EEPROM. The first library I used for that explicitly used EEPROM storage and it would have required quite some hacking to make it work with NVS. Thankfully the library I am using now has a plugable storage interface, and I could just write the NVS backend myself and upstream now supports that. Yay open-source!

Remaining issues

Real state is unknown

As noted above, the ESP does not know the real state of the shutters: a command could have been lost in transmission (the Somfy protocol is send-only, there is no feedback) or the shutters might have been controlled by another remote. At least the second part could be solved by listening all the time and trying to decode commands heard over the air, but I don't think this is worth the time -- worst that can happen is that a closed (opened) shutter receives another close (open) command and that is harmless as they have integrated endstops and know that they should not move further.

Can't program new remotes with ESP only

To program new remotes, one has to press the "PROG" button for 5 seconds. This was not exposed in the old library, but the new one does support "long press", I just would need to add another ugly switch to the config and I currently don't plan to do so, as I do have working remotes for the case I need to learn a new one.

06 June, 2021 09:22AM by evgeni

June 05, 2021

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

td 0.0.4 on CRAN: More Maintenance

The still fairly recent td package for accessing the twelvedata API for financial data has been updated on CRAN this morning, and is now at released version 0.0.4. This corrects something the previous 0.0.3 release from last weekend was meant to address, but didn’t quite do it.

Access to the helper function finding a proper user config file (for, e.g., the API config) is now correctly conditioned on R 4.0.0, and the versioned depends has been removed.

The NEWS entry follows.

Changes in version 0.0.4 (2021-06-05)

  • The version comparison was corrected and the package no longer (formally) depends on R (>= 4.0.0)

  • Very minor README.md edits

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

05 June, 2021 01:56PM

June 04, 2021

hackergotchi for Junichi Uekawa

Junichi Uekawa

emacs tramp mode for chroot.

emacs tramp mode for chroot. I saw tramp for docker and lxc so I figured it must be possible to write a mode for chroot. I wrote one for cros_sdk and that made my life much easier. I can build and run inside chroot from emacs transparently. Seems like it should also be possible to write something for dchroot.

04 June, 2021 11:25PM by Junichi Uekawa

June 03, 2021

Free Software Fellowship

Matthias Kirschner & FSFE People Trafficking, coercion of volunteers

Bounties have recently been offered for evidence of defamation and other malicious actions from the management of free software communities. This incredible picture comes from Albania.

Matthias Kirschner, FSFE President took controversial paternity leave in March and April 2018 while volunteers were doing most of the work without pay. These are facts.

Upon his return, he immediately left his wife Kristina to change the nappies and went back out to another OSCAL conference in Tirana, Albania during May 2018. He had attended OSCAL more than once.

This picture captures the happy daddy with a bunch of women at least ten years younger. Readers may recognize some of the women. All these young women appear far more dignified than the President of the FSFE. Some of them are developers, the President of the FSFE never wrote any code. The local drink is called Raki.

There have been serious concerns expressed about trafficking of women and volunteers in Albania's Open Labs group. The photo is from the Pazari i Ri fish market.

FSFE's REUSE campaign promotes Modern Day Slavery

FSFE, a registered charity in Germany, has now created a dedicated micro-site for their REUSE campaign.

What is this campaign about? Well, FSFE is asking unpaid volunteers to spend extra time adding more license text to our work. Their campaign asks us to do that for free, to spend time working on license text and spend more time running a tool to check the license text. Why should unpaid volunteers spend time and effort doing this? Why don't we just spend more time writing code? Well, the answer is buried in the front page of the micro-site:

Kirschner: A REUSE-compliant project makes the jobs of legal experts and compliance officers much easier.

Kirschner is talking about the legal experts and compliance officers in very large companies like Google, Amazon and IBM, his sponsors.

Is it really appropriate for a taxpayer-subsidized charity to be bullying unpaid volunteers to do this extra task for free and helping those big greedy companies? This doesn't sound like a real charity. It has just been revealed that Microsoft paid $0 in tax on the $314 billion surplus from transactions through Ireland in 2020. Here is that definition of People Trafficking from the US State Department:

Human trafficking can include, but does not require, movement. People may be considered trafficking victims regardless of whether they were born into a state of servitude, were exploited in their home town, were transported to the exploitative situation, previously consented to work for a trafficker, or participated in a crime as a direct result of being trafficked. At the heart of this phenomenon is the traffickers’ aim to exploit and enslave their victims and the myriad coercive and deceptive practices they use to do so.

People tricked into running the FSFE REUSE tool may be victims of trafficking within this definition. Remember, when women in FSFE's office asked to be paid for all the hours they work, Kirschner sacked them all.

Matthias Kirschner, Tirana, Albania, women, girls, Outreachy

People trafficking is a real problem

Undercover police in London released this photo of a woman being trafficked by an Albanian man in Oxford Street, London.

Matthias Kirschner, Chris Lamb, Oxford Street, London, Albanian women, people trafficking, modern slavery Matthias Kirschner, Tirana, Albania, women, girls, Outreachy

03 June, 2021 09:20PM

Matthias Kirschner & FSFE People Trafficking, coercion of volunteers

Bounties have recently been offered for evidence of defamation and other malicious actions from the management of free software communities. This incredible picture comes from Albania.

Matthias Kirschner, FSFE President took controversial paternity leave in March and April 2018 while volunteers were doing most of the work without pay. These are facts.

Upon his return, he immediately left his wife Kristina to change the nappies and went back out to another OSCAL conference in Tirana, Albania during May 2018. He had attended OSCAL more than once.

This picture captures the happy daddy with a bunch of women at least ten years younger. Readers may recognize some of the women. All these young women appear far more dignified than the President of the FSFE. Some of them are developers, the President of the FSFE never wrote any code. The local drink is called Raki.

There have been serious concerns expressed about trafficking of women and volunteers in Albania's Open Labs group. The photo is from the Pazari i Ri fish market.

FSFE's REUSE campaign promotes Modern Day Slavery

FSFE, a registered charity in Germany, has now created a dedicated micro-site for their REUSE campaign.

What is this campaign about? Well, FSFE is asking unpaid volunteers to spend extra time adding more license text to our work. Their campaign asks us to do that for free, to spend time working on license text and spend more time running a tool to check the license text. Why should unpaid volunteers spend time and effort doing this? Why don't we just spend more time writing code? Well, the answer is buried in the front page of the micro-site:

Kirschner: A REUSE-compliant project makes the jobs of legal experts and compliance officers much easier.

Kirschner is talking about the legal experts and compliance officers in very large companies like Google, Amazon and IBM, his sponsors.

Is it really appropriate for a taxpayer-subsidized charity to be bullying unpaid volunteers to do this extra task for free and helping those big greedy companies? This doesn't sound like a real charity. It has just been revealed that Microsoft paid $0 in tax on the $314 billion surplus from transactions through Ireland in 2020. Here is that definition of People Trafficking from the US State Department:

Human trafficking can include, but does not require, movement. People may be considered trafficking victims regardless of whether they were born into a state of servitude, were exploited in their home town, were transported to the exploitative situation, previously consented to work for a trafficker, or participated in a crime as a direct result of being trafficked. At the heart of this phenomenon is the traffickers’ aim to exploit and enslave their victims and the myriad coercive and deceptive practices they use to do so.

People tricked into running the FSFE REUSE tool may be victims of trafficking within this definition. Remember, when women in FSFE's office asked to be paid for all the hours they work, Kirschner sacked them all.

Matthias Kirschner, Tirana, Albania, women, girls, Outreachy

People trafficking is a real problem

Undercover police in London released this photo of a woman being trafficked by an Albanian man in Oxford Street, London.

Matthias Kirschner, Chris Lamb, Oxford Street, London, Albanian women, people trafficking, modern slavery Matthias Kirschner, Tirana, Albania, women, girls, Outreachy

03 June, 2021 09:20PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Digging into Kubernetes containers

Having built a single-node Kubernetes cluster and had a poke at what it’s doing in terms of networking, the next thing I want to do is figure out what it’s doing in terms of containers. You might argue this should have come before networking, but to me the networking piece is more non-standard than the container piece, so I wanted to understand that first.

Let’s start with a process listing on the host.

ps faxno user,stat,cmd

There are a number of processes from the host kernel we don’t care about:

kernel processes
    USER STAT CMD
       0 S    [kthreadd]
       0 I<    \_ [rcu_gp]
       0 I<    \_ [rcu_par_gp]
       0 I<    \_ [kworker/0:0H-events_highpri]
       0 I<    \_ [mm_percpu_wq]
       0 S     \_ [rcu_tasks_rude_]
       0 S     \_ [rcu_tasks_trace]
       0 S     \_ [ksoftirqd/0]
       0 I     \_ [rcu_sched]
       0 S     \_ [migration/0]
       0 S     \_ [cpuhp/0]
       0 S     \_ [cpuhp/1]
       0 S     \_ [migration/1]
       0 S     \_ [ksoftirqd/1]
       0 I<    \_ [kworker/1:0H-kblockd]
       0 S     \_ [cpuhp/2]
       0 S     \_ [migration/2]
       0 S     \_ [ksoftirqd/2]
       0 I<    \_ [kworker/2:0H-events_highpri]
       0 S     \_ [cpuhp/3]
       0 S     \_ [migration/3]
       0 S     \_ [ksoftirqd/3]
       0 I<    \_ [kworker/3:0H-kblockd]
       0 S     \_ [kdevtmpfs]
       0 I<    \_ [netns]
       0 S     \_ [kauditd]
       0 S     \_ [khungtaskd]
       0 S     \_ [oom_reaper]
       0 I<    \_ [writeback]
       0 S     \_ [kcompactd0]
       0 SN    \_ [ksmd]
       0 SN    \_ [khugepaged]
       0 I<    \_ [kintegrityd]
       0 I<    \_ [kblockd]
       0 I<    \_ [blkcg_punt_bio]
       0 I<    \_ [edac-poller]
       0 I<    \_ [devfreq_wq]
       0 I<    \_ [kworker/0:1H-kblockd]
       0 S     \_ [kswapd0]
       0 I<    \_ [kthrotld]
       0 I<    \_ [acpi_thermal_pm]
       0 I<    \_ [ipv6_addrconf]
       0 I<    \_ [kstrp]
       0 I<    \_ [zswap-shrink]
       0 I<    \_ [kworker/u9:0-hci0]
       0 I<    \_ [kworker/2:1H-kblockd]
       0 I<    \_ [ata_sff]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/39-mmc0]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/42-mmc1]
       0 S     \_ [scsi_eh_0]
       0 I<    \_ [scsi_tmf_0]
       0 S     \_ [scsi_eh_1]
       0 I<    \_ [scsi_tmf_1]
       0 I<    \_ [kworker/1:1H-kblockd]
       0 I<    \_ [kworker/3:1H-kblockd]
       0 S     \_ [jbd2/sda5-8]
       0 I<    \_ [ext4-rsv-conver]
       0 S     \_ [watchdogd]
       0 S     \_ [scsi_eh_2]
       0 I<    \_ [scsi_tmf_2]
       0 S     \_ [usb-storage]
       0 I<    \_ [cfg80211]
       0 S     \_ [irq/130-mei_me]
       0 I<    \_ [cryptd]
       0 I<    \_ [uas]
       0 S     \_ [irq/131-iwlwifi]
       0 S     \_ [card0-crtc0]
       0 S     \_ [card0-crtc1]
       0 S     \_ [card0-crtc2]
       0 I<    \_ [kworker/u9:2-hci0]
       0 I     \_ [kworker/3:0-events]
       0 I     \_ [kworker/2:0-events]
       0 I     \_ [kworker/1:0-events_power_efficient]
       0 I     \_ [kworker/3:2-events]
       0 I     \_ [kworker/1:1]
       0 I     \_ [kworker/u8:1-events_unbound]
       0 I     \_ [kworker/0:2-events]
       0 I     \_ [kworker/2:2]
       0 I     \_ [kworker/u8:0-events_unbound]
       0 I     \_ [kworker/0:1-events]
       0 I     \_ [kworker/0:0-events]

There are various basic host processes, including my SSH connections, and Docker. I note it’s using containerd. We also see kubelet, the Kubernetes node agent.

host processes
    USER STAT CMD
       0 Ss   /sbin/init
       0 Ss   /lib/systemd/systemd-journald
       0 Ss   /lib/systemd/systemd-udevd
     101 Ssl  /lib/systemd/systemd-timesyncd
       0 Ssl  /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf /var/lib/dhcp/dhclient.enx00e04c6851de.leases -I -df /var/lib/dhcp/dhclient6.enx00e04c6851de.leases enx00e04c6851de
       0 Ss   /usr/sbin/cron -f
     104 Ss   /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
       0 Ssl  /usr/sbin/dockerd -H fd://
       0 Ssl  /usr/sbin/rsyslogd -n -iNONE
       0 Ss   /usr/sbin/smartd -n
       0 Ss   /lib/systemd/systemd-logind
       0 Ssl  /usr/bin/containerd
       0 Ss+  /sbin/agetty -o -p -- \u --noclear tty1 linux
       0 Ss   sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
       0 Ss    \_ sshd: root@pts/1
       0 Ss    |   \_ -bash
       0 R+    |       \_ ps faxno user,stat,cmd
       0 Ss    \_ sshd: noodles [priv]
    1000 S         \_ sshd: noodles@pts/0
    1000 Ss+           \_ -bash
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
    1000 Ss   /lib/systemd/systemd --user
    1000 S     \_ (sd-pam)
       0 Ssl  /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.4.1

And that just leaves a bunch of container related processes:

container processes
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-apiserver --advertise-address=192.168.53.147 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456 -address /run/containerd/containerd.sock
       0 Ssl   \_ etcd --advertise-client-urls=https://192.168.53.147:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.53.147:2380 --initial-cluster=udon=https://192.168.53.147:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.53.147:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.53.147:2380 --name=udon --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=udon
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/bin/weave-npc
       0 S<        \_ /usr/sbin/ulogd -v
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/sh /home/weave/launch.sh
       0 Sl        \_ /home/weave/weaver --port=6783 --datapath=datapath --name=12:82:8f:ed:c7:bf --http-addr=127.0.0.1:6784 --metrics-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=192.168.0.0/24 --nickname=udon --ipalloc-init consensus=0 --conn-limit=200 --expect-npc --no-masq-local
       0 Sl        \_ /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -peer-name=12:82:8f:ed:c7:bf -log-level=debug
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30 -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/bash /usr/local/bin/run.sh
       0 S         \_ nginx: master process nginx -g daemon off;
   65534 S             \_ nginx: worker process
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074 -address /run/containerd/containerd.sock
     101 Ss    \_ /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 Ssl       \_ /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 S             \_ nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 S                 \_ nginx: cache manager process

There’s a lot going on there. Some bits are obvious; we can see the nginx ingress controller, our echoserver (the other nginx process hanging off /usr/local/bin/run.sh), and some things that look related to weave. The rest appears to be Kubernetes-related infrastructure.

kube-scheduler, kube-controller-manager, kube-apiserver, kube-proxy all look like core Kubernetes bits. etcd is a distributed, reliable key-value store. coredns is a DNS server, with plugins for Kubernetes and etcd.

What does Docker claim is happening?

docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS     NAMES
d5fa78fa31f1   k8s.gcr.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   3 days ago   Up 3 days             k8s_controller_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
6669168db70d   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run.…"   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
62b369de8d8c   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
649a507d4583   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
4a30785f9187   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
9ffd6b668ddf   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
534c0a698478   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
36b418e69ae7   df29c0a4002c                          "/home/weave/launch.…"   4 days ago   Up 4 days             k8s_weave_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_1
48d735f7f44e   weaveworks/weave-npc                  "/usr/bin/launch.sh"     4 days ago   Up 4 days             k8s_weave-npc_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
7104f65b5d92   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
26d92a720c56   4359e752b596                          "/usr/local/bin/kube…"   4 days ago   Up 4 days             k8s_kube-proxy_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
73fae81715b6   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
89f35bf7a825   771ffcf9ca63                          "kube-apiserver --ad…"   4 days ago   Up 4 days             k8s_kube-apiserver_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
afa9798c9f66   a4183b88f6e6                          "kube-scheduler --au…"   4 days ago   Up 4 days             k8s_kube-scheduler_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
2dabff6e4f59   0369cf4303ff                          "etcd --advertise-cl…"   4 days ago   Up 4 days             k8s_etcd_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0
4b3708b62f4d   e16544fd47b0                          "kube-controller-man…"   4 days ago   Up 4 days             k8s_kube-controller-manager_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
fd95c597ff31   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
589c1545d9e0   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
6f417fd8a8c5   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
c2ff2c50f0bc   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0

Ok, that’s interesting. Before we dig into it, what does Kubernetes say? (I’ve trimmed the RESTARTS + AGE columns to make things fit a bit better here; they weren’t interesting).

noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                        READY   STATUS
default         hello-node-59bffcc9fd-8hkgb                 1/1     Running
ingress-nginx   ingress-nginx-admission-create-8jgkt        0/1     Completed
ingress-nginx   ingress-nginx-admission-patch-jdq4t         0/1     Completed
ingress-nginx   ingress-nginx-controller-5b74bc9868-bczdr   1/1     Running
kube-system     coredns-558bd4d5db-4nvrg                    1/1     Running
kube-system     coredns-558bd4d5db-flrfq                    1/1     Running
kube-system     etcd-udon                                   1/1     Running
kube-system     kube-apiserver-udon                         1/1     Running
kube-system     kube-controller-manager-udon                1/1     Running
kube-system     kube-proxy-6d8kg                            1/1     Running
kube-system     kube-scheduler-udon                         1/1     Running
kube-system     weave-net-mchmg                             2/2     Running

So there are a lot more Docker instances running than Kubernetes pods. What’s happening there? Well, it turns out that Kubernetes builds pods from multiple different Docker instances. If you think of a traditional container as being composed of a set of namespaces (process, network, hostname etc.) and a cgroup, then a pod is made up of the shared namespaces, and each Docker instance within that pod has its own cgroup. Ian Lewis has a much deeper discussion in What are Kubernetes Pods Anyway?, but my takeaway is that a pod is a set of sort-of containers that are coupled. We can see this more clearly if we ask systemd for the cgroup breakdown:

systemd-cgls
Control group /:
-.slice
├─user.slice 
│ ├─user-0.slice 
│ │ ├─session-29.scope 
│ │ │ ├─ 515899 sshd: root@pts/1
│ │ │ ├─ 515913 -bash
│ │ │ ├─3519743 systemd-cgls
│ │ │ └─3519744 cat
│ │ └─user@0.service …
│ │   └─init.scope 
│ │     ├─515902 /lib/systemd/systemd --user
│ │     └─515903 (sd-pam)
│ └─user-1000.slice 
│   ├─user@1000.service …
│   │ └─init.scope 
│   │   ├─2564011 /lib/systemd/systemd --user
│   │   └─2564012 (sd-pam)
│   └─session-110.scope 
│     ├─2564007 sshd: noodles [priv]
│     ├─2564040 sshd: noodles@pts/0
│     └─2564041 -bash
├─init.scope 
│ └─1 /sbin/init
├─system.slice 
│ ├─containerd.service …
│ │ ├─  21383 /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff31…
│ │ ├─  21408 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc…
│ │ ├─  21432 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0…
│ │ ├─  21459 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c5…
│ │ ├─  21582 /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f66…
│ │ ├─  21607 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d…
│ │ ├─  21640 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825…
│ │ ├─  21648 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59…
│ │ ├─  22343 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b6…
│ │ ├─  22391 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c56…
│ │ ├─  26992 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92…
│ │ ├─  27405 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e…
│ │ ├─  27531 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7…
│ │ ├─  27941 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478…
│ │ ├─  27960 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddf…
│ │ ├─  28131 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f9187…
│ │ ├─  28159 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d4583…
│ │ ├─ 514667 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8c…
│ │ ├─ 514976 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18…
│ │ ├─ 698904 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70d…
│ │ ├─ 699284 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f1…
│ │ └─2805479 /usr/bin/containerd
│ ├─systemd-udevd.service 
│ │ └─2805502 /lib/systemd/systemd-udevd
│ ├─cron.service 
│ │ └─2805474 /usr/sbin/cron -f
│ ├─docker.service …
│ │ └─528 /usr/sbin/dockerd -H fd://
│ ├─kubelet.service 
│ │ └─2805501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap…
│ ├─systemd-journald.service 
│ │ └─2805505 /lib/systemd/systemd-journald
│ ├─ssh.service 
│ │ └─2805500 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
│ ├─ifup@enx00e04c6851de.service 
│ │ └─2805675 /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf…
│ ├─rsyslog.service 
│ │ └─2805488 /usr/sbin/rsyslogd -n -iNONE
│ ├─smartmontools.service 
│ │ └─2805499 /usr/sbin/smartd -n
│ ├─dbus.service 
│ │ └─527 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile…
│ ├─systemd-timesyncd.service 
│ │ └─2805513 /lib/systemd/systemd-timesyncd
│ ├─system-getty.slice 
│ │ └─getty@tty1.service 
│ │   └─536 /sbin/agetty -o -p -- \u --noclear tty1 linux
│ └─systemd-logind.service 
│   └─533 /lib/systemd/systemd-logind
└─kubepods.slice 
  ├─kubepods-burstable.slice 
  │ ├─kubepods-burstable-pod1af8c5f362b7b02269f4d244cb0e6fbf.slice 
  │ │ ├─docker-6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94.scope …
  │ │ │ └─21493 /pause
  │ │ └─docker-89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc.scope …
  │ │   └─21699 kube-apiserver --advertise-address=192.168.33.147 --allow-privi…
  │ ├─kubepods-burstable-podf8b2b52e_6673_4966_82b1_3fbe052a0297.slice 
  │ │ ├─docker-649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c.scope …
  │ │ │ └─28187 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0.scope …
  │ │   └─27987 /pause
  │ ├─kubepods-burstable-podc2a3008c1d9895f171cd394e38656ea0.slice 
  │ │ ├─docker-c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b.scope …
  │ │ │ └─21481 /pause
  │ │ └─docker-2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456.scope …
  │ │   └─21701 etcd --advertise-client-urls=https://192.168.33.147:2379 --cert…
  │ ├─kubepods-burstable-pod629dc49dfd9f7446eb681f1dcffe6d74.slice 
  │ │ ├─docker-fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413.scope …
  │ │ │ └─21491 /pause
  │ │ └─docker-afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752.scope …
  │ │   └─21680 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/sche…
  │ ├─kubepods-burstable-podb9af9615_8cde_4a18_8555_6da1f51b7136.slice 
  │ │ ├─docker-48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4.scope …
  │ │ │ ├─27424 /usr/bin/weave-npc
  │ │ │ └─27458 /usr/sbin/ulogd -v
  │ │ ├─docker-36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b.scope …
  │ │ │ ├─27549 /bin/sh /home/weave/launch.sh
  │ │ │ ├─27629 /home/weave/weaver --port=6783 --datapath=datapath --name=12:82…
  │ │ │ └─27825 /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -pee…
  │ │ └─docker-7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e.scope …
  │ │   └─27011 /pause
  │ ├─kubepods-burstable-pod4d7d3d81_a769_4de9_a4fb_04763b7c1605.slice 
  │ │ ├─docker-6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082.scope …
  │ │ │ └─698925 /pause
  │ │ └─docker-d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074.scope …
  │ │   ├─ 699303 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-ser…
  │ │   ├─ 699316 /nginx-ingress-controller --publish-service=ingress-nginx/ing…
  │ │   ├─ 699405 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/ngi…
  │ │   ├─1075085 nginx: worker process
  │ │   ├─1075086 nginx: worker process
  │ │   ├─1075087 nginx: worker process
  │ │   ├─1075088 nginx: worker process
  │ │   └─1075089 nginx: cache manager process
  │ ├─kubepods-burstable-pod1976f4d6_647c_45ca_b268_95f071f064d5.slice 
  │ │ ├─docker-4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf.scope …
  │ │ │ └─28178 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70.scope …
  │ │   └─27995 /pause
  │ └─kubepods-burstable-pod1d1b9018c3c6e7aa2e803c6e9ccd2eab.slice 
  │   ├─docker-589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856.scope …
  │   │ └─21489 /pause
  │   └─docker-4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62.scope …
  │     └─21690 kube-controller-manager --authentication-kubeconfig=/etc/kubern…
  └─kubepods-besteffort.slice 
    ├─kubepods-besteffort-podc7111c9e_7131_40e0_876d_be89d5ca1812.slice 
    │ ├─docker-62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2.scope …
    │ │ └─514688 /pause
    │ └─docker-7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30.scope …
    │   ├─514999 /bin/bash /usr/local/bin/run.sh
    │   ├─515039 nginx: master process nginx -g daemon off;
    │   └─515040 nginx: worker process
    └─kubepods-besteffort-pod8bf2d7ec_4850_427f_860f_465a9ff84841.slice 
      ├─docker-73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d.scope …
      │ └─22364 /pause
      └─docker-26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878.scope …
        └─22412 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.c…

Again, there’s a lot going on here, but if you look for the kubepods.slice piece then you can see our pods are divided into two sets, kubepods-burstable.slice and kubepods-besteffort.slice. Under those you can see the individual pods, all of which have at least 2 separate cgroups, one of which is running /pause. Turns out this is a generic Kubernetes image which basically performs the process reaping that an init process would do on a normal system; it just sits and waits for processes to exit and cleans them up. Again, Ian Lewis has more details on the pause container.
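
A minimal extra sanity check of the shared-namespace claim, using the ingress-nginx container IDs from the docker ps output above: compare the namespace links of the pause container and its application container. Exactly which namespaces end up shared depends on the pod spec (PID namespace sharing is off by default), but the network namespace inodes should match while the cgroups differ.

# PAUSE_PID=$(docker inspect --format '{{.State.Pid}}' 6669168db70d)   # pause container of the ingress pod
# APP_PID=$(docker inspect --format '{{.State.Pid}}' d5fa78fa31f1)     # nginx ingress controller in the same pod
# readlink /proc/${PAUSE_PID}/ns/net /proc/${APP_PID}/ns/net           # same inode: the network namespace is shared
# readlink /proc/${PAUSE_PID}/ns/pid /proc/${APP_PID}/ns/pid           # different inodes: separate PID namespaces
# head -3 /proc/${PAUSE_PID}/cgroup /proc/${APP_PID}/cgroup            # separate cgroups under the same pod slice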

Finally, let’s dig into the actual containers. The pause container seems like a good place to start. We can examine the details of where the filesystem is (this may differ if you’re not using the overlay2 storage driver). The hex string is the container ID listed by docker ps.

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 6669168db70d
/var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# cd /var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# find . | sed -e 's;^./;;'
pause
proc
.dockerenv
etc
etc/resolv.conf
etc/hostname
etc/mtab
etc/hosts
sys
dev
dev/shm
dev/pts
dev/console
# file pause
pause: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d35dab7152881e37373d819f6864cd43c0124a65, stripped

This is a nice, minimal container. The pause binary is statically linked, so there are no extra libraries required and it’s just a basic set of support devices and files. I doubt the pieces in /etc are even required. Let’s try the echoserver next:

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 7cbb177bee18
/var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# cd /var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# find . | wc -l
3358

Wow. That’s a lot more stuff. Poking /etc/os-release shows why:

# grep PRETTY etc/os-release
PRETTY_NAME="Ubuntu 16.04.2 LTS"

Aha. It’s an Ubuntu-based image. We can cut straight to the chase with the nginx ingress container:

# docker exec d5fa78fa31f1 grep PRETTY /etc/os-release
PRETTY_NAME="Alpine Linux v3.13"

That’s a bit more reasonable an image for a container; Alpine Linux is a much smaller distro.

I don’t feel there’s a lot more poking to do here. It’s not something I’d expect to do on a normal Kubernetes setup, but I wanted to dig under the hood to make sure it really was just a normal container situation. I think the next steps involve adding a bit more complexity - that means building a cluster with more than a single node, and then running an application that’s a bit more complicated. That should help explore two major advantages of running this sort of setup: resiliency when a node dies, and the ability to scale out beyond what a single node can do.

03 June, 2021 08:20PM

Reproducible Builds

Reproducible Builds in May 2021

Welcome to the May 2021 report from the Reproducible Builds project

In these reports we try to highlight the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.


The president of the United States signed an executive order this month outlining policies aimed at improving cybersecurity in the US. The executive order comes after a number of highly-publicised security problems, such as a ransomware attack that affected an oil pipeline between Texas and New York and the SolarWinds hack that affected a large number of US federal agencies.

A summary of the (8,000-word) document is available, but section four is relevant in the context of reproducible builds. Titled “Enhancing Software Supply Chain Security”, it outlines a plan that might involve:

requiring developers to maintain greater visibility into their software and making security data publicly available. It stands up a concurrent public-private process to develop new and innovative approaches to secure software development and uses the power of Federal procurement to incentivize the market. Finally, it creates a pilot program to create an “energy star” type of label so the government – and the public at large – can quickly determine whether software was developed securely.

In response to this Executive Order, the US National Institute of Standards and Technology (NIST) announced that they would host a virtual workshop in early June to both respond to and attempt to fulfill its terms. In addition, David Wheeler published a blog post on the Linux Foundation’s blog on the topic. Titled How LF communities enable security measures required by the US Executive Order on Cybersecurity, David’s post explicitly mentions reproducible builds, particularly the Yocto Project’s support for fully-reproducible builds.


David A. Wheeler posted to our mailing list to announce that the public defense of his Fully Countering Trusting Trust through Diverse Double-Compiling (DDC) PhD thesis at George Mason University is now available online.


Dan Shearer announced a new tool called “Not-Forking”, which attempts to avoid duplicating the source code of one project within another. This is highly relevant in the context of reproducible builds, as embedded code copies are often the cause of reproducibility issues: in many cases, addressing the problem upstream (and then ensuring a fixed version is available in distributions) is not a sufficient fix, as any embedded code copies remain unaffected. (This has been observed a number of times, particularly with embedded copies of help2man and similar documentation generation tools.)


Due to the recent upheavals on the Freenode IRC network, the #archlinux-reproducible channel has moved to Libera Chat. (The more general #reproducible-builds IRC channel, which is hosted on the OFTC network, has not moved.)


On our mailing list, Marcus Hoffman started a thread after finding that he was unable to hunt down the cause of an unreproducible build of an Android APK package, which Bernhard M. Wiedemann managed to track down to a ‘pg-map-id’ field and a related checksum. This resulted in an issue being reported against Google’s Android toolchain which, as Marcus himself wrote, “hope it get’s fixed this year”.


Roland Clobus reported on his progress towards making the Debian ‘Live’ image reproducible on our mailing list this month, coordinating with Holger Levsen to add automatic, daily testing of Live images and to produce diffoscope reports when they are not reproducible. Elsewhere in Debian, 9 reviews of Debian packages were added, 8 were updated and 29 were removed this month, adding to our knowledge about identified issues. Chris Lamb also identified a new random_uuid_in_notebooks_generated_by_nbsphinx toolchain issue.

Software development

Upstream patches

diffoscope

diffoscope is the Reproducible Builds project’s in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it provides human-readable diffs from many kinds of binary formats (a short usage sketch follows the change lists below). This month, Chris Lamb made a number of changes including releasing version 174, version 175 and version 176:

  • Bug fixes:

    • Check that we are parsing an actual Debian .buildinfo file, not just a file with that particular extension — after all, it could be any file. (#254, #987994)
    • Support signed .buildinfo files again. It appears that some versions of file(1) report them as a PGP signed message. []
    • Use the actual filesystem path name (instead of diffoscope’s concept of the source archive name) in order to correct filename filtering when an APK file has been extracted from a container format. In particular, we need to filter the auto-incremented 1.apk instead of original-name.apk. (#255)
  • New features:

    • Update ffmpeg tests to work with version 4.4. (#258)
    • Correct grammar in a fsimage.py debug message. []
  • Misc:

    • Don’t unnecessarily call os.path.basename twice in the Android APK comparator. []
    • Added instructions on how to install diffoscope on openSUSE on the diffoscope website [].
    • Add a comment about stripping filenames. []
    • Corrected a reference to site.salsa_url which was breaking the “File a new issue” link on the website. []

In addition:

  • Keith Smiley:

    • Improve support for Apple provisioning profiles. []
    • Fix ignoring objdump-related tests on MacOS. MacOS has a version of objdump(1) that doesn’t support --info so the tests would fail on that operating system. []
  • Mattia Rizzolo:

    • Fix recognition of compressed .xz archives with file(1) version 5.40. [][]
    • Embed small test fixture in the code itself, rather than a separate file. []
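
As a reminder of what this looks like in practice, comparing two builds of the same artifact is a single invocation (a minimal sketch; the package filenames here are hypothetical). The first form prints a human-readable diff on stdout and exits non-zero if the inputs differ; the second writes the same information as an HTML report:

diffoscope build1/hello_2.10-2_amd64.deb build2/hello_2.10-2_amd64.deb
diffoscope --html report.html build1/hello_2.10-2_amd64.deb build2/hello_2.10-2_amd64.deb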

strip-nondeterminism

Chris Lamb made the following changes to strip-nondeterminism, our tool to remove specific non-deterministic results from a completed build:

  • Added support for Python pyzip files: they require special handling to not mangle the UNIX shebang. (#18)

  • Dropped single-debian-patch, etc. from the Debian source package options. []

  • Version 1.12.0-1 was uploaded to Debian unstable by Chris Lamb.

Website and documentation

Quite a few changes were made to the main Reproducible Builds website and documentation this month, including:

  • Arnout Engelen:

    • Add a section regarding contributing to NixOS. []
  • Chris Lamb:

    • Incorporate Holger Levsen’s suggestion to improve the homepage text. []
  • Holger Levsen:

    • Make the contribute page look a bit less like it is ‘under construction’, including explaining how we care about all distros and projects. [][][][]
    • Create an Arch Linux contribution page. [][]
    • Make sponsor link visible in the sidebar. []
  • Ian Muchina:

    • Add syntax highlight styles. []
  • Jelle van der Waa:

  • Ludovic Courtès:

    • Explain how to contribute to reproducible builds related to GNU Guix. []
  • Roland Clobus:

    • Added a trailing slash, fixing access to the Debian and Archlinux contribution pages. []
    • Fix markup as reported by msgfmt. []

Testing framework

The Reproducible Builds project operates a Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:

    • Automatic node health check improvements:

    • Improvements to the common-functions.sh library:

      • Set a more sensible default for the locale early on. []
      • Various visual improvements, including changes to script output. [][]
      • Improve debug output. [][]
      • Only notify an IRC channel if a channel is actually configured. []
    • Improvements to cleanup routines:

      • Cleanup sbuild(1) directories using sudo(8) after three days. [][][]
      • Loosen a regular expression to detect failures when removing stuff. []
    • Misc:

      • Increase kernel inotify(7) watch limit further on all hosts. The value is now four times the default. []
      • Don’t try to install the devscripts package from the buster-backports distribution. []
      • Improve grammar in some comments that are seen every day. []
  • Mattia Rizzolo:

    • Stop filtering out build failures due to -ffile-prefix-map: this flag is the default for the official dpkg package, so these are now “real” build failures. []
    • Export package ‘not for us’ (NFU) and ‘blacklist’ states in the reproducible.json file, but keep excluding them from tracker.json. []
    • Update the IP addresses of armhf architecture hosts. []
    • Properly alternate between -amd64 and -686 Debian kernels on the i386 architecture builders. []
    • Disable the man-db package everywhere, to save time in virtually all apt install/upgrade actions. []
  • Vagrant Cascadian:

    • Add new armhf architecture build nodes. [][]
    • Retire all machines with only 2GiB of ram. []
    • Drop Debian buster kernel configurations for cbxi4* and wbq0 hosts. []
    • Keep imx6 systems running Debian buster kernels. []
    • Prepare to switch armhf nodes over to Debian bullseye. [][]

Finally, build node maintenance was performed by Holger Levsen [][][] and Vagrant Cascadian [][][][]


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:


This month’s report was written by Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

03 June, 2021 01:43PM

John Goerzen

Roundup of Unique Data/Storage Hosting Options

Recently I have been taking another look at the services at rsync.net and it got me thinking: what would I do with a lot of storage? What might I want to run with it, if it were fairly cheap?

  • Backups are an obvious place to start. Borgbackup makes a pretty compelling option: very bandwidth-efficient thanks to block-level rolling hash dedup, encryption fully on the client side, etc. Borg can run over ssh, though it does need a server-side program (a minimal example follows this list).
  • Nextcloud is another option. With Google Photos getting quite expensive now, if you could have a TB of storage that you control, what might you do with it? Nextcloud also includes IM, video chat, and online document editing similar to Google Docs.
  • I’ve written before about the really neat properties of Syncthing: distributed synchronization that needs no server component. It also supports untrusted nodes in the mesh, where all content is encrypted before it reaches them. Sometimes an intermediary node is useful; for instance, if nodes A and C are to sync but are rarely online at the same time, an untrusted node B that is always online can facilitate synchronization. A server with some space could help with this.
  • A relay for NNCP or UUCP.
  • More broadly, you could self-host your photo or video collection.
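
Picking up the Borg idea from the first bullet, here is a minimal sketch of what that looks like against a remote repository over ssh. The host, user and repository path are hypothetical, and Borg has to be available on the remote end as well (or the provider has to support borg serve):

# one-time: create an encrypted repository on the remote host
borg init --encryption=repokey ssh://user@storage.example.com/./backups/home

# per run: create a deduplicated, client-side-encrypted archive of the home directory
borg create --stats ssh://user@storage.example.com/./backups/home::home-{now:%Y-%m-%d} ~/

# keep 7 daily and 4 weekly archives, expire the rest
borg prune --keep-daily 7 --keep-weekly 4 ssh://user@storage.example.com/./backups/home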

Let’s start taking a look at what’s out there. I’m going to try to focus on things that are unique for some reason: pricing, features, etc. Incidentally, good reviews are hard to find due to the proliferation of affiliate links. I have no affiliate relationships with anyone mentioned here and there are no affiliate links in this post.

I’ll start with the highest-end community and commercial options (though both are quite competitive on price for what they are), and then move on to the cheaper options.

Community option: SDF

SDF is somewhat hard to define. “What is SDF?” could prompt answers like:

  • A community-run network offering free Unix shells to the public
  • A diverse community of people that connect with unique tools. A social network in the 80s sense, sort of.
  • A provider of… let me see… VPN, DSL, and even dialup access.
  • An organization that runs various Open Source social network services, including Mastodon, Pixelfed (image sharing), PeerTube (video sharing), WordPress, even Minecraft.
  • A provider of various services for a nominal charge: $3/mo gets you access to the MetaArray with 800GB of storage space which you have shell access to, and can store stuff on with Nextcloud, host public webpages, etc.
  • Thriving communities around amateur radio, musicians, Plan 9, and even – brace yourself – TOPS-20, a DEC operating system first released in 1976 and not updated since 1988.
  • There’s even a Wikipedia article about SDF.

There’s a lot there. SDF lets you use things for yourself, of course, but you can also join a community. It’s not a commercial service backed by SLAs — it’s best-effort — but it’s been around more than 30 years and has a great track record.

Top commercial option for backup storage: rsync.net

rsync.net offers storage broadly over SSH: sftp, rsync, scp, borg, rclone, restic, git-annex, git, and such. You do not get a shell, but you do get to run a few noninteractive commands via ssh. You can, for instance, run git clone on the rsync server.

The rsync.net special sauce is in ZFS. They run raidz3 on their arrays (and also offer dual-location setups for an additional fee), offer both free and paid ZFS snapshots, etc. The service is designed to be extremely reliable, particularly for backups, and it seems to me to meet those goals.

Basic storage is $0.025 per GB/mo, but with certain account types such as borg, can be had for $0.015 per GB/mo. The minimum size is 400GB or $10/mo. There are no bandwidth charges. This makes it quite economical even compared to, say, S3. Additional discounts start at 10TB, so 10TB with rsync.net would cost $204.80/mo or $81.92 on the borg plan.

You won’t run Nextcloud on this thing, but for backups that must be reliable, or even a photo collection or something, it makes perfect sense.

When you look into other options, you’ll find that other providers are a lot more vague about their storage setup than rsync.net.

Various offerings from Hetzner

Hetzner is one of Europe’s large hosting companies, and they have several options of interest.

Their Storage Box competes directly with the rsync.net service. Their per-GB storage cost is lower than rsync.net’s, and although they do include a certain amount of free bandwidth with each account, bandwidth is not unlimited and could result in charges. Still, if you don’t drive 2x or more of your storage usage in bandwidth each month, it would be cheaper than rsync.net. The Storage Box also uses ZFS with some kind of redundancy, though they don’t specify the details.

What differentiates them from rsync.net is the protocol support. They support sftp, scp, Borg, ssh, rsync, etc. just as rsync.net does. But then they also throw in Samba/CIFS, FTPS, HTTPS, and WebDAV – all optionally enabled or disabled by you. Although things like sshfs exist, they aren’t particularly optimal for some use cases, and CIFS support may just be what you need in some situations.
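
As an illustration of the plain-ssh side of that protocol list, pushing a directory to a Storage Box is just an ordinary rsync invocation. A minimal sketch; the username and hostname are hypothetical, Hetzner tells you the real ones when the box is provisioned:

# mirror a local photo directory to the box, removing files deleted locally
rsync -av --delete ~/photos/ u123456@storagebox.example.com:photos/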

10TB with Hetzner would cost EUR 39.90/mo, or about $48.84/mo. (This figure is higher for Europeans, who also have to pay VAT.)

Hetzner also offers a Storage Share, which is a private Nextcloud instance. 10TB of that is exactly the same cost as 10TB of the Storage Box. You can add your own users, groups, etc. to this, as you are the Nextcloud admin of your instance. Hetzner throws in automatic updates (which is great, as updates have been a pain in my side for a long time). Nextcloud is ideal for things like photo sharing, even has email and chat built in, etc. For about the same price as 2TB of Google One, you can have 2TB of Nextcloud with all those services for yourself. Not bad. You can also mount a Nextcloud instance with WebDAV.

Interestingly, Nextcloud supports “external storages” as a backend for the data. It supports another Nextcloud instance, OpenStack or S3 object storage, and SFTP, SMB/CIFS, and WebDAV. If you’re thinking you’d like both SFTP and Nextcloud access to a pool of storage, I imagine you could always get a large Storage Box from Hetzner (internal transfer is free), pair it with a small Nextcloud instance, and link the two with Nextcloud external storage.
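
To give a flavour of the WebDAV option mentioned above: with the davfs2 package installed, a Nextcloud instance can be mounted like any other filesystem. A minimal sketch; the hostname and username are hypothetical, and the credentials would normally go into /etc/davfs2/secrets:

sudo mount -t davfs https://cloud.example.com/remote.php/dav/files/alice/ /mnt/nextcloud

# or, to make it persistent, an /etc/fstab entry along these lines:
# https://cloud.example.com/remote.php/dav/files/alice/ /mnt/nextcloud davfs rw,user,noauto 0 0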

Dedicated Servers

If you want a more DIY approach, you can find some interesting deals on actual dedicated server hardware – you get the entire machine to yourself. I’ve been using OVH’s SoYouStart for a number of years, with good experiences, and they have a number of server configurations available. For instance, for $45.99, you can get a Xeon box with 4x2TB drives and 32GB RAM. With RAID5 or raidz1, that’s 6TB of available space – and cheaper than the 6TB from rsync.net (though less redundant), plus you get the whole box to yourself too. OVH directly has some more storage servers; for instance, you can get a box with 4x4TB + 1x500GB SSD for $86.75/mo, giving you 12TB available with RAID5/raidz1, plus a 16GB server to do what you want with.

Hetzner also has some larger options available, for instance 2x4TB at EUR39 or 2x8TB at EUR54, both with 64GB of RAM.

Bargain Corner

Yes, you can find 10TB for $25/mo. It’s hosted on Ceph, by what appears to be mostly a single person (though with a lot of experience and a fair bit of transparency). You’re not going to get the round-the-clock support experience of rsync.net, nor its raidz3 level of redundancy – but if you don’t need that, there are quite a few options.

Let’s start with Lima Labs. Yes, 10TB is $25/mo, and they support sftp, rsync, borg, and even NFS mounts on storage backed by Ceph. The owner, Sam, seems to be a nice guy but the service isn’t going to be on the scale of rsync.net or Hetzner. That may or may not be OK for your needs – I mean, you can even get 1TB for $5/mo, so there are some fantastic deals to be had here.

BorgBase does Borg hosting and borg hosting only. You can get 1TB for $6.67/mo or, for instance, 10TB for $53.46. They don’t say much about their infrastructure and it’s hard to get a read on the company, but for Borg backups, it could be a nice option.

Bargain Corner Part 2: Seedboxes

There’s a market out there of companies offering BitTorrent seeding and downloading services. Typically, these services offer you Unix ssh access to a shell, a bunch of space on completely non-redundant drives (the theory being that the data on them is transient), and lots of bandwidth, all for a low price. Some people use them for BitTorrent, others for media serving and such.

If you are willing to accept the lowest level of drive redundancy, there are some deals to be had. Whatbox is a popular leader here, and has an extensive wiki with info. Or you can find some seedbox.io “shared storage” plans – for instance, 12TB for $32.49/mo. But these are completely non-redundant drives.

Seedbox has a partner company, Walker Servers, with some interesting deals; for instance, 4x8TB for EUR 52.45. Not bad for 24TB usable with RAID5 – but Walker Servers is completely unknown to me and doesn’t publish a phone number. So, YMMV.

Conclusion

I’m sure I’ve left out many quality options here, but hopefully this is enough to give a general lay of the land. Leave other suggestions in the comments.

03 June, 2021 04:53AM by John Goerzen

June 02, 2021

Sven Hoexter

Avoiding the GitHub WebUI

Now that GitHub has released v1.0 of the gh CLI tool, and it is all over HN, it might make sense to write a note about the clumsy aliases and shell functions I cobbled together over the past month. The background story is that my dayjob moved to GitHub, coming from Bitbucket. From my point of view the WebUI for Bitbucket is mediocre, but the one at GitHub is just awful and painful to use, especially for PR processing. So I longed for the terminal and ended up with gh and wtfutil as a dashboard.

The setup we have is painful on its own, with several orgs and repos, some of which are more like monorepos covering several corners of infrastructure, and some of which are very focused on a single component. All workflows are anti-GitHub workflows, so you must have permission on the repo, create a feature branch in that repo, and open a PR for the merge back into master.

gh functions and aliases

# setup a token with perms to everything, dealing with SAML is a PITA
export GITHUB_TOKEN="c0ffee4711"
# I use a light theme on my terminal, so adjust the gh theme
export GLAMOUR_STYLE="light"

#simple aliases to poke at a PR
alias gha="gh pr review --approve"
alias ghv="gh pr view"
alias ghd="gh pr diff"

### github support functions, most invoked with a PR ID as $1

#primary function to review PRs
function ghs {
    gh pr view ${1}
    gh pr checks ${1}
    gh pr diff ${1}
}

# very custom PR create function relying on ORG and TEAM settings hard coded
# main idea is to create the PR with my team directly assigned as reviewer
function ghc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    git push --set-upstream origin HEAD
    gh pr create -f -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam"
}

# merge a PR and update master if we're not in a different branch
function ghm {
    gh pr merge -d -r ${1}
    if [[ "$(git rev-parse --abbrev-ref HEAD)" =~ (master|main) ]]; then
        git pull
    fi
}

# get an overview over the files changed in a PR
function ghf {
    gh pr diff ${1} | diffstat -l
}

# generate a link to a commit in the WebUI to pass on to someone else
# input is a git commit hash
function ghlink {
    local repo="$(git remote -v | grep -E "github.+push" | cut -d':' -f 2 | cut -d'.' -f 1)"
    echo "https://github.com/${repo}/commit/${1}"
}

Update 2020-10-14: create pr from a branch with multiple commits

Bitbucket had a nice PR creation functionality by default: if you created a PR from a branch with multiple commits, it derived the title from the branch name and created a PR description based on all commit messages. I replicated this behaviour, and open the description text in an editor (via $EDITOR) for you to edit. Feels more native, like a git commit, now. In honor of Bitbucket it currently derives the PR title from the branch name, though I'm wondering if that should be changed to something more helpful. Lacking ideas at the moment.

function ghbbc {
    if git status | grep -q 'Untracked'; then
        echo "ERROR: untracked files in branch"
        git status
        return 1
    fi
    local commitmsg="$(mktemp ${XDG_RUNTIME_DIR}/ghbbc_commit.XXXXXXX)"
    git log --pretty=format:"%B" origin.. > ${commitmsg}
    eval "${EDITOR} ${commitmsg}"
    git push --set-upstream origin HEAD
    gh pr create \
    -r "$(git remote -v | grep push | grep -oE 'myorg-[a-z]+')/myteam" \
    -b "$(cat ${commitmsg})" \
    -t "$(git rev-parse --abbrev-ref HEAD)"
    rm ${commitmsg}
}

wtfutil

I have a terminal covering half my screensize with small dashboards listing PRs for the repos I care about. For other repos I reverted back to mail notifications which get sorted and processed from time to time. A sample dashboard config looks like this:

github_admin:
  apiKey: "c0ffee4711"
  baseURL: ""
  customQueries:
    othersPRs:
      title: "Pull Requests"
      filter: "is:open is:pr -author:hoexter -label:dependencies"
  enabled: true
  enableStatus: true
  showOpenReviewRequests: false
  showStats: false
  position:
    top: 0
    left: 0
    height: 3
    width: 1
  refreshInterval: 30
  repositories:
    - "myorg/admin"
  uploadURL: ""
  username: "hoexter"
  type: github

The -label:dependencies is used here to filter out dependabot PRs in the dashboard.

Workflow

Look at a PR with ghv $ID, if it's ok ACK it with gha $ID. Create a PR from a feature branch with ghc and later on merge it with ghm $ID. The $ID is retrieved from looking at my wtfutil based dashboard.

Security Considerations

The world is full of bad jokes. For the WebUI access I've the full array of pain with SAML auth, which expires too often, and 2nd factor verification for my account backed by a Yubikey. But to work with the CLI you basically need an API token with full access, everything else drives you insane. So I gave in and generated exactly that. End result is that I now have an API token - which is basically a password - which has full power, and is stored in config files and environment variables. So the security features created around the login are all void now. Was that the aim of it after all?

02 June, 2021 03:17PM

pulseaudio/alsa and dynamic mic sensitivity in my browser

It's a gross hack but it works for now. To prevent overly sensitive mic settings autotuned by the browser in web conferences, I currently edit (as root) /usr/share/pulseaudio/alsa-mixer/paths/analog-input-internal-mic.conf and change, in the [Element Capture] section, the volume setting from merge to 80.

The config block as a whole looks like this:

[Element Capture]
switch = mute
volume = 80
override-map.1 = all
override-map.2 = all-left,all-right

Solution found at https://askubuntu.com/a/761103.

02 June, 2021 01:38PM

hackergotchi for Joachim Breitner

Joachim Breitner

Verifying the code of the Internet Identity service

The following post was meant to be posted at https://forum.dfinity.org/, but that discourse instance didn’t like it; maybe too much inline code, so I’m posting it here instead. To my regular blog audience, please excuse the lack of context. Please comment at the forum post. The text was later also posted on the DFINITY medium blog

You probably have used https://identity.ic0.app/ to log into various applications (the NNS UI, OpenChat etc.) before, and if you do that, you are trusting this service to take good care of your credentials. Furthermore, you might want to check that the Internet Identity is really not tracking you. So you want to know: Is this really running the code we claim it to run? Of course the following applies to other canisters as well, but I’ll stick to the Internet Identity in this case.

I’ll walk you through the steps of verifying that:

Find out what is running

A service on the Internet Computer, i.e. a canister, is a WebAssembly module. The Internet Computer intentionally does not allow you to just download the Wasm code of any canister, because maybe some developer wants to keep their code private. But it does expose a hash of the Wasm module. The easiest way to get it is using dfx:

$ dfx canister --no-wallet --network ic info rdmx6-jaaaa-aaaaa-aaadq-cai
Controller: r7inp-6aaaa-aaaaa-aaabq-cai
Module hash: 0xd4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04

The “controller” here is the canister id of the governance canister. This tells you that the Internet Identity is controlled by the Network Nervous System (NNS), and its code can only be changed via proposals that are voted on. This is good; if the controller was just, say, me, I could just change the code of the Internet Identity and take over all your identities.

The “Module hash” is the SHA-256 hash of the .wasm that was deployed. So let’s follow that trace.
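
(As an aside: if you later want to compare a locally built internet_identity.wasm against this value programmatically rather than by eye, a short Python sketch like the one below works. Note that dfx prints the hash with a 0x prefix while sha256sum does not, so strip it before comparing; the file name is simply the one produced by the build instructions further down.)

import hashlib

def module_hash(path):
    # hex-encoded SHA-256 of the Wasm module, the same value dfx reports (minus the 0x)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

onchain = "0xd4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04"
print(module_hash("internet_identity.wasm") == onchain[2:])  # drop the leading 0x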

Finding the right commit

Since upgrades to the Internet Identity are done via proposals to the NNS, we should find a description of such a proposal in the https://github.com/ic-association/nns-proposals repository, in the proposals/network_canister_management directory.

Github’s list of recent NNS proposals

We have to find the latest proposal upgrading the Internet Identity. The folder unfortunately contains proposals for many canisters, and the file naming isn’t super helpful. I usually go through the list from the bottom and look at the second column, which contains the title of the latest commit creating or modifying a file.

In this case, the second to last is the one we care about: https://github.com/ic-association/nns-proposals/blob/main/proposals/network_canister_management/20210527T2203Z.md. This file lists rationales, gives an overview of changes and, most importantly, says that bd51eab is the commit we are upgrading to.

The file also says that the wasm hash is d4a…c04, which matches what we saw above. This is good: it seems we really found the youngest proposal upgrading the Internet Identity, and that the proposal actually went through.

WARNING: If you are paranoid, don’t trust this file. There is nothing preventing a proposal proposer from creating a file pointing to one revision while actually including other code in the proposal. That’s why the next steps are needed.

Getting the source

Now that we have the revision, we can get the source and check out revision bd51eab:

/tmp $ git clone https://github.com/dfinity/internet-identity
Cloning into 'internet-identity'...
remote: Enumerating objects: 3959, done.
remote: Counting objects: 100% (344/344), done.
remote: Compressing objects: 100% (248/248), done.
remote: Total 3959 (delta 161), reused 207 (delta 92), pack-reused 3615
Receiving objects: 100% (3959/3959), 6.05 MiB | 3.94 MiB/s, done.
Resolving deltas: 100% (2290/2290), done.
/tmp $ cd internet-identity/
/tmp/internet-identity $ git checkout bd51eab
/tmp/internet-identity $ git log --oneline -n 1
bd51eab (HEAD, tag: mainnet-20210527T2203Z) Registers the seed phrase before showing it (#301)

In the last line you see that the Internet Identity team has tagged that revision with a tag name that contains the proposal description file name. Very tidy!

Reproducing the build

The README.md has the following build instructions:

Official build

The official build should ideally be reproducible, so that independent parties can validate that we really deploy what we claim to deploy.

We try to achieve some level of reproducibility using a Dockerized build environment. The following steps should build the official Wasm image

docker build -t internet-identity-service .
docker run --rm --entrypoint cat internet-identity-service /internet_identity.wasm > internet_identity.wasm
sha256sum internet_identity.wasm

The resulting internet_identity.wasm is ready for deployment as rdmx6-jaaaa-aaaaa-aaadq-cai, which is the reserved principal for this service.

It actually suffices to run the first command, as it also prints the hash (we don’t need to copy the .wasm out of the Docker container):

/tmp/internet-identity $ docker build -t internet-identity-service .
…
Step 26/26 : RUN sha256sum internet_identity.wasm
 ---> Running in 1a04644b544c
d4af9277f3e8d26fd8cdc7874a9f47b6456587fbb2a64d61b6b6880d144d3c04  internet_identity.wasm
Removing intermediate container 1a04644b544c
 ---> bfe6a63a7980
Successfully built bfe6a63a7980
Successfully tagged internet-identity-service:latest

Success! The hashes match.

You don’t believe me? Try it yourself (and let us know if you get a different hash, maybe I got hacked). This may fail if you have too little RAM configured for Docker, 8GB should be enough.

At this point you have a trust path from the code sitting in front of you to the Internet Identity running at https://identity.ic0.app, including the front-end code, and you can start auditing the source code.

What about the canister id?

If you paid close attention you might have noticed that we got the module hash for canister rdmx6-jaaaa-aaaaa-aaadq-cai, but we are accessing a web application at https://identity.ic0.app. So where is the connection?

In the future, I expect some form of a DNS-like “nice host name registry” on the Internet Computer that stores a mapping from nice names to canister ids, and that you will be able to query that for “which canister serves identity.ic0.app” in a secure way (e.g. using certified variables). But since we don’t have that yet, but still want you to be able to use a nice name for the Internet Identity (and not have to change the name later, which would cause headaches), we have hard-coded this mapping for now.

The relevant code here is the “Certifying Service Worker” that your browser downloads when accessing any *.ic0.app URL. This piece of code will then intercept all requests to that domain, map them to a query call, and then use certified variables to validate the response. And indeed, the mapping is in the code there:

const hostnameCanisterIdMap: Record<string, [string, string]> = {
  'identity.ic0.app': ['rdmx6-jaaaa-aaaaa-aaadq-cai', 'ic0.app'],
  'nns.ic0.app': ['qoctq-giaaa-aaaaa-aaaea-cai', 'ic0.app'],
  'dscvr.ic0.app': ['h5aet-waaaa-aaaab-qaamq-cai', 'ic0.page'],
};

What about other canisters?

In principle, the same approach works for other canisters, whether it’s OpenChat, the NNS canisters etc. But the details will differ, as every canister developer might have their own way of

  • communicating the location and revision of the source for their canisters
  • building the canisters

In particular, without a reproducible way of building the canister, this will fail, and that’s why projects like https://reproducible-builds.org/ are so important in general.

02 June, 2021 07:42AM by Joachim Breitner (mail@joachim-breitner.de)

June 01, 2021

hackergotchi for David Bremner

David Bremner

Baby steps towards schroot and slurm cooperation.

Unfortunately schroot does not maintain CPU affinity 1. This means in particular that parallel builds have the tendency to take over an entire slurm managed server, which is kind of rude. I haven't had time to automate this yet, but the following demonstrates a simple workaround for interactive building.

╭─ simplex:~
╰─% schroot --preserve-environment -r -c polymake
(unstable-amd64-sbuild)bremner@simplex:~$ echo $SLURM_CPU_BIND_LIST
0x55555555555555555555
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed:   ffff,ffffffff,ffffffff
Cpus_allowed_list:      0-79
(unstable-amd64-sbuild)bremner@simplex:~$ taskset $SLURM_CPU_BIND_LIST bash
(unstable-amd64-sbuild)bremner@simplex:~$ grep Cpus /proc/self/status
Cpus_allowed:   5555,55555555,55555555
Cpus_allowed_list:      0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78

Next steps

In principle the schroot configuration parameter can be used to run taskset before every command. In practice it's a bit fiddly because you need a shell script shim (because of the environment variable) and you need to e.g. goof around with bind mounts to make sure that your script is available in the chroot. And then there's combining with ccache and eatmydata...
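
For illustration, here is a minimal sketch of such a shim, written in Python rather than as the shell script mentioned above; the idea would be to have schroot run it in front of every command (for example via a command-prefix style setting, which is an assumption about your schroot configuration rather than a tested recipe). It only re-execs through taskset when slurm has actually exported a CPU mask; making the script visible inside the chroot (the bind mount fiddling) is left out.

#!/usr/bin/python3
# sketch: wrap a command with taskset using the CPU mask slurm assigned, if any
import os
import sys

cmd = sys.argv[1:]
if not cmd:
    sys.exit("usage: slurm-taskset-shim command [args...]")

mask = os.environ.get("SLURM_CPU_BIND_LIST")
if mask:
    # keep the slurm CPU affinity for whatever runs inside the chroot
    os.execvp("taskset", ["taskset", mask] + cmd)
else:
    # no slurm allocation in the environment: run the command unchanged
    os.execvp(cmd[0], cmd)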

01 June, 2021 04:45PM

Antoine Beaupré

Leaving Freenode

The freenode IRC network has been hijacked.

TL;DR: move to libera.chat or OFTC.net, as did countless free software projects including Gentoo, CentOS, KDE, Wikipedia, FOSDEM, and more. Debian and the Tor project were already on OFTC and are not affected by this.

What is freenode and why should I care?

freenode is the largest remaining IRC network. Before this incident, it had close to 80,000 users, which is small in terms of modern internet history -- even small social networks are larger by multiple orders of magnitude -- but is large in IRC history. The IRC network is also extensively used by the free software community, being the default IRC network on many programs, and used by hundreds if not thousands of free software projects.

I have been using freenode since at least 2006.

This matters if you care about IRC, the internet, open protocols, decentralisation, and, to a certain extent, federation as well. It also touches on who has the right on network resources: the people who "own" it (through money) or the people who make it work (through their labor). I am biased towards open protocols, the internet, federation, and worker power, and this might taint this analysis.

What happened?

It's a long story, but basically:

  1. back in 2017, the former head of staff sold the freenode.net domain (and its related company) to Andrew Lee, "American entrepreneur, software developer and writer", and, rather weirdly, supposedly "crown prince of Korea" although that part is kind of complex (see House of Yi, Yi Won, and Yi Seok). It should be noted the Korean Empire hasn't existed for over a century at this point (even though its flag, also weirdly, remains)

  2. back then, this was only known to the public as this strange PIA and freenode joining forces gimmick. it was suspicious at first, but since the network kept running, no one paid much attention to it. opers of the network were similarly reassured that Lee would have no say in the management of the network

  3. this all changed recently when Lee asserted ownership of the freenode.net domain and started meddling in the operations of the network, according to this summary. this part is disputed, but it is corroborated by almost a dozen former staff which collectively resigned from the network in protest, after legal threats, when it was obvious freenode was lost.

  4. the departing freenode staff founded a new network, irc.libera.chat, based on the new ircd they were working on with OFTC, solanum

  5. meanwhile, bot armies started attacking all IRC networks: both libera and freenode, but also OFTC and unrelated networks like a small one I help operate. those attacks have mostly stopped as of this writing (2021-05-24 17:30UTC)

  6. on freenode, however, things are going for the worse: Lee has been accused of taking over a channel, in a grotesque abuse of power; then changing freenode policy to not only justify the abuse, but also remove rules against hateful speech, effectively allowing nazis on the network (update: the change was reverted, but not by Lee)

Update: even though the policy change was reverted, the actual conversations allowed on freenode have already degenerated into toxic garbage. There are also massive channel takeovers (presumably over 700), mostly on channels that were redirecting to libera, but also channels that were still live. Channels that were taken over include #fosdem, #wikipedia, #haskell...

Instead of working on the network, the new "so-called freenode" staff is spending effort writing bots and patches to basically automate taking over channels. I run an IRC network and this bot is obviously not standard "services" stuff... This is just grotesque.

At this point I agree with this HN comment:

We should stop implicitly legitimizing Andrew Lee's power grab by referring to his dominion as "Freenode". Freenode is a quarter-century-old community that has changed its name to libera.chat; the thing being referred to here as "Freenode" is something else that has illegitimately acquired control of Freenode's old servers and user database, causing enormous inconvenience to the real Freenode.

I don't agree with the suggested name there, let's instead call it "so called freenode" as suggested later in the thread.

What now?

I recommend people and organisations move away from freenode as soon as possible. This is a major change: documentation needs to be fixed, and the migration needs to be coordinated. But I do not believe we can trust the new freenode "owners" to operate the network reliably and in good faith.

It's also important to use the current momentum to build a critical mass elsewhere so that people don't end up on freenode again by default and find an even more toxic community than your typical run-of-the-mill free software project (which is already not a high bar to meet).

Update: people are moving to libera in droves. It's now reaching 18,000 users, which is bigger than OFTC and getting close to the largest traditional IRC networks (EFnet, Undernet, IRCnet are in the 10-20k users range). so-called freenode is still larger, currently clocking 68,000 users, but that's a huge drop from the previous count which was 78,000 before the exodus began. We're even starting to see the effects of the migration on netsplit.de.

Update 2: the isfreenodedeadyet.com site is updated more frequently than netsplit and shows tons more information. It shows 25k online users for libera and 61k for so-called freenode (down from ~78k), and the trend doesn't seem to be stopping for so-called freenode. There's also a list of 400+ channels that have moved out. Keep in mind that such migrations take effect over long periods of time.

Where do I move to?

The first thing you should do is to figure out which tool to use for interactive user support. There are multiple alternatives, of course -- this is the internet after all -- but here is a short list of suggestions, in preferred priority order:

  1. irc.libera.chat
  2. irc.OFTC.net
  3. Matrix.org, which bridges with OFTC and (hopefully soon) with libera as well, modern IRC alternative
  4. XMPP/Jabber also still exists, if you're into that kind of stuff, but I don't think the "chat room" story is great there, at least not as good as Matrix

Basically, the decision tree is this:

  • if you want to stay on IRC:
    • if you are already on many OFTC channels and few freenode channels: move to OFTC
    • if you are more inclined to support the previous freenode staff: move to libera
    • if you care about matrix users (in the short term): move to OFTC
  • if you are ready to leave IRC:
    • if you want the latest and greatest: move to Matrix
    • if you like XML and already use XMPP: move to XMPP

Frankly, at this point, everyone should seriously consider moving to Matrix. The user story is great, the web is a first class user, it supports E2EE (as does XMPP), and it has a lot of momentum behind it. It even bridges well with IRC (which is not the case for XMPP), so it's also an option if you're worried about problems like this happening again.

(Indeed, I wouldn't be surprised if similar drama happens on OFTC or libera in the future. The history of IRC is full of such epic controversies, takeovers, sabotage, attacks, technical flamewars, and other silly things. I am not sure, but I suspect a federated model like Matrix might be more resilient to conflicts like this one.)

Changing protocols might mean losing a bunch of users however: not everyone is ready to move to Matrix, for example. Graybeards like me have been using irssi for years, if not decades, and would take quite a bit of convincing to move elsewhere.

I have mostly kept my channels on IRC, and moved either to OFTC or libera. In retrospect, I think I might have moved everything to OFTC if I had thought about it more, because almost all of my channels are there. But I kind of expect a lot of the freenode community to move to libera, so I am keeping a socket open there anyways.

How do I move?

The first thing you should do is to update documentation, websites, and source code to stop pointing at freenode altogether. This is what I did for feed2exec, for example. You need to let people know in the current channel as well, and possibly shutdown the channel on freenode.

Since my channels are either small or empty, I took the radical approach of:

  • redirecting the channel to ##unavailable which is historically the way we show channels have moved to another network
  • make the channel invite-only (which effectively enforces the redirection)
  • kicking everyone out of the channel
  • kickban people who rejoin
  • set the topic to announce the change

In IRC speak, the following commands should do all this:

/msg ChanServ set #anarcat mlock +if ##unavailable
/msg ChanServ clear #anarcat users moving to irc.libera.chat
/msg ChanServ set #anarcat restricted on
/topic #anarcat this channel has moved to irc.libera.chat

If the channel is not registered, the following might work

/mode #anarcat +if ##unavailable

Then you can leave freenode altogether:

/disconnect Freenode unacceptable hijack, policy changes and takeovers. so long and thanks for all the fish.

Keep in mind that some people have been unable to setup such redirections, because the new freenode staff have taken over their channel, in which case you're out of luck...

Some people have expressed concern about their private data hosted at freenode as well. If you care about this, you can always talk to NickServ and DROP your nick. Be warned, however, that this assumes good faith of the network operators, which, at this point, is kind of futile. I would assume any data you have registered on there (typically: your NickServ password and email address) to be compromised and leaked. If your password is used elsewhere (tsk, tsk), change it everywhere.

Update: there's also another procedure, similar to the above, but with a different approach. Keep in mind that so-called freenode staff are actively hijacking channels for the mere act of mentioning libera in the channel topic, so tread carefully there.

Last words

This is a sad time for IRC in general, and freenode in particular. It's a real shame that the previous freenode staff have been kicked out, and it's especially horrible that the new policies of the network are basically making the network open to nazis. I wish things had gone differently: now we have yet another fork in IRC history. While it's not the first time freenode has changed its name (it was called OPN before), this time the old freenode is still around, and this will bring much confusion to the world, especially since the new freenode staff is still claiming to support FOSS.

I understand there are many sides to this story, and some people were deeply hurt by all this. But for me, it's completely unacceptable to keep pushing your staff so hard that they basically all (except one?) resign in protest. For me, that's leadership failure at the utmost, and a complete disgrace. And of course, I can't in good conscience support or join a network that allows hate speech.

Regardless of the fate of whatever we'll call what's left of freenode, maybe it's time for this old IRC thing to die already. It's still a sad day in internet history, but then again, maybe IRC will never die...

01 June, 2021 03:09PM

hackergotchi for Robert McQueen

Robert McQueen

Next steps for the GNOME Foundation

As the President of the GNOME Foundation Board of Directors, I’m really pleased to see the number and breadth of candidates we have for this year’s election. Thank you to everyone who has submitted their candidacy and volunteered their time to support the Foundation. Allan has recently blogged about how the board has been evolving, and I wanted to follow that post by talking about where the GNOME Foundation is in terms of its strategy. This may be helpful as people consider which candidates might bring the best skills to shape the Foundation’s next steps.

Around three years ago, the Foundation received a number of generous donations, and Rosanna (Director of Operations) gave a presentation at GUADEC about her and Neil’s (Executive Director, essentially the CEO of the Foundation) plans to use these funds to transform the Foundation. We would grow our activities, increasing the pace of events, outreach, development and infrastructure that supported the GNOME project and the wider desktop ecosystem – and, crucially, would grow our funding to match this increased level of activity.

I think it’s fair to say that half of this has been a great success – we’ve got a larger staff team than GNOME has ever had before. We’ve widened the GNOME software ecosystem to include related apps and projects under the GNOME Circle banner, we’ve helped get GTK 4 out of the door, run a wider-reaching program in the Community Engagement Challenge, and consistently supported better infrastructure for both GNOME and the Linux app community in Flathub.

Aside from another grant from Endless (note: my employer), our fundraising hasn’t caught up with this pace of activities. As a result, the Board recently approved a budget for this financial year which will spend more funds from our reserves than we expect to raise in income. Due to our reserves policy, this is essentially the last time we can do this: over the next 6-12 months we need to either raise more money, or start spending less.

For clarity – the Foundation is fit and well from a financial perspective – we have a very healthy bank balance, and a very conservative “12 month run rate” reserve policy to handle fluctuations in income. If we do have to slow down some of our activities, we will return to a “steady state” where our regular individual donations and corporate contributions can support a smaller staff team that supports the events and infrastructure we’ve come to rely on.

However, this isn’t what the Board wants to do – the previous and current boards were unanimous in their support of the idea that we should be ambitious: try to do more in the world and bring the benefits of GNOME to more people. We want to take our message of trusted, affordable and accessible computing to the wider world.

Typically, a lot of the activities of the Foundation have been very inwards-facing – supporting and engaging with either the existing GNOME or Open Source communities. This is a very restricted audience in terms of fundraising – many corporate actors in our community already support GNOME hugely in terms of both financial and in-kind contributions, and many OSS users are already supporters either through volunteer contributions or donating to those nonprofits that they feel are most relevant and important to them.

To raise funds from new sources, the Foundation needs to take the message and ideals of GNOME and Open Source software to new, wider audiences that we can help. We’ve been developing themes such as affordability, privacy/trust and education as promising areas for new programs that broaden our impact. The goal is to find projects and funding that allow us to both invest in the GNOME community and find new ways for FOSS to benefit people who aren’t already in our community.

Bringing it back to the election, I’d like to make clear that I see this – reaching the outside world, and finding funding to support that – as the main priority and responsibility of the Board for the next term. GNOME Foundation elections are a slightly unusual process that “filters” our board nominees by being existing Foundation members, which means that candidates already work inside our community when they stand for election. If you’re a candidate and are already active in the community – THANK YOU – you’re doing great work, keep doing it! That said, you don’t need to be a Director to achieve things within our community or gain the support of the Foundation: being a community leader is already a fantastic and important role.

The Foundation really needs support from the Board to make a success of the next 12-18 months. We need to understand our financial situation and the trade-offs we have to make, and help to define the strategy with the Executive Director so that we can launch some new programs that will broaden our impact – and funding – for the future. As people cast their votes, I’d like people to think about what kind of skills – building partnerships, commercial background, familiarity with finances, experience in nonprofit / impact spaces, etc – will help the Board make the Foundation as successful as it can be during the next term.

01 June, 2021 11:45AM by ramcq

Russell Coker

Internode NBN with Arris CM8200 on Debian

I’ve recently signed up for Internode NBN while using the Arris CM8200 device supplied by Optus (previously used for a regular phone service). I took the configuration mostly from Dean’s great blog post on the topic [1]. One thing I changed was the /etc/network/interfaces configuration; I used the following:

# VLAN ID 2 for Internode's NBN HFC.
auto eth1.2
iface eth1.2 inet manual
  vlan-raw-device eth1

auto nbn
iface nbn inet ppp
    pre-up /bin/ip link set eth1.2 up
    provider nbn

There is no need to have a section for eth1 when you have a section for eth1.2.

IPv6

IPv6 for only one system

With a line in /etc/ppp/options containing only “ipv6 ,” you get an IPv6 address automatically for the ppp0 interface after starting pppd.

IPv6 for your lan

Internode has documented how to configure the WIDE DHCPv6 client to get an IPv6 “prefix” (subnet) [2]. Just install the wide-dhcpv6-client package and put your interface names in a copy of the Internode example config and that works. That gets you a /64 assigned to your local Ethernet. Here’s an example of /etc/wide-dhcpv6/dhcp6c.conf:

interface ppp0 {
    send ia-pd 0;
    script "/etc/wide-dhcpv6/dhcp6c-script";
};

id-assoc pd {
    prefix-interface br0 {
        sla-id 0;
        sla-len 8;
    };
};

For providing addresses to other systems on your LAN they recommend radvd version 1.1 or greater; Debian/Bullseye will ship with version 2.18. Here is an example /etc/radvd.conf that will work with it. It seems that you have to set the value to use in place of “xxxx:xxxx:xxxx:xxxx” manually (or with a script) from the address that the wide-dhcpv6-client assigns to eth0 (or whichever interface you are using); a rough sketch of such a script follows the config example below.

interface eth0 { 
        AdvSendAdvert on;
        MinRtrAdvInterval 3; 
        MaxRtrAdvInterval 10;
        prefix xxxx:xxxx:xxxx:xxxx::/64 { 
                AdvOnLink on; 
                AdvAutonomous on; 
                AdvRouterAddr on; 
        };
};
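
Here is a rough sketch, in Python, of the scripted substitution mentioned above. It asks ip(8) for the global /64 address that the wide-dhcpv6-client put on eth0, takes the first four 16-bit groups as the prefix, and writes a radvd.conf from a template containing the xxxx:xxxx:xxxx:xxxx placeholder. The template path, interface name, and restart step are assumptions to adapt, not a polished tool:

#!/usr/bin/python3
# sketch: substitute the delegated /64 prefix into radvd.conf and restart radvd
import ipaddress
import re
import subprocess

out = subprocess.run(["ip", "-6", "addr", "show", "dev", "eth0", "scope", "global"],
                     capture_output=True, text=True, check=True).stdout
match = re.search(r"inet6 ([0-9a-f:]+)/64", out)
if not match:
    raise SystemExit("no global /64 address on eth0 yet")

net = ipaddress.IPv6Interface(match.group(1) + "/64").network
# first four 16-bit groups of the network address, e.g. 2001:0db8:0000:0180
prefix = ":".join(net.network_address.exploded.split(":")[:4])

with open("/etc/radvd.conf.in") as f:   # template with the xxxx:xxxx:xxxx:xxxx placeholder
    template = f.read()
with open("/etc/radvd.conf", "w") as f:
    f.write(template.replace("xxxx:xxxx:xxxx:xxxx", prefix))

subprocess.run(["systemctl", "restart", "radvd"], check=True)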

Either the configuration of the wide dhcp client or radvd removes the default route from ppp0, so you need to run a command like “ip -6 route add default dev ppp0” to put it back. Probably having “ipv6 ,” is the wrong thing to do when using wide-dhcp-client and radvd.

On a client machine with bridging I needed to have “net.ipv6.conf.br0.accept_ra=2” in /etc/sysctl.conf to allow it to accept router advertisement messages on the interface (in this case eth0); for machines without bridging I didn’t need that.

Firewalling

The default model for firewalling nowadays seems to be using NAT and only configuring specific ports to be forwarded to machines on the LAN. With IPv6 on the LAN every system can directly communicate with the rest of the world which may be a bad thing. The following lines in a firewall script will drop all inbound packets that aren’t in response to packets that are sent out. This will give an equivalent result to the NAT firewall people are used to and you can always add more rules to allow specific ports in.

ip6tables -A FORWARD -i ppp+ -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -i ppp+ -j DROP

01 June, 2021 11:16AM by etbe

Russ Allbery

Review: The Horse and His Boy

Review: The Horse and His Boy, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #5
Publisher: Collier Books
Copyright: 1954
Printing: 1978
ISBN: 0-02-044200-9
Format: Mass market
Pages: 217

The Horse and His Boy was the fifth published book in the Chronicles of Narnia, but it takes place during the last chapter of The Lion, the Witch and the Wardrobe, in the midst of the golden age of Narnia. It's the only true side story of the series and it doesn't matter much where in sequence you read it, as long as it's after The Lion, the Witch and the Wardrobe and before The Last Battle (which would spoil its ending somewhat).

MAJOR SPOILERS BELOW.

The Horse and His Boy is also the only book of the series that is not a portal fantasy. The Pevensie kids make an appearance, but as the ruling kings and queens of Narnia, and only as side characters. The protagonists are a boy named Shasta, a girl named Aravis, and horses named Bree and Hwin. Aravis is a Calormene, a native of the desert (and extremely Orientalist, but more on that later) kingdom to the south of Narnia and Archenland. Shasta starts the book as the theoretically adopted son but mostly slave of a Calormene fisherman. The Horse and His Boy is the story of their journey from Calormen north to Archenland and Narnia, just in time to defend Narnia and Archenland from an invasion.

This story starts with a great hook. Shasta's owner is hosting a passing Tarkaan, a Calormene lord, and overhears a negotiation to sell Shasta to the Tarkaan as his slave (and, in the process, revealing that he rescued Shasta as an infant from a rowboat next to a dead man). Shasta starts talking to the Tarkaan's horse and is caught by surprise when the horse talks back. He is a Talking Horse from Narnia, kidnapped as a colt, and eager to return to Narnia and the North. He convinces Shasta to attempt to escape with him.

This has so much promise. For once, we're offered a story where one of the talking animals of Narnia is at least a co-protagonist and has some agency in the story. Bree takes charge of Shasta, teaches him to ride (or, mostly, how to fall off a horse), and makes most of the early plans. Finally, a story that recognizes that Narnia stories don't have to revolve around the humans!

Unfortunately, Bree is an obnoxious, arrogant character. I wanted to like him, but he makes it very hard. This gets even worse when Shasta is thrown together with Aravis, a noble Calormene girl who is escaping an arranged marriage on her own talking mare, Hwin. Bree is a warhorse, Hwin is a lady's riding mare, and Lewis apparently knows absolutely nothing about horses, because every part of Bree's sexist posturing and Hwin's passive meekness is awful and cringe-worthy. I am not a horse person, so will link to Judith Tarr's much more knowledgeable critique at Tor.com, but suffice it to say that mares are not meekly deferential or awed by stallions. If Bree had behaved that way with a real mare, he would have gotten the crap beaten out of him (which might have improved his attitude considerably). As is, we have to put up with rather a lot of Bree's posturing and Hwin (who I liked much better) barely gets a line and acts disturbingly like she was horribly abused.

This makes me sad, because I like Bree's character arc. He's spent his whole life being special and different from those around him, and while he wants to escape this country and return home, he's also gotten used to being special. In Narnia, he will just be a normal talking horse. To get everything else he wants, he also has to let go of the idea that he's someone special. If Lewis had done more with this and made Bree a more sympathetic character, this could have been very effective. As written, it only gets a few passing mentions (mostly via Bree being weirdly obsessed with whether talking horses roll) and is therefore overshadowed by Shasta's chosen one story and Bree's own arrogant behavior.

The horses aside, this is a passable adventure story with some well-done moments. The two kids and their horses end up in Tashbaan, the huge Calormene capital, where they stumble across the Narnians and Shasta is mistaken for one of their party. Rabadash, the prince of Calormen, is proposing marriage to Susan, and the Narnians are in the process of realizing he doesn't plan to take no for an answer. Aravis, meanwhile, has to sneak out of the city via the Tisroc's gardens, which results in her hiding behind a couch as she hears Rabadash's plans to invade Archenland and Narnia to take Susan as his bride by force. Once reunited, Shasta, Aravis, and the horses flee across the desert to bring warning to Archenland and then Narnia.

Of all the Narnia books, The Horse and His Boy leans the hardest into the personal savior angle of Christianity. Parts of it, such as Shasta's ride over the pass into Narnia, have a strong "Footprints" feel to them. Most of the events of the book are arranged by Aslan, starting with Shasta's early life. Readers of the series will know this when a lion shows up early to herd the horses where they need to go, or when a cat keeps Shasta company in the desert and frightens away jackals. Shasta only understands near the end.

I remember this being compelling stuff as a young Christian reader. This personal attention and life shaping from God is pure Christian wish fulfillment of the "God has a plan for your life" variety, even more so than Shasta turning out to be a lost prince. As an adult re-reader, I can see that Lewis is palming the theodicy card rather egregiously. It's great that Aslan was making everything turn out well in the end, but why did he have to scare the kids and horses half to death in the process? They were already eager to do what he wanted, but it's somehow inconceivable that Aslan would simply tell them what to do rather than manipulate them. There's no obvious in-story justification why he couldn't have made the experience much less terrifying. Or, for that matter, prevented Shasta from being kidnapped as an infant in the first place and solved the problem of Rabadash in a more direct way. This sort of theology takes as an unexamined assumption that a deity must refuse to use his words and instead do everything in weirdly roundabout and mysterious ways, which makes even less sense in Narnia than in our world given how directly and straightforwardly Aslan has acted in previous books.

It was also obvious to me on re-read how unfair Lewis's strict gender roles are to Aravis. She's an excellent rider from the start of the book and has practiced many of the things Shasta struggles to do, but Shasta is the boy and Aravis is the girl, so Aravis has to have girl adventures involving tittering princesses, luxurious baths, and eavesdropping behind couches, whereas Shasta has boy adventures like riding to warn the king or bringing word to Narnia. There's nothing very objectionable about Shasta as a character (unlike Bree), but he has such a generic character arc. The Horse and Her Girl with Aravis and Hwin as protagonists would have been a more interesting story, and would have helpfully complicated the whole Narnia and the North story motive.

As for that storyline, wow the racism is strong in this one, starting with the degree that The Horse and His Boy is deeply concerned with people's skin color. Shasta is white, you see, clearly marking him as from the North because all the Calormenes are dark-skinned. (This makes even less sense in this fantasy world than in our world because it's strongly implied in The Magician's Nephew that all the humans in Calormen came from Narnia originally.) The Calormenes all talk like characters from bad translations of the Arabian Nights and are shown as cruel, corrupt slavers with a culture that's an Orientalist mishmash of Arab, Persian, and Chinese stereotypes. Everyone is required to say "may he live forever" after referencing the Tisroc, which is an obvious and crude parody of Islam. This stereotype fest culminates in the incredibly bizarre scene that Aravis overhears, in which the grand vizier literally grovels on the floor while Rabadash kicks him and the Tisroc, Rabadash's father, talks about how Narnia's freedom offends him and the barbarian kingdom would be more profitable and orderly when conquered.

The one point to Lewis's credit is that Aravis is also Calormene, tells stories in the same style, and is still a protagonist and just as acceptable to Aslan as Shasta is. It's not enough to overcome the numerous problems with Lewis's lazy world-building, but it makes me wish even more that Aravis had gotten her own book and more meaningful scenes with Aslan.

I had forgotten that Susan appears in this book, although that appearance doesn't add much to the general problem of Susan in Narnia except perhaps to hint at Lewis's later awful choices. She is shown considering marriage to the clearly villainous Rabadash, and then only mentioned later with a weird note that she doesn't ride to war despite being the best archer of the four. I will say again that it's truly weird to see the Pevensie kids as (young) adults discussing marriage proposals, international politics, and border wars while remembering they all get dumped back into their previous lives as British schoolkids. This had to have had dramatic effects on their lives that Lewis never showed. (I know, the real answer is that Lewis is writing these books according to childhood imaginary adventure logic, where adventures don't have long-term consequences of that type.)

I will also grumble once more at how weirdly ineffectual Narnians are until some human comes to tell them what to do. Calormen is obviously a threat; Susan just escaped from an attempted forced marriage. Archenland is both their southern line of defense and is an ally separated by a mountain pass in a country full of talking eagles, among other obvious messengers. And yet, it falls to Shasta to ride to give warning because he's the human protagonist of the story. Everyone else seems to be too busy with quirky domesticity or endless faux-medieval chivalric parties.

The Horse and His Boy was one of my favorites when I was a kid, but reading as an adult I found it much harder to tolerate Bree or read past the blatant racial and cultural stereotyping. The bits with Aslan also felt less magical to me than they did as a kid because I was asking more questions about why Aslan had to do everything in such an opaque and perilous way. It's still not a bad adventure; Aravis is a great character, the bits in Tashbaan are at least memorable, and I still love the Hermit of the Southern March and want to know more about him. But I would rank it below the top tier of Narnia books, alongside Prince Caspian as a book with some great moments and some serious flaws.

Followed in original publication order by The Magician's Nephew.

Rating: 7 out of 10

01 June, 2021 04:14AM

Paul Wise

FLOSS Activities May 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Joined the great IRC migration
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord, sptag and esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

01 June, 2021 12:38AM

May 31, 2021

Russ Allbery

Mostly preorder haul

Some books that I had preordered, plus various other things that I failed to resist. There was a whole wave of new book releases this spring, most of which I have not yet read (in part because of the detour to re-read and review the Chronicles of Narnia).

Becky Chambers — The Galaxy, and the Ground Within (sff)
Richard Ben Cramer — What It Takes (nonfiction)
J.S. Dewes — The Last Watch (sff)
Anand Giridharadas — Winners Take All (nonfiction)
Lauren Hough — Leaving Isn't the Hardest Thing (nonfiction)
S.L. Huang — Burning Roses (sff)
Jane McAlevey — A Collective Bargain (nonfiction)
K.B. Spangler — Stoneskin (sff)
K.B. Spangler — The Blackwing War (sff)
Natalie Zina Walschots — Hench (sff)
Martha Wells — Fugitive Telemetry (sff)

31 May, 2021 07:41PM

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in May 2021

Here's my monthly update covering what I have been doing in the free software world for May 2021 (previous month):

  • Opened a pull request to make the build reproducible in the apispec API specification generator. The issue at hand is that the copyright message in the generated documentation used the current build date so that the build would vary depending on when you built it. [...]

  • Updated my Tickle Me Email tool that implements Getting Things Done-like behaviours in any IMAP inbox in order to sort various items by date, not by their subject field. [...]

§

Reproducible Builds

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

  • Kept isdebianreproducibleyet.com up to date. [...]

  • Categorised a large number of packages and issues in the Reproducible Builds notes.git repository.

  • Filed an upstream pull request to make the build reproducible in the apispec specification generator. This was also filed in Debian as bug #988978. [...]

  • Filed a bug against the Debian jenkins.debian.org virtual packages to report that the reproducible rescheduling CGI script uses the deprecated Debian SSO service. (#989088)

  • Drafted, published and publicised our monthly report for April 2021.

  • I also made the following changes to strip-nondeterminism, our tool to remove specific non-deterministic results from a completed build:

    • Added support for Python pyzip files: they require special handling to not mangle the UNIX shebang. (#18)

    • Dropped single-debian-patch, etc. from the Debian source package options. [...]

I also made the following changes to diffoscope, including preparing and uploading versions 174, 175 and 176 to Debian:

  • Bug fixes:

    • Check that we are parsing an actual Debian .buildinfo file, not just a file with that particular extension — after all, it could be any file. (#254, #987994)
    • Support signed .buildinfo files again. It appears that some versions of file(1) report them as a PGP signed message. [...]
    • Use the actual filesystem path name (instead of diffoscope's concept of the source archive name) in order to correct filename filtering when an APK file has been extracted from a container format. In particular, we need to filter the auto-incremented 1.apk instead of original-name.apk. (#255)
  • New features:

    • Update ffmpeg tests to work with version 4.4. (#258)
    • Correct grammar in a fsimage.py debug message. [...]
  • Misc:

    • Don't unnecessarily call os.path.basename twice in the APK comparator. [...]
    • Added instructions on how to install diffoscope on openSUSE on the diffoscope website [...].
    • Add a comment about stripping filenames. [...]
    • Corrected a reference to site.salsa_url which was breaking the "File a new issue" link on the website [...].

§

Debian

Finally, I also made a sponsored upload of adminer (4.7.9-2) for Alexandre Rossi.

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the project via the following video:

31 May, 2021 04:32PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

inline 0.3.19: Another Update

A new release of the inline package got to CRAN today, following and further updating the recent update from earlier in the month. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release builds on and extends the work of the recent 0.3.18 release and tweaks some of the tests. We cannot fully test all platforms used by CRAN, so sometimes iterations such as this one are needed. The package was uploaded a few days ago, but it sometimes takes a few days to clarify changes over email with the CRAN maintainers, whose work is still greatly appreciated.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.19 (2021-05-25)

  • Documentation for moveDLL was updated and extended (Johannes in #22).

  • A few more tests were made conditional the test platform (Dirk in #24).

Courtesy of my CRANberries, there is a comparison to the previous release.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

31 May, 2021 01:52PM

Russell Coker

Some Ideas About Storage Reliability

Hard Drive Brands

When people ask for advice about what storage to use they often get answers like “use brand X, it works well for me and brand Y had a heap of returns a few years ago”. I’m not convinced there is any difference between the small number of manufacturers that are still in business.

One problem we face with reliability of computer systems is that the rate of change is significant, so every year there will be new technological developments to improve things and every company will take advantage of them. Storage devices are unique among computer parts for their requirement for long-term reliability. For most other parts in a computer system a fault that involves total failure is usually easy to fix and even a fault that causes unreliable operation usually won’t spread its damage too far before being noticed (except in corner cases like RAM corruption causing corrupted data on disk).

Every year each manufacturer will bring out newer disks that are bigger, cheaper, faster, or all three. Those disks will be expected to remain in service for 3 years in most cases, and for consumer disks often 5 years or more. The manufacturers can’t test the new storage technology for even 3 years before releasing it so their ability to prove the reliability is limited. Maybe you could buy some 8TB disks now that were manufactured to the same design as used 3 years ago, but if you buy 12TB consumer grade disks, the 20TB+ data center disks, or any other device that is pushing the limits of new technology then you know that the manufacturer never tested it running for as long as you plan to run it. Generally the engineering is done well and they don’t have many problems in the field. Sometimes a new range of disks has a significant number of defects, but that doesn’t mean the next series of disks from the same manufacturer will have problems.

The issues with SSDs are similar to the issues with hard drives but a little different. I’m not sure how much of the improvements in SSDs recently have been due to new technology and how much is due to new manufacturing processes. I had a bad experience with a nameless brand SSD a couple of years ago and now stick to the better known brands. So for SSDs I don’t expect a great quality difference between devices that have the names of major computer companies on them, but stuff that comes from China with the name of the discount web store stamped on it is always a risk.

Hard Drive vs SSD

A few years ago some people were still avoiding SSDs due to the perceived risk of new technology. The first problem with this is that hard drives have lots of new technology in them. The next issue is that hard drives often have some sort of flash storage built in; presumably an “SSHD” or “Hybrid Drive” gets all the potential failures of both hard drives and SSDs.

One theoretical issue with SSDs is that filesystems have been (in theory at least) designed to cope with hard drive failure modes not SSD failure modes. The problem with that theory is that most filesystems don’t cope with data corruption at all. If you want to avoid losing data when a disk returns bad data and claims it to be good then you need to use ZFS, BTRFS, the NetApp WAFL filesystem, Microsoft ReFS (with the optional file data checksum feature enabled), or Hammer2 (which wasn’t production ready last time I tested it).

Some people are concerned that their filesystem won’t support “wear levelling” for SSD use. When a flash storage device is exposed to the OS via a block interface like SATA there isn’t much possibility of wear levelling. If flash storage exposes that level of hardware detail to the OS then you need a filesystem like JFFS2 to use it. I believe that most SSDs have something like JFFS2 inside the firmware and use it to expose what looks like a regular block device.

Another common concern about SSD is that it will wear out from too many writes. Lots of people are using SSD for the ZIL (ZFS Intent Log) on the ZFS filesystem, that means that SSD devices become the write bottleneck for the system and in some cases are run that way 24*7. If there was a problem with SSDs wearing out I expect that ZFS users would be complaining about it. Back in 2014 I wrote a blog post about whether swap would break SSD [1] (conclusion – it won’t). Apart from the nameless brand SSD I mentioned previously all of my SSDs in question are still in service. I have recently had a single Samsung 500G SSD give me 25 read errors (which BTRFS recovered from the other Samsung SSD in the RAID-1), I have yet to determine if this is an ongoing issue with the SSD in question or a transient thing. I also had a 256G SSD in a Hetzner DC give 23 read errors a few months after it gave a SMART alert about “Wear_Leveling_Count” (old age).

Hard drives have moving parts and are therefore inherently more susceptible to vibration than SSDs, they are also more likely to cause vibration related problems in other disks. I will probably write a future blog post about disks that work in small arrays but not in big arrays.

My personal experience is that SSDs are at least as reliable as hard drives even when run in situations where vibration and heat aren’t issues. Vibration or a warm environment can cause data loss from hard drives in situations where SSDs will work reliably.

NVMe

I think that NVMe isn’t very different from other SSDs in terms of the actual storage. But the different interface gives some interesting possibilities for data loss. OS, filesystem, and motherboard bugs are all potential causes of data loss when using a newer technology.

Future Technology

The latest thing for high-end servers is Optane Persistent Memory [2], also known as DCPMM. This is NVRAM that fits in a regular DDR4 DIMM socket and gives performance somewhere between NVMe and RAM, with capacity similar to NVMe. One of the ways of using it is “Memory Mode”, where the DCPMM is seen by the OS as RAM and the actual RAM caches the DCPMM (essentially this is swap space at the hardware level); this could make multiple terabytes of “RAM” not ridiculously expensive. Another way of using it is “App Direct Mode”, where the DCPMM can either be a simulated block device for regular filesystems or a byte-addressable device for application use. The final option is “Mixed Memory Mode”, which has some DCPMM in “Memory Mode” and some in “App Direct Mode”.

This has much potential for use in backups, and to make things extra exciting “App Direct Mode” has RAID-0 but no other form of RAID.
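As I understand it, the usual way for an application to get at the byte-addressable side of “App Direct Mode” is to mmap a file on a DAX-mounted filesystem backed by the DCPMM. Here is a minimal sketch of what that looks like; /mnt/pmem is an assumed mount point, and the code itself is generic Python that runs on any filesystem (DAX just takes the page cache out of the path):

#!/usr/bin/python3
# Sketch of byte-addressable access: mmap a file and use loads/stores
# instead of read()/write(). /mnt/pmem is an assumed DAX mount point.
import mmap, os

path = "/mnt/pmem/example"
size = 4096

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, size)
with mmap.mmap(fd, size) as mm:
    mm[0:13] = b"hello, DCPMM!"   # ordinary memory writes, no syscall per access
    mm.flush()                    # ask for the written range to be made durable
os.close(fd)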

Conclusion

I think that the best things to do for storage reliability are to have ECC RAM to avoid corruption before the data gets written, use reasonable quality hardware (buy stuff with a brand that someone will want to protect), and avoid new technology. New hardware and new software needed to talk to new hardware interfaces will have bugs and sometimes those bugs will lose data.

Filesystems like BTRFS and ZFS are needed to cope with storage devices returning bad data and claiming it to be good; this is a very common failure mode.

Backups are a good thing.

31 May, 2021 11:00AM by etbe

Russ Allbery

Review: The Relentless Moon

Review: The Relentless Moon, by Mary Robinette Kowal

Series: Lady Astronaut #3
Publisher: Tor
Copyright: 2020
ISBN: 1-250-23648-7
Format: Kindle
Pages: 542

Content note: Discussion of eating disorders in this review and portrayal of an eating disorder in the novel.

The Relentless Moon is the third book of the Lady Astronaut series and the first one that doesn't feature Elma. It takes place simultaneously with The Fated Sky and tells the story of what happened on Earth, and the Moon, while Elma was in transit to Mars. It's meant to be read after The Fated Sky and would be a significant spoiler for that novel.

The protagonist of this novel is Nicole Wargin: wife of the governor of Kansas (a more prestigious state in this universe since the seat of government for the United States was relocated to Kansas City after the Meteor), expert politician's wife, and another of the original group of female astronauts. Kenneth, her husband, is considering a run for president. Nicole is working as an astronaut, helping build out the permanent Moon base. But there are a lot of people on Earth who are not happy with the amount of money and political attention that the space program is getting. They have decided to move beyond protests and political opposition to active sabotage.

Nicole was hoping to land an assignment piloting one of the larger rockets. What she gets instead is an assignment as secretary to the Lunar Colony Administrator, as cover. Her actual job is to watch for saboteurs that may or may not be operating on the Moon. Before she even leaves the planet, one of the chief engineers of the space program is poisoned. The pilot of the translunar shuttle falls ill during the flight to the Moon. And then the shuttle's controls fail during landing and disaster is only narrowly averted.

The story from there is a cloak and dagger sabotage investigation mixed with Kowal's meticulously-researched speculation about a space program still running on 1950s technology but drastically accelerated by the upcoming climate collapse of Earth. Nicole has more skills for this type of mission than most around her realize due to very quiet work she did during the war, not to mention the feel for personalities and politics that she's honed as a governor's wife. But, like Elma, she's also fighting some personal battles. Elma's are against anxiety; Nicole's are against an eating disorder.

I think my negative reaction to this aspect of the book is not the book's fault, but it was sufficiently strong that it substantially interfered with my enjoyment. The specific manifestation of Nicole's eating disorder is that she skips meals until she makes herself ill. My own anxious tendencies hyperfocus on prevention and on rule-following. The result is that once Kowal introduces the eating disorder subplot, my brain started anxiously monitoring everything that Nicole ate and keeping track of her last meal. This, in turn, felt horribly intrusive and uncomfortable. I did not want to monitor and police Nicole's eating, particularly when Nicole clearly was upset by anyone else doing exactly that, and yet I couldn't stop the background thread of my brain that did so. The result was a highly unsettling feeling that I was violating the privacy of the protagonist of the book that I was reading, mixed with anxiety and creeping dread about her calorie intake.

Part of this may have been intentional to give the reader some sense of how this felt to Nicole. (The negative interaction with my own anxiety was likely not intentional.) Kowal did an exceptionally good job at invoking reader empathy (at least in me) for Elma's anxiety in The Calculating Stars. I didn't like the experience much this time, but that doesn't make it an invalid focus for a book. It may, however, make me a poor reviewer for this part of the reading experience.

This was a major subplot, so it was hard to escape completely, but I quite enjoyed the rest of the book. It's not obvious who the saboteurs are or even how the sabotage is happening, and the acts of clear sabotage are complicated by other problems that may be more subtle sabotage, may be bad luck, or may be the inherent perils of trying to survive in space. Many of Nicole's suspicions do not pan out, which was a touch that I appreciated. She has to look for ulterior motives in everything, and in reality that means she'll be wrong most of the time, but fiction often unrealistically short-cuts that process. I also liked how Kowal handles the resolution, which avoids villain monologues and gives Nicole's opposition their own contingency plans, willingness to try to adapt to setbacks, and the intelligence to keep trying to manipulate the situation even when their plans fail.

As with the rest of this series, there's a ton of sexism and racism, which the characters acknowledge and which Nicole tries to resist as much as she can, but which is so thoroughly baked into the society that it's mostly an obstacle that has to be endured. This is not the book, or series, to read if you're looking for triumph over discrimination or for women being allowed to be awesome without having to handle and soothe men's sexist feelings about their abilities. Nicole gets a clear victory arc, but it's a victory despite sexism rather than an end to it.

The Relentless Moon did feel a bit long. There are a lot of scene-setting preliminaries before Nicole leaves for the Moon, and I'm not sure all of them were necessary at that length. Nicole also spends a lot of time being suspicious of everyone and second-guessing her theories, and at a few points I thought that dragged. But once things start properly happening, I thoroughly enjoyed the technological details and the thought that Kowal put into the mix of sabotage, accidents, and ill-advised human behavior that Nicole has to sort through. The last half of the book is the best, which is always a good property for a book to have.

The eating disorder subplot made me extremely uncomfortable for reasons that are partly peculiar to me, but outside of that, this is a solid entry in the series and fills in some compelling details of what was happening on the other end of the intermittent radio messages Elma received. If you've enjoyed the series to date, you will probably enjoy this installment as well. But if you didn't like the handling of sexism and racism as deeply ingrained social forces that can at best be temporarily bypassed, be warned that The Relentless Moon continues the same theme. Also, if you're squeamish about medical conditions in your fiction, be aware that the specific details of polio feature significantly in the book.

Rating: 7 out of 10

31 May, 2021 05:40AM

May 30, 2021

Russell Coker

Wifi Performance on Linux

Wifi usually just works. In the past I haven’t had to worry much about performance: for home use things have always been bearable, and at work it’s never been my job, so I just file a bug report with the relevant people when things go wrong. But a few years ago I had some problems.

For my home network I got a free Wifi AP which wasn’t performing well.

My AP supported 802.11 modes b/g or g/n (b, g, and n are slow, medium, and fast speeds). I initially had the AP running in b/g mode because I had an 802.11b USB wifi device that I used. When I replaced that with one that did 802.11g I tried changing the AP to g/n mode but performance was even worse on my laptop (although quite good on phones) so I switched back.

For phones it appeared to work well, giving 54Mb/s, while on my laptop (a second-hand Thinkpad X1 Carbon) it was giving 11Mb/s at best and often much less than that. The best demonstration of the problems was to start transferring a large file while pinging a system on the LAN the AP was connected to. Usually it would give ping times of 1s or more, sometimes over 5s. While this was happening the “Invalid misc” count increased rapidly, often by more than 100 per second.

The results of Google searches suggest that “Invalid misc” is due to interference and recommend changing the channel. My AP had been on channel 1, which had performed poorly; channels 2-8 were ok, and channel 9 seemed reasonably good. As an aside, trying all channels manually is not a good idea: it takes a lot of time and gives little useful data. After changing to channel 9 it still only gave about 500KB/s when transferring large files, with ping times of about 100ms, but that’s a big improvement. I tried running “iwlist scanning” to scan the Wifi network for other APs; that showed that channel 1 was used a lot but didn’t make it clear what I should do other than that.

The next thing I tried was the Wifi Analyser app on Android [1] (which doesn’t work on my latest phone; I don’t know if it’s still being actively maintained, but it will definitely work on older phones). That has a nice graph mode that shows which channels are used and how the frequencies spread and interfere with other channels. One thing I hadn’t realised before I looked at the graphs is that 802.11n uses 4 channels and interferes past that. If you have two 802.11n devices you don’t have much space left out of the 14 channels available. To make more space I configured the Wifi AP in my ADSL modem to 802.11b/g mode and assigned it a channel away from the others, making 4 channels available with no interference.
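As an aside, here is a rough back-of-the-envelope Python sketch of that overlap, treating transmissions as flat 20MHz or 40MHz blocks on the usual 5MHz channel grid rather than using the real spectral masks:

#!/usr/bin/python3
# Rough estimate of which 2.4GHz channels a transmission spills into.
# Channels 1-13 are 5MHz apart, but an 802.11g signal is about 20MHz wide
# and an 802.11n HT40 signal is about 40MHz wide.
def centre_mhz(channel):
    return 2407 + 5 * channel            # valid for channels 1-13

def overlapped_channels(channel, width_mhz):
    # another channel is affected if its own 20MHz band intersects ours
    return [c for c in range(1, 14) if c != channel
            and abs(centre_mhz(c) - centre_mhz(channel)) < (width_mhz + 20) / 2]

print(overlapped_channels(6, 20))   # 802.11g on channel 6: channels 3-5 and 7-9
print(overlapped_channels(6, 40))   # 802.11n HT40 on channel 6: channels 1-5 and 7-11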

After that iwconfig reported between 60 and 120Mb/s and I got consistent transfer rates over 1.5MB/s while ping times remained below 100ms.

The 5GHz frequency range is less congested. But at the time I didn’t feel like buying 5GHz equipment.

Since that time I have signed up with an ISP that had a good deal on a Wifi AP that supports 5GHz. Now I have all my devices configured to use 5GHz or 2.4GHz depending on which they think is best. So there are fewer devices on 2.4GHz, and the AP is configured for “20MHz channel width” in the 2.4GHz range (which means 802.11b/g).

Conclusion

802.11n seems to be a bad idea unless you run the only AP in an area. In a suburban area you will have 3 other houses broadcasting within range, and 802.11n is bad for everyone. The worst case scenario would be one person using 802.11n and interfering with everyone else’s 802.11g, and then having everyone else turn on 802.11n to try and make things faster.

5GHz is less congested as most people run old hardware. It also has a shorter range which has the upside of getting less interference from other people. I’m considering installing 5GHz APs at both ends of my house and configuring all my new devices to not use 2.4GHz.

Wifi spectrum analysis software is much better than manual testing of channels or trying to deduce things from the output of “iwlist scanning”.

30 May, 2021 11:24PM by etbe

hackergotchi for Daniel Pocock

Daniel Pocock

Installing Tryton Chart of Accounts for Switzerland

I wanted to make it easy for more people to try the new Chart of Accounts discussed in my previous blog.

Therefore, I've published it as a Debian package for users of the current stable version of Debian 10, buster.

Here are the exact steps to get started with it:

Enable the Debify repository

$ wget -O - http://apt.debify.org/add-apt-debify | bash

You can use the same machine, for example, a laptop, as both client and server. You could also use two different machines.

Install the necessary packages for a server:

$ sudo apt update
$ sudo apt install tryton-modules-account-ch \
                   tryton-modules-all \
                   postgresql-11

Install the necessary packages on each client (the server machine could also be a client):

$ sudo apt update
$ sudo apt install tryton-client

On the server machine, you need to edit the file /etc/tryton/trytond.conf and uncomment these two lines:

listen = [::]:8000

uri = postgresql://tryton:tryton@/

On the server machine, create the user and database. Use trytond-admin to populate the database schema:

sudo systemctl stop tryton-server && \
  sudo -u postgres createuser --createdb tryton && \
  sudo -u postgres createdb --encoding=UNICODE --owner=tryton tryton && \
  sudo -u tryton trytond-admin -v -c /etc/tryton/trytond.conf --all -d tryton && \
  sudo systemctl restart tryton-server

Now you can start the Tryton client. When you see the login window, click the "Manage..." button to add details about your server.

When you log in for the first time, you will see the setup wizard. You can select the Chart of Accounts for any of the official or unofficial languages used in Switzerland:

You can then see both the Chart of Accounts and the Tax Codes presented in the language you selected:

30 May, 2021 09:00PM

Vincent Fourmond

QSoas quiz #2: averaging several Y values for the same X value

This second quiz may sound like the first one, but in fact, the approach used is completely different. The point is to gather some elementary statistics from a series of experiments performed under different conditions, but with several repeats at the same conditions.

Quiz

You are given a file (which you can download there) that contains a series of pH-dependent data: the X column is the pH, the Y column the result of the experiment at the given pH (let's say the measure of the catalytic rate of an enzyme). Your task is to take this data and produce a single dataset which contains, for each pH value, the pH, the average of the results at that pH and the standard deviation. The result should be identical to the following file, and should look like this:
There are several ways to do this, but all of them must rely on stats, and the most natural way in QSoas is to take advantage of split-on-values, which is a very powerful command but somewhat hard to master; mastering it is the point of this Quiz.
By the way, the data file is purely synthetic; if you look in the GitHub repository, you'll see how it was generated.
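For comparison only (and nothing to do with the QSoas solution), here is a minimal Python sketch of the same bookkeeping, assuming a two-column whitespace-separated text file and using the sample standard deviation, which may differ slightly from what QSoas's stats reports:

#!/usr/bin/python3
# Group Y values by identical X values, then print X, mean(Y) and the
# sample standard deviation of Y. File name and format are assumptions.
import statistics
from collections import defaultdict

groups = defaultdict(list)
with open("data.dat") as fh:
    for line in fh:
        fields = line.split()
        if len(fields) < 2 or fields[0].startswith("#"):
            continue
        groups[float(fields[0])].append(float(fields[1]))

for x in sorted(groups):
    ys = groups[x]
    print(x, statistics.mean(ys), statistics.stdev(ys))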

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 3.0. You can download its source code there (or clone from the GitHub repository) and compile it yourself, or buy precompiled versions for MacOS and Windows there.

30 May, 2021 08:36PM by Vincent Fourmond (noreply@blogger.com)

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

td 0.0.3 on CRAN: Maintenance release

The still recent-ish td package for accessing the twelvedata API for financial data was updated on CRAN yesterday, and is now at released version 0.0.3.

A few URLs were updated to please the lint checker, and a Depends: on R 4.0.0 or later was added. We then realized (as always just after the release …) that the core issue was an incorrect version comparison which we already fixed in the git repo.

The NEWS entry follows.

Changes in version 0.0.3 (2021-05-29)

  • The package now (formally) depends on R (>= 4.0.0) as it uses a recently added R function for the default config file.

  • A few URLs were updated in the README.md file

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 May, 2021 11:53AM

Russ Allbery

Review: The Silver Chair

Review: The Silver Chair, by C.S. Lewis

Illustrator: Pauline Baynes
Series: Chronicles of Narnia #4
Publisher: Collier
Copyright: 1953
Printing: 1978
ISBN: 0-02-044250-5
Format: Mass market
Pages: 217

The Silver Chair is a sequel to The Voyage of the Dawn Treader and the fourth book of the Chronicles of Narnia in original publication order. (For more about publication order, see the introduction to my review of The Lion, the Witch and the Wardrobe.) Apart from a few references to The Voyage of the Dawn Treader at the start, it stands sufficiently on its own that you could read it without reading the other books, although I have no idea why you'd want to.

We have finally arrived at my least favorite of the Narnia books and the one that I sometimes skipped during re-reads. (One of my objections to the new publication order is that it puts The Silver Chair and The Last Battle back-to-back, and I don't think you should do that to yourself as a reader.) I was hoping that there would be previously unnoticed depth to this book that would redeem it as an adult reader. Sadly, no; with one very notable exception, it's just not very good.

MAJOR SPOILERS BELOW.

The Silver Chair opens on the grounds of the awful school to which Eustace's parents sent him: Experiment House. That means it opens (and closes) with a more extended version of Lewis's rant about schools. I won't get into this in detail since it's mostly a framing device, but Lewis is remarkably vicious and petty. His snide contempt for putting girls and boys in the same school did not age well, nor did his emphasis at the end of the book that the incompetent head of the school is a woman. I also raised an eyebrow at holding up ordinary British schools as a model of preventing bullying.

Thankfully, as Lewis says at the start, this is not a school story. This is prelude to Jill meeting Eustace and the two of them escaping the bullies via a magical door into Narnia. Unfortunately, that's the second place The Silver Chair gets off on the wrong foot.

Jill and Eustace end up in what the reader of the series will recognize as Aslan's country and almost walk off the vast cliff at the end of the world, last seen from the bottom in The Voyage of the Dawn Treader. Eustace freaks out, Jill (who has a much better head for heights) goes intentionally close to the cliff in a momentary impulse of arrogance before realizing how high it is, Eustace tries to pull her back, and somehow Eustace falls over the edge.

I do not have a good head for heights, and I wonder how much of it is due to this memorable scene. I certainly blame Lewis for my belief that pulling someone else back from the edge of a cliff can result in you being pushed off, something that on adult reflection makes very little sense but which is seared into my lizard brain. But worse, this sets the tone for the rest of the story: everything is constantly going wrong because Eustace and Jill either have normal human failings that are disproportionately punished or don't successfully follow esoteric and unreasonably opaque instructions from Aslan.

Eustace is safe, of course; Aslan blows him to Narnia and then gives Jill instructions before sending her afterwards. (I suspect the whole business with the cliff was an authorial cheat to set up Jill's interaction with Aslan without Eustace there to explain anything.) She and Eustace have been summoned to Narnia to find the lost Prince, and she has to memorize four Signs that will lead her on the right path.

Gah, the Signs. If you were the sort of kid that I was, you immediately went back and re-read the Signs several times to memorize them like Jill was told to. The rest of this book was then an exercise in anxious frustration. First, Eustace is an ass to Jill and refuses to even listen to the first Sign. They kind of follow the second but only with heavy foreshadowing that Jill isn't memorizing the Signs every day like she's supposed to. They mostly botch the third and have to backtrack to follow it. Meanwhile, the narrator is constantly reminding you that the kids (and Jill in particular) are screwing up their instructions. On re-reading, it's clear they're not doing that poorly given how obscure the Signs are, but the ominous foreshadowing is enough to leave a reader a nervous wreck.

Worse, Eustace and Jill are just miserable to each other through the whole book. They constantly bicker and snipe, Eustace doesn't want to listen to her and blames her for everything, and the hard traveling makes it all worse. Lewis does know how to tell a satisfying redemption arc; one of the things I have always liked about Edmund's story is that he learns his lesson and becomes my favorite character in the subsequent stories. But, sadly, Eustace's redemption arc is another matter. He's totally different here than he was at the start of The Voyage of the Dawn Treader (to the degree that if he didn't have the same name in both books, I wouldn't recognize him as the same person), but rather than a better person he seems to have become a different sort of ass. There's no sign here of the humility and appreciation for friendship that he supposedly learned from his time as a dragon.

On top of that, the story isn't very interesting. Rilian, the lost Prince, is a damp squib who talks in the irritating archaic accent that Lewis insists on using for all Narnian royalty. His story feels like Lewis lifted it from medieval Arthurian literature; most of it could be dropped into a collection of stories of knights of the Round Table without seeming out of place. When you have a country full of talking animals and weirdly fascinating bits of theology, it's disappointing to get a garden-variety story about an evil enchantress in which everyone is noble and tragic and extremely stupid.

Thankfully, The Silver Chair has one important redeeming quality: Puddleglum.

Puddleglum is a Marsh-wiggle, a bipedal amphibious sort who lives alone in the northern marshes. He's recruited by the owls to help the kids with their mission when they fail to get King Caspian's help after blowing the first Sign. Puddleglum is an absolute delight: endlessly pessimistic, certain the worst possible thing will happen at any moment, but also weirdly cheerful about it. I love Eeyore characters in general, but Puddleglum is even better because he gives the kids' endless bickering exactly the respect that it deserves.

"But we all need to be very careful about our tempers, seeing all the hard times we shall have to go through together. Won't do to quarrel, you know. At any rate, don't begin it too soon. I know these expeditions usually end that way; knifing one another, I shouldn't wonder, before all's done. But the longer we can keep off it—"

It's even more obvious on re-reading that Puddleglum is the only effective member of the party. Jill has only a couple of moments where she gets the three of them past some obstacle. Eustace is completely useless; I can't remember a single helpful thing he does in the entire book. Puddleglum and his pessimistic determination, on the other hand, is right about nearly everything at each step. And he's the one who takes decisive action to break the Lady of the Green Kirtle's spell near the end.

I was expecting a bit of sexism and (mostly in upcoming books) racism when re-reading these books as an adult given when they were written and who Lewis was, but what has caught me by surprise is the colonialism. Lewis is weirdly insistent on importing humans from England to fill all the important roles in stories, even stories that are entirely about Narnians. I know this is the inherent weakness of portal fantasy, but it bothers me how little Lewis believes in Narnians solving their own problems. The Silver Chair makes this blatantly obvious: if Aslan had just told Puddleglum the same information he told Jill and sent a Badger or a Beaver or a Mouse along with him, all the evidence in the book says the whole affair would have been sorted out with much less fuss and anxiety. Jill and Eustace are far more of a hindrance than a help, which makes for frustrating reading when they're supposedly the protagonists.

The best part of this book is the underground bits, once they finally get through the first three Signs and stumble into the Lady's kingdom far below the surface. Rilian is a great disappointment, but the fight against the Lady's mind-altering magic leads to one of the great quotes of the series, on par with Reepicheep's speech in The Voyage of the Dawn Treader.

"Suppose we have only dreamed, or made up, all those things — trees and grass and sun and moon and stars and Aslan himself. Suppose we have. Then all I can say is that, in that case, the made-up things seem a good deal more important than the real ones. Suppose this black pit of a kingdom of yours is the only world. Well, it strikes me as a pretty poor one. And that's a funny thing, when you come to think of it. We're just babies making up a game, if you're right. But four babies playing a game can make a play-world which licks your real world hollow. That's why I'm going to stand by the play world. I'm on Aslan's side even if there isn't any Aslan to lead it. I'm going to live as like a Narnian as I can even if there isn't any Narnia. So, thanking you kindly for our supper, if these two gentlemen and the young lady are ready, we're leaving your court at once and setting out in the dark to spend our lives looking for Overland. Not that our lives will be very long, I should think; but that's small loss if the world's as dull a place as you say."

This is Puddleglum, of course. And yes, I know that this is apologetics and Lewis is talking about Christianity and making the case for faith without proof, but put that aside for the moment, because this is still powerful life philosophy. It's a cynic's litany against cynicism. It's a pessimist's defense of hope.

Suppose we have only dreamed all those things like justice and fairness and equality, community and consensus and collaboration, universal basic income and effective environmentalism. The dreary magic of the realists and the pragmatists say that such things are baby's games, silly fantasies. But you can still choose to live like you believe in them. In Alasdair Gray's reworking of a line from Dennis Lee, "work as if you live in the early days of a better nation."

That's one moment that I'll always remember from this book. The other is after they kill the Lady of the Green Kirtle and her magic starts to fade, they have to escape from the underground caverns while surrounded by the Earthmen who served her and who they believe are hostile. It's a tense moment that turns into a delightful celebration when they realize that the Earthmen were just as much prisoners as the Prince was. They were forced from a far deeper land below, full of living metals and salamanders who speak from rivers of fire. It's the one moment in this book that I thought captured the magical strangeness of Narnia, that sense that there are wonderful things just out of sight that don't follow the normal patterns of medieval-ish fantasy.

Other than a few great lines from Puddleglum and some moments in Aslan's country, the first 60% of this book is a loss and remarkably frustrating to read. The last 40% isn't bad, although I wish Rilian had any discernible character other than generic Arthurian knight. I don't know what Eustace is doing in this book at all other than providing a way for Jill to get into Narnia, and I wish Lewis had realized Puddleglum could be the protagonist. But as frustrating as The Silver Chair can be, I am still glad I re-read it. Puddleglum is one of the truly memorable characters of children's literature, and it's a shame he's buried in a weak mid-series book.

Followed, in the original publication order, by The Horse and His Boy.

Rating: 6 out of 10

30 May, 2021 05:53AM

May 29, 2021

hackergotchi for Lisandro Damián Nicanor Pérez Meyer

Lisandro Damián Nicanor Pérez Meyer

On configuring RAK LoRa devices, or how to avoid their Windows-only serial application

tl;dr: use a serial terminal which can buffer input and send it all at once; lines should end with CR LF (\r\n).

I am currently working on bringing up a LoRa network in Bahía Blanca. Parts of the nodes I need to set up are made by RAK Wireless.

According to their documentation the nodes can be configured over a serial connection. So I quickly turned to minicom for it, to no avail. Somehow I could read whatever the device was writing to my machine but could not write any commands back to it.

In order to get the issue solved I switched to running their RAK serial port tool under wine. Making it work required downloading and installing a huge number of Windows libraries and tools, but in the end I wanted a Linux-only solution.

After much digging around the web, and some trial and error, I've found a way to solve this:

  1. Commands should end with CR LF (\r\n).
  2. The command needs to be sent quickly, all in one go, through the serial port. This means it can't be typed and sent as in a normal serial console.

The solution for (1) in minicom is easy, but I don't know if minicom is capable of doing the buffering thing, so I went with cutecom, where one enters the input and sends it all at once.
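The same thing can also be scripted. Here is a minimal sketch using pyserial under the same constraints; the device path, baud rate and the at+version command are assumptions that you will need to adjust for your particular node and firmware:

#!/usr/bin/python3
# Send one whole command, terminated with CR LF, in a single write(),
# then read the reply. Port, speed and command are assumptions.
import serial   # python3-serial / pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as port:
    port.write(b"at+version\r\n")   # buffered and sent in one go
    port.flush()
    print(port.readline().decode(errors="replace").strip())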

29 May, 2021 06:06PM by Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)

hackergotchi for Joey Hess

Joey Hess

the end of the olduse.net exhibit

Ten years ago I began the olduse.net exhibit, spooling out Usenet history in real time with a 30 year delay. My archive has reached its end, and ten years is more than long enough to keep running something you cobbled together overnight way back when. So, this is the end for olduse.net.

The site will continue running for another week or so, to give you time to read the last posts. Find the very last one, if you can!

The source code used to run it, and the content of the website have themselves been archived up for posterity at The Internet Archive.

Sometime in 2022, a spammer will purchase the domain, but not find it to be of much value.

The Utzoo archives that underlay it have currently sadly been censored off the Internet by someone. This will be unsuccessful; by now they have spread and many copies will live on.


I told a lie ten years ago.

You can post to olduse.net, but it won't show up for at least 30 years.

Actually, those posts drop right now! Here are the followups to 30-year-old Usenet posts that I've accumulated over the past decade.

Mike replied in 2011 to JPM's post in 1981 on fa.arms-d "Re: CBS Reports"

A greeting from the future: I actually watched this yesterday (2011-06-10) after reading about it here.

Christian Brandt replied in 2011 to phyllis's post in 1981 on the "comments" newsgroup "Re: thank you rrg"

Funny, it will be four years until you post the first subnet post i ever read and another eight years until my own first subnet post shows up.

Bernard Peek replied in 2012 to mark's post in 1982 on net.sf-lovers "Re: luke - vader relationship"

i suggest that darth vader is luke skywalker's mother.

You may be on to something there.

Martijn Dekker replied in 2012 to henry's post in 1982 on the "test" newsgroup "Re: another boring test message"

trentbuck replied in 2012 to dwl's post in 1982 on the "net.jokes" newsgroup "Re: A child hood poem"

Eveline replied in 2013 to a post in 1983 on net.jokes.q "Re: A couple"

Ha!

Bill Leary replied in 2015 to Darin Johnson's post in 1985 on net.games.frp "Re: frp & artwork"

Frederick Smith replied in 2021 to David Hoopes's post in 1990 on trial.rec.metalworking "Re: Is this group still active?"

29 May, 2021 05:26PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

Planes, Pandemic and Medical Devices – I

The Great Electric Airplane Race

It took me quite some time to write this, as I have been depressed about things. Then a few days back I saw Nova’s The Great Electric Airplane Race. It was fabulous, and a pleasure to see and know that there are more than 200-odd startups in the race to make an electric airplane which works and has FAA certification. I was disappointed, though, that there was no coverage of any University projects.

From what little I know, almost all the advanced materials the U.S. has made were first researched mostly in universities, and when close to fruition they were either spun off as a startup or given to some commercial organization/venture to make them scalable and profitable. If they had covered that, I am sure more people could be convinced to join sciences and engineering in college. I actually do want to come back to this as part of both general medicine and vaccine development in the U.S., but that will come later. The idea that industry works alone should be discouraged, but it would perhaps require another article to articulate why I believe so.

Medical Device – Ventilators in India

Before the pandemic, probably most people didn’t know what a ventilator was; at least I didn’t, although I probably used one during my somewhat brief hospital stay a couple of years ago. It entered the Indian twitter lexicon much more in the second wave, as the number of people who got infected grew and grew and the ventilators which were serving them became fewer and fewer, just due to the sheer mismatch of numbers and requirements.

Rich countries donated/gifted ventilators to India on which GOI put GST of 28%. Apparently, they are a luxury item, just like my hearing aid.

Last week the Delhi High Court passed a judgement that GST should not be imposed on a gift like a ventilator or an oxygenator. The order can be found here. Even without reading the judgement the shout from the right was ‘judicial activism’, while after reading it, it is a good judgement which touches on several points. The first, in itself, states the dichotomy: if a commercial organization wanted to import a ventilator or an oxygenator the IGST payable is nil, while for an individual it is 12%. The State (here State refers to the State Government, in this case the Gujarat Govt.) did reduce the IGST from 12% to NIL for federal states, but only till 30.06.2021. No relief to individuals on that account.

The Court also made use of Mr. Arvind Datar as Amicus Curiae, or friend of the court. The petitioner, an 85-year-old gentleman, has made broad assertions under Article 21 (right to life), and the court in its wisdom also added Article 14, which enshrines equality of everyone before the law.

The Amicus Curiae, as was his duty, guided the court through how the IGST law works and shared a brief history of the law and the changes happening before and after it. During his submissions, he also shared the Mega Exemption Notification no. 50/2017, under which several items are exempted from IGST. The Amicus Curiae did note that such exemptions also existed before the Mega Exemption Notification came into play.

However, DGFT (Directorate General of Foreign Trade) on 30-04-2021 issued notification No. 4/2015-2020, through which oxygenators were exempted from Customs Duty/BCD (Basic Customs Duty). In another notification, no. 30/2021 dated 01.05.2021, it reduced IGST from 28% to 12% for personal use. If however the oxygenator was procured by a canalizing agency (bodies such as the State Trading Corporation of India (STC) and/or Metals and Minerals Corporation (MMTC) are defined as canalising agents) then it will be fully exempted from paying any sort of IGST, albeit subject to certain conditions. What the conditions are was not shared in open court.

The Amicus Curiae further observed that it is contrary to practice that, while both BCD and IGST have been exempted for canalising agents and others, some IGST has to be paid for personal use. To stay within the narrow boundaries of the topic, he shared entry no. 607A of General Exemption no. 190, where duty and IGST in the case of life-saving drugs are zero, provided the imported life-saving drugs have been provided at zero cost by an overseas supplier for personal use.

He further shared that the oxygen generator would fall under the same entry 607A, as it fulfils all the criteria shared for life-saving medicines and devices. He also used the help of the Drugs and Cosmetics Act 1940, which provides such relief.

The Amicus Curiae further noted that GOI amended its foreign trade policy (2015-2020) via notification no. 4/2015-2020, dated 30.04.2021, issued by DGFT, where Rakhi and life-saving drugs for personal use have been exempted from BCD till 30-07-2021. There is no reason not to give the same exemption to oxygenators, which fulfil the same criteria.

The Amicus Curiae further observes that there are “exceptional circumstances” provisions, as adverted to in sub-section (2) of Section 25 of the Customs Act, and that in the case of Covid-19, which is known and labelled as a pandemic, the distinctions between the two classes of individuals or agencies do not make any sense. While he did make the observation that exemption from duty is not a right, in the light of the pandemic and Article 14, it does not make sense to have distinctions between the two classes of importers.

He further shared from Circular no. 9/2014-Customs, dated 19.08.2014, by CBEC (Central Board of Excise and Customs), which gave broad exemptions under Section 25 (2) of the same act in respect of goods and services imported for the safety and rehabilitation of people suffering from and affected by natural disasters and epidemics.

He further submits that the impugned notification is irrational as there is no intelligible differentia rule applied or observed in classifying the import of oxygen concentrators into two categories. One, by the State and its agencies; and the other, by an individual for personal use by way of gift. So there was an absence of ‘adequate determining principle’. To bolster his argument, he shared the judgements of –

a) Union of India vs. N.S. Rathnam & Sons, (2015) 10 SCC 681 (N.S. Ratnams and Sons Case)

b) Shayara Bano vs. Union of India, (2017) 9 SCC 1 (Shayara Bano Case)

The Amicus Curiae also rightly observed that the right to life encompasses within it the right to health. You cannot have one without the other, and within that is the right to have affordable treatment. He further stated that the state not only has a duty but has a positive obligation cast upon it to ensure that the citizen’s health is secured. He again cited Navtej Singh Johar vs Union of India (Navtej Singh Johar Case) in defence of the right to life. Mr. Datar also shared that, unlike in normal circumstances, it is and should be enough to show ‘distinct and noticeable burdensomeness’ which is directly attributable to the impugned/questionable tax. The gentleman cited Indian Express Newspapers (Bombay) Private Limited vs. Union of India, (1985) 1 SCC 641 (Indian Express case), which touched on both Article 19 (1) (a) and Article 21.

Blogger’s note – At this juncture, I should point out how I am sharing the judgement: I will share only the Amicus Curiae’s POV and then the judge’s final observations. While I was reading it, I was struck by the fact that the Amicus Curiae had cited 4 cases till now, 3 of them pretty well known both in the legal fraternity and even among the public at large. Another 3, shared below, are also of great significance. Hence, I felt the need to share the whole judgement.

The Amicus Curiae further observed that this tax would disproportionately have to be paid by the old and the infirm, and they might find it difficult to pay the amounts needed for the customs duty/IGST, as well as to find an agent to pay it, in this pandemic.

Blogger Note – The situation with the elderly is something like this. There are a few things to note: only Central Govt. employees and pensioners get pensions, which have been frozen since last year. The rest of the elderly population does not. The rate of interest has fallen to record lows, from 5-6% on savings to 2%, and to 4.9% on Fixed Deposits, while the nominal inflation rate is up by 6% and CPI and real inflation rates are and will be much more. And this is when there is absolutely no demand in the economy. To add to all this, RBI shared a couple of months ago that fraud of 5 trillion rupees was committed in banks between 2015 and 2019. And this is separate from the record number of NPAs in both Public and Private Sector banks. To get out of this, the banks have squeezed and are squeezing their customers, as well as asking GOI for bailouts. How much GOI is responsible for the frauds as well as the NPAs would probably require its own space. And even now, RBI and banks have made heavy provisions, as lockdowns are still a facet of life and are supposed to remain so till the end of the year or even next year (all depending upon when we get the vaccine).

The Amicus Curiae further argued that the ventilators which are available locally are of bad quality. All of this has resulted in a huge amount of pressure on hospitals which they are unable to overcome. Therefore, the levy of IGST on oxygenators has a direct impact on the health of citizens. So the law should be examined not by its intention but by how it is affecting citizens’ rights now. For this he shared R.C. Cooper vs Union of India (another famous case), especially paragraph 49, and Federation of Hotel & Restaurant Association of India vs. Union of India, (1989), at paragraph 46 (Federation of Hotel Case).

Mr. Datar further shared the Supreme Court order dated 18.12.2020, passed in Suo Moto Writ Petition(Civil) No.7/2020, to buttress the plea that the right to health includes the right to affordable treatment.

Blogger’s Note – For those who don’t know, Suo Moto is when a court, whether the Supreme Court or a High Court, takes up a matter for the public good. It could be about anything: law and order, banking, finance, public health etc. This was the norm before 2014; the excesses of the executive were curtailed by both the higher and the lower judiciary. That is and was the reason that the Judiciary is and was known as the third pillar of Indian democracy. A good characterization of Suo Moto can be found here.

Before ending his submission, the learned Amicus Curiae also shared Jeeja Ghosh vs. Union of India, (2016) (Jeeja Ghosh Case, an outstanding case as it deals with people with disabilities and their rights and the observations made by the Division Bench of Hon’ble Mr. Justice A. K. Sikri as well as Hon’ble Mr. Justice R. K. Agrawal.)

After the Amicus Curiae completed his submissions, it was the turn of Mr. Sudhir Nandrajog, who adopted the arguments and submissions made by the Amicus Curiae. The gentleman reiterated the facts of the case and how the impugned notification was violative of both Articles 14 and 21 of the Indian Constitution.

Blogger’s Note – The High Court’s judgement, which records all the above arguments by the Amicus Curiae and the petitioner’s lawyer, also shared the State’s view. It is only on page 24 that the Delhi High Court starts to share its own observations on the arguments of both sides.

Judgement continued – The first observation that the Court makes is that while the petitioner demonstrated that the impugned tax imposition would have a ‘distinct and noticeable burdensomeness’, the State did not state or share in any way how much of a loss it would incur if such a tax were let go, or how much additional work would have to be done in order to receive this specific tax. It didn’t need to do something down to the wire or mathematically precise, but it didn’t even care to show even theoretically how many people would be affected by the above. The counter-affidavit by the State is silent on the whole issue.

The Court also contended that the State failed to prove how collecting IGST from the concerned individuals would help in fighting coronavirus in any substantial manner for the public at large. The High Court shared observations from the Navtej Singh Johar case where it is observed that the State has both negative and positive obligations to ensure that its citizens are able to enjoy the right to health.

The High Court further made the point that no respectable person likes to be turned into a ‘charity case.’ If the State contends that those who obey the law should pay the taxes, then it is also obligatory on the State’s part to lessen exactions such as taxes, at the very least in times of war, famine, floods, epidemics and pandemics. Such an approach would allow a person to live a life of dignity, which is part of Article 21 of the Constitution.

Another point made by the State, that only the GST Council is able to make any changes as regards exemptions rather than the State, was found to be false, as the State had made some exemptions without going to the GST Council, using its own powers under Section 25 of the Customs Act.

The Court also points out that it does show a discriminatory pattern when somebody like the petitioner has to pay the tax for personal use while those who are buying it for commercial use do not have to pay the tax.

The Court agreed with the view of the Amicus Curiae, Mr. Datar, that oxygenators should be taxed at a NIL rate of IGST, as they are part of life-saving drugs and fit the bill as medical equipment used in the treatment, mitigation and prevention of the spread of Coronavirus. Mr. Datar also did show that the oxygenator is placed at the same level as other life-saving drugs. The Court felt further emboldened by the observations of the Supreme Court in State of Andhra Pradesh vs. Linde India Limited, 2020 (State of Andhra Pradesh vs Linde Ltd.).

The Court further shared many subsequent notifications from the State, and various press releases by the State itself, which do make the Court’s point that oxygenators are indeed drugs as defined in the court case above. The State should have it as part of notification 190. This would preserve the start of the notification date from 03.05.2021 and the State would not have to issue a new notification.

The Court further went on to postulate that any persons similar to the petitioner could avail of the same, if they furnish a letter of undertaking to an officer designated by the State that the medical equipment would not be put to commercial use. Till the State designates such an officer, in the interim the importer can give the same undertaking to the Joint Secretary, Customs, or their nominee can hand over the same to the customs officer.

The Court also shared that it does not disagree with the State’s arguments, but the challenges which have arisen are in a unique time period and circumstances, so they are basing their judgement on how the situation is.

The Court also mentioned an order given by Supreme Court Diary No. 10669/2020 passed on 20.03.2020 where SC has taken pains to understand the issues faced by the citizens. The court also mentioned the Small Scale Industrial Manufactures Association Case (both of these cases I don’t know) .

So in conclusion, the Court holds the imposition of IGST on oxygenators which are imported by individuals as gifts from their relatives to be unconstitutional. They also shared that any taxes taken by GOI in the above scenario have to be returned. The relief to the State is that it will not have to pay interest on the same.

To check misuse of the same, the petitioner or people who are in similar circumstances would have to give a letter of undertaking to an officer designated by the State, within 7 days of the State notifying the patient or anybody authorized by him/her to act on their behalf, to share the letter of undertaking with the State. And until the State provides such an officer, the above interim arrangement will continue.

Hence, both the writ petition and the pending application are disposed of.

The Registry is directed to release any money deposited by the petitioner along with any interest accrued on it (if any).

At the end they record appreciation of Mr. Arvind Datar, Mr. Zoheb Hossain, Mr. Sudhir Nandrajog as well as Mr. Siddharth Bambha. It is only due to their assistance that the court could reach the conclusion it did.

For Delhi High Court

RAJIV SHAKDHER, J.

TALWANT SINGH, J.

May 21, 2021

Blogger’s Observations – Now, after the verdict GOI does have a few choices: either accept the verdict or appeal in the SC. A third choice is to form a committee and come to the same conclusions via the committee. GOI has done something similar in the past. If that happens and the same conclusions are reached as before, then the aggrieved may have no choice but to appeal in the highest court of law. And this will put the aggrieved in a much more vulnerable place than before, as SC court fees, lawyer fees etc. are quite high compared to the High Courts. So, there is a possibility that the petitioner may not even approach the court, unless and until some non-profit (NGO) decides to fight and put it up as a common cause or something similar.

There is another judgement that I will share, probably tomorrow. Thankfully, that one is pretty short compared to this one, so it should be far easier to read. FWIW, I did learn about the whole freenode stuff and the many channels that have shifted from freenode to libera. I will share my own experience of the same, but that will probably take a day or two.

Zeeshan of IYC (India Youth Congress) along with Salman Khan’s non-profit Being Human getting oxygenators

The above picture is of Zeeshan. There has been a whole team of Indian Youth Congress workers (the main opposition party to the ruling party) doing a lot of relief work. They have been buying oxygenators from abroad with the help of the Being Human Foundation, started by Salman Khan, an actor who works in A-grade movies in Bollywood.

29 May, 2021 01:12PM by shirishag75

May 28, 2021

hackergotchi for Bits from Debian

Bits from Debian

Donation from rsync.net to the Debian Project and benefits for Debian members

We are pleased to announce that offsite backup and cloud storage company rsync.net has generously donated several Terabytes of storage space to the Debian Project! This new storage will be used to back up our Debian Peertube instance.

In addition to this bountiful offer, rsync.net is also providing a free-forever 500 GB account to every Debian Developer.

rsync.net is a dedicated offsite backup company. Since 2001, they have provided customers with a secure UNIX filesystem accessible with most SSH/SFTP applications. rsync.net’s infrastructure is spread across multiple continents with a core IPv6 network and a ZFS redundant file-system assuring customer data is kept securely with integrity.

The Debian Project thanks rsync.net for their generosity and support.

28 May, 2021 11:45AM by Donald Norwood and Laura Arjona Reina

hackergotchi for Jonathan Dowland

Jonathan Dowland

Queueing theory

Last year I began looking at queuing theory, to try and see if I could use it as a robust underpinning for a cost model to evaluate rewritten stream-processing programs.

I started to work with my co-supervisor, Dr. Paul Ezhilchelvan, who is an expert in this area. In order to brief him on the work I'd done so far, as well as my initial efforts to map queueing theory concepts onto Striot's concepts, I prepared a brief presentation.
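As a flavour of the kind of results queueing theory provides (textbook M/M/1 background only, not something taken from the presentation): for Poisson arrivals at rate lam served at rate mu, utilisation, mean queue length and mean response time all follow directly.

#!/usr/bin/python3
# Textbook M/M/1 results, included as general background only:
#   utilisation rho = lam / mu
#   mean jobs in the system L = rho / (1 - rho)
#   mean response time W = 1 / (mu - lam)     (Little's law: L = lam * W)
def mm1(lam, mu):
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate >= service rate")
    rho = lam / mu
    return rho, rho / (1 - rho), 1 / (mu - lam)

print(mm1(8, 10))   # (0.8, 4.0, 0.5): 80% busy, 4 jobs in system, 0.5 time units response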

28 May, 2021 09:55AM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Trying to understand Kubernetes networking

I previously built a single node Kubernetes cluster as a test environment to learn more about it. The first thing I want to try to understand is its networking. In particular the IP addresses that are listed are all 10.* and my host’s network is a 192.168/24. I understand each pod gets its own virtual ethernet interface and associated IP address, and these are generally private within the cluster (and firewalled out other than for exposed services). What does that actually look like?

$ ip route
default via 192.168.53.1 dev enx00e04c6851de
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev weave proto kernel scope link src 192.168.0.1
192.168.53.0/24 dev enx00e04c6851de proto kernel scope link src 192.168.53.147

Huh. No sign of any way to get to 10.107.66.138 (the IP my echoserver from the previous post is available on directly from the host). What about network interfaces? (under the cut because it’s lengthy)

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enx00e04c6851de: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:e0:4c:68:51:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.53.147/24 brd 192.168.53.255 scope global dynamic enx00e04c6851de
       valid_lft 41571sec preferred_lft 41571sec
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 74:d8:3e:70:3b:18 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:18:04:9e:08 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d2:5a:fd:c1:56:23 brd ff:ff:ff:ff:ff:ff
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 12:82:8f:ed:c7:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global weave
       valid_lft forever preferred_lft forever
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether b6:49:88:d6:6d:84 brd ff:ff:ff:ff:ff:ff
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 6e:6c:03:1d:e5:0e brd ff:ff:ff:ff:ff:ff
11: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 9a:af:c5:0a:b3:fd brd ff:ff:ff:ff:ff:ff
13: vethwepl534c0a6@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 1e:ac:f1:85:61:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: vethwepl9ffd6b6@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 56:ca:71:2a:ab:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethwepl62b369d@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether e2:a0:bb:ee:fc:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: vethwepl6669168@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether f2:e7:e6:95:e0:61 brd ff:ff:ff:ff:ff:ff link-netnsid 3

That looks like a collection of virtual ethernet devices that are being managed by the weave networking plugin, and presumably partnered inside each pod. They’re bridged to the weave interface (the master weave bit). Still no clues about the 10.* range. What about ARP?

ip neigh
192.168.53.1 dev enx00e04c6851de lladdr e4:8d:8c:35:98:d5 DELAY
192.168.0.4 dev datapath lladdr da:22:06:96:50:cb STALE
192.168.0.2 dev weave lladdr 66:eb:ce:16:3c:62 REACHABLE
192.168.53.136 dev enx00e04c6851de lladdr 00:e0:4c:39:f2:54 REACHABLE
192.168.0.6 dev weave lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.3 dev datapath lladdr f2:42:c9:c3:08:71 STALE
192.168.0.3 dev weave lladdr f2:42:c9:c3:08:71 REACHABLE
192.168.0.2 dev datapath lladdr 66:eb:ce:16:3c:62 STALE
192.168.0.6 dev datapath lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.4 dev weave lladdr da:22:06:96:50:cb STALE
192.168.0.5 dev datapath lladdr fe:6f:1b:14:56:5a STALE
192.168.0.5 dev weave lladdr fe:6f:1b:14:56:5a REACHABLE

Nope. That just looks like addresses on the weave-managed bridge. Alright, what about firewalling?

nft list ruleset
table ip nat {
	chain DOCKER {
		iifname "docker0" counter packets 0 bytes 0 return
	}

	chain POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		 counter packets 531750 bytes 31913539 jump KUBE-POSTROUTING
		oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 84 masquerade 
		counter packets 525600 bytes 31544134 jump WEAVE
	}

	chain PREROUTING {
		type nat hook prerouting priority dstnat; policy accept;
		 counter packets 180 bytes 12525 jump KUBE-SERVICES
		fib daddr type local counter packets 23 bytes 1380 jump DOCKER
	}

	chain OUTPUT {
		type nat hook output priority -100; policy accept;
		 counter packets 527005 bytes 31628455 jump KUBE-SERVICES
		ip daddr != 127.0.0.0/8 fib daddr type local counter packets 285425 bytes 17125524 jump DOCKER
	}

	chain KUBE-MARK-DROP {
		counter packets 0 bytes 0 meta mark set mark or 0x8000 
	}

	chain KUBE-MARK-MASQ {
		counter packets 0 bytes 0 meta mark set mark or 0x4000 
	}

	chain KUBE-POSTROUTING {
		mark and 0x4000 != 0x4000 counter packets 4622 bytes 277720 return
		counter packets 0 bytes 0 meta mark set mark xor 0x4000 
		 counter packets 0 bytes 0 masquerade 
	}

	chain KUBE-KUBELET-CANARY {
	}

	chain INPUT {
		type nat hook input priority 100; policy accept;
	}

	chain KUBE-PROXY-CANARY {
	}

	chain KUBE-SERVICES {
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 9153 counter packets 0 bytes 0 jump KUBE-SVC-JD5MR3NA4I4DYORP
		meta l4proto tcp ip daddr 10.107.66.138  tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip daddr 10.111.16.129  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EZYNCFY2F7N6OQA2
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 443 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 80 counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 10.96.0.1  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-NPX46M4PTMTKRN6Y
		meta l4proto udp ip daddr 10.96.0.10  udp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-TCOU7JCQXEZGVUNU
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-ERIFXISQEP7F7OF4
		 fib daddr type local counter packets 3312 bytes 198720 jump KUBE-NODEPORTS
	}

	chain KUBE-NODEPORTS {
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
	}

	chain KUBE-SVC-NPX46M4PTMTKRN6Y {
		 counter packets 0 bytes 0 jump KUBE-SEP-Y6PHKONXBG3JINP2
	}

	chain KUBE-SEP-Y6PHKONXBG3JINP2 {
		ip saddr 192.168.53.147  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.53.147:6443
	}

	chain WEAVE {
		# match-set weaver-no-masq-local dst  counter packets 135966 bytes 8160820 return
		ip saddr 192.168.0.0/24 ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ip saddr != 192.168.0.0/24 ip daddr 192.168.0.0/24 counter packets 0 bytes 0 masquerade 
		ip saddr 192.168.0.0/24 ip daddr != 192.168.0.0/24 counter packets 33 bytes 2941 masquerade 
	}

	chain WEAVE-CANARY {
	}

	chain KUBE-SVC-JD5MR3NA4I4DYORP {
		  counter packets 0 bytes 0 jump KUBE-SEP-6JI23ZDEH4VLR5EN
		 counter packets 0 bytes 0 jump KUBE-SEP-FATPLMAF37ZNQP5P
	}

	chain KUBE-SEP-6JI23ZDEH4VLR5EN {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:9153
	}

	chain KUBE-SVC-TCOU7JCQXEZGVUNU {
		  counter packets 0 bytes 0 jump KUBE-SEP-JTN4UBVS7OG5RONX
		 counter packets 0 bytes 0 jump KUBE-SEP-4TCKAEJ6POVEFPVW
	}

	chain KUBE-SEP-JTN4UBVS7OG5RONX {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	}

	chain KUBE-SVC-ERIFXISQEP7F7OF4 {
		  counter packets 0 bytes 0 jump KUBE-SEP-UPZX2EM3TRFH2ASL
		 counter packets 0 bytes 0 jump KUBE-SEP-KPHYKKPVMB473Z76
	}

	chain KUBE-SEP-UPZX2EM3TRFH2ASL {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	}

	chain KUBE-SEP-4TCKAEJ6POVEFPVW {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	}

	chain KUBE-SEP-KPHYKKPVMB473Z76 {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	}

	chain KUBE-SEP-FATPLMAF37ZNQP5P {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:9153
	}

	chain KUBE-SVC-666FUMINWJLRRQPD {
		 counter packets 1 bytes 60 jump KUBE-SEP-LYLDBZYLHY4MT3AQ
	}

	chain KUBE-SEP-LYLDBZYLHY4MT3AQ {
		ip saddr 192.168.0.4  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 1 bytes 60 dnat to 192.168.0.4:8080
	}

	chain KUBE-XLB-EDNDUDH2C75GIR6O {
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	}

	chain KUBE-XLB-CG5I4G2RS3ZVWGLK {
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	}

	chain KUBE-SVC-EDNDUDH2C75GIR6O {
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	}

	chain KUBE-SEP-BLQHCYCSXY3NRKLC {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:443
	}

	chain KUBE-SVC-CG5I4G2RS3ZVWGLK {
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	}

	chain KUBE-SEP-5XVRKWM672JGTWXH {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:80
	}

	chain KUBE-SVC-EZYNCFY2F7N6OQA2 {
		 counter packets 0 bytes 0 jump KUBE-SEP-JYW326XAJ4KK7QPG
	}

	chain KUBE-SEP-JYW326XAJ4KK7QPG {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:8443
	}
}
table ip filter {
	chain DOCKER {
	}

	chain DOCKER-ISOLATION-STAGE-1 {
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
		counter packets 0 bytes 0 return
	}

	chain DOCKER-ISOLATION-STAGE-2 {
		oifname "docker0" counter packets 0 bytes 0 drop
		counter packets 0 bytes 0 return
	}

	chain FORWARD {
		type filter hook forward priority filter; policy drop;
		iifname "weave"  counter packets 213 bytes 54014 jump WEAVE-NPC-EGRESS
		oifname "weave"  counter packets 150 bytes 30038 jump WEAVE-NPC
		oifname "weave" ct state new counter packets 0 bytes 0 log group 86 
		oifname "weave" counter packets 0 bytes 0 drop
		iifname "weave" oifname != "weave" counter packets 33 bytes 2941 accept
		oifname "weave" ct state related,established counter packets 0 bytes 0 accept
		 counter packets 0 bytes 0 jump KUBE-FORWARD
		ct state new  counter packets 0 bytes 0 jump KUBE-SERVICES
		ct state new  counter packets 0 bytes 0 jump KUBE-EXTERNAL-SERVICES
		counter packets 0 bytes 0 jump DOCKER-USER
		counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-1
		oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
		oifname "docker0" counter packets 0 bytes 0 jump DOCKER
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
		iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
	}

	chain DOCKER-USER {
		counter packets 0 bytes 0 return
	}

	chain KUBE-FIREWALL {
		 mark and 0x8000 == 0x8000 counter packets 0 bytes 0 drop
		ip saddr != 127.0.0.0/8 ip daddr 127.0.0.0/8  ct status dnat counter packets 0 bytes 0 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		ct state new  counter packets 527014 bytes 31628984 jump KUBE-SERVICES
		counter packets 36324809 bytes 6021214027 jump KUBE-FIREWALL
		meta l4proto != esp  mark and 0x20000 == 0x20000 counter packets 0 bytes 0 drop
	}

	chain INPUT {
		type filter hook input priority filter; policy accept;
		 counter packets 35869492 bytes 5971008896 jump KUBE-NODEPORTS
		ct state new  counter packets 390938 bytes 23457377 jump KUBE-EXTERNAL-SERVICES
		counter packets 36249774 bytes 6030068622 jump KUBE-FIREWALL
		meta l4proto tcp ip daddr 127.0.0.1 tcp dport 6784 fib saddr type != local ct state != related,established  counter packets 0 bytes 0 drop
		iifname "weave" counter packets 907273 bytes 88697229 jump WEAVE-NPC-EGRESS
		counter packets 34809601 bytes 5818213726 jump WEAVE-IPSEC-IN
	}

	chain KUBE-KUBELET-CANARY {
	}

	chain KUBE-PROXY-CANARY {
	}

	chain KUBE-EXTERNAL-SERVICES {
	}

	chain KUBE-NODEPORTS {
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
	}

	chain KUBE-SERVICES {
	}

	chain KUBE-FORWARD {
		ct state invalid counter packets 0 bytes 0 drop
		 mark and 0x4000 == 0x4000 counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
	}

	chain WEAVE-NPC-INGRESS {
	}

	chain WEAVE-NPC-DEFAULT {
		# match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst  counter packets 14 bytes 840 accept
		# match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst  counter packets 0 bytes 0 accept
		# match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst  counter packets 0 bytes 0 accept
		# match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst  counter packets 0 bytes 0 accept
		# match-set weave-iLgO^}{o=U/*%KE[@=W:l~|9T dst  counter packets 9 bytes 540 accept
	}

	chain WEAVE-NPC {
		ct state related,established counter packets 124 bytes 28478 accept
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 accept
		# PHYSDEV match --physdev-out vethwe-bridge --physdev-is-bridged counter packets 3 bytes 180 accept
		ct state new counter packets 23 bytes 1380 jump WEAVE-NPC-DEFAULT
		ct state new counter packets 0 bytes 0 jump WEAVE-NPC-INGRESS
	}

	chain WEAVE-NPC-EGRESS-ACCEPT {
		counter packets 48 bytes 3769 meta mark set mark or 0x40000 
	}

	chain WEAVE-NPC-EGRESS-CUSTOM {
	}

	chain WEAVE-NPC-EGRESS-DEFAULT {
		# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src  counter packets 0 bytes 0 return
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 return
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src  counter packets 0 bytes 0 return
		# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 return
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 return
	}

	chain WEAVE-NPC-EGRESS {
		ct state related,established counter packets 907425 bytes 88746642 accept
		# PHYSDEV match --physdev-in vethwe-bridge --physdev-is-bridged counter packets 0 bytes 0 return
		fib daddr type local counter packets 11 bytes 640 return
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ct state new counter packets 50 bytes 3961 jump WEAVE-NPC-EGRESS-DEFAULT
		ct state new mark and 0x40000 != 0x40000 counter packets 2 bytes 192 jump WEAVE-NPC-EGRESS-CUSTOM
	}

	chain WEAVE-IPSEC-IN {
	}

	chain WEAVE-CANARY {
	}
}
table ip mangle {
	chain KUBE-KUBELET-CANARY {
	}

	chain PREROUTING {
		type filter hook prerouting priority mangle; policy accept;
	}

	chain INPUT {
		type filter hook input priority mangle; policy accept;
		counter packets 35716863 bytes 5906910315 jump WEAVE-IPSEC-IN
	}

	chain FORWARD {
		type filter hook forward priority mangle; policy accept;
	}

	chain OUTPUT {
		type route hook output priority mangle; policy accept;
		counter packets 35804064 bytes 5938944956 jump WEAVE-IPSEC-OUT
	}

	chain POSTROUTING {
		type filter hook postrouting priority mangle; policy accept;
	}

	chain KUBE-PROXY-CANARY {
	}

	chain WEAVE-IPSEC-IN {
	}

	chain WEAVE-IPSEC-IN-MARK {
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	}

	chain WEAVE-IPSEC-OUT {
	}

	chain WEAVE-IPSEC-OUT-MARK {
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	}

	chain WEAVE-CANARY {
	}
}

Wow. That’s a lot of nftables entries, but it explains what’s going on. We have a nat entry for:

meta l4proto tcp ip daddr 10.107.66.138 tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD

which ends up going to KUBE-SEP-LYLDBZYLHY4MT3AQ and:

meta l4proto tcp counter packets 1 bytes 60 dnat to 192.168.0.4:8080

So packets headed for our echoserver eventually end up in a container with a local IP address of 192.168.0.4, which is reachable via the weave interface we saw in the routing table. Mystery explained. We can see the ingress for the externally visible HTTP service as well:

meta l4proto tcp ip daddr 192.168.53.147 tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK

which ends up redirected to:

meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:80
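
As an aside, rather than dumping the whole ruleset and searching through it, nft will list a single chain at a time, which makes chasing these jumps rather less painful. Something like the following should do it (the KUBE-SVC/KUBE-SEP chain names are generated by kube-proxy, so they’ll be different on any other cluster):

nft list chain ip nat KUBE-SERVICES
nft list chain ip nat KUBE-SVC-666FUMINWJLRRQPD
nft list chain ip nat KUBE-SEP-LYLDBZYLHY4MT3AQ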

So from those rules we’d expect the IP inside the echoserver pod to be 192.168.0.4 and the IP inside our nginx ingress pod to be 192.168.0.5. Let’s look:

root@udon:/# docker ps | grep echoserver
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run.…"   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
root@udon:/# docker exec -it 7cbb177bee18 /bin/bash
root@hello-node-59bffcc9fd-8hkgb:/# awk '/32 host/ { print f } {f=$2}' <<< "$(</proc/net/fib_trie)" | sort -u
127.0.0.1
192.168.0.4

It’s a slightly awkward method of determining the local IP addresses due to the stripped-down nature of the container, but it clearly shows the expected 192.168.0.4 address.
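
If that feels like too much hard work, there are a couple of less fiddly ways to get at the same information; a sketch (the container ID is obviously specific to this host, and the first command assumes kubectl is configured for the cluster):

# ask Kubernetes for the pod IPs directly
kubectl get pods -o wide

# or run ip(8) from the host inside the container’s network namespace
nsenter -t "$(docker inspect -f '{{.State.Pid}}' 7cbb177bee18)" -n ip -brief addr

Both should agree on the 192.168.0.4 address.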

I’ve touched here upon the ability to actually enter a container and have a poke around its running environment by using docker directly. The next step is to use that to investigate what containers have actually been spun up and what they’re doing. I’ll also revisit networking when I get to the point of building a multi-node cluster, to examine how the bridging between different hosts is done.
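
For reference when I do get to that, kubelet names its Docker containers with a k8s_ prefix (visible in the docker ps output above), so a rough inventory of what’s been spun up is just a filtered docker ps; a sketch:

# list the kubelet-managed containers with their pod-derived names
docker ps --filter name=k8s_ --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'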

28 May, 2021 06:43AM