October 22, 2021

Enrico Zini

Scanning for imports in Python scripts

I had to package a nontrivial Python codebase, and I needed to put dependencies in setup.py.

I could do git grep -h import | sort -u, then review the output by hand, but I lacked the motivation for it. Much better to take a stab at solving the general problem.

The result is at https://github.com/spanezz/python-devel-tools.

One fun part is scanning a directory tree, using ast to find import statements scattered around the code:

class Scanner:
    def __init__(self):
        self.names: Set[str] = set()

    def scan_dir(self, root: str):
        for dirpath, dirnames, filenames, dir_fd in os.fwalk(root):
            for fn in filenames:
                if fn.endswith(".py"):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        self.scan_file(fd, os.path.join(dirpath, fn))
                    # Already scanned: skip the shebang check so executable
                    # .py files are not parsed twice
                    continue
                st = os.stat(fn, dir_fd=dir_fd)
                if st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    with dirfd_open(fn, dir_fd=dir_fd) as fd:
                        try:
                            lead = fd.readline()
                        except UnicodeDecodeError:
                            continue
                        if re_python_shebang.match(lead):
                            fd.seek(0)
                            self.scan_file(fd, os.path.join(dirpath, fn))

    def scan_file(self, fd: TextIO, pathname: str):
        log.info("Reading file %s", pathname)
        try:
            tree = ast.parse(fd.read(), pathname)
        except SyntaxError as e:
            log.warning("%s: file cannot be parsed", pathname, exc_info=e)
            return

        self.scan_tree(tree)

    def scan_tree(self, tree: ast.AST):
        for stm in tree.body:
            if isinstance(stm, ast.Import):
                # "import a, b.c" carries one alias per imported module
                for alias in stm.names:
                    if not isinstance(alias.name, str):
                        print("NAME", repr(alias.name), stm)
                    self.names.add(alias.name)
            elif isinstance(stm, ast.ImportFrom):
                # "from a.b import c": record only the module part
                if stm.module is not None:
                    self.names.add(stm.module)
            elif hasattr(stm, "body"):
                # Recurse into compound statements (def, class, if, with, …);
                # imports hiding in else/except/finally branches are not visited
                self.scan_tree(stm)
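
The excerpt references a few names defined elsewhere in the script: the imports, the shebang regular expression and the dirfd_open helper. A minimal sketch of what they could look like — the regex and the helper body are my guesses, not necessarily what the repository uses (the grouping snippet below additionally needs collections, importlib.util, sys, and typing's Dict and List):

import ast
import logging
import os
import re
import stat
from typing import Set, TextIO

log = logging.getLogger()

# Guess: match "#!/usr/bin/python3", "#!/usr/bin/env python" and similar
re_python_shebang = re.compile(r"^#!.*\bpython")


def dirfd_open(fn: str, dir_fd: int) -> TextIO:
    """Open fn as text, relative to an open directory file descriptor."""
    def opener(path, flags):
        return os.open(path, flags, dir_fd=dir_fd)
    return open(fn, opener=opener)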

Another fun part is grouping the imported module names by where in sys.path they have been found:

    scanner = Scanner()
    scanner.scan_dir(args.dir)

    # Make the scanned tree importable, so find_spec() can resolve its modules
    sys.path.append(args.dir)
    by_sys_path: Dict[str, List[str]] = collections.defaultdict(list)
    for name in sorted(scanner.names):
        spec = importlib.util.find_spec(name)
        if spec is None or spec.origin is None:
            by_sys_path[""].append(name)
        else:
            for sp in sys.path:
                if spec.origin.startswith(sp):
                    by_sys_path[sp].append(name)
                    break
            else:
                # No sys.path entry matched: group under the origin itself
                # (this is how "built-in" ends up as a group)
                by_sys_path[spec.origin].append(name)

    for sys_path, names in sorted(by_sys_path.items()):
        print(f"{sys_path or 'unidentified'}:")
        for name in names:
            print(f"  {name}")

An example. It's kind of nice how it can at least tell apart stdlib modules so one doesn't need to read through those:

$ ./scan-imports …/himblick
unidentified:
  changemonitor
  chroot
  cmdline
  mediadir
  player
  server
  settings
  static
  syncer
  utils
…/himblick:
  himblib.cmdline
  himblib.host_setup
  himblib.player
  himblib.sd
/usr/lib/python3.9:
  __future__
  argparse
  asyncio
  collections
  configparser
  contextlib
  datetime
  io
  json
  logging
  mimetypes
  os
  pathlib
  re
  secrets
  shlex
  shutil
  signal
  subprocess
  tempfile
  textwrap
  typing
/usr/lib/python3/dist-packages:
  asyncssh
  parted
  progressbar
  pyinotify
  setuptools
  tornado
  tornado.escape
  tornado.httpserver
  tornado.ioloop
  tornado.netutil
  tornado.web
  tornado.websocket
  yaml
built-in:
  sys
  time

Maybe such a tool already exists and works much better than this? From a quick search I didn't find it, and it was fun to (re)invent it.

22 October, 2021 08:52PM

October 21, 2021

Norbert Preining

KDE/Plasma 5.23 “25th Anniversary Edition” for Debian

Last week, KDE released version 5.23 – 25th Anniversary Edition – of the Plasma desktop with the usual long list of updates and improvements. This release celebrates 25 years of KDE: Plasma 5.23.0 was released on the very day that, 25 years earlier, Matthias Ettrich sent an email to the de.comp.os.linux.misc newsgroup explaining a project he was working on. And Plasma 5.23 (with the bug-fix release 5.23.1) is now available for all Debian releases. (And don’t forget KDE Gears/Apps 21.08!)

As usual, I am providing packages via my OBS builds. If you have used my packages till now, you only need to change the plasma522 line to read plasma523. To give full details, I repeat (and update) the instructions for everyone here: first of all, you need to add my OBS key, for example as /etc/apt/trusted.gpg.d/obs-npreining.asc, and add a file /etc/apt/sources.list.d/obs-npreining-kde.list containing the following lines, replacing the DISTRIBUTION part with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps2108/DISTRIBUTION/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other/DISTRIBUTION/ ./
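
To fetch the key, something like the following should work (a sketch: the Release.key path assumes the standard OBS repository layout; adjust the distribution part to match yours):

curl -fsSL https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/Debian_11/Release.key | sudo tee /etc/apt/trusted.gpg.d/obs-npreining.asc
sudo apt update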

The sharp-eyed might also have spotted the apps2108 line: yes, the KDE Gear suite of packages has been updated to 21.08 some time ago and is also available in my OBS builds (and in Debian/experimental).

Uploads to Debian

Plasma 5.23.0 has been uploaded to Debian and is currently in transition to testing. Due to incorrect/insufficient Breaks/Depends, Debian/testing with the official Plasma packages is currently broken. By the looks of it, this situation will continue for a considerable time: kwin is blocked by mesa, which in turn is blocked by llvm-toolchain-12, which has quite a few RC bugs preventing it from transitioning. What a bad coincidence.

KDE Gears 21.08 are all in Debian Unstable and Testing, so the repositories here are mostly for users of Debian Bullseye (stable).

Krita beta

Krita has released the second beta of Krita 5.0, and this is available from the krita-beta repository, but only for amd64 architectures. Just add

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/krita-beta/DISTRIBUTION/ ./

Enjoy the new Plasma!

21 October, 2021 12:31AM by Norbert Preining

October 20, 2021

Ian Jackson

Going to work for the Tor Project

I have accepted a job with the Tor Project.

I joined XenSource to work on Xen in late 2007, as XenSource was being acquired by Citrix. So I have been at Citrix for about 14 years. I have really enjoyed working on Xen. There have been a variety of great people. I'm very proud of some of the things we built and achieved. I'm particularly proud of being part of a community that has provided the space for some of my excellent colleagues to really grow.

But the opportunity to go and work on a project I am so ideologically aligned with, and to work with Rust full-time, is too good to resist. That Tor is mostly written in C is quite terrifying, and I'm very happy that I'm going to help fix that!




20 October, 2021 05:55PM

Arturo Borrero González

Iterating on how we do NFS at Wikimedia Cloud Services

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

NFS is a central piece of infrastructure that is essential to services like Toolforge. Recently, the Cloud Services team at Wikimedia had been reviewing how we do NFS.

The current situation

NFS is a central piece of technology for some of the services that the Wikimedia Cloud Services team offers to the community. We have several shares that power different use cases: Toolforge user home directories live on NFS, and Cloud VPS users can also access dumps using this protocol. The current setup involves several physical hardware servers, with about 20TB of storage, offering shares over 10G links to the cloud. For the system to be more fault-tolerant, we duplicate each share for redundancy using DRBD.

Running NFS on dedicated hardware servers has traditionally offered us advantages: mostly on the performance and the capacity fields.

As time has passed, we have been accumulating more and more reasons to review how we do NFS. For one, the current setup violates some of our internal rules regarding realm separation. Additionally, we had been longing for more flexibility in managing our servers: we wanted to use virtual machines managed by Openstack Nova. The DRBD-based high-availability system required a mostly hand-crafted procedure for failover/failback. There are also scalability concerns: NFS is easy to scale up, but not to scale horizontally, and of course we have to be able to keep the tenancy setup while doing so, something NFS does by using LDAP/Unix users and which may also get complicated when growing. In general, the servers have become ‘too big to fail’, clearly technical debt, and it has taken us years to decide to take on the task of rethinking the architecture. It’s worth mentioning that in an ideal world we wouldn’t depend on NFS, but the truth is that it will remain a central piece of infrastructure for years to come in services like Toolforge.

Over a series of brainstorming meetings, the WMCS team evaluated the situation and sorted out the many moving parts. The team managed to boil down the potential service future to two competing options:

  • Adopt and introduce a new Openstack component into our cloud: Manila — this was the right choice if we were interested in a general NFS as a service offering for our Cloud VPS users.
  • Put the data on Cinder volumes and serve NFS from a couple of virtual machines created by hand — this was the right choice if we wanted something that required low effort to engineer and adopt.

Then we decided to research both options in parallel. For a number of reasons, the evaluation was timeboxed to three weeks. Both ideas had a couple of points in common: the NFS data would be stored on our Ceph farm via Cinder volumes, and we would rely on Ceph’s reliability to avoid using DRBD. Another open topic was how to back up the data from Ceph, so as to store our important bits in more than one basket. We will get to the backup topic later.

The manila experiment

The Wikimedia Foundation was an early adopter of some Openstack components (Nova, Glance, Designate, Horizon), but Manila had never been evaluated for usage until now. Our approach for this experiment was to closely follow the upstream guidelines. We read the documentation and tried to understand the different setups you can build with Manila. As we often feel with other Openstack components, the documentation doesn’t perfectly describe how to introduce a given component into your particular local setup. Here we use an admin-controlled flat-topology Neutron network. This network is shared by all tenants (or projects) in our Openstack deployment. Also, Manila can use many different driver backends, for things like NetApp or CephFS (which we don’t use… yet). After some research, the generic driver was the one that seemed to better fit our use case. The generic driver leverages Nova virtual machine instances plus Cinder volumes to create and manage the shares. In general, Manila supports two operational modes, depending on whether it should create and destroy the share servers (i.e., the virtual machine instances) itself or not. This option is called driver_handles_share_servers (DHSS for short) and takes a boolean value.

We were interested in trying with DHSS=true, to really benefit from the potential of the setup.

Manila diagram NFS idea 6, original image in Wikitech

So, after sorting all these variables, we moved on with our initial testing. We built a PoC setup as depicted in the diagram above, with the manila-share component running in a virtual machine inside the cloud. The PoC led to us reporting several bugs upstream:

In some cases we tried to address these bugs ourselves:

It’s worth mentioning that the upstream community was extra-welcoming to us, and we’re thankful for that. However, at the end of our three-week period, our Manila setup still wasn’t working as expected. Your experience may change with other drivers—perhaps the ZFSonLinux or the CephFS ones. In general, we were having trouble making the setup work as expected, so we decided to abandon this approach in favor of the other option we were considering at the beginning.

Simple virtual machine serving NFS

The alternative was to create a Nova virtual machine instance by hand and to configure it using puppet. We have been investing in an automation framework lately, so the idea is to not actually create the server by hand. Anyway, the data would be decoupled from the instance into Cinder volumes, which led us to the question we left for later: How should we back up those terabytes of important information? Just to be clear, the backup problem was independent of the above options; with Manila we would still have had to solve the same challenge. We would like to see our data be backed up somewhere else other than in Ceph. And that’s exactly where we are at right now. We’ve been exploring different backup strategies and will finally use the Cinder backup API.

Conclusion

The iteration will end with the dedicated NFS hardware servers being stopped, and the shares being served from within the cloud. The migration will take some time to happen because we will check and double-check that everything works as expected (including from the performance point of view) before making definitive changes. We already have some plans to make sure our users experience as little service impact as possible. The most troublesome shares will be those related to Toolforge. At some point we will need to disallow writes to the NFS share, rsync the data out of the hardware servers into the Cinder volumes, point the NFS clients to the new virtual machines, and then enable writes again. The main Toolforge share has about 8TB of data, so this will take a while.

We will have more updates in the future. Who knows, perhaps our next-next iteration, in a couple of years, will see us adopting Openstack Manila for good.

Featured image credit: File:(from break water) Manila Skyline – panoramio.jpg, ewol, CC BY-SA 3.0

This post was originally published in the Wikimedia Tech blog, authored by Arturo Borrero Gonzalez.

20 October, 2021 09:47AM

Russell Coker

Strange Apache Reload Issue

I recently had to renew the SSL certificate for my web server. Nothing exciting about that, but Certbot created a new directory for the key because I had removed some domains (they moved to a different web server). This normally isn’t a big deal: change the Apache configuration to the new file names and run the “reload” command. My monitoring system initially said that the SSL certificate wasn’t going to expire in the near future, so it looked fine. Then an hour later my monitoring system told me that the certificate was about to expire; apparently the old certificate had come back!

I viewed my site in my web browser and the new certificate was being used, which seemed strange. Then I did more tests with gnutls-cli, which revealed that exactly half the connections got the new certificate and half got the old one. Because my web server isn’t doing anything particularly demanding, the mpm_event configuration only starts two server processes, and even that may be excessive for what it does. So it seems that the Apache reload command had reloaded the configuration in one mpm_event process but not the other!
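
This kind of check is easy to script; a rough Python equivalent of the repeated gnutls-cli probe (the host name is a placeholder):

import socket
import ssl

HOST = "www.example.com"  # placeholder host

# Each new TCP connection may land on a different worker process; if a
# reload only reached some workers, the expiry dates printed will differ.
ctx = ssl.create_default_context()
for i in range(10):
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(i, tls.getpeercert()["notAfter"])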

Fortunately this was easy to test for, and it was tested automatically. If the change that didn’t get applied had been something small, this would have been a particularly insidious bug.

I haven’t yet tried to reproduce this. But if I get the time I’ll do so and file a bug report.

20 October, 2021 01:41AM by etbe

October 19, 2021

Dirk Eddelbuettel

RVowpalWabbit 0.0.16: One More CRAN Request

Another maintenance RVowpalWabbit release brings us to version 0.0.16 on CRAN. This is the last package for which configure.ac needed an update to current standards (see the updates of corels, RcppGSL, RQuantLib, and littler). To make matters more interesting, we also had to address one UBSAN issue we could not reproduce locally (which, it turns out, was our fault because we had not rebuilt one package dependency under UBSAN). But Prof Ripley confirmed the issue as addressed, so all is good for now.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into Vowpal Wabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 October, 2021 02:16AM

October 18, 2021

Gunnar Wolf

raspi.debian.net now hosted on Debian infrastructure

So, since I registered the URL for serving the unofficial Debian images for the Raspberry computers, raspi.debian.net, in April 2020, I had been hosting it in my Dreamhost webspace.

Over two years ago –yes, before I finished setting it up in Dreamhost– Steve McIntyre approached me and invited me to host the images under the Debian cdimages user group. I told him I’d first just get the setup running, and later I would approach him for finalizing the setup.

Then, I set up the build on my own server, hosted on my Dreamhost account… and forgot about it for many months. Last month, a not particularly happy flamewar in debian-arm@lists.debian.org finished with me stating I would be moving the hosting to Debian infrastructure soon.

Well… It took me a bit over a month to get this sorted out, together with several days of half-broken links, but it is finally done: raspi.debian.net is a CNAME for ftp.acc.umu.se, which is the same system that hosts cdimage.debian.org.

And, of course — it is also reachable as https://cdimage.debian.org/cdimage/unofficial/raspi/ — looks more official, but is less memorable 😉

Thanks a lot to Steve for the nudging, and to maswan for helping finalize the setup.

What next? Well, the images are being built on my server. I’d love to move the builder over to Debian machines as well. When? How? That’s still in the air.

18 October, 2021 06:49PM

Russ Allbery

rra-c-util 10.0

It's been a while since I pushed out a release of my collection of utility libraries and test suite programs, so I've accumulated quite a lot of changes. Here's a summary; for more, see the NEWS file.

  • Remove RRA_SET_LIBDIR macro, previously used by PAM modules to guess the install path.
  • Fix RRA_SET_LDFLAGS to always use the multilib directory if it exists, required for OpenSSL 3.0 support.
  • Perl tests and test support libraries now require Perl 5.10 or later.
  • Update RRA_LIB_PYTHON to use sysconfig by preference.
  • Fix SUN_LEN on macOS X Big Sur.
  • Add support for testing pam_end arguments to the PAM testing framework.
  • Fix error handling in portable k_haspag replacement if getgroups fails.
  • Avoid deprecation warnings in Autoconf macros from Autoconf 2.71.
  • Work around fallout for Perl::Critic tests from the renaming of Perl::Critic::Freenode to Perl::Critic::Community, and suppress warnings that I haven't found useful.
  • Tweak the perltidy configuration.
  • Add a new t/docs/changes.t Perl module test for the format of Changes files.

You can get the latest version from the rra-c-util distribution page.

18 October, 2021 04:57AM

Dirk Eddelbuettel

dang 0.0.14: Several Updates

A new release of the dang package arrived at CRAN a couple of hours ago, exactly eight months after the previous release. The dang package regroups a few functions of mine that had no other home, for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor from the last release, as well as the checkCRANStatus() function recently tweeted about by Tim Taylor.

This release regroups a few small edits to several functions, adds a sample function for reading and converting character encodings using a library already used by R (hence “look Ma, no new depends”), adds a weekday helper, a sample usage (computing rolling min/max values) of a new simple vector class added to tidyCpp (the function and class deserve another blog post or study …), and an experimental git sha1sum and date marker (as I am not a fan of autogenerated binaries from repos, as opposed to marked releases, meaning we may see different binary releases with the same version number).

The full NEWS entry follows.

Changes in version 0.0.14 (2021-10-17)

  • Updated continuous integration to run on Linux only.

  • Edited checkNonAscii.cpp for readability.

  • More robust title display in intradayMarketMonitor.R.

  • New C++-based function to read and convert encoding via the R-supplied iconv library, noted a potential variability.

  • New function wday returning day of the week as integer.

  • The signature to as.data.table was standardized.

  • A new function rollMinMax was added illustrating use of the NumVec class from tidyCpp.

  • The configure script can record the last commit date and sha1 to automate timestamping builds, but not activated in this release.

  • checkCRANStatus() now works correctly for single-package lookups (Jordan Mark Barbone in #4).

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 October, 2021 01:41AM

October 17, 2021

Russ Allbery

pam-krb5 4.11

The primary change in this release of my Kerberos PAM module is support for calling pam_end with PAM_DATA_SILENT. I had not known that the intent of this flag was to signal that only process resources were being cleaned up and external resources should not be (in part because an older version of the man page doesn't make this clear).

This flag is used when a process forks with an open PAM library handle and wants to clean it up in the child process. In previous versions, this would delete the user's ticket cache, which is not the desired behavior. This version correctly leaves the ticket cache alone.

The implementation required some improvements to the PAM testing framework to support this case as well.

The other significant change in this release is that the build system no longer attempts to guess the correct PAM module installation path and instead documents that to install the module in a Linux system PAM module path, you will probably need to set --libdir explicitly. The logic used to decide between Debian and Red Hat multiarch paths broke in the presence of Debian usrmerge systems and was incredibly fragile even before that, so I've now dropped it completely.

You can get the latest version from the pam-krb5 distribution page.

17 October, 2021 11:00PM

October 15, 2021

Sven Hoexter

ThinkPad P15v Gen1, Xorg and a Samsung QHD Display

I wasted quite some hours until I found a working Modeline in this Stack Exchange post, so the ThinkPad now works with an HDMI-attached Samsung QHD display.

The internal display of the ThinkPad is an FHD panel detected as eDP-1; the external one is DP-3 and, according to the packaging, known by Samsung as S24A600NWU. The auto-detected EDID modes for QHD (2560x1440) did not work at all; the display simply stayed dark. After a lot of back and forth with the i915 driver vs. nouveau vs. nvidia/nvidia-drm, with and without modesetting, the following Modeline did the magic:

xrandr --newmode 2560x1440_54.97  221.00  2560 2608 2640 2720  1440 1443 1447 1478  +HSync -VSync
xrandr --addmode DP-3 2560x1440_54.97
xrandr --output DP-3 --mode 2560x1440_54.97 --right-of eDP-1 --primary

Modelines for 50Hz and 60Hz generated with cvt 2560 1440 60 did not work, and neither did the one extracted with edid-decode -X from the hex blob found in .local/share/xorg/Xorg.0.log.

Of the auto-detected Modelines, FHD (1920x1080) did work. In case someone struggles with a similar setup, that might be a starting point. Fun part: if I attach my several-years-old Dell E7470, everything is just fine out of the box. But that one only has an Intel GPU, and not the unholy combination I've got here:

$ lspci|grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation CometLake-H GT2 [UHD Graphics] (rev 05)
01:00.0 3D controller: NVIDIA Corporation GP107GLM [Quadro P620] (rev ff)

15 October, 2021 02:19PM

Adnan Hodzic

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

15 October, 2021 01:20PM by admin

October 13, 2021

Dirk Eddelbuettel

GitHub Streak: Round Eight

Seven years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld’s secret to productivity: Just keep at it. Don’t break the streak.

and then showed the first chart of GitHub streaking 366 days:

github activity october 2013 to october 2014

And six years ago a first follow-up appeared in this post about 731 days:

github activity october 2014 to october 2015

And five years ago we had a followup at 1096 days

github activity october 2015 to october 2016

And four years ago we had another one marking 1461 days

github activity october 2016 to october 2017

And three years ago another one for 1826 days

github activity october 2017 to october 2018

And two years ago another one bringing it to 2191 days

github activity october 2018 to october 2019

And last year another one bringing it to 2557 days

github activity october 2019 to october 2020

And as today is October 12, here is the newest one from 2020 to 2021 with a new total of 2922 days:

github activity october 2020 to october 2021

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 02:44AM

RcppQuantuccia 0.0.4 on CRAN: Updated Calendar

A new release of RcppQuantuccia arrived on CRAN earlier today. RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release is the first in two years and brings a few internal updates (such as a switch of the continuous integration to the trusted r-ci setup) along with a first update of the United States calendar. Which, just like RQuantLib, now knows about two new calendars, LiborUpdate and FederalReserve. So now we can, for example, look for holidays during June of next year under the ‘Federal Reserve’ calendar and see

> library(RcppQuantuccia)
> setCalendar("UnitedStates/FederalReserve")
> getHolidays(as.Date("2022-06-01"), as.Date("2022-06-30"))
[1] "2022-06-20"
> 

that Juneteenth 2022 will be observed on (Monday) June 20th.

We should note that Quantuccia itself was a bit of a trial balloon and is not actively maintained so we may concentrate on these calendaring functions to keep them in sync with QuantLib. Being a header-only subset is good, and the removal of the (very !!) “expensive” (in terms of compiled library size) Sobol sequence-based RNG in release 0.0.3 was the right call. So time permitting, a leaner, meaner RcppQuantuccia with a calendaring focus may emerge.

The complete list of changes follows.

Changes in version 0.0.4 (2021-10-12)

  • Allow for 'Null' calendar without weekends or holidays

  • Switch CI use to r-ci

  • Updated UnitedStates calendar to current QuantLib calendar

  • Small updates to DESCRIPTION and README.md

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2021 12:38AM

October 12, 2021

Steinar H. Gunderson

Apache bug with mpm-itk

It seems there's a bug in Apache 2.4.49 (or newer) and mpm-itk; any forked child will segfault instead of exiting cleanly. This is, well, aesthetically not nice, and also causes problems with exit hooks for certain modules not being run.

It seems Apache upstream is on the case; from my limited understanding of the changes, there's not a lot mpm-itk as an Apache module can do here, so we'll just have to wait for upstream to deal with it. I hope we can get whatever fix in as a regression update to bullseye-security, though :-)

12 October, 2021 10:23PM

October 11, 2021

Free Software Fellowship

OSCAL, Open Labs, Mozilla & grooming women for Outreachy

Monday, 11 October is the UN's International Day for the Girl Child. The definition of Girl Child varies from country to country. In the law of Albania, the age of consent can be as low as 14 if you can convince (bribe) a judge that a 14 year old girl has achieved sexual maturity. Legal text.

What does it look like when children aged 16 and 17 make software for big corporations to use?

Today we find out

Here is a team photo from Ura Design, one of the Albanian companies using the hackerspace as a front for people trafficking. In the team we see a former Outreachy, Renata Gegaj beside a future Outreachy, Anja Xhakani.

Ura Design, Elio Qoshi, Renata Gegaj, Anja Xhakani, Ergi Shkelzeni, Anxhelo Lushka

Anja started with Open Labs when she was 16, we see a younger photo in her profile on the Open Labs forum:

Anja Xhakani

Here is a copy of the profile photo:

Anja Xhakani

She began dating the director of Ura Design, a former Open Labs board member and Mozilla Tech Speaker, Elio Qoshi, before turning 18:

Elio Qoshi, Mozilla Tech Speaker

Anja and other teenage girls follow the men around at OSCAL. They have to wear red uniforms, just like the women in the Handmaid's Tale and they have to stand so the men can sit with their laptops.

Anja Xhakani

Insurance company Munich Re admitted their salesmen from Germany were rewarded with a "sex junket" to Budapest. Women at the event were colour coded with ribbons and they were stamped each time somebody took ownership of them. In Albania we also find youngsters wearing ribbons and stamps:

Mozilla, branding women, children

When a 16 year old girl child comes to the hackerspace and hears about other women winning tickets to Brazil and Outreachy scholarships that pay more money in 12 weeks than most Albanian women earn in 12 months, the girl may want to look at their code and see how those women were selected.

Debian Community News recently revealed that the Github profiles of the six Albanian women already selected for Outreachy and Google Summer of Code (GSoC) are rather stagnant. When a 16 year old girl child comes to the hackerspace and sees this example, why would she be motivated to learn development skills and publish her work?

Anja Xhakani, Github

The FSFE have recently announced their Youth Hacking 4 Freedom program. The five month unpaid internships, disguised as a competition, have been denounced as child labor. FSFE targets the same group as the Albanian laws on statutory rape, youth between 14 and 18 years of age. Matthias Kirschner has written in the announcement:

we would like to encourage teenagers of all genders to join!

Here is the photo of the FSFE President Matthias Kirschner with young women at OSCAL in Tirana, Albania:

Matthias Kirschner, women, Tirana, Albania, FSFE

It is not only teenage girls. Ura Design and other Albanian companies used the Open Labs front organization to exploit the 16 year old sysadmin Boris Budini. Older IT professionals know what is really going on at Open Labs and refuse to work there for free so Elio and Redon have to scrape the bottom of the barrel looking for 16 year olds. Here is Boris Budini (16 years in OSCAL2018) with Mariana Balla (Fedora/Red Hat/IBM) and Redon Skikuli:

Boris Budini, Mariana Balla, Redon Skikuli

Chris Lamb from Debian appears to be completely comfortable with these people, another photo from OSCAL17 shows Redon Skikuli, Chris Lamb and Italo Vignoli planning to bring another event, LibreOffice Conference (LibOcon) to Albania so the developers can enjoy the Albanian hospitality:

Redon Skikuli, Chris Lamb, Italo Vignoli, OSCAL, Tirana, Albania

When men gain positions of power, they feel a sense of entitlement. There is nothing new about this.

Prince Andrew, Virginia Giuffre

11 October, 2021 08:30PM

October 10, 2021

Ben Hutchings

Debian LTS work, September 2021

In September I was assigned 12.75 hours of work by Freexian's Debian LTS initiative and carried over 18 hours from earlier months. I worked 2 hours and will carry over the remainder.

I started work on an update to the linux package, but did not make an upload yet.

10 October, 2021 12:03PM

Norbert Preining

TeX Live contrib archive available via CTAN mirrors

The TeX Live contrib repository has for many years been a valuable source of packages that cannot enter TeX Live proper due to license restrictions and the like. I took over its maintenance from Taco in 2017, and since then the repository has been available via my server. For a few weeks now, tlcontrib has also been available via the CTAN mirror network, the Comprehensive TeX Archive Network.

Thanks to the CTAN team, who offered to mirror tlcontrib, users can get much faster (and more reliable) access via the mirrors by adding tlcontrib as an additional repository source for tlmgr, either permanently via:

tlmgr repository add https://mirrors.ctan.org/systems/texlive/tlcontrib tlcontrib

or via a one-shot

tlmgr --repository https://mirrors.ctan.org/systems/texlive/tlcontrib install PACKAGE

The list of packages can be seen here, and includes, among others:

  • support for commercial fonts (lucida, garamond, …)
  • Noto condensed
  • various sets of programs around acrotex

(and much more!).

You can install all packages from the repository by installing the new collection-contrib.

Thanks to the whole CTAN team, and please switch your repositories to the CTAN mirror to take load off my server. Thanks a lot!

Enjoy.

10 October, 2021 03:39AM by Norbert Preining

October 09, 2021

Debian Community News

Karen Sandler, Outreachy & Debian Money in Albania

Photos contributed by Aigars Mahinovs.

We found the six Albanian/Kosovan women who received GSoC & Outreachy money, tickets to DebConf and many other events following the former leader around Europe.

These are snapshots of their Github activity for the last 12 months. We decided not to write their names.

We don't want to vilify these women. We want to ask who decided to spend over $30,000 on them without any plan?

Outreachy

Google Summer of Code (GSoC)

Chris Lamb made it happen

Subject: Re: [TREASURER #2025] [sfconservancy.org #310] [Outreachy] Invoice for Software in the Public Interest, Inc - May 2018 to August 2018 round
Date: Sat, 12 May 2018 11:17:16 +0100
From: Chris Lamb <lamby@debian.org>
To: Martin Michlmayr via RT <treasurer@rt.spi-inc.org>, accounts-receivable@tix.sfconservancy.org
CC: deblanc@riseup.net, leader@debian.org, organizers@outreachy.org, outreach@debian.org

Martin,

> > > Thank you for Debian's continued sponsorship of Outreachy. Please find an
> > > invoice for this round's sponsorship attached. Payment options are listed on
> > > the second page. If you have any questions, please don't hesitate to contact
> > > us.
> >
> > Debian Outreach folks, please confirm this was approved by the DPL.
>
> lamby, can you please confirm that 1 Outreachy student (May-August) at
> $6500 is approved by you. I can't find an approval anywhere.

Not to complicate things any further (and I may be under-caffeinated at this point in the morning) but I remember being convinced by Karen at DC17 that Debian should sponsor two students, not one...

(ie. I think I'm missing something, so I'm not going to jump into an approval...)

Best wishes,

--
,''`.
: :' : Chris Lamb
`. `'` lamby@debian.org / chris-lamb.co.uk

DebConf19, Chris Lamb, lamby, outreachy candidate, dinner date

The comments that Sage Sharp sent to Chris Lamb in November 2017:

From: Sage Sharp
Date: November 2017

[... snip ...] While you may not have intended it this way, this sounds like favoritism, ...

[... snip ...] Again, this sounds a bit like favoritism of a person who is already active in open source.

[... snip ...] Again, this may be how your experience with GSoC works, but the goal of Outreachy is to try and get people who aren't currently open source contributors (or who are not as active contributors) to be accepted as Outreachy interns. That's why we have the rule that past GSoC or Outreachy applicants are ineligible to apply.

The woman never saw this email from Sharp. Lamb did see it. Why did he continue to lead this woman on for 2 years and eventually grant her an Outreachy internship? Some men like to have this power over women.

09 October, 2021 11:00PM

Thorsten Alteholz

My Debian Activities in September 2021

FTP master

This month I accepted 224 packages and rejected 47. This is almost three times the number of rejects of last month. Please be more careful and check your package twice before uploading. The overall number of packages that got accepted was 233.

Debian LTS

This was my eighty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 24.75h. During that time I did LTS and normal security uploads of:

  • [DLA 2755-1] btrbk security update for one CVE
  • [DLA 2762-1] grilo security update for one CVE
  • [DLA 2766-1] openssl security update for one CVE
  • [DLA 2774-1] openssl1.0 security update for one CVE
  • [DLA 2773-1] curl security update for two CVEs

I also started to work on exiv2 and faad2.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the thirty-ninth ELTS month.

Unfortunately during my allocated time I could not process any upload. I worked on openssl, curl and squashfs-tools but for one reason or another the prepared packages didn’t pass all tests. In order to avoid regressions, I postponed the uploads (meanwhile an ELA for curl was published …).

Last but not least I did some days of frontdesk duties.

Other stuff

On my neverending golang challenge I again uploaded some packages either for NEW or as source upload.

As OdyX took a break from all Debian activities, I volunteered to take care of the printing packages. Please be merciful if something breaks after one of my uploads. My first printing upload was hplip.

09 October, 2021 07:42PM by alteholz

Ritesh Raj Sarraf

Lotus to Lily

The Lotus story so far

My very first experience with water flowering plants was pretty good. I learnt a good deal of things; from setting up the pond, germinating the lotus seeds, setting up the right soil, witnessing the growth of the lotus plant, fish eco-system to take care of the pond. Overall, a lot of things learnt.

But I couldn’t succeed in getting the lotus to flower, for a lot of reasons. The granite container developed a leak, which I had to fix by emptying it, and that might have caused some shock to the lotus. But more than that, in my understanding, the reason for not being able to flower the lotus was the amount of sunlight. From what I have learned, these plants need a minimum of 6-8 hrs of sunlight to really reward you with flowers, whereas my pond was set up on the ground with hardly 3-4 hrs of sun. And that too, with all the plants growing, resulted in indirect sunlight.

Lotus to Lily

For my new setup, I chose a large oval container. And this one, I placed on my terrace, carefully choosing a spot where it’d get 6-8 hrs of very bright sun on usual days. Other than that, the rest of the setup is pretty similar to my previous setup in the garden. Guppies, Solar Water Fountain etc.

Initial lily pond setup

Initial lily pond setup

The good thing about the terrace is that the setup gets ample amount of sun. You can see that in the picture above, with the amount of algae that has been formed. Something that is vital for the plant’s ecosystem.

I must thank my wonderful neighbor, who kindly shared a sapling from their lily plant. They had already had success in flowering the lily, so I had high hopes of seeing the day when I’d be happy to write down my experience in this blog post. A lot of patience was needed, though: I got the lily some time in January this year, and it blossomed now, in October.

So, here’s me sharing my happiness here, in particular order of how I documented the process.

Monday morning greeted with a blossomed lily

Monday morning greeted with a blossomed lily

Lily Blossom Closeup

Lily Blossom Closeup

Beautiful water reflection

Beautiful water reflection

Dawn to Dusk

The other thing that I learned in this whole lily episode is that the flower goes back to sleep at dusk, and back to flowering again at dawn. There’s so much to learn in our surroundings, if only we spare some time for the little things of mother nature.

Lily status at dusk

Lily status at dusk

Lily the next day

Lily the next day

Not sure how long this phenomenon is to last, but overall witnessing this whole process has been mesmerizing.

This past week has been great.

09 October, 2021 04:22PM by Ritesh Raj Sarraf (rrs@researchut.com)

Dirk Eddelbuettel

corels 0.0.3 on CRAN: Update

An updated version of the corels package is now on CRAN!

The change is chiefly an updated configure.ac (just like RcppGSL yesterday, RQuantLib two days ago, and littler three days ago).

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 October, 2021 02:45AM

October 08, 2021

Neil Williams

Using Salsa with contrib and non-free

OK, I know contrib and non-free aren't popular topics with many, but I've had to sort out some simple CI for such contributions and I thought it best to document how to get it working. You will need access to the GitLab settings for the project in Salsa, or ask someone to add some CI/CD variables on your behalf. (If CI isn't running at all, the settings will need to be modified to enable debian/salsa-ci.yml first, in the same way as for packages in main.)

The default Salsa config (debian/salsa-ci.yml) won't get a passing build for packages in contrib or non-free:

# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

Variables need to be added. piuparts can use the extra contrib and non-free components directly from these variables.

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'

Many packages in contrib and non-free only support amd64 - so the i386 build job needs to be removed from the pipeline by extending the variables dictionary:

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1
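
Putting the pieces together, the complete debian/salsa-ci.yml for such a package combines the include block from above with these variables:

# For more information on what jobs are run see:
# https://salsa.debian.org/salsa-ci-team/pipeline
#
---
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml

variables:
   RELEASE: 'unstable'
   SALSA_CI_COMPONENTS: 'main contrib non-free'
   SALSA_CI_DISABLE_BUILD_PACKAGE_I386: 1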

The extra step is to add the apt source file variable to the CI/CD settings for the project.

The CI/CD settings are at a URL like:

https://salsa.debian.org/<team>/<project>/-/settings/ci_cd

Expand the section on Variables and add a File type variable:

Key: SALSA_CI_EXTRA_REPOSITORY

Value: deb https://deb.debian.org/debian/ sid contrib non-free

The pipeline will run at the next push - alternatively, the CI/CD pipelines page has a "Run Pipeline" button. The settings added to the main CI/CD settings will be applied, so there is no need to add a variable at this stage. (This can be used to test the variables themselves but only with manually triggered CI pipelines.)

For more information and additional settings (for example disabling or allowing certain jobs to fail), check https://salsa.debian.org/salsa-ci-team/pipeline

08 October, 2021 04:03PM by Neil Williams

Chris Lamb

Reproducible Builds: Increasing the Integrity of Software Supply Chains (2021)

I didn't blog about it at the time, but a paper I co-authored with Stefano Zacchiroli was accepted by IEEE Software in April of this year. Titled Reproducible Builds: Increasing the Integrity of Software Supply Chains, the abstract of the paper is as follows:

Although it is possible to increase confidence in Free and Open Source Software (FOSS) by reviewing its source code, trusting code is not the same as trusting its executable counterparts. These are typically built and distributed by third-party vendors with severe security consequences if their supply chains are compromised.

In this paper, we present reproducible builds, an approach that can determine whether generated binaries correspond with their original source code. We first define the problem and then provide insight into the challenges of making real-world software build in a "reproducible" manner — that is, when every build generates bit-for-bit identical results. Through the experience of the Reproducible Builds project making the Debian Linux distribution reproducible, we also describe the affinity between reproducibility and quality assurance (QA).

The full text of the paper can be found in PDF format and should appear, with an alternative layout, within a forthcoming issue of the physical IEEE Software magazine.

08 October, 2021 02:22PM

October 07, 2021


Matthew Palmer

Discovering AWS IAM accounts

Let’s say you’re someone who happens to discover an AWS account number, and would like to take a stab at guessing what IAM users might be valid in that account. Tricky problem, right? Not with this One Weird Trick!

In your own AWS account, create a KMS key and try to reference an ARN representing an IAM user in the other account as the principal. If the policy is accepted by PutKeyPolicy, then that IAM user exists, and if the error says “Policy contains a statement with one or more invalid principals” then the user doesn’t exist.

As an example, say you want to guess at IAM users in AWS account 111111111111. Then make sure this statement is in your key policy:

{
  "Sid": "Test existence of user",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111111111111:user/bob"
  },
  "Action": "kms:DescribeKey",
  "Resource": "*"
}

If that policy is accepted, then the account has an IAM user named bob. Otherwise, the user doesn’t exist. Scripting this is left as an exercise for the reader.
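
For the scripting, here is a rough boto3 sketch (untested; the key ID and ARNs are placeholders, and the submitted policy keeps a statement for your own account so the key stays manageable):

import json

import boto3

kms = boto3.client("kms")
KEY_ID = "00000000-0000-0000-0000-000000000000"  # a KMS key in your own account
ADMIN_ARN = "arn:aws:iam::222222222222:root"     # your own account, kept as admin

def iam_user_exists(account: str, user: str) -> bool:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # Retain full control for ourselves, so the policy does not
            # trip PutKeyPolicy's lockout safety check
            {"Sid": "Admin", "Effect": "Allow",
             "Principal": {"AWS": ADMIN_ARN},
             "Action": "kms:*", "Resource": "*"},
            # The probe: this statement is only valid if the user exists
            {"Sid": "Test existence of user", "Effect": "Allow",
             "Principal": {"AWS": f"arn:aws:iam::{account}:user/{user}"},
             "Action": "kms:DescribeKey", "Resource": "*"},
        ],
    }
    try:
        kms.put_key_policy(KeyId=KEY_ID, PolicyName="default",
                           Policy=json.dumps(policy))
        return True   # policy accepted: the principal resolved
    except kms.exceptions.MalformedPolicyDocumentException:
        return False  # "invalid principals": no such user

print(iam_user_exists("111111111111", "bob"))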

Sadly, wildcards aren’t accepted in the username portion of the ARN, otherwise you could do some funky searching with ...:user/a*, ...:user/b*, etc. You can’t have everything; where would you put it all?

I did mention this to AWS as an account enumeration risk. They’re of the opinion that it’s a good thing you can know what users exist in random other AWS accounts. I guess that means this is a technique you can put in your toolbox safe in the knowledge it’ll work forever.

Given this is intended behaviour, I assume you don’t need to use a key policy for this, but that’s where I stumbled over it. Also, you can probably use it to enumerate roles and anything else that can be a principal, but since I don’t see as much use for that, I didn’t bother exploring it.

There you are, then. If you ever need to guess at IAM users in another AWS account, now you can!

07 October, 2021 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

October 06, 2021

Thomas Goirand

OpenStack Xena, the 24th OpenStack release, is out

It was out at 3pm, and I managed to finish uploading the last bits to Unstable at 9pm… Of course, that’s because all of the packaging and testing work was done before the release date. All of it is, as usual, also available through a Bullseye non-official backports repository that can be added using extrepo (ie: “extrepo enable openstack_xena”).
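
Spelled out, enabling that repository looks roughly like this (extrepo itself is packaged in Debian; the repository name is the one quoted above):

sudo apt install extrepo
sudo extrepo enable openstack_xena
sudo apt update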

06 October, 2021 09:26PM by Goirand Thomas

Infomaniak launches its public IaaS cloud with ground breaking prices

My employer, the biggest Swiss server-hosting company, Infomaniak, has just opened registration for its new IaaS (Infrastructure as a Service) OpenStack-based public cloud. Well, in fact, it has been open for a week or so. Previously, it was only in beta (during that beta period, we hosted, for free, the whole Debconf 21 infrastructure). Nothing really new in the market, except that it is by far cheaper than most (if not all) of its competitors (OpenStack-based or not), including AWS, GCE or Azure.

Also, everything is hosted in Switzerland, in our own data centers, where data protection is written in the law (and Infomaniak often advertises about data privacy: this is real here…).

Not only is Infomaniak (by far…) the cheapest offer in the market (including a 300 CHF free tier: enough for our smallest VM for a full year), but we also have very good technical support, and the hardware we use is top notch:

  • 6th Gen NVMe (read intensive) Ceph-based block devices
  • AMD Epyc CPU (128 threads per server)
  • 2x 25Gbits/s (using BGP-to-the-host networking)

Some of our customers didn’t even believe we could offer such pricing. Well, the reason is simple: most of our competitors are simply overpriced and making too much money. Since we’re late to the market, and newer hardware (with many cores on a single server) makes it possible to increase density without too much over-commit, my bosses decided that since we could, we would be the cheapest! Hopefully, this will work as a good business strategy.

All of that public cloud infrastructure has been set up with OpenStack Cluster Installer, for which I’m the main author and which is fully in Debian. All of it runs on plain, unmodified Debian Bullseye (well, with a few OpenStack packages a little more up to date, but really not much, and all of that is publicly available…).

Last, choosing the cheapest and best offer is also a good action: it promotes OpenStack and cloud computing in Debian, which I believe is the least vendor locked-in IaaS solution.

06 October, 2021 09:23PM by Goirand Thomas

Reproducible Builds

Reproducible Builds in September 2021

The goal behind “reproducible builds” is to ensure that no deliberate flaws have been introduced during compilation processes, by promising or mandating that identical results are always generated from a given source. This allows multiple third parties to come to an agreement on whether a build was compromised or not via a system of distributed consensus.
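
At its simplest, the check amounts to building twice and comparing the results bit for bit; a toy sketch (the build command and artifact name are placeholders):

import hashlib
import shutil
import subprocess

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

subprocess.run(["make", "clean", "all"], check=True)  # first build
shutil.copy("program", "program.first")               # stash the artifact
subprocess.run(["make", "clean", "all"], check=True)  # second, independent build

if sha256("program.first") == sha256("program"):
    print("bit-for-bit identical: reproducible")
else:
    print("builds differ: inspect with a tool such as diffoscope")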

In these reports we outline the most important things that have been happening in the world of reproducible builds in the past month:


First mentioned in our March 2021 report, Martin Heinz published two blog posts on sigstore, a project that endeavours to offer software signing as a “public good, [the] software-signing equivalent to Let’s Encrypt”. The first post, entitled Sigstore: A Solution to Software Supply Chain Security, outlines more about the project and justifies its existence:

Software signing is not a new problem, so there must be some solution already, right? Yes, but signing software and maintaining keys is very difficult especially for non-security folks and UX of existing tools such as PGP leave much to be desired. That’s why we need something like sigstore - an easy to use software/toolset for signing software artifacts.

The second post (titled Signing Software The Easy Way with Sigstore and Cosign) goes into some technical details of getting started.
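
As a flavour of what “getting started” involves, a typical keypair-based cosign session looks roughly like the following (a sketch only; flags have varied between cosign releases, and the image name is a placeholder):

cosign generate-key-pair                                    # writes cosign.key and cosign.pub
cosign sign --key cosign.key registry.example.com/image:1   # sign and push the signature
cosign verify --key cosign.pub registry.example.com/image:1 # verify it later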


There was an interesting thread in the /r/Signal subreddit that started from the observation that Signal’s apk doesn’t match with the source code:

Some time ago I checked Signal’s reproducibility and it failed. I asked others to test in case I did something wrong, but nobody made any reports. Since then I tried to test the Google Play Store version of the apk against one I compiled myself, and that doesn’t match either.


BitcoinBinary.org, which aims to be a “repository of Reproducible Build Proofs for Bitcoin Projects”, was announced this month:

Most users are not capable of building from source code themselves, but we can at least get them able enough to check signatures and shasums. When reputable people who can tell everyone they were able to reproduce the project’s build, others at least have a secondary source of validation.


Distribution work

Frédéric Pierret announced a new testing service at beta.tests.reproducible-builds.org, showing actual rebuilds of binaries distributed by both the Debian and Qubes distributions.

In Debian specifically, 51 reviews of Debian packages were added, 31 were updated and 31 were removed this month in our database of classified issues. As part of this, Chris Lamb refreshed a number of notes, including the build_path_in_record_file_generated_by_pybuild_flit_plugin issue.

Elsewhere in Debian, Roland Clobus posted his Fourth status update about reproducible live-build ISO images in Jenkins to our mailing list, which mentions (amongst other things) that:

  • All major configurations are still built regularly using live-build and bullseye.
  • All major configurations are reproducible now; Jenkins is green.
    • I’ve worked around the issue for the Cinnamon image.
    • The patch was accepted and released within a few hours.
  • My main focus for the last month was on the live-build tool itself.

Related to this, there was continuing discussion on how to embed/encode the build metadata for the Debian “live” images which were being worked on by Roland Clobus.


Ariadne Conill published another detailed blog post related to various security initiatives within the Alpine Linux distribution. After summarising some conventional security work being done (eg. with sudo and the release of OpenSSH version 3.0), Ariadne included another section on reproducible builds: “The main blocker [was] determining what to do about storing the build metadata so that a build environment can be recreated precisely”.

Finally, Bernhard M. Wiedemann posted his monthly reproducible builds status report.


Community news

On our website this month, Bernhard M. Wiedemann fixed some broken links [] and Holger Levsen made a number of changes to the Who is Involved? page [][][]. On our mailing list, Magnus Ihse Bursie started a thread with the subject Reproducible builds on Java, which begins as follows:

I’m working for Oracle in the Build Group for OpenJDK which is primary responsible for creating a built artifact of the OpenJDK source code. […] For the last few years, we have worked on a low-effort, background-style project to make the build of OpenJDK itself building reproducible. We’ve come far, but there are still issues I’d like to address. []


diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 183, 184 and 185 as well as performed significant triaging of merge requests and other issues in addition to making the following changes:

  • New features:

    • Support a newer format version of the R language’s .rds files. []
    • Update tests for OCaml 4.12. []
    • Add a missing format_class import. []
  • Bug fixes:

    • Don’t call close_archive when garbage collecting Archive instances, unless open_archive definitely returned successfully. This prevents, for example, an AttributeError where PGPContainer’s cleanup routines were rightfully assuming that its temporary directory had actually been created. []
    • Fix (and test) the comparison of R language’s .rdb files after refactoring temporary directory handling. []
    • Ensure that “RPM archives” exists in the Debian package description, regardless of whether python3-rpm is installed or not at build time. []
  • Codebase improvements:

    • Use our assert_diff routine in tests/comparators/test_rdata.py. []
    • Move diffoscope.versions to diffoscope.tests.utils.versions. []
    • Reformat a number of modules with Black. [][]

However, the following changes were also made:

  • Mattia Rizzolo:

    • Fix an autopkgtest failure caused by the androguard module not being in the (expected) python3-androguard Debian package. []
    • Appease a shellcheck warning in debian/tests/control.sh. []
    • Ignore a warning from h5py in our tests that doesn’t concern us. []
    • Drop a trailing .1 from the Standards-Version field as it’s not required. []
  • Zbigniew Jędrzejewski-Szmek:

    • Stop using the deprecated distutils.spawn.find_executable utility. [][][][][]
    • Adjust an LLVM-related test for LLVM version 13. []
    • Update invocations of llvm-objdump. []
    • Adjust a test with a one-byte text file for file version 5.40. []

And, finally, Benjamin Peterson added a --diff-context option to control unified diff context size [] and Jean-Romain Garnier fixed the Macho comparator for architectures other than x86-64 [].
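
As a usage sketch of that new --diff-context option (the file names are illustrative):

$ diffoscope --diff-context 3 old.deb new.deb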


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Testing framework

The Reproducible Builds project runs a testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:

  • Holger Levsen:

    • Drop my package rebuilder prototype as it’s not useful anymore. []
    • Schedule old packages in Debian bookworm. []
    • Stop scheduling packages for Debian buster. [][]
    • Don’t include PostgreSQL debug output in package lists. []
    • Detect Python library mismatches during build in the node health check. []
    • Update a note on updating the FreeBSD system. []
  • Mattia Rizzolo:

    • Silence a warning from Git. []
    • Update a setting to reflect that Debian bookworm is the new testing. []
    • Upgrade the PostgreSQL database to version 13. []
  • Roland Clobus (Debian “live” image generation):

    • Workaround non-reproducible config files in the libxml-sax-perl package. []
    • Use the new DNS for the ‘snapshot’ service. []
  • Vagrant Cascadian:

    • Note that the armhf architecture also systematically varies by the kernel. []


Contributing

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

06 October, 2021 04:56PM

October 05, 2021


Steinar H. Gunderson

plocate 1.1.12 released

plocate 1.1.12 has been released, with some minor bugfixes and a minor new option.

More interesting is that plocate is now one year old! plocate 1.0.0 was released October 11th, 2020, so I'm maybe getting a bit ahead of myself, but it feels like a good milestone. I haven't really achieved my goal of being in the default Debian install, simply because there is too much resistance to having a default locate at all, but it's now hit most major distributions (thanks to a host of packagers) and has largely supplanted mlocate in general.

plocate still feels to me like the obvious way of doing a locate; like, “why didn't anyone just do this earlier”. But io_uring really couldn't have been done just a few years ago, and it added a few very interesting touches (both as a programmer, and for users). In general, it feels like plocate is “done”; it's doing one thing, doing it well, and there's nothing obvious I'm missing. (I keep getting bug reports, but they're getting increasingly obscure, and it's more like a trickle than a flood.) But I'd still love for basic UNIX tools to care more about performance—our data sets are bigger than ever, yet we wrote them for a time when our systems had just a few thousand files. The old UNIX brute force mantra just isn't good enough in the 2020s. And we don't have the manpower (in terms of developer interest) to fix it.

05 October, 2021 10:09PM

Free Software Fellowship

Google, FSFE & Child labor

FSFE, one of Google's mouthpieces in the free software world, has announced a dubious competition called Youth Hacking 4 Freedom.

The target audience is between 14 and 18 years of age. Participants compete by working for free. There are numerous cases where people completed work for Google Summer of Code and were not paid, yet the rules for YH4F are even worse and the victims are younger. Google Code-In was a similar program targeting teenagers between 13 and 17 years. Google gave the child laborers t-shirts and certificates in lieu of payment. It looks like ethical concerns may have been a factor in Google’s decision to mothball Google Code-In last year. Yet a program that is even more demanding has appeared in a Google proxy organization, the FSFE.

A recent news story gives various examples of Google trying to obfuscate controversial employment practices. Child labor crosses a red line.

In July 2019, the UN General Assembly resolved to make 2021 the international year for the eradication of child labor.

Stephanie Taylor of Google announced that the 2019 edition of Google Code-In would be the last. Taylor didn't mention the recent UN resolution; it is just a coincidence.

Specifically, the rules that these children joining this new program must obey state the following:

From Monday 1 November 2021 to Thursday 31 March 2022, you will have five months to come up with an idea for a Free Software program and write it.

Writing software is a job that requires certain skills and deserves to be paid. Yet only six of the participants will receive any payment. The others must work for free.

Five months is a lot of work. Children in this age group have no prior experience of estimating the number of hours required to complete a project over this timespan. Some may start projects and then give up without any prospect of payment for the hours they already worked.


To be considered for the prize, the participants are told that they must publish all their code under a free software license: in other words, they must give up all rights to any future payment. This doesn't happen in any other industry, certainly not with children.

The maximum prize is €4,096. That is more than a year’s salary for many university students in eastern European countries. It appears that these are the regions where Google/FSFE is hoping to entice children. This has happened before; the Wikipedia page about forced labor in Germany has chosen a picture of a 14-year-old boy from Ukraine in a German factory. FSFE headquarters are in Berlin.

2021 is the ILO's International Year to End Child Labor. The Facebook whistleblower Frances Haugen was giving testimony in the US Senate today about the way Mark Zuckerberg and Facebook employees systematically target children. Google/FSFE doesn't seem to get it.


Google/FSFE makes the following assertion on their marketing banner:

Apply now and win up to 4096€ for your own freedom

Yet a Free Software license is not about the freedom of the producers; it is about the freedom of the consumers. FSFE’s sponsors past and present include the largest corporations in the world. Some of those companies are the largest consumers of Freedom and Free Software. They may not want to have the risks of child labor, as young as 14, in their supply chain. Some private donors have announced they are quitting the FSFE after seeing this.

Google/FSFE are not the first to promote working for free. The gates to infamous concentration camps like Auschwitz bear the words ARBEIT MACHT FREI (work sets you free).

Some 1.5 million children died in four years during the Holocaust. That is 375,000 children per year from 1941 to 1945. Today, globally, 2,780,000 children die from child labor each year. This is one of the few cases where comparison to statistics from the Holocaust may be appropriate. Google/FSFE's five-month unpaid internships for 14 year olds are simply wrong.

Deaths from Child Labor (annually):
  Germany, Holocaust, 1941-1945 (1.5 million children in total): 375,000
  Worldwide, 2019 (source: ILO): 2,780,000

05 October, 2021 09:00PM

October 04, 2021


Jonathan Carter

Free Software Activities for 2021-09

Here’s a bunch of uploads for September. Mostly catching up with a few things after the Bullseye release.

2021-09-01: Upload package bundlewrap (4.11.2-1) to Debian unstable.

2021-09-01: Upload package calamares (3.2.41.1-1) to Debian unstable.

2021-09-01: Upload package gdisk (1.0.8-2) to Debian unstable (Closes: #993109).

2021-09-01: Upload package bcachefs-tools (0.1+git20201025.742dbbdb-1) to Debian unstable (Closes: #976474).

2021-09-02: Upload package fabulous (0.4.0+dfsg1-1) to Debian unstable (Closes: #983247).

2021-09-02: Upload package feed2toot (0.17-1) to Debian unstable.

2021-09-02: Merge MR!1 for fracplanet.

2021-09-02: Upload package fracplanet (0.5.1-6) to Debian unstable (Closes: #980808).

2021-09-02: Upload package toot (0.28.0-1) to Debian unstable.

2021-09-02: Upload package toot (0.28.0-2) to Debian unstable.

2021-09-02: Merge MR!1 for gnome-shell-extension-gamemode.

2021-09-02: Merge MR!1 for gnome-shell-extension-no-annoyance.

2021-09-02: Upload package gnome-shell-extension-no-annoyance (0+20210717-12dc667) to Debian unstable (Closes: #993193).

2021-09-02: Upload package gnome-shell-extension-gamemode (5-2) to Debian unstable.

2021-09-02: Merge MR!2 for gnome-shell-extension-harddisk-led.

2021-09-02: Upload package gnome-shell-extension-pixelsaver (1.24-2) to Debian unstable (Closes: #993195).

2021-09-02: Upload package gnome-shell-extension-dash-to-panel (43-1) to Debian unstable (Closes: #993058, #989546).

2021-09-02: Upload package gnome-shell-extension-harddisk-led (25-1) to Debian unstable (Closes: #993181).

2021-09-02: Upload package gnome-shell-extension-impatience (0.4.5+git20210412-e8e132f-1) to Debian unstable (Closes: #993190).

2021-09-02: Upload package s-tui (1.1.3-1) to Debian unstable.

2021-09-02: Upload package flask-restful (0.3.9-2) to Debian unstable.

2021-09-02: Upload package python-aniso8601 (9.0.1-2) to Debian unstable.

2021-09-03: Sponsor package fonts-jetbrains-mono (2.242+ds-1) for Debian unstable (Debian Mentors request).

2021-09-03: Sponsor package python-toml (0.10.2-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package buildbot (3.3.0-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-strictyaml (1.4.4-1) for Debian unstable (Python team request).

2021-09-03: Sponsor package python-absl (0.13.0-1) for Debian unstable (Python team request).

2021-09-03: Merge MR!1 for xabacus.

2021-09-03: Upload package aalib (1.4p5-49) to Debian unstable (Closes: #981503).

2021-09-03: File ROM for gnome-shell-extension-remove-dropdown-arrows (#993577, closing: #993196).

2021-09-03: Upload package bcachefs-tools (0.1+git20210805.6c42566-2) to Debian unstable.

2021-09-05: Upload package tuxpaint (0.9.26-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-config (0.17rc1-1~exp1) to Debian experimental.

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1~exp1) to Debian experimental (Closes: #988347).

2021-09-05: Upload package tuxpaint-stamps (2021.06.28-1) to Debian experimental.

2021-09-05: Upload package tuxpaint (0.9.26-1) to Debian unstable (Closes: #942889).

2021-09-06: Merge MR!2 for connectagram.

2021-09-06: Upload package connectagram (1.2.11-2) to Debian unstable.

2021-09-06: Upload package aalib (1.4p5-50) to Debian unstable (Closes: #993729).

2021-09-06: Upload package gdisk (1.0.8-3) to Debian unstable (Closes: #993732).

2021-09-06: Upload package tuxpaint-config (0.17rc1-1) to Debian unstable.

2021-09-06: Upload package grapefruit (0.1_a3+dfsg-10) to Debian unstable.

2021-09-07: File ROM for gnome-shell-extension-hide-activities ().

2021-09-09: Upload package calamares (3.2.42-1) to Debian unstable.

2021-09-09: Upgraded peertube.debian.social to PeerTube 3.4.0.

2021-09-17: Upload calamares (3.2.43-1) to Debian unstable.

2021-09-28: Upload calamares (3.2.44.2-1) to Debian unstable.

04 October, 2021 12:50PM by jonathan

Paul Wise

FLOSS Activities September 2021

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: reopened bugs closed by a spammer
  • Debian wiki: unblock IP addresses, approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The purple-discord/harmony/pyemd/librecaptcha/esprima-python work was sponsored by my employer. All other work was done on a volunteer basis.

04 October, 2021 04:48AM

October 03, 2021


Ritesh Raj Sarraf

Human Society

In my past, I’ve had experiences that got me thinking. My experiences have mostly been in the South Asian Indian subcontinent, so it may not be fair to generalize from them.

  • Help with finding a job: I’ve learnt many times that when people reach out asking for help, say, with finding a job, it isn’t about you making a recommendation/referral for them. Instead, it implies that you are indirectly being asked to find and arrange a job for them.

  • Gifts for people: My impression of offering a gift to someone is presenting them with something I’ve found useful and dear to me, irrespective of whether the gift is a brand new item or a used (immaculate) one. On the contrary, many people define a gift as an item that is unopened and comes in its sealed original packaging.

03 October, 2021 10:54AM by Ritesh Raj Sarraf (rrs@researchut.com)


Junichi Uekawa

Using podman for most of my local development environment.

Using podman for most of my local development environment. For my personal/upstream development I started using podman instead of lxc and pbuilder and other toolings. Most projects provide reasonable docker images (such as rust) and I am happier keeping my environment as a whole stable while I can iterate. I have a Dockerfile for the development environment like this:
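
Purely as an illustrative sketch of the shape such a Dockerfile can take (the base image follows the rust example mentioned above; the package list and paths are assumptions of mine, not the actual file):

FROM docker.io/library/rust:latest
# day-to-day development tools; this list is illustrative
RUN apt-get update && \
    apt-get install -y --no-install-recommends git less && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /src

Building and entering such an environment is then something like podman build -t devenv . followed by podman run -it -v "$PWD":/src devenv bash.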

03 October, 2021 08:02AM by Junichi Uekawa

October 02, 2021


Jacob Adams

SSH Port Forwarding and the Command Cargo Cult

Someone is Wrong on the Internet

If you look up how to only forward ports with ssh, you may come across solutions like this:

ssh -nNT -L 8000:example.com:80 user@bastion.example.com

Or perhaps this, if you also wanted to send ssh to the background:

ssh -NT -L 3306:db.example.com:3306 example.com &

Both of these use at least one option that is entirely redundant, and the second can cause ssh to fail to connect if you happen to be using password authentication. However, they seem to still persist in various articles about ssh port forwarding. I myself was using the first variation until just recently, and I figured I would write this up to inform others who might be still using these solutions.

The correct option for this situation is not -nNT but simply -N, as in:

ssh -N -L 8000:example.com:80 user@bastion.example.com

If you want to also send ssh to the background, then you’ll want to add -f instead of using your shell’s built-in & feature, because you can then input passwords into ssh if necessary[1].
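
For example, reusing the bastion host from above:

ssh -f -N -L 8000:example.com:80 user@bastion.example.com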

Honestly, that’s the point of this article, so you can stop here if you want. If you’re looking for a detailed explanation of what each of these options actually does, or if you have no idea what I’m talking about, read on!

What is SSH Port Forwarding?

ssh is a powerful tool for remote access to servers, allowing you to execute commands on a remote machine. It can also forward ports through a secure tunnel with the -L and -R options. Basically, you can forward a connection to a local port to a remote server like so:

ssh -L 8080:other.example.com:80 ssh.example.com

In this example, you connect to ssh.example.com and then ssh forwards any traffic on your local machine’s port 8080[2] to other.example.com port 80 via ssh.example.com. This is a really powerful feature, allowing you to jump[3] inside your firewall with just an ssh server exposed to the world.

It can work in reverse as well with the -R option, allowing connections on a remote host in to a server running on your local machine. For example, say you were running a website on your local machine on port 8080 but wanted it accessible on example.com port 80[4]. You could use something like:

ssh -R 8080:example.com:80 example.com

The trouble with ssh port forwarding is that, absent any additional options, you also open a shell on the remote machine. If you’re planning to both work on a remote machine and use it to forward some connection, this is fine, but if you just need to forward a port quickly and don’t care about a shell at that moment, it can be annoying, especially since, if the shell closes, ssh will close the forwarded port as well.

This is where the -N option comes in.

SSH just forwarding ports

In the ssh manual page[5], -N is explained like so:

Do not execute a remote command. This is useful for just forwarding ports.

This is all we need. It instructs ssh to run no commands on the remote server, just forward the ports specified in the -L or -R options. But people seem to think that there are a bunch of other necessary options, so what do those do?

SSH and stdin

-n controls how ssh interacts with standard input, specifically telling it not to:

Redirects stdin from /dev/null (actually, prevents reading from stdin). This must be used when ssh is run in the background. A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel. The ssh program will be put in the background. (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)

SSH passwords and backgrounding

-f sends ssh to the background, freeing up the terminal in which you ran ssh to do other things.

Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.

As indicated in the description of -n, this does the same thing as using the shell’s & feature with -n, but allows you to put in any necessary passwords first.

SSH and pseudo-terminals

-T is a little more complicated than the others and has a very short explanation:

Disable pseudo-terminal allocation.

It has a counterpart in -t, which is explained a little better:

Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.

As the description of -t indicates, ssh is allocating a pseudo-terminal on the remote machine, not the local one. However, I have confirmed[6] that -N doesn’t allocate a pseudo-terminal either, since it doesn’t run any commands. Thus this option is entirely unnecessary.

What’s a pseudo-terminal?

This is a bit complicated, but basically it’s an interface used in UNIX-like systems, like Linux or BSD, that pretends to be a terminal (thus pseudo-terminal). Programs like your shell, or any text-based menu system made in libraries like ncurses, expect to be connected to one (when used interactively at least). Basically, it fakes as if the input it is given (over the network, in the case of ssh) was typed on a physical terminal device, and does things like raise an interrupt (SIGINT) if Ctrl+C is pressed.
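
A quick way to see this in action (the hostname is illustrative) is to run tty remotely with and without forced allocation; the output lines here are what I would expect:

$ ssh -T ssh.example.com tty
not a tty
$ ssh -t ssh.example.com tty
/dev/pts/0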

Why?

I don’t know why these incorrect uses of ssh got passed around as correct, but I suspect it’s a form of cargo cult, where we use example commands others provide and don’t question what they do. One Stack Overflow answer I read that provided these options seemed to think -T was disabling the local pseudo-terminal, which might go some way towards explaining why they thought it was necessary.

I guess the moral of this story is to question everything and actually read the manual, instead of just googling it.

  1. Not that you SHOULD be using ssh with password authentication anyway, but people do. 

  2. Only on your loopback address by default, so that you’re not allowing random people on your network to use your tunnel. 

  3. In fact, ssh even supports Jump Hosts, allowing you to automatically forward an ssh connection through another machine. 

  4. I can’t say I recommend a setup like this for anything serious, as you’d need to ssh as root to forward ports less than 1024. SSH forwarding is not for permanent solutions, just short-lived connections to machines that would be otherwise inaccessible. 

  5. Specifically, my source is the ssh(1) manual page in OpenSSH 8.4, shipped as 1:8.4p1-5 in Debian bullseye. 

  6. I just forwarded ports with -N and then logged in to that same machine and looked at pseudo-terminal allocations via ps ux. No terminal is associated with ssh connections using just the -N option.

02 October, 2021 12:00AM

October 01, 2021

Russell Coker

Getting Started With Kali

Kali is a Debian based distribution aimed at penetration testing. I haven’t felt a need to use it in the past because Debian has packages for all the scanning tools I regularly use, and all the rest are free software that can be obtained separately. But I recently decided to try it.

Here’s the URL to get Kali [1]. For a VM you can get VMWare or VirtualBox images; I chose VMWare as it’s the most popular image format and also a much smaller download (2.7G vs 4G). For unknown reasons the torrent for it didn’t work (might be a problem with my torrent client). The download link for it was extremely slow in Australia, so I downloaded it to a system in Germany and then copied it from there.

I don’t want to use either VMWare or VirtualBox because I find KVM/Qemu sufficient to do everything I want, and they are in the Main section of Debian, so I needed to convert the image files. Some of the documentation on converting image formats for use with QEMU/KVM says to use a program called “kvm-img”, which doesn’t seem to exist; I used “qemu-img” from the qemu-utils package in Debian/Bullseye. The man page qemu-img(1) doesn’t list the output formats supported by the “-O” option, and the examples returned by a web search show using “-O qcow2”. It turns out that the following command will convert the image to “raw” format, which is the format I prefer. I use BTRFS for storing all my VM images and that does all the copy-on-write I need.

qemu-img convert Kali-Linux-2021.3-vmware-amd64.vmdk ../kali

After converting it the file was 500M smaller than the VMWare files (10.2 vs 10.7G). Probably the Kali distribution file could be reduced in size by converting it to raw and then back to VMWare format. The Kali VMWare image is compressed with 7zip, which has a good compression ratio; I waited almost 90 minutes for zstd to compress it with -19 and the result was 12% larger than the 7zip file.

VMWare apparently likes to use an emulated SCSI controller, and I spent some time trying to get that going in KVM. Apparently recent versions of QEMU changed the way this works, so older web pages aren’t helpful. Also, the SCSI emulation is allegedly buggy and unreliable (but I didn’t manage to get it going, so I can’t be sure). It turns out that the VM is configured to work with the virtio interface: the initramfs.conf has the configuration option “MODULES=most”, which makes it boot on all common configurations (good work by the initramfs-tools maintainers). The image works well with the Spice display interface: it doesn’t capture my mouse, and the window for the VM works the same way as other windows on my desktop. I don’t know if this level of Spice integration is in Debian now; last time I tested, it didn’t work that way.

I also downloaded Metasploitable [2] which is a VM image designed to be full of security flaws for testing the tools that are in Kali. Again it worked nicely after converting from VMWare to raw format. One thing to note about Metasploitable is that you must not make it available on the public Internet. My home network has NAT for IPv4 but all systems get public IPv6 addresses. It’s usually nice that those things just work on VMs, but not for this. So I added an ip6tables command to /etc/rc.local to block IPv6 for it.
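
That rule is something along these lines (a sketch; vnet0, the name of the tap interface QEMU gave the VM, is an assumption):

ip6tables -A FORWARD -i vnet0 -j DROP   # no IPv6 out of the VM
ip6tables -A FORWARD -o vnet0 -j DROP   # no IPv6 into the VM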

Conclusion

Installing VMs for both these distributions was quite easy. Most of my time was spent downloading from a slow server, trying to get SCSI emulation working, working out how to convert image files, and testing different compression options. The time spent doing stuff once I knew what to do was very small.

Kali has zsh as the default shell; it’s quite nice. I’ve been happy with bash for decades, but I might end up trying zsh out on other machines.

01 October, 2021 04:56AM by etbe


Junichi Uekawa

Garbage collecting with podman system prune.

Garbage collecting with podman system prune. Tells me it freed 20GB when it seems to have freed 4GB. Wondering where that discrepancy comes from.

01 October, 2021 02:37AM by Junichi Uekawa

September 30, 2021


Holger Levsen

20210930-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021 is almost over...

The Debian Reunion Hamburg 2021 is almost over now; half the attendees have already left for Regensburg, while the five remaining people are still busy here, though tonight there will be two concerts at the venue, plus some lovely food and more. Together with the day trip tomorrow (involving lots of water, but hopefully not from above...) I don't expect much more work to be done, so I feel comfortable publishing the following statistics now, even though I expect some more work will be done while travelling back or due to renewed energy from the event! So I might update these numbers later :-)

Together we did:

  • 27 uploads, plus 117 uploads from Gregor from the Perl team
  • 6 RC bugs closed
  • 2 RC bugs opened
  • 1 presentation given
  • 2 DM upload permissions were given
  • 1 DNS entry was setup for beta.tests.reproducible-builds.org, showing preliminary real-world data for Debian and Qubes OS, thanks to Qubes OS developer Frédéric Pierret
  • 1 dinner cooked
  • 5 people didn't show up, only 2 notified us
  • 2 people showed up without registration
  • had pretty good times and other quality stuff which is harder to quantify

I think that's pretty awesome, and I am very happy we did this event!

Debian Reunion / MiniDebConf Hamburg 2022 - save the date, almost!

Thus I think we should have another Debian event at Fux in 2022, and after checking suitable free dates with the venue I think what could work out is an event from Monday May 23rd until Sunday May 29th 2022. What do you think?

For now these dates are preliminary. If you know any reasons why these dates could be less than optimal for such an event, please let me know. Assuming there's no feedback indicating this is a bad idea, the dates shall be finalized by November 1st 2021. Obviously assuming having physical events is still and again a thing! ;-)

30 September, 2021 06:06PM

Antoine Beaupré

Another syncmaildir crash

So I had another major email crash with my syncmaildir setup. This time I was at least able to confirm the issue, and I still haven't lost mail thanks to backups and sheer luck (again).

The crash

It is not really worth going over the crash in detail; it's fairly similar to the last one: something bad happened and smd started destroying everything. The hint is that it takes a long time to do what usually takes seconds. It helps that I now have a second monitor showing logs.

I still lost much more mail than the last time. I used to have "301 723 messages", according to notmuch. But then when I ran smd-pull by hand, it was telling me:

95K emails scanned

Oops. You can see notmuch happily noticing the destroyed files on the server:

jun 28 16:33:40 marcos notmuch[28532]: No new mail. Removed 65498 messages. Detected 1699 file renames.
jun 28 16:36:05 marcos notmuch[29746]: No new mail. Removed 68883 messages. Detected 2488 file renames.
jun 28 16:41:40 marcos notmuch[31972]: No new mail. Removed 118295 messages. Detected 3657 file renames.

The final count ended up being 81 042 messages, according to notmuch. A whopping 220 000 mails deleted.

The interesting bit, this time around, is that I caught smd in the act of running two processes in parallel:

jun 28 16:30:09 curie systemd[2845]: Finished pull emails with syncmaildir. 
jun 28 16:30:09 curie systemd[2845]: Starting push emails with syncmaildir... 
jun 28 16:30:09 curie systemd[2845]: Starting pull emails with syncmaildir... 

So clearly that is the source of the bug.

Recovery

Emergency stop on curie:

notmuch dump > notmuch.dump
systemctl --user --now disable smd-pull.service smd-pull.timer smd-push.service smd-push.timer notmuch-new.service notmuch-new.timer

On marcos (the server), guessed the number of messages delivered since the last backup to be 71, just looking at timestamps in the mail log. Made a list:

grep postfix/local /var/log/mail.log | tail -71 > lost-mail

Found postfix queue IDs:

sed 's/.*\]://;s/:.*//' lost-mail > qids

Turn those into message IDs, and find those that are missing from the disk (I had previously run notmuch new just to be sure it's up to date):

while read qid ; do 
    grep "$qid: message-id" /var/log/mail.log
done < qids  | sed 's/.*message-id=<//;s/>//' | while read msgid; do
    sudo -u anarcat notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid
done

Copy this back on curie as missing-msgids and:

$ wc -l missing-msgids 
48 missing-msgids
$ while read msgid ; do notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid ; done < missing-msgids
mailman.189.1624881611.23397.nodes-reseaulibre.ca@reseaulibre.ca
AnwMy7rdSpK-N-vt4AiOag@ismtpd0148p1mdw1.sendgrid.net

only two mails missing! whoohoo!

Copy those back onto marcos as really-missing-msgids, and look at the full mail logs to see what they are:

~anarcat/src/koumbit-scripts/mail/postfix-trace --from-file really-missing-msgids2

I actually remembered deleting those, so no mail lost!

Rebuild the list of msgids that were lost, on marcos:

while read qid ; do grep "$qid: message-id" /var/log/mail.log; done < qids  | sed 's/.*message-id=<//;s/>//'

Copy that on curie as lost-mail-msgids, then copy the files over in a test dir:

while read msgid ; do
    notmuch search --output=files --exclude=false "id:$msgid"
done < lost-mail-msgids | sed 's#/home/anarcat/Maildir/##' | rsync -v  --files-from=- /home/anarcat/Maildir/ shell.anarc.at:restore/Maildir-angela/

If that looks about right, on marcos:

find restore/Maildir-angela/ -type f | wc -l

... should match the number of missing mails, roughly.

Copy it into the real spool:

while read msgid ; do
    notmuch search --output=files --exclude=false "id:$msgid"
done < lost-mail-msgids | sed 's#/home/anarcat/Maildir/##' | rsync -v  --files-from=- /home/anarcat/Maildir/ shell.anarc.at:Maildir/

Then on the server, notmuch new should find the new emails, and we shouldn't have any lost mail anymore:

while read qid ; do grep "$qid: message-id" /var/log/mail.log; done < qids  | sed 's/.*message-id=<//;s/>//' | while read msgid; do sudo -u anarcat notmuch count --exclude=false id:$msgid | grep -q 0 && echo $msgid ; done

Then, crucial moment, try to pull the new mails from the backups on curie:

anarcat@curie:~(main)$ smd-pull  -n  --show-tags -v
Found lockfile of a dead instance. Ignored.
Phase 0: handshake
Phase 1: changes detection
    5K emails scanned
   10K emails scanned
   15K emails scanned
   20K emails scanned
   25K emails scanned
   30K emails scanned
   35K emails scanned
   40K emails scanned
   45K emails scanned
   50K emails scanned
Phase 2: synchronization
Phase 3: agreement
default: smd-client@localhost: TAGS: stats::new-mails(49687), del-mails(0), bytes-received(215752279), xdelta-received(3703852)
"smd-pull  -n  --show-tags -v" took 3 mins 39 secs

This brought me back to the state after the backup plus the mails delivered during the day, which means I had to catch up with all my holiday's read emails (1440 mails!) but thankfully I made a dump of the notmuch database on curie at the start of the procedure, so this actually restored a sane state:

pv notmuch.dump | notmuch restore

Phew!

Workaround

I have filed this as a bug in upstream issue 18. Considering I filed 11 issues and only 3 of those were closed, I'm not holding my breath. I nevertheless filed PR 19 in the hope that this will fix my particular issue, but I'm not even sure this is the right fix...

Fix

At this point, I'm really ready to give up on SMD. It's really, really nice to be able to sync mail over SSH because I don't need to store my IMAP password on disk. But surely there are more reliable syncing mechanisms. I do not remember ever losing that much mail before. At worst, offlineimap would duplicate emails like mad, but never destroy my entire mail spool that way.

As mentioned before, there are other programs that sync mail. I'm looking at:

  • offlineimap3: requires IMAP, used the py2 version in the past, might just still work, first sync painful (IIRC), ways to tunnel over SSH, see comment below
  • isync/mbsync: might be faster, I remember having trouble switching from offlineimap to this, has support for TLS client certs, running over SSH (e.g. presumably with Tunnel "ssh -C -q imap.example.com /usr/lib/dovecot/imap"), and generally has good words from multiple Debian and notmuch people (Arch tutorial)
  • getmail: just downloads email, might not be enough
  • nncp: treat the local spool as another mail server, might not be compatible with my "multiple clients" setup
  • doveadm-sync: requires dovecot on both ends, but supports using SSH to sync, will try this next, may have performance problems, see comment below
  • interimap: syncs two IMAP servers, apparently faster than doveadm and offlineimap, but requires running an IMAP server locally
  • mail-sync: notify support, happens over any piped transport (e.g. ssh), diff/patch system, requires binary on both ends, mentions UUCP in the manpage, seems awfully complicated to setup, mentions rsmtp which is a nice name for rsendmail

30 September, 2021 02:28PM

September 29, 2021

Ingo Juergensmann

LetsEncrypt CA Chain Issues with Ejabberd

UPDATE:
It’s not as simple as described below, I’m afraid… It appears that it’s not that easy to obtain new/correct certs from LetsEncrypt that are not cross-signed by the DST Root X3 CA. Additionally, older OpenSSL versions (1.0.x) seem to have problems. So even when you think that your system is now ok, the remote server might refuse to accept your SSL cert. The same is valid for the SSL check on xmpp.net, which seems to be very outdated and beyond repair.

Honestly, I think the solution needs to be provided by LetsEncrypt…


I was having some strange issues on my ejabberd XMPP server the other day: some users complained that they couldn’t connect anymore to the MUC rooms on my server and in the logfiles I discovered some weird warnings about LetsEncrypt certificates being expired – although they were just new and valid until end of December.

It looks like this:

[warning] <0.368.0>@ejabberd_pkix:log_warnings/1:393 Invalid certificate in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem: at line 37: certificate is no longer valid as its expiration date has passed

and…

[warning] <0.18328.2>@ejabberd_s2s_out:process_closed/2:157 Failed to establish outbound s2s connection nerdica.net -> forum.friendi.ca: Stream closed by peer: Your server's certificate is invalid, expired, or not trusted by forum.friendi.ca (not-authorized); bouncing for 237 seconds

When checking with some online tools like SSLlabs or XMPP.net, the result was strange: SSLlabs reported everything was ok, while XMPP.net showed the chain with the X3 and D3 certs as having a short-term validity of a few days.

After some days of fiddling around with the issue, trying to find a solution, it appears that there is a problem in Ejabberd when it finds some old SSL certificates that use the old CA chain. Ejabberd has a really nice feature where you can just configure an SSL cert directory (or a path containing wildcards). Ejabberd then reads all of the SSL certs and compares them to the list of configured domains to see which it will need and which not.

What helped (for me at least) was to delete all expired SSL certs from my directory, download the current CA pem files from LetsEncrypt (see their blog post from September 2020), run update-ca-certificates and do an ejabberdctl restart (instead of just ejabberdctl reload-config). UPDATE: be sure to use dpkg-reconfigure ca-certificates to uncheck the DST Root X3 cert (and others if necessary) before renewing the certs or running update-ca-certificates. Otherwise the update will bring in the expired cert again.
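
A rough sketch of that sequence (the paths match my setup above and will differ on yours; note that openssl only inspects the first certificate in each file):

openssl x509 -noout -enddate -in /etc/letsencrypt.sh/certs/buildd.net/fullchain.pem  # spot expired certs
dpkg-reconfigure ca-certificates  # deselect the expired DST Root X3 cert
update-ca-certificates
ejabberdctl restart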

Currently I see at least two other XMPP domains in my server logs having certificate issues, and in some MUCs there are reports of other domains as well.

Disclaimer: Again, this helped me in my case. I don’t know if this is a bug in Ejabberd, whether this procedure will help in your case, or whether it is the proper solution. But maybe my story will help you solve your issue if you have experienced SSL cert issues in the last few days, especially now that the R3 cert has already expired and the X3 cert is following in a few hours.

29 September, 2021 09:49PM by ij

Ian Jackson

Rust for the Polyglot Programmer

Rust is definitely in the news. I'm definitely on the bandwagon. (To me it feels like I've been wanting something like Rust for many years.) There are a huge number of intro tutorials, and of course there's the Rust Book.

A friend observed to me, though, that while there's a lot of "write your first simple Rust program" there's a dearth of material aimed at the programmer who already knows a dozen diverse languages, and is familiar with computer architecture, basic type theory, and so on. Or indeed, for the impatient and confident reader more generally. I thought I would have a go.

Rust for the Polyglot Programmer is the result.

Compared to much other information about Rust, Rust for the Polyglot Programmer is:

  • Dense: I assume a lot of starting knowledge. Or to look at it another way: I expect my reader to be able to look up and digest non-Rust-specific words or concepts.

  • Broad: I cover not just the language and tools, but also the library ecosystem, development approach, community ideology, and so on.

  • Frank: much material about Rust has a tendency to gloss over or minimise the bad parts. I don't do that. That also frees me to talk about strategies for dealing with the bad parts.

  • Non-neutral: I'm not afraid to recommend particular libraries, for example. I'm not afraid to extol Rust's virtues in the areas where it does well.

  • Terse, and sometimes shallow: I often gloss over what I see as unimportant or fiddly details; instead I provide links to appropriate reference materials.

After reading Rust for the Polyglot Programmer, you won't know everything you need to know to use Rust for any project, but should know where to find it.

Thanks are due to Simon Tatham, Mark Wooding, Daniel Silverstone, and others, for encouragement, and helpful reviews including important corrections. Particular thanks to Mark Wooding for wrestling pandoc and LaTeX into producing a pretty good-looking PDF. Remaining errors are, of course, mine.

Comments are welcome of course, via the Dreamwidth comments or Salsa issue or MR. (If you're making a contribution, please indicate your agreement with the Developer Certificate of Origin.)

edited 2021-09-29 16:58 UTC to fix Salsa link target, and at 17:01 and 17:21 for minor grammar fixes




29 September, 2021 05:21PM

September 28, 2021


Jonathan McDowell

Adding Zigbee to my home automation

SonOff Zigbee Door Sensor

My home automation setup has been fairly static recently; it does what we need and generally works fine. One area I think could be better is controlling it; we have access to Home Assistant on our phones, and the Alexa downstairs can control things, but there are no smart assistants upstairs and sometimes it would be nice to just push a button to turn on the light rather than having to get my phone out. Thanks to the fact that the UK generally doesn’t have a neutral wire in wall switches, that means looking at something battery powered. Which means wifi based devices are a poor choice, and it’s necessary to look at something lower power like Zigbee or Z-Wave.

Zigbee seems like the better choice; it’s a more open standard and there are generally more devices easily available from what I’ve seen (e.g. Philips Hue and IKEA TRÅDFRI). So I bought a couple of Xiaomi Mi Smart Home Wireless Switches, and a CC2530 module and then ignored it for the best part of a year. Finally I got around to flashing the Z-Stack firmware that Koen Kanters kindly provides. (Insert rant about hardware manufacturers that require pay-for tool chains. The CC2530 is even worse because it’s 8051 based, so SDCC should be able to compile for it, but the TI Zigbee libraries are only available in a format suitable for IAR’s embedded workbench.)

Flashing the CC2530 is a bit of faff. I ended up using the CCLib fork by Stephan Hadinger which supports the ESP8266. The nice thing about the CC2530 module is it has 2.54mm pitch pins so nice and easy to jumper up. It then needs a USB/serial dongle to connect it up to a suitable machine, where I ran Zigbee2MQTT. This scares me a bit, because it’s a bunch of node.js pulling in a chunk of stuff off npm. On the flip side, it Just Works and I was able to pair the Xiaomi button with the device and see MQTT messages that I could then use with Home Assistant. So of course I tore down that setup and went and ordered a CC2531 (the variant with USB as part of the chip). The idea here was my test setup was upstairs with my laptop, and I wanted something hooked up in a more permanent fashion.

Once the CC2531 arrived I got distracted writing support for the Desk Viking to support CCLib (and modified it a bit for Python3 and some speed ups). I flashed the dongle up with the Z-Stack Home 1.2 (default) firmware, and plugged it into the house server. At this point I more closely investigated what Home Assistant had to offer in terms of Zigbee integration. It turns out the ZHA integration has support for the ZNP protocol that the TI devices speak (I’m reasonably sure it didn’t when I first looked some time ago), so that seemed like a better option than adding the MQTT layer in the middle.

I hit some complexity passing the dongle (which turns up as /dev/ttyACM0) through to the Home Assistant container. First I needed an override file in /etc/systemd/nspawn/hass.nspawn:

[Files]
Bind=/dev/ttyACM0:/dev/zigbee

[Network]
VirtualEthernet=true

(I’m not clear why the VirtualEthernet needed to exist; without it networking broke entirely but I couldn’t see why it worked with no override file.)

A udev rule on the host to change the ownership of the device file so the root user and dialout group in the container could see it was also necessary, so into /etc/udev/rules.d/70-persistent-serial.rules went:

# Zigbee for HASS
SUBSYSTEM=="tty", ATTRS{idVendor}=="0451", ATTRS{idProduct}=="16a8", SYMLINK+="zigbee", \
	MODE="660", OWNER="1321926676", GROUP="1321926676"

In the container itself I had to switch PrivateDevices=true to PrivateDevices=false in the home-assistant.service file (which took me a while to figure out; yay for locking things down and then needing to use those locked down things).
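
A systemd drop-in is a tidier way to express that override than editing the unit file itself; inside the container, a sketch (the drop-in path follows the standard systemd convention) would be:

# /etc/systemd/system/home-assistant.service.d/override.conf
[Service]
PrivateDevices=false

followed by a systemctl daemon-reload and a restart of the service.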

Finally I added the hass user to the dialout group. At that point I was able to go and add the integration with Home Assistant, and add the button as a new device. Excellent. I did find I needed a newer version of Home Assistant to get support for the button, however. I was still on 2021.1.5 due to upstream dropping support for Python 3.7 and not being prepared to upgrade to Debian 11 until it was actually released, so the version of zha-quirks didn’t have the correct info. Upgrading to Home Assistant 2021.8.7 sorted that out.

There was another slight problem. Range. Really I want to use the button upstairs. The server is downstairs, and most of my internal walls are brick. The solution turned out to be a TRÅDFRI socket, which replaced the existing ESP8266 wifi socket controlling the stair lights. That was close enough to the server to have a decent signal, and it acts as a Zigbee router so provides a strong enough signal for devices upstairs. The normal approach seems to be to have a lot of Zigbee light bulbs, but I have mostly kept overhead lights as uncontrolled - we don’t use them day to day and it provides a nice fallback if the home automation has issues.

Of course installing Zigbee for a single button would seem to be a bit pointless. So I ordered up a Sonoff door sensor to put on the front door (much smaller than expected - those white boxes on the door are it in the picture above). And I have a 4 gang wireless switch ordered to go on the landing wall upstairs.

Now I’ve got a Zigbee setup there are a few more things I’m thinking of adding, where wifi isn’t an option due to the need for battery operation (monitoring the external gas meter springs to mind). The CC2530 probably isn’t suitable for my needs, as I’ll need to write some custom code to handle the bits I want, but there do seem to be some ARM based devices which might well prove suitable…

28 September, 2021 02:42PM


Holger Levsen

20210928-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, klein aber fein / small but beautiful

So the Debian Reunion Hamburg 2021 has been going on for not quite 48h now, and it appears people are having fun, enjoying discussions between fellow Debian people and getting some stuff done as well. I guess I'll write some more about it once the event is over...

Sharing android screens...

For now I just want to share one little gem I learned about yesterday on the hallway track:

$ sudo apt install scrcpy
$ scrcpy

And voila, once again I can type on my phone with a proper keyboard and copy and paste URLs between the two devices. One can even watch videos on the big screen with it :)

(This requires ADB debugging enabled on the phone, but doesn't require root access.)

28 September, 2021 02:07PM

September 27, 2021


Wouter Verhelst

SReview::Video is now Media::Convert

SReview, the video review and transcode tool that I originally wrote for FOSDEM 2017 but which has since been used for debconfs and minidebconfs as well, has long had a sizeable component for inspecting media files with ffprobe, and generating ffmpeg command lines to convert media files from one format to another.

This component, SReview::Video (plus a number of supporting modules), is really not tied very much to the SReview webinterface or the transcoding backend. That is, the webinterface and the transcoding backend obviously use the ffmpeg handling library, but they don't provide any services that SReview::Video could not live without. It did use the configuration API that I wrote for SReview, but disentangling that turned out to be very easy.

As I think SReview::Video is actually an easy to use, flexible API, I decided to refactor it into Media::Convert, and have just uploaded the latter to CPAN itself.

The intent is to refactor the SReview webinterface and transcoding backend so that they will also use Media::Convert instead of SReview::Video in the near future -- otherwise I would end up maintaining everything twice, and then what's the point. This hasn't happened yet, but it will soon (this shouldn't be too difficult after all).

Unfortunately Media::Convert doesn't currently install cleanly from CPAN, since I made it depend on Alien::ffmpeg which currently doesn't work (I'm in communication with the Alien::ffmpeg maintainer in order to get that resolved), so if you want to try it out you'll have to do a few steps manually.

I'll upload it to Debian soon, too.

27 September, 2021 12:31PM

Russ Allbery

Review: The Problem with Work

Review: The Problem with Work, by Kathi Weeks

Publisher: Duke University Press
Copyright: 2011
ISBN: 0-8223-5112-9
Format: Kindle
Pages: 304

One of the assumptions baked deeply into US society (and many others) is that people are largely defined by the work they do, and that work is the primary focus of life. Even in Marxist analysis, which is otherwise critical of how work is economically organized, work itself reigns supreme. This has been part of the feminist critique of both capitalism and Marxism, namely that both devalue domestic labor that has traditionally been unpaid, but even that criticism is normally framed as expanding the definition of work to include more of human activity. A few exceptions aside, we shy away from fundamentally rethinking the centrality of work to human experience.

The Problem with Work begins as a critical analysis of that centrality of work and a history of some less-well-known movements against it. But, more valuably for me, it becomes a discussion of the types and merits of utopian thinking, including why convincing other people is not the only purpose for making a political demand.

The largest problem with this book will be obvious early on: the writing style ranges from unnecessarily complex to nearly unreadable. Here's an excerpt from the first chapter:

The lack of interest in representing the daily grind of work routines in various forms of popular culture is perhaps understandable, as is the tendency among cultural critics to focus on the animation and meaningfulness of commodities rather than the eclipse of laboring activity that Marx identifies as the source of their fetishization (Marx 1976, 164-65). The preference for a level of abstraction that tends not to register either the qualitative dimensions or the hierarchical relations of work can also account for its relative neglect in the field of mainstream economics. But the lack of attention to the lived experiences and political textures of work within political theory would seem to be another matter. Indeed, political theorists tend to be more interested in our lives as citizens and noncitizens, legal subjects and bearers of rights, consumers and spectators, religious devotees and family members, than in our daily lives as workers.

This is only a quarter of a paragraph, and the entire book is written like this.

I don't mind the occasional use of longer words for their precise meanings ("qualitative," "hierarchical") and can tolerate the academic habit of inserting mostly unnecessary citations. I have less patience with the meandering and complex sentences, excessive hedge words ("perhaps," "seem to be," "tend to be"), unnecessarily indirect phrasing ("can also account for" instead of "explains"), or obscure terms that are unnecessary to the sentence (what is "animation of commodities"?). And please have mercy and throw a reader some paragraph breaks.

The writing style means substantial unnecessary effort for the reader, which is why it took me six months to read this book. It stalled all of my non-work non-fiction reading and I'm not sure it was worth the effort. That's unfortunate, because there were several important ideas in here that were new to me.

The first was the overview of the "wages for housework" movement, which I had not previously heard of. It started from the common feminist position that traditional "women's work" is undervalued and advocated taking the next logical step of giving it equality with paid work by making it paid work. This was not successful, obviously, although the increasing prevalence of day care and cleaning services has made it partly true within certain economic classes in an odd and more capitalist way. While I, like Weeks, am dubious this was the right remedy, the observation that household work is essential to support capitalist activity but is unmeasured by GDP and often uncompensated both economically and socially has only become more accurate since the 1970s.

Weeks argues that the usefulness of this movement should not be judged by its lack of success in achieving its demands, which leads to the second interesting point: the role of utopian demands in reframing and expanding a discussion. I normally judge a political demand on its effectiveness at convincing others to grant that demand, by which standard many activist campaigns (such as wages for housework) are unsuccessful. Weeks points out that making a utopian demand changes the way the person making the demand perceives the world, and this can have value even if the demand will never be granted. For example, to demand wages for housework requires rethinking how work is defined, what activities are compensated by the economic system, how such wages would be paid, and the implications for domestic social structures, among other things. That, in turn, helps in questioning assumptions and understanding more about how existing society sustains itself.

Similarly, even if a utopian demand is never granted by society at large, forcing it to be rebutted can produce the same movement in thinking in others. In order to rebut a demand, one has to take it seriously and mount a defense of the premises that would allow one to rebut it. That can open a path to discussing and questioning those premises, which can have long-term persuasive power apart from the specific utopian demand. It's a similar concept as the Overton Window, but with more nuance: the idea isn't solely to move the perceived range of accepted discussion, but to force society to examine its assumptions and premises well enough to defend them, or possibly discover they're harder to defend than one might have thought.

Weeks applies this principle to universal basic income, as a utopian demand that questions the premise that work should be central to personal identity. I kept thinking of the Black Lives Matter movement and the demand to abolish the police, which (at least in popular discussion) is a more recent example than this book but follows many of the same principles. The demand itself is unlikely to be met, but to rebut it requires defending the existence and nature of the police. That in turn leads to questions about the effectiveness of policing, such as clearance rates (which are far lower than one might have assumed). Many more examples came to mind. I've had that experience of discovering problems with my assumptions I'd never considered when debating others, but had not previously linked it with the merits of making demands that may be politically infeasible.

The book closes with an interesting discussion of the types of utopias, starting from the closed utopia in the style of Thomas More in which the author sets up an ideal society. Weeks points out that this sort of utopia tends to collapse with the first impossibility or inconsistency the reader notices. The next step is utopias that acknowledge their own limitations and problems, which are more engaging (she cites Le Guin's The Dispossessed). More conditional than that is the utopian manifesto, which only addresses part of society. The least comprehensive and the most open is the utopian demand, such as wages for housework or universal basic income, which asks for a specific piece of utopia while intentionally leaving unspecified the rest of the society that could achieve it. The demand leaves room to maneuver; one can discuss possible improvements to society that would approach that utopian goal without committing to a single approach.

I wish this book were better-written and easier to read, since as it stands I can't recommend it. There were large sections that I read but didn't have the mental energy to fully decipher or retain, such as the extended discussion of Ernst Bloch and Friedrich Nietzsche in the context of utopias. But that way of thinking about utopian demands and their merits for both the people making them and for those rebutting them, even if they're not politically feasible, will stick with me.

Rating: 5 out of 10

27 September, 2021 04:41AM

September 22, 2021

hackergotchi for Gunnar Wolf

Gunnar Wolf

New book out! «Mecanismos de privacidad y anonimato en redes, una visión transdisciplinaria»

Three years ago, I organized a fun and most interesting colloquium at Facultad de Ingeniería, UNAM about privacy and anonymity online.

I would have loved to share this earlier with the world, but… The university’s processes are quite slow (and, to be fair, I also took quite a bit of time to push things through). But today, I’m finally happy to share the result of that work with all of you. We managed to get 11 of the talks in the colloquium as articles. The back-cover text reads (in Spanish):

We live in an era where human-to-human interactions are more and more often mediated by technology. This, of course, means everything leaves a digital trail, a trail that can follow us relentlessly. Privacy is recognized, however, as a human right — although one that is under growing threat. Anonymity is the best tool to secure it. Throughout history, clear steps have been taken –legally, technically and technologically– to defend it. Various studies point out this is not only a known issue for the network's users, but that a large majority has searched for alternatives to protect their communications' privacy. This book stems from a colloquium held by *Laboratorio de Investigación y Desarrollo de Software Libre* (LIDSOL) of Facultad de Ingeniería, UNAM, towards the end of 2018, where we invited experts from disciplines as far apart as law and systems development, psychology and economics, to contribute with their experiences to a transdisciplinary vision.

If this interests you, you can get the book at our institutional repository.

Oh, and… What about the birds?

In Spanish (Mexican Spanish only, perhaps?), we have a saying, «hay pájaros en el alambre» ("there are birds on the wire"), meaning watch your words, as uninvited people might be listening, like the birds resting on the wires over which phone calls used to be made (back in the day when wiretapping was that easy). I found the design proposed by our editor ingenious and very fitting for our topic!

22 September, 2021 06:26PM

Ian Jackson

Tricky compatibility issue - Rust's io::ErrorKind

This post is about some changes recently made to Rust's ErrorKind, which aims to categorise OS errors in a portable way.

Audiences for this post

  • The educated general reader interested in a case study involving error handling, stability, API design, and/or Rust.
  • Rust users who have tripped over these changes. If this is you, you can cut to the chase and skip to How to fix.

Background and context

Error handling principles

Handling different errors differently is often important (although, sadly, often neglected). For example, if a program tries to read its default configuration file, and gets a "file not found" error, it can proceed with its default configuration, knowing that the user hasn't provided a specific config.

If it gets some other error, it should probably complain and quit, printing the message from the error (and the filename). Otherwise, if the network fileserver is down (say), the program might erroneously run with the default configuration and do something entirely wrong.
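
By way of illustration, here is a minimal sketch of that principle in Rust (this example is mine, not from any particular program): fall back to defaults only on "file not found", and complain loudly about anything else:

    use std::fs;
    use std::io::ErrorKind;

    fn read_config(path: &str) -> String {
        match fs::read_to_string(path) {
            Ok(text) => text,
            Err(e) if e.kind() == ErrorKind::NotFound => {
                // No user-provided config: proceed with the defaults.
                String::from("# default configuration\n")
            }
            // Any other error (permissions, fileserver down, ...) is fatal.
            Err(e) => panic!("could not read config file {}: {}", path, e),
        }
    }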

Rust's portability aims

The Rust programming language tries to make it straightforward to write portable code. Portable error handling is always a bit tricky. One of Rust's facilities in this area is std::io::ErrorKind, an enum which tries to categorise (and, sometimes, enumerate) OS errors. The idea is that a program can check the error kind, and handle the error accordingly.

Because these ErrorKinds are part of the Rust standard library, you don't need to delve down and get the actual underlying operating system error number, and write separate code for each platform you want to support. You can simply check whether the error is ErrorKind::NotFound (or whatever).

Because ErrorKind is so important in many Rust APIs, some code which isn't really doing an OS call can still have to provide an ErrorKind. For this purpose, Rust provides a special category ErrorKind::Other, which doesn't correspond to any particular OS error.
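
For illustration, constructing such an error by hand might look like this sketch (the function and message are made up; the Error::new constructor is the standard way to do this):

    use std::io::{Error, ErrorKind};

    fn not_an_os_error() -> Error {
        // No OS call happened here; Other is the designated catch-all.
        Error::new(ErrorKind::Other, "something application-specific went wrong")
    }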

Rust's stability aims and approach

Another thing Rust tries to do is keep existing code working. More specifically, Rust tries to:

  1. Avoid making changes which would contradict the previously-published documentation of Rust's language and features.
  2. Tell you if you accidentally rely on properties which are not part of the published documentation.

By and large, this has been very successful. It means that if you write code now, and it compiles and runs cleanly, it is quite likely that it will continue to work properly in the future, even as the language and ecosystem evolve.

This blog post is about a case where Rust failed to do (2), above, and, sadly, it turned out that several people had accidentally relied on something the Rust project definitely intended to change. Furthermore, it was something which needed to change. And the new (corrected) way of using the API is not so obvious.

Rust enums, as relevant to io::ErrorKind

(Very briefly:)

When you have a value which is an io::ErrorKind, you can compare it with specific values:

    if error.kind() == ErrorKind::NotFound { ...

But in Rust it's more usual to write something like this (which you can read like a switch statement):

    match error.kind() {
      ErrorKind::NotFound => use_default_configuration(),
      _ => panic!("could not read config file {}: {}", &file, &error),
    }

Here _ means "anything else". Rust insists that match statements are exhaustive, meaning that each one covers all the possibilities. So if you left out the line with the _, it wouldn't compile.

Rust enums can also be marked non_exhaustive, which is a declaration by the API designer that they plan to add more kinds. This has been done for ErrorKind, so the _ is mandatory, even if you write out all the possibilities that exist right now: this ensures that if new ErrorKinds appear, they won't stop your code compiling.
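
To illustrate the mechanism with a made-up enum (not the real ErrorKind):

    // The API author declares that more variants may be added later:
    #[non_exhaustive]
    pub enum Direction {
        North,
        South,
    }

    pub fn describe(d: &Direction) -> &'static str {
        match d {
            Direction::North => "north",
            // In a downstream crate the compiler insists on this arm,
            // so adding a variant later won't stop the code compiling.
            _ => "elsewhere",
        }
    }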

Improving the error categorisation

The set of error categories stabilised in Rust 1.0 was too small. It missed many important kinds of error. This makes writing error-handling code awkward. In any case, we expect to add new error categories occasionally. I set about trying to improve this by proposing new ErrorKinds. This obviously needed considerable community review, which is why it took about 9 months.

The trouble with Other and tests

Rust has to assign an ErrorKind to every OS error, even ones it doesn't really know about. Until recently, it mapped all errors it didn't understand to ErrorKind::Other - reusing the category for "not an OS error at all".

Serious people who write serious code like to have serious tests. In particular, testing error conditions is really important. For example, you might want to test your program's handling of disk full, to make sure it didn't crash, or corrupt files. You would set up some contraption that would simulate a full disk. And then, in your tests, you might check that the error was correct.

But until very recently (still now, in Stable Rust), there was no ErrorKind::StorageFull. You would get ErrorKind::Other. If you were diligent you would dig out the OS error code (and check for ENOSPC on Unix, corresponding Windows errors, etc.). But that's tiresome. The more obvious thing to do is to check that the kind is Other.

Obvious but wrong. ErrorKind is non_exhaustive, implying that more error kinds will appear, and, naturally, these would more finely categorise previously-Other OS errors.

Unfortunately, the documentation note

    Errors that are Other now may move to a different or a new ErrorKind variant in the future.

was only added in May 2020. So the wrongness of the "obvious" approach was, itself, not very obvious. And even with that docs note, there was no compiler warning or anything.

The unfortunate result is that there is a body of code out there in the world which might break any time an error that was previously Other becomes properly categorised. Furthermore, there was nothing stopping new people writing new obvious-but-wrong code.
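
As an aside, the diligent check of the underlying OS error number mentioned above might look something like this sketch (assuming the libc crate on Unix; Windows would need its own constants):

    use std::io;

    fn is_disk_full(e: &io::Error) -> bool {
        // Bypass ErrorKind entirely and inspect the raw errno value.
        e.raw_os_error() == Some(libc::ENOSPC)
    }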

Chosen solution: Uncategorized

The Rust developers wanted an engineered safeguard against the bug of assuming that a particular error shows up as Other. They chose the following solution:

There is a new ErrorKind::Uncategorized which is now used for all OS errors for which there isn't a more specific categorisation. The fallback translation of unknown errors was changed from Other to Uncategorized.

This is de jure justified by the fact that this enum has always been marked non_exhaustive. But in practice, because this bug wasn't previously detected, such code exists in the wild. That code now breaks (usually, in the form of failing test cases). Usually, when Rust starts to detect a particular programming error, it is reported as a new warning, which doesn't break anything. But that's not possible here, because this is a behavioural change.

The new ErrorKind::Uncategorized is marked unstable. This makes it impossible to write code on Stable Rust which insists that an error comes out as Uncategorized. So, one cannot now write code that will break when new ErrorKinds are added. That's the intended effect.

The downside is that this does break old code, and, worse, it is not as clear as it should be what the fixed code looks like.

Alternatives considered and rejected by the Rust developers

Not adding more ErrorKinds

This was not tenable. The existing set is already too small, and error categorisation is in any case expected to improve over time.

Just adding ErrorKinds as had been done before

This would mean occasionally breaking test cases (or, possibly, production code) when an error that was previously Other becomes categorised. The broken code would have been "obvious", but de jure wrong, just as it is now. So this option amounts to expecting this broken code to continue to be written and continuing to break it occasionally.

Somehow using Rust's Edition system

The Rust language has a system to allow language evolution, where code declares its Edition (2015, 2018, 2021). Code from multiple editions can be combined, so that the ecosystem can upgrade gradually.

It's not clear how this could be used for ErrorKind, though. Errors have to be passed between code with different editions. If those different editions had different categorisations, the resulting programs would have incoherent and broken error handling.

Also some of the schemes for making this change would mean that new ErrorKinds could only be stabilised about once every 3 years, which is far too slow.

How to fix code broken by this change

Most main-line error handling code already has a fallback case for unknown errors. Simply replacing any occurrence of Other with _ is right.
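
In my rendering (the handler names here are hypothetical, just to make the sketch self-contained), a match statement fixed this way might look like:

    use std::io::{Error, ErrorKind};

    fn handle(error: &Error) {
        match error.kind() {
            ErrorKind::NotFound => use_default_configuration(),
            // Previously this arm read `ErrorKind::Other => ...`; the
            // catch-all now covers Uncategorized and any future kinds too.
            _ => complain_and_quit(error),
        }
    }

    fn use_default_configuration() {}
    fn complain_and_quit(e: &Error) -> ! {
        panic!("fatal I/O error: {}", e)
    }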

How to fix thorough tests

The tricky problem is tests. Typically, a thorough test case wants to check that the error is "precisely as expected" (as far as the test can tell). Now that unknown errors come out as an unstable Uncategorized variant, that's not so easy. If the test is expecting an error that is currently not categorised, you want to write code that says "if the error is any of the recognised kinds, call it a test failure".

What does "any of the recognised kinds" mean here ? It doesn't meany any of the kinds recognised by the version of the Rust stdlib that is actually in use. That set might get bigger. When the test is compiled and run later, perhaps years later, the error in this test case might indeed be categorised. What you actually mean is "the error must not be any of the kinds which existed when the test was written".

IMO therefore the right solution for such a test case is to cut and paste the current list of stable ErrorKinds into your code. This will seem wrong at first glance, because the list in your code and in Rust can get out of step. But when they do get out of step you want your version, not the stdlib's. So freezing the list at a point in time is precisely right.

You probably only want to maintain one copy of this list, so put it somewhere central in your codebase's test support machinery. Periodically, you can update the list deliberately - and fix any resulting test failures.
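
A sketch of what such central test support machinery might look like (the list here is deliberately abbreviated; a real one would paste in every stable ErrorKind at the time of freezing):

    use std::io::ErrorKind;

    // Frozen snapshot of the stable ErrorKinds known when this was written.
    // Update it only deliberately, and re-examine any resulting failures.
    const KNOWN_KINDS: &[ErrorKind] = &[
        ErrorKind::NotFound,
        ErrorKind::PermissionDenied,
        ErrorKind::ConnectionRefused,
        // ...and the rest of the stable variants, pasted from the docs...
    ];

    /// Panic if `e` is an error that *was* recognised when the list above
    /// was frozen; an uncategorised error is what the test expects.
    pub fn assert_unrecognised(e: &std::io::Error) {
        assert!(
            !KNOWN_KINDS.contains(&e.kind()),
            "error unexpectedly categorised as {:?}: {}",
            e.kind(),
            e
        );
    }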

Unfortunately this approach is not suggested by the documentation. In theory you could work all this out yourself from first principles, given even the situation prior to May 2020, but it seems unlikely that many people have done so. In particular, cutting and pasting the list of recognised errors would seem very unnatural.

Conclusions

This was not an easy problem to solve well. I think Rust has done a plausible job given the various constraints, and the result is technically good.

It is a shame that this change to make the error handling stability more correct caused the most trouble for the most careful people who write the most thorough tests. I also think the docs could be improved.

edited shortly after posting, and again 2021-09-22 16:11 UTC, to fix HTML slips




22 September, 2021 04:10PM

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2021 for Debian

The release of TeX Live 2021 is already half a year behind us, but due to waiting for the Debian/Bullseye release, we haven't updated TeX Live in Debian for quite some time. But the waiting is over: today I uploaded the first packages of TeX Live 2021 to unstable.

All the changes listed in the upstream release blog apply also to the Debian packages.

I expect a few hiccups, but it is good to see it out of the door finally.

Enjoy.

22 September, 2021 05:42AM by Norbert Preining

September 21, 2021

hackergotchi for Daniel Pocock

Daniel Pocock

Mission to Courgent

In 2019, Xavier asked me to deliver a special gift to the people of Courgent in France. The gift arrived at my home in early 2020, just days before many European countries went into lockdown.

The tale of Australian and British airmen who perished in Courgent is extraordinary. Courgent was in the occupied part of France at the time when the crew of LM338 PO-U crashed there. Local citizens gave the crew a full ceremonial burial, just as they would for their own soldiers.

High resolution video (1080p)

You can play the video here or click to download a copy and save it for later:

If you have trouble with playback please try one of the more compressed videos below or download the file, save it and watch it after the download finishes.

Medium resolution video (720p)

Low resolution video (270p)

Some words from Flying Officer Philip Ryan, 1922-1944

The gift was an original printed copy of the Xaverian from 1945. It includes many letters describing Flying Officer Ryan's experiences in the war. In fact, maybe half the pages dedicated to the war were solely the writing of this one former student who never made it back home to Australia.

Some comments from the introduction and his obituary:

We print here a diary written by Phil Ryan, who was killed in a raid over France last year. Phil was one of our Chroniclers in his last years at school. Phil came from Beechworth, and was at Xavier 1931-38.
He was very good at his studies but in debates he really shone. He was a keen debater, one of the keenest we have known. He put forward his arguments well, and he backed them with real conviction. He could ask the most telling questions and he could give the most elaborate replies.
As Editor of the Xaverian I take this opportunity of paying a tribute to Phil’s work. He was always most helpful, and most practical in his suggestions, and invaluable for the regularity of his work.

Flying Officer Ryan had a big interest in written and spoken communication. In fact, in his final letter written days before his death, Flying Officer Ryan makes a strong endorsement of villages like Courgent:

23 April 1943: Instead of going into Sydney from the camp I stayed home and had a sleep. This shows that I was not very impressed with Sydney. The surrounding country is rather pretty, but the city itself is just a maze of streets running in all directions.
We had quite a good trip in the train. The country we passed through was very nice – all beautifully green.
2 July 1944: I spent my leave in London again, but I think that will be about the last time, as the city life here does not appeal to me anymore than it did at home, and there is even more rushing around in London than in Melbourne, and certainly a lot more strap-hanging in buses and waiting in queues.

Flying Officer Ryan and his crew were shot down by Germans a few days later in the early hours of the morning on 8 July 1944.

On his way to Europe, he crossed the international date line on Anzac Day, in other words, a double Anzac Day. It was also Easter Sunday.

Xaverian 1945, Philip Ryan, Xavier College, Courgent

The long road to Courgent

The gift made the journey from Australia to Europe in just a few days in early 2020. From my letterbox, the journey became more colorful and I wasn't able to deliver it until 18 September 2021.

The citizens of Courgent who searched the wreck of LM338 PO-U, at risk from both the occupying Germans and the unexploded bombs, demonstrated courage on a par with that of the Anzacs.

Everybody I met in France was extremely welcoming and the gifts from Xavier were finally delivered to the town hall.

I've prepared a short video with highlights of this visit, to capture the moment we stood shoulder-to-shoulder and commemorated these victims of fascism and delivered this gift in the town hall.

Visiting Courgent

Australians are always welcome to visit war graves in France.

There are some significant dates when ceremonies are more likely to take place and when it is most convenient for the French to welcome visitors.

Some villages hold commemorative events on the anniversary of battles and plane crashes. The people of Courgent have indicated that the next significant commemoration would be on the 80th anniversary of the crash, 7/8 July 2024.

8 May is a public holiday in France, equivalent to Anzac Day in Australia.

11 November has a higher status in France than in Australia. It is a full public holiday in France.

Monsieur Jean-Louis Dupré has maintained a detailed blog about his research into the crew of LM338 PO-U.

Some official announcements are published from time to time on the web page of the village of Courgent.

Official page about the cemetery with crew details from the Commonwealth War Graves Commission.

21 September, 2021 04:00PM

hackergotchi for Clint Adams

Clint Adams

Outrage culture killed my dog

Why can't Debian have embarrassing flamewars like this thread?

Posted on 2021-09-21
Tags: barks

21 September, 2021 03:36PM

hackergotchi for Norbert Preining

Norbert Preining

Plasma 5.23 Anniversary Edition Beta for Debian available for testing

Last week saw the release of the first beta of Plasma 5.23 Anniversary Edition. Work behind the scenes to get this release into Debian as soon as possible has progressed well.

Starting today, we provide binaries of Plasma 5.23 for Debian stable (bullseye), testing, and unstable, in each case for three architectures: amd64, i386, and aarch64.

To test the current beta, please add

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma523/DISTRIBUTION/ ./

to your apt sources (replacing DISTRIBUTION with one of Debian_11 (for Bullseye), Debian_Testing, or Debian_Unstable). For further details see this blog.

Warning

This is a beta release, and let me recall the warning from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 25th Anniversary Edition Beta in a production environment or as your daily desktop. If you do install Plasma 25th Anniversary Edition Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.

— https://kde.org/announcements/plasma/5/5.22.90/

Enjoy, and please report bugs!

21 September, 2021 05:53AM by Norbert Preining

September 20, 2021

Jamie McClelland

Putty Problems

I upgraded my first servers from buster to bullseye over the weekend and it went very smoothly, so a big thank you to all the Debian developers who contributed your labor to the bullseye release!

This morning, however, I hit a snag when the first Windows users tried to log in. It seems like a putty bug (see update below).

First, the user received an error related to algorithm selection. I didn’t record the exact error and simply suggested that the user upgrade.

Once the user was running the latest version of putty (0.76), they received a new error:

Server refused public-key signature despite accepting key!

I turned up debugging on the server and recorded:

Sep 20 13:10:32 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:32 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:32 container001 sshd[1647842]: Postponed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: userauth-request for user XXXXXXXXX service ssh-connection method publickey [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: attempt 2 failures 0 [preauth]
Sep 20 13:10:33 container001 sshd[1647842]: debug1: temporarily_use_uid: 1000/1000 (e=0/0)
Sep 20 13:10:33 container001 sshd[1647842]: debug1: trying public key file /home/XXXXXXXXX/.ssh/authorized_keys
Sep 20 13:10:33 container001 sshd[1647842]: debug1: fd 5 clearing O_NONBLOCK
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: matching key found: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:33 container001 sshd[1647842]: debug1: /home/XXXXXXXXX/.ssh/authorized_keys:6: key options: agent-forwarding port-forwarding pty user-rc x11-forwarding
Sep 20 13:10:33 container001 sshd[1647842]: Accepted key RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4 found at /home/XXXXXXXXX/.ssh/authorized_keys:6
Sep 20 13:10:33 container001 sshd[1647842]: debug1: restore_uid: 0/0
Sep 20 13:10:33 container001 sshd[1647842]: debug1: auth_activate_options: setting new authentication options
Sep 20 13:10:33 container001 sshd[1647842]: Failed publickey for XXXXXXXXX from xxx.xxx.xxx.xxx port 63579 ssh2: RSA SHA256:t3DVS5wZmO7DVwqFc41AvwgS5gx1jDWnR89apGmFpf4
Sep 20 13:10:39 container001 sshd[1647514]: debug1: Forked child 1648153.
Sep 20 13:10:39 container001 sshd[1648153]: debug1: Set /proc/self/oom_score_adj to 0
Sep 20 13:10:39 container001 sshd[1648153]: debug1: rexec start in 5 out 5 newsock 5 pipe 8 sock 9
Sep 20 13:10:39 container001 sshd[1648153]: debug1: inetd sockets after dupping: 4, 4

The server log seems to agree with the message the client received: first the key was accepted, then it was refused.

We generated a new key. We turned off the Windows firewall. We deleted all the putty settings via the Windows registry and re-set them from scratch.

Nothing seemed to work. Then, another Windows user reported no problem (and that user was running putty version 0.74). So the first user downgraded to 0.74 and everything worked fine.

Update

Wow, very impressed with the responsiveness of putty devs!

And, who knew that putty is available in Debian??

Long story short: putty version 0.76 works on linux and, from what I can tell, works for everyone except my one user. Maybe it’s their provider doing some filtering? Maybe a nuance to their version of Windows?

20 September, 2021 12:27PM

hackergotchi for Andy Simpkins

Andy Simpkins

COVID-19

Nearly 4 weeks after contracting COVID-19 I am finally able to return to work…

Yes I have had both Jabs (my 2nd dose was back in June), and this knocked me for six. I spent most of the time in bed, and only started to get up and about 10 days ago.

I passed this on to both my wife and daughter (my wife has also been double jabbed); fortunately they didn't get it as bad as me and have been back at work / school for the last week. I also passed it on to a friend at the UK Debian BBQ, hosted once again by Sledge and Randombird, before I started showing symptoms. Fortunately (after a lot of PCR tests for attendees) it doesn't look like I passed it to anyone else.

I wouldn’t wish this on anyone.

I went on holiday back in August (still in England), thinking that, having had both jabs, we would be fine. We stayed in self-catering accommodation and spent our time outside: we visited open-air museums, walked around gardens, etc. However, we did eat out in relatively empty pubs and restaurants.

And yes we did all have face masks on when we went indoors (although obviously we removed them whilst eating).

I guess that is when I caught this, but have no idea exactly when or where.

Even after vaccination, it is still possible to both catch and spread this virus. Fortunately, having been vaccinated, my resulting illness was (statistically) less bad than it would otherwise have been.

I dread to think how bad I would have been if I had not already been vaccinated; I suspect that I would have ended up in ICU. I am still very tired, and have been told it may take many more weeks to get back to my former self. Other than being overweight, I had been in good health prior to this.

If you are reading this and have NOT yet had a vaccine and one is available for you, please, please get it done.

20 September, 2021 11:07AM by andy

hackergotchi for Holger Levsen

Holger Levsen

20210920-Debian-Reunion-Hamburg-2021

Debian Reunion Hamburg 2021, we still have free beds

We still have some free slots and beds available for the "Debian Reunion Hamburg 2021" taking place in Hamburg at the venue of the 2018 & 2019 MiniDebConfs from Monday, Sep 27 2021 until Friday Oct 1 2021, with Sunday, Sep 26 2021 as arrival day.

So again, Debian people will meet in Hamburg. The exact format is less defined and structured than in previous years; probably we will just be hacking from Monday to Wednesday, have talks on Thursday and a nice day trip on Friday.

Please read https://wiki.debian.org/DebianEvents/de/2021/DebianReunionHamburg and, if you intend to attend, please register there. If you would additionally like to stay on site (in a single room or one shared with another person), please mail me.

I'm looking forward to this event, even though (or maybe also because) it will be much smaller than in previous years. I suppose this will lead to more personal interactions and more intense hacking, though of course it remains to be seen how this works out exactly!

20 September, 2021 07:20AM

September 19, 2021

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, August 2021

In August I was assigned 13.25 hours of work by Freexian's Debian LTS initiative and carried over 6 hours from earlier months. I worked 1.25 hours and will carry over the remainder.

I attended an LTS team meeting, and wrote my report for July 2021, but did not work on any updates.

19 September, 2021 09:28PM

September 18, 2021

hackergotchi for Mike Gabriel

Mike Gabriel

X2Go, Remmina and X2GoKdrive

In this blog post, I will cover a few related but also different topics around X2Go - the GNU/Linux based remote computing framework.

Introduction and Catch Up

For those who haven't come across X2Go so far: with X2Go [0] you can log into remote GNU/Linux machines graphically and launch headless desktop environments or seamless/published applications, or access an already running desktop session (on a local Xserver or running as a headless X2Go desktop session) via X2Go's session shadowing / mirroring feature.

Graphical backend: NXv3

For several years, there was only one graphical backend available in X2Go, the NXv3 software. In NXv3, you have a headless or nested (it can do both) Xserver that has some remote magic built in and is able to transfer the Xserver's graphical data to a remote client (NX proxy). Over the wire, the NX protocol allows for data compression (JPEG, PNG, etc.) and combines it with bitmap caching, so that the overall result is a fast and responsive desktop experience even on high-latency, low-bandwidth connections. This especially applies to X desktop environments that use many native X protocol operations for drawing windows and widgets onto the screen. The more bitmaps involved (e.g. in applications with client-side rendering of window controls and such), the worse the session experience.

The current main maintainer of NXv3 (aka nx-libs [1]) is Ulrich Sibiller. Uli has my and the X2Go community's full appreciation, admiration and gratitude for all the work he does on nx-libs, constantly improving NXv3 without breaking compatibility with legacy use cases (yes, FreeNX is still alive, by the way).

NEW: Alternative Graphical Backend: X2Go Kdrive

Over the past 1.5 years, Oleksandr Shneyder (Alex), co-founder of X2Go, has been working on a re-implementation of an alternative, less X11-dependent graphical backend. The underlying Xserver technology is the kdrive part of the X.org server project. People on GNU/Linux might have used kdrive technology already: The Xephyr nested Xserver uses the kdrive implementation.

The idea of the X2Go Kdrive [2] implementation in X2Go is to provide a headless Xserver on the X2Go Server side for running X11-based desktop sessions inside, while using an X11-agnostic data protocol for sending the graphical desktop data to the client side for rendering. Whereas with NXv3 technology you need a local Xserver on the client side, with X2Go Kdrive you only need a client application that can draw bitmaps into some sort of framebuffer, such as a client-side X11 Xserver, a client-side Wayland compositor or (hold your breath) an HTMLv5 canvas in a web browser.

X2Go Kdrive Client Implementations

During the first half of this year, I tested and DEB-packaged Alex's X2Go HTMLv5 client code [3], and it has been available for testing in the X2Go nightly builds archive for a while now.

Of course, the native X2Go Client application has had X2Go Kdrive support for a while, too, but it requires a Qt5 application in the background, the x2gokdriveclient (which is still only available in X2Go nightly builds or from X2Go Git [4]).

X2Go and Remmina

As currently posted by the Remmina community [5], one of my employees has been working for the last couple of months on finalizing an already existing draft of mine: Remmina Plugin X2Go. This project was contracted by BAUR-ITCS UG (haftungsbeschränkt) a while back and has been financed via X2Go funding from one of their customers. Unfortunately, I never really got around to finalizing the project. Apologies for this.

Daniel Teichmann, who has been in the company for a while now, but just recently switched to an employment model with considerably more work hours per week, now picked up this project two months ago and achieved awesome things on the way.

Daniel Teichmann and Antenore Gatta (Remmina core developer, aka tmow) have been cooperating intensely on this, recently, with the objective of getting the X2Go plugin code merged into Remmina asap. We are pretty close to the first touchdown (i.e. code merge) of this endeavour.

Thanks to Antenore for his support on this. This is much appreciated.

Remmina Plugin X2Go - Current Challenges

The X2Go Plugin for Remmina implementation uses Python X2Go (PyHoca-CLI) under the bonnet and basically does a system call to pyhoca-cli according to the session settings configured in the Remmina session profile UI. When using NXv3-based sessions, the session window appears on the client-side Xserver and immediately gets caught by Remmina and embedded into the Remmina frame (via the Xembed protocol) where its remote sessions are supposed to appear. (Thankfully, GtkSocket is still around in GTK-3.) The GTK-3 experts among you may have noticed: GtkSocket is obsolete and has been removed from GTK-4. Also, GtkSocket support is only available in GTK-3 when using its X11 rendering backend.

For the X2Go Kdrive implementation, we tested a similar approach (embedding the x2gokdriveclient Qt5 window via Xembed/GtkSocket), but it seems that GtkSocket and Qt5 applications don't work well together, and so far we have not succeeded in embedding the Qt5 window of the x2gokdriveclient application into Remmina. Also, this would be a work-around for the bigger problem: long-term, we want to provide X2Go Kdrive support in Remmina not only for Remmina running with GTK-3/X11, but also when Remmina is used natively on top of Wayland.

So, the more sustainable approach for showing an X2Go Kdrive based X2Go session in Remmina would be a GTK-3/4 or a Glib-2.0 + Cairo based rendering client provided as a shared library. This then could be used by Remmina for drawing the session bitmaps into the Remmina session frame.

This would require a port of the x2gokdriveclient Qt code into a non-Qt implementation. However, we are running out of funding to make this happen at the moment.

More Funding Needed for this Journey

As you might guess, a project such as this is something some people do in their spare time, while others do it for a living.

I'd love to continue this project and have Daniel Teichmann continue his work on this, so that Remmina might soon be able to provide native X2Go Kdrive Client support.

If people read this and are interested in supporting such a project, please get in touch [6]. Thanks so much!

light+love
Mike (aka sunweaver)

[0] https://wiki.x2go.org/
[1] https://github.com/ArcticaProject/nx-libs
[2] https://code.x2go.org/gitweb?p=x2gokdrive.git;a=tree
[3] https://code.x2go.org/gitweb?p=x2gohtmlclient.git;a=tree
[4] https://code.x2go.org/gitweb?p=x2gokdriveclient.git;a=tree
[5] https://remmina.org/x2go/
[6] https://das-netzwerkteam.de/

18 September, 2021 05:04PM by sunweaver

September 16, 2021

hackergotchi for Chris Lamb

Chris Lamb

On Colson Whitehead's Harlem Shuffle

Colson Whitehead's latest novel, Harlem Shuffle, was always going to be widely reviewed, if only because his last two books won Pulitzer prizes. Still, after enjoying both The Underground Railroad and The Nickel Boys, I was certainly going to read his next book, regardless of what the critics were saying — indeed, it was actually quite agreeable to float above the manufactured energy of the book's launch.

Saying that, I was encouraged to listen to an interview with the author by Ezra Klein. Now I had heard Whitehead speak once before when he accepted the Orwell Prize in 2020, and once again he came across as a pretty down-to-earth guy. Or if I were to emulate the detached and cynical tone Whitehead embodied in The Nickel Boys, after winning so many literary prizes in the past few years, he has clearly rehearsed how to respond to the cliched questions authors must be asked in every interview. With the obligatory throat-clearing of 'so, how did you get into writing?', for instance, Whitehead replies with his part of the catechism that 'It seemed like being a writer could be a cool job. You could work from home and not talk to people.' The response is the right combination of cute and self-effacing... and with its slight tone-deafness towards enforced isolation, it was no doubt honed before Covid-19.

§

Harlem Shuffle tells three separate stories about Ray Carney, a furniture salesman and 'fence' for stolen goods in New York in the 1960s. Carney doesn't consider himself a genuine criminal though, and there's a certain logic to his relativistic morality. After all, everyone in New York City is on the take in some way, and if some 'lightly used items' in Carney's shop happened to have had 'previous owners', well, that's not quite his problem. 'Nothing solid in the city but the bedrock,' as one character dryly observes. Yet as Ezra pounces on in his NYT interview mentioned above, the focus on the Harlem underworld means there are very few women in the book, and Whitehead's circular response — ah well, it's a book about the criminals at that time! — was a little unsatisfying. Not only did it feel uncharacteristically slippery of someone justly lauded for his unflinching power of observation (after all, it was the author who decided what to write about in the first place), it foreclosed on the opportunity to delve into why the heist and caper genres (from The Killing, The Feather Thief, Ocean's 11, etc.) have historically been a 'male' mode of storytelling.

Perhaps knowing this to be the case, the conversation quickly steered towards Ray Carney's wife, Elizabeth, the only woman in the book who could be said to possess some plausible interiority. The following off-hand remark from Whitehead caught my attention:

My wife is convinced that [Elizabeth] knows everything about Carney's criminal life, and is sort of giving him a pass. And I'm not sure if that's true. I have to have to figure out exactly what she knows and when she knows it and how she feels about it.

I was quite taken by this, although not simply due to its effect on the story itself. As in, it immediately conjured up a charming picture of Whitehead's domestic arrangements: not only does Whitehead's wife feel free to disagree with what one of Whitehead's 'own' characters knows or believes, but Colson has no problem whatsoever sharing that disagreement with the public at large. (It feels somehow natural that Whitehead's wife believes her counterpart knows more than she lets on, whilst Whitehead himself imbues the protagonist's wife with a kind of neo-Victorian innocence.) I'm minded to agree with Whitehead's partner myself, if only due to the passages where Elizabeth is studiously ignoring Carney's otherwise unexplained freak-outs.

But all of these meta-thoughts simply underline just how emancipatory the Death of the Author can be. This product of academic literary criticism (the term was coined by Roland Barthes' 1967 essay of the same name) holds that the original author's intentions, ideas or biographical background carry no especial weight in determining how others should interpret their work. It is usually understood as meaning that a writer's own views are no more valid or 'correct' than the views held by someone else. (As an aside, I've found that most readers who encounter this concept for the first time have been reading books in this way since they were young. But the opposite is invariably true with cinephiles, who often have a bizarre obsession with researching or deciphering the 'true' interpretation of a film.) And with all that in mind, can you think of a more wry example of the freeing (and fun) nature of the Death of the Author than an author's own partner dissenting from their (Pulitzer Prize-winning) husband on the position of a lynchpin character?

§

The 1964 Harlem riot began after James Powell, a 15-year-old African American, was shot and killed by Thomas Gilligan, an NYPD police officer, in front of dozens of witnesses. Gilligan was subsequently cleared by a grand jury.

As it turns out, the reviews for Harlem Shuffle have been almost universally positive, and after reading it in the two days after its release, I would certainly agree it is an above-average book. But it didn't quite take hold of me in the way that The Underground Railroad or The Nickel Boys did, especially the later chapters of The Nickel Boys that were set in contemporary New York and could thus make some (admittedly fairly explicit) connections from the 1960s to the present day — that kind of connection is not there in Harlem Shuffle, or at least I did not pick up on it during my reading.

I can see why one might take exception to that, though. For instance, it is certainly true that the week-long Harlem riot forms a significant part of the plot, and some events in particular are entirely contingent on the ramifications of this momentous event. But it's difficult to argue the riot's impact is truly integral to the story, so not only is this uprising against police brutality almost regarded as a background event, any contemporary allusion to the murder of George Floyd is subsequently watered down. It's nowhere near the historical rubbernecking of Forrest Gump (1994), of course, but that's not a battle you should ever be fighting.

Indeed, whilst a certain smoothness of affect is to be priced into the Whitehead reading experience, my initial overall reaction to Harlem Shuffle was fairly flat, despite all the action and intrigue on the page. The book perhaps belies its origins as a work conceived during quarantine — after all, the book is essentially comprised of three loosely connected novellas, almost as if the unreality and mental turbulence of lockdown prevented the author from performing the psychological 'deep work' of producing a novel-length text with his usual depth of craft. A few other elements chimed with this being a 'lockdown novel' as well, particularly the book's preoccupation with the sheer physicality of the city compared to the usual complex interplay between its architecture and its inhabitants. This felt like it had been directly absorbed into the book from the author walking around his deserted city, and thus being able to take in details for the first time:

The doorways were entrances into different cities—no, different entrances into one vast, secret city. Ever close, adjacent to all you know, just underneath. If you know where to look.

And I can't fail to mention that you can almost touch Whitehead's sublimated hunger to eat out again as well:

Stickups were chops—they cook fast and hot, you’re in and out. A stakeout was ribs—fire down low, slow, taking your time.

[…]

Sometimes when Carney jumped into the Hudson when he was a kid, some of that stuff got into his mouth. The Big Apple Diner served it up and called it coffee.

More seriously, however, the relatively thin personalities of minor characters then reminded me of the simulacrum of Zoom-based relationships, and the essentially unsatisfactory endings to the novellas felt reminiscent of lockdown pseudo-events that simply fizzle out without a bang. One of the stories ties up loose ends with: 'These things were usually enough to terminate a mob war, and they appeared to end the hostilities in this case as well.' They did? Well, okay, I guess.

§

The corner of 125th Street and Morningside Avenue in 2019, the purported location of Carney's fictional furniture store. Signage plays a prominent role in Harlem Shuffle, possibly due to the author's quarantine walks.

Still, it would be unfair to characterise myself as 'disappointed' with the novel, and none of this piece should be taken as really deep criticism. The book certainly was entertaining enough, and pretty funny in places as well:

Carney didn’t have an etiquette book in front of him, but he was sure it was bad manners to sit on a man’s safe.

[…]

The manager of the laundromat was a scrawny man in a saggy undershirt painted with sweat stains. Launderer, heal thyself.

Yet I can't shake the feeling that every book you write is a book that you don't, and so we might need to hold out a little longer for Whitehead's 'George Floyd novel'. (Although it is for others to say how much of this sentiment is the expectations of a White Reader for The Black Author to ventriloquise the pain of 'their' community.)

Some room for personal critique is surely permitted. I dearly missed the junk food energy of the dry and acerbic observations that run through Whitehead's previous work. At one point he had a good line on the model tokenisation that lurks behind 'The First Negro to...' labels, but the callbacks to this idea ceased without any payoff. Similar things happened with the not-so-subtle critiques of the American Dream:

“Entrepreneur?” Pepper said the last part like manure. “That’s just a hustler who pays taxes.”

[…]

One thing I’ve learned in my job is that life is cheap, and when things start getting expensive, it gets cheaper still.

Ultimately, though, I think I just wanted more. I wanted a deeper exploration of how the real power in New York is not wielded by individual street hoodlums or even the cops but in the form of real estate, essentially serving as a synecdoche for Capital as a whole. (A recent take on this can be felt in Jed Rothstein's 2021 documentary, WeWork: Or the Making and Breaking of a $47 Billion Unicorn… and it is perhaps pertinent to remember that the US President at the time this novel was written was affecting to be a real estate tycoon.) Indeed, just like the concluding scenes of J. J. Connolly's Layer Cake, although you can certainly pull off a cool heist against the Man, power ultimately resides in those who control the means of production... and a homespun furniture salesman on the corner of 125th & Morningside just ain't that. There are some nods to this kind of analysis in the conclusion of the final story ('Their heist unwound as if it had never happened, and Van Wyck kept throwing up buildings.'), but, again, I would have simply liked more.

And when I then attempted to file this book away into the broader media landscape, given the current cultural visibility of 1960s pop culture (e.g. One Night in Miami (2020), Judas and the Black Messiah (2021), Summer of Soul (2021), etc.), Harlem Shuffle also seemed like a missed opportunity to critically analyse our (highly-qualified) longing for the civil rights era. I can certainly understand why we might look fondly on the cultural products from a period when politics was less alienated, when society was less atomised, and when it was still possible to imagine meaningful change, but in this dimension at least, Harlem Shuffle seems to merely contribute to this nostalgic escapism.

16 September, 2021 04:10PM