July 04, 2020

Russ Allbery

Review: The Light Brigade

Review: The Light Brigade, by Kameron Hurley

Publisher: Saga
Copyright: 2019
ISBN: 1-4814-4798-X
Format: Kindle
Pages: 355

In the wake of the Blink, which left a giant crater where São Paulo was, Dietz signed up for the military. To be a hero. To satisfy an oath of vengeance. To kill aliens.

Corporations have consumed the governments that used to run Earth and have divided the world between them. Dietz's family, before the Blink, were ghouls in Tene-Silva territory, non-citizens who scavenged a precarious life on the margins. Citizenship is a reward for loyalty and a mechanism of control. The only people who don't fit into the corporate framework are the Martians, former colonists who went dark for ten years and re-emerged as a splinter group offering to use their superior technology to repair environmental damage to the northern hemisphere caused by corporate wars. When the Blink happens, apparently done with technology far beyond what the corporations have, corporate war with the Martians is the unsurprising result.

Long-time SF readers will immediately recognize The Light Brigade as a response to Starship Troopers with far more cynical world-building. For the first few chapters, the parallelism is very strong, down to the destruction of a large South American city (São Paulo instead of Buenos Aires), a naive military volunteer, and horrific basic training. But, rather than dropships, the soldiers in Dietz's world are sent into battle via, essentially, Star Trek transporters. These still very experimental transporters send Dietz to a different mission than the one in the briefing.

Advance warning that I'm going to talk about what's happening with Dietz's drops below. It's a spoiler, but you would find out not far into the book and I don't think it ruins anything important. (On the contrary, it may give you an incentive to stick through the slow and unappealing first few chapters.)

I had so many suspension of disbelief problems with this book. So many.

This starts with the technology. The core piece of world-building is Star Trek transporters, so fine, we're not talking about hard physics. Every SF story gets one or two free bits of impossible technology, and Hurley does a good job showing the transporters through a jaundiced military eye. But, late in the book, this technology devolves into one of my least-favorite bits of SF hand-waving that, for me, destroyed that gritty edge.

Technology problems go beyond the transporters. One of the bits of horror in basic training is, essentially, torture simulators, whose goal is apparently to teach soldiers to dissociate (not that the book calls it that). One problem is that I never understood why a military would want to teach dissociation to so many people, but a deeper problem is that the mechanics of this simulation made no sense. Dietz's training in this simulator is a significant ongoing plot point, and it kept feeling like it was cribbed from The Matrix rather than something translatable into how computers work.

Technology was the more minor suspension of disbelief problem, though. The larger problem was the political and social world-building.

Hurley constructs a grim, totalitarian future, which is a fine world-building choice although I think it robs some nuance from the story she is telling about how militaries lie to soldiers. But the totalitarian model she uses is one of near-total information control. People believe what the corporations tell them to believe, or at least are indifferent to it. Huge world events (with major plot significance) are distorted or outright lies, and those lies are apparently believed by everyone. The skepticism that exists is limited to grumbling about leadership competence and cynicism about motives, not disagreement with the provided history. This is critical to the story; it's a driver behind Dietz's character growth and is required to set up the story's conclusion.

This is a model of totalitarianism that's familiar from Orwell's Nineteen Eighty-Four. The problem: The Internet broke this model. You now need North Korean levels of isolation to pull off total message control, which is incompatible with the social structure or technology level that Hurley shows.

You may be objecting that the modern world is full of people who believe outrageous propaganda against all evidence. But the world-building problem is not that some people believe the corporate propaganda. It's that everyone does. Modern totalitarians have stopped trying to achieve uniformity (because it stopped working) and instead make the disagreement part of the appeal. You no longer get half a country to believe a lie by ensuring they never hear the truth. Instead, you equate belief in the lie with loyalty to a social or political group, and belief in the truth with affiliation with some enemy. This goes hand in hand with "flooding the zone" with disinformation and fakes and wild stories until people's belief in the accessibility of objective truth is worn down and all facts become ideological statements. This does work, all too well, but it relies on more information, not less. (See Zeynep Tufekci's excellent Twitter and Tear Gas if you're unfamiliar with this analysis.) In that world, Dietz would have heard the official history, the true history, and all sorts of wild alternative histories, making correct belief a matter of political loyalty. There is no sign of that.

Hurley does gesture towards some technology to try to explain this surprising corporate effectiveness. All the soldiers have implants, and military censors can supposedly listen in at any time. But, in the story, this censorship is primarily aimed at grumbling and local disloyalty. There is no sign that it's being used to keep knowledge of significant facts from spreading, nor is there any sign of the same control among the general population. It's stated in the story that the censors can't even keep up with soldiers; one would have to get unlucky to be caught. And yet the corporation maintains preternatural information control.

The place this bugged me the most is around knowledge of the current date. For reasons that will be obvious in a moment, Dietz badly wants to know what month and year it is, but is unable to find this information anywhere. This appears to be intentional; Tene-Silva has a good (albeit not that urgent) reason to keep soldiers from knowing the date. But I don't think Hurley realizes just how hard that is.

Take a look around the computer you're using to read this and think about how many places the date shows up. Apart from the ubiquitous clock and calendar app, there are dates on every file, dates on every news story, dates on search results, dates in instant messages, dates on email messages and voice mail... they're everywhere. And it's not just the computer. The soldiers can easily smuggle prohibited outside goods into the base; knowledge of the date would be much easier. And even if Dietz doesn't want to ask anyone, there are opportunities to go off base during missions. Somehow every newspaper and every news bulletin has its dates suppressed? It's not credible, and it threw me straight out of the story.

These world-building problems are unfortunate, since at the heart of The Light Brigade is a (spoiler alert) well-constructed time travel story that I would have otherwise enjoyed. Dietz is being tossed around in time with each jump. And, unlike some of these stories, Hurley does not take the escape hatch of alternate worlds or possible futures. There is a single coherent timeline that Dietz and the reader experience in one order and the rest of the world experiences in a different order.

The construction of this timeline is incredibly well-done. Time can only disconnect at jump and return points, and Hurley maintains tight control over the number of unresolved connections. At every point in the story, I could list all of the unresolved discontinuities and enjoy their complexity and implications without feeling overwhelmed by them. Dietz gains some foreknowledge, but in a way that's wildly erratic and hard to piece together fast enough for a single soldier to do anything about the plot. The world spins out of control with foreshadowing of grimmer and grimmer events, and then Hurley pulls it back together in a thoroughly satisfying interweaving of long-anticipated scenes and major surprises.

I'm not usually a fan of time travel stories, but this is one of the best I've read. It also has a satisfying emotional conclusion (albeit marred for me by some unbelievable mystical technobabble), which is impressive given how awful and nasty Hurley makes this world. Dietz is a great first-person narrator, believably naive and cynical by turns, and piecing together the story structure alongside the protagonist built my emotional attachment to Dietz's character arc. Hurley writes the emotional dynamics of soldiers thoughtfully and well: shit-talking, fights, sudden moments of connection, shared cynicism over degenerating conditions, and the underlying growth of squad loyalty that takes over other motivations and becomes the reason to keep on fighting.

Hurley also pulled off a neat homage to (and improvement on) Starship Troopers that caught me entirely by surprise and that I've hopefully not spoiled.

This is a solid science fiction novel if you can handle the world-building. I couldn't, but I understand why it was nominated for the Hugo and Clarke awards. Recommended if you're less picky about technological and social believability than I am, although content warning for a lot of bloody violence and death (including against children) and a horrifically depressing world.

Rating: 6 out of 10

04 July, 2020 03:49AM

July 03, 2020

Michael Prokop

Grml 2020.06 – Codename Ausgehfuahangl

We did it again™: at the end of June we released Grml 2020.06, codename Ausgehfuahangl. This Grml release (a Linux live system for system administrators) is based on Debian/testing (AKA bullseye) and provides current software packages as of June, incorporates up-to-date hardware support and fixes known issues from previous Grml releases.

I am especially fond of our cloud-init and qemu-guest-agent integration, which makes usage and automation in virtual environments like Proxmox VE much more comfortable.

Once the Qemu Guest Agent setting is enabled in the VM options (also see the Proxmox wiki), you’ll see IP address information in the VM summary:

Screenshot of qemu guest agent integration
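If you prefer the command line over the web UI, the same option can be toggled with Proxmox’s qm tool. A minimal sketch, with the VM ID 100 as a placeholder:

# enable the QEMU guest agent option for VM 100, then power-cycle the VM
qm set 100 --agent enabled=1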

Using a cloud-init drive allows using an SSH key for login as user "grml", and you can control network settings as well:

Screenshot of cloud-init integration
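For reference, attaching and seeding the cloud-init drive can also be done from the Proxmox shell. A sketch, with VM ID, storage name and key path as placeholders:

# attach a cloud-init drive to VM 100 on the "local-lvm" storage
qm set 100 --ide2 local-lvm:cloudinit
# inject an SSH public key and use DHCP on the first network interface
qm set 100 --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp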

It was fun to focus and work on this new Grml release together with Darsha, and we hope you enjoy the new Grml release as much as we do!

03 July, 2020 02:32PM by mika

Norbert Preining

KDE/Plasma Status Update 2020-07-04

Great timing for the 4th of July: here is another status update of KDE/Plasma for Debian. Short summary: everything is now available for Debian sid and testing, for both i386 and amd64 architectures!

With Qt 5.14 arriving in Debian/testing, and some tweaks here and there, we finally have all the packages (2 additional deps, 82 frameworks, 47 Plasma, 216 Apps) built on both Debian unstable and Debian testing, for both amd64 and i386 architectures. Again, big thanks to OBS!

Repositories:
For Unstable:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Unstable/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./

For Testing:

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/other-deps/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/plasma519/Debian_Testing/ ./
deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Testing/ ./

As usual, don’t forget that you need to import my OBS gpg key: obs-npreining.asc; best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
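On the shell, that boils down to something like the following sketch, where <key-url> stands for the link behind obs-npreining.asc above and the list file name is arbitrary:

# fetch the OBS signing key (replace <key-url> with the actual link)
curl -fsSL <key-url> -o /etc/apt/trusted.gpg.d/obs-npreining.asc
# add one of the repository lines from above, then refresh
echo "deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/frameworks/Debian_Unstable/ ./" > /etc/apt/sources.list.d/npreining-kde.list
apt update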

Enjoy.

03 July, 2020 02:06PM by Norbert Preining

Dirk Eddelbuettel

#28: Welcome RSPM and test-drive with Bionic and Focal

Welcome to the 28th post in the relatively random R recommendations series, or R4 for short. Our last post was a “double entry” in this R4 series and the newer T4 video series and covered a topic touched upon in this R4 series multiple times: easy binary install, especially on Ubuntu.

That post already previewed the newest kid on the block: RStudio’s RSPM, now formally announced. In the post we were only able to show Ubuntu 18.04 aka bionic. With the formal release of RSPM, support has been added for Ubuntu 20.04 aka focal—and we are happy to announce that of course we added a corresponding Rocker r-rspm container. So you can now take full advantage of RSPM either via docker pull rocker/r-rspm:18.04 or via docker pull rocker/r-rspm:20.04, covering the two most recent LTS releases.

RSPM is a nice accomplishment. Covering multiple Linux distributions is an excellent achievement. Allowing users to reason in terms of the CRAN packages (i.e. installing xml2, not r-cran-xml2) eases use. Doing it via the standard R command install.packages() (or wrappers around it like our install.r from the littler package) is very good too and an excellent technical achievement.
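To make that concrete, here is a minimal sketch of what using RSPM from R looks like; the repository URL here is my assumption, the RSPM documentation has the authoritative one:

## point R at an RSPM binary repository (URL is an assumption, check the RSPM docs)
options(repos = c(CRAN = "https://packagemanager.rstudio.com/all/__linux__/focal/latest"))
install.packages("xml2")   ## resolves to a pre-built binary rather than a source build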

There is, as best as I can tell, only one shortcoming, along with one small bit of false advertising. The shortcoming is technical. By bringing the package installation into the user application domain, it is separated from the system and lacks integration with system libraries. What do I mean here? If you were to add R to a plain Ubuntu container, say 18.04 or 20.04, then add the few lines to support RSPM and install xml2, it would install. And fail. Why? Because the system library libxml2 does not get installed with the RSPM package—whereas the .deb from the distribution or PPAs does. So to help with some popular packages I added libxml2, libunits and a few more for geospatial work to the rocker/r-rspm containers. Being already present ensures packages xml2 and units can run immediately. Please file issue tickets at the Rocker repo if you come across other missing libraries we could preload. (A related minor nag is incomplete coverage. At least one of my CRAN packages does not (yet?) come as an RSPM binary. Then again, CRAN has 16k packages, and the RSPM coverage is much wider than the PPA one. But completeness would be neat. The final nag is lack of Debian support, which seems, well, odd.)

So what about the small bit of false advertising? Well, it is claimed that RSPM makes installation “so much faster on Linux”. True, faster than the slowest possible installation from source. Also easier. But we had numerous posts on this blog showing other speed gains: using ccache, and, of course, using binaries. And as the initial video mentioned above showed, installing from the PPAs is also faster than via RSPM. That is easy to replicate: just set up the rocker/r-ubuntu:20.04 (or 18.04) container alongside the rocker/r-rspm:20.04 (or also 18.04) container, and then time install.r rstan (or install.r tinyverse) in the RSPM one against apt -y update; apt install -y r-cran-rstan (or ... r-cran-tinyverse), as sketched below. In every case I tried, the installation using binaries from the PPA was still faster by a few seconds. Not that it matters greatly: both are very, very quick compared to source installation (as e.g. shown here in 2017 (!!)) but the standard Ubuntu .deb installation is simply faster than using RSPM. (Likely due to better CDN usage, so this may change over time. Neither method appears to do downloads in parallel, so there is scope for both to do better.)
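For those who want to replicate the comparison, the two timings look roughly like this (a sketch; the package choice is arbitrary):

# time a binary install via RSPM in the rocker/r-rspm container
docker run --rm rocker/r-rspm:20.04 sh -c 'time install.r rstan'
# time a binary install from the PPA in the rocker/r-ubuntu container
docker run --rm rocker/r-ubuntu:20.04 sh -c 'apt update -qq; time apt install -y r-cran-rstan'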

So in sum: welcome to RSPM, a nice new tool—and feel free to “drive” it using rocker/r-rspm:18.04 or rocker/r-rspm:20.04.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

03 July, 2020 01:05PM

July 02, 2020

Ben Hutchings

Debian LTS work, June 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and worked all 20 hours this month.

I sent a final request for testing for the next update to Linux 3.16 in jessie. I also prepared an update to Linux 4.9, included in both jessie and stretch. I completed backporting of kernel changes related to CVE-2020-0543, which was still under embargo, to Linux 3.16.

Finally I uploaded the updates for Linux 3.16 and 4.9, and issued DLA-2241 and DLA-2242.

The end of June marked the end of long-term support for Debian 8 "jessie" and for Linux 3.16. I am no longer maintaining any stable kernel branches, but will continue contributing to them as part of my work on Debian 9 "stretch" LTS and other Debian releases.

02 July, 2020 07:25PM

Mike Gabriel

My Work on Debian LTS (June 2020)

In June 2020, I have worked on the Debian LTS project for 8 hours (of 8 hours planned).

LTS Work

  • frontdesk: CVE bug triaging for Debian jessie LTS: mailman, alpine, python3.4, redis, pound, pcre3, ngircd, mutt, lynis, libvncserver, cinder, bison, batik.
  • upload to jessie-security: libvncserver (DLA-2264-1 [1], 9 CVEs)
  • upload to jessie-security: mailman (DLA-2265-1 [2], 1 CVE)
  • upload to jessie-security: mutt (DLA-2268-1 [3] and DLA-2268-2 [4], 2 CVEs)

Other security related work for Debian

  • make sure all security fixes for php-horde-* are also in Debian unstable
  • upload freerdp2 2.1.2+dfsg-1 to unstable (9 CVEs)

References

02 July, 2020 02:20PM by sunweaver

Russell Coker

Desklab Portable USB-C Monitor

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop; it was seen as a 1920*1080 DisplayPort monitor. The adapter is specified as supporting 4K so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers; it can be a real pain getting a monitor and power cable to a rack-mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version: 4W at a nominal 5V works out to 0.8A, right in the middle of what I measured. Otherwise the stated current requirements don’t come close to matching what I’ve measured.

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast as expected). When I have a mains-powered USB charger (for a laptop and rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones; whether a phone will take the option of an external display depends on its configuration, and some phones may support an external display but not touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape-layout monitor, which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface, so I had to find the icons for the programs I wanted to run in an unsorted list with no usable search (the search interface of the app list brings up the keyboard, which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout, which would make the keyboard take something like 25% of the screen, but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full-screen mode for an app, which is really annoying to get out of (you have to use a drop-down from the top of the screen); full screen doesn’t make sense for a display this large. Overall the GUI is a bit clunky; imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy and it’s easy to resize windows to fit several on the desktop. Resizing windows in the Huawei GUI doesn’t seem easy (although I might be missing some things), and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering two models for sale: 4K resolution for $399US and FullHD for $299US. I got the 4K version, which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore they can’t be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don’t know if all Android devices support that; it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off, it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even when it’s in the laptop format; a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen; I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. The monitor is really good for watching video as it has the speakers in good locations for stereo sound; it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

02 July, 2020 10:42AM by etbe

Emmanuel Kasper

Test a webcam from the command line on Linux with VLC

Since this info was too well hidden on the internet, here it is:
cvlc v4l2:///dev/video0
and there you go.
If you have multiple cameras connected, you can try /dev/video0 up to /dev/video5
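To check which device nodes actually belong to cameras, v4l2-ctl from the v4l-utils package will list them:

v4l2-ctl --list-devices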

02 July, 2020 07:30AM by Emmanuel Kasper (noreply@blogger.com)

Evgeni Golov

Automatically renaming the default git branch to "devel"

It seems GitHub is planning to rename the default branch for newly created repositories from "master" to "main". It's incredible how much positive PR you can get with a one-line configuration change, while still working together with the ICE.

However, this post is not about bashing GitHub.

Changing the default branch for newly created repositories is good. And you also should do that for the ones you create with git init locally. But what about all the repositories out there? GitHub surely won't force-rename those branches, but we can!
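For the local git init part, recent git versions (2.28 and later, if I remember correctly) let you configure the initial branch name once and be done with it:

# make git init create "devel" as the initial branch from now on
git config --global init.defaultBranch devel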

Ian will do this as he touches the individual repositories, but I tend to forget things unless I do them immediately…

Oh, so this is another "automate everything with an API" post? Yes, yes it is!

And yes, I am going to use GitHub here, but something similar should be implementable on any git hosting platform that has an API.

Of course, if you have SSH access to the repositories, you can also just edit HEAD in a for loop in bash, but that would be boring ;-)

I'm going with devel btw, as I'm already used to develop in the Foreman project and devel in Ansible.

acquire credentials

My GitHub account is 2FA-enabled, so I can't just use my username and password in a basic HTTP API client. So the first step is to acquire a personal access token that can be used instead. Of course I could also have implemented OAuth2 in my lousy script, but ain't nobody got time for that.

The token will require the "repo" permission to be able to change repositories.

And we'll need some boilerplate code (I'm using Python3 and requests, but anything else will work too):

#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcdef'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True

This will store our username, token, and create a requests.Session so that we don't have to pass the same data all the time.

get a list of repositories to change

I want to change all my own repos that are not archived, not forks, and actually have the default branch set to master, YMMV.

As we're authenticated, we can just list the repositories of the currently authenticated user, and limit them to "owner" only.

GitHub uses pagination for their API, so we'll have to loop until we get to the end of the repository list.

repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None

create a new devel branch and mark it as default

Now that we know which repos to change, we need to fetch the SHA of the current master, create a new devel branch pointing at the same commit and then set that new branch as the default branch.

for repo in repos_to_change:
    # fetch the SHA of the commit the current master points at
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    # create a new devel branch pointing at the same commit
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    # mark devel as the new default branch
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    # and delete the old master
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

I've also opted to actually delete the old master, as I think that's the safest way to let the users know that it's gone. Letting it rot in the repository would mean people can still pull and won't notice that there are no changes anymore as the default branch moved to devel.

So…

announcement

I've updated all my (those in the evgeni namespace) non-archived repositories to have devel instead of master as the default branch.

Have fun updating!

code

#!/usr/bin/env python3
import requests
BASE='https://api.github.com'
USER='evgeni'
TOKEN='abcd'
headers = {'User-Agent': '@{}'.format(USER)}
auth = (USER, TOKEN)
session = requests.Session()
session.auth = auth
session.headers.update(headers)
session.verify = True
repos_to_change = []
url = '{}/user/repos?type=owner'.format(BASE)
while url:
    r = session.get(url)
    if r.ok:
        repos = r.json()
        for repo in repos:
            if not repo['archived'] and not repo['fork'] and repo['default_branch'] == 'master':
                repos_to_change.append(repo['name'])
        if 'next' in r.links:
            url = r.links['next']['url']
        else:
            url = None
    else:
        url = None
for repo in repos_to_change:
    # fetch the SHA of the commit the current master points at
    master_data = session.get('{}/repos/{}/{}/git/ref/heads/master'.format(BASE, USER, repo)).json()
    # create a new devel branch pointing at the same commit
    data = {'ref': 'refs/heads/devel', 'sha': master_data['object']['sha']}
    session.post('{}/repos/{}/{}/git/refs'.format(BASE, USER, repo), json=data)
    # mark devel as the new default branch
    default_branch_data = {'default_branch': 'devel'}
    session.patch('{}/repos/{}/{}'.format(BASE, USER, repo), json=default_branch_data)
    # and delete the old master
    session.delete('{}/repos/{}/{}/git/refs/heads/{}'.format(BASE, USER, repo, 'master'))

02 July, 2020 07:12AM by evgeni

Russell Coker

Isolating PHP Web Sites

If you have multiple PHP web sites on a server in a default configuration, they will all be able to read each other’s files. If you have multiple PHP web sites that have stored data or passwords for databases in configuration files, then there are significant problems if they aren’t all trusted. Even if the sites are all trusted (i.e. the same person configures them all), if there is a security problem in one site it’s ideal to prevent that being used to immediately attack all the other sites.

mpm_itk

The first thing I tried was mpm_itk [1]. This is a version of the traditional “prefork” module for Apache that has one process for each HTTP connection. When it’s installed you just put the directive “AssignUserID USER GROUP” in your VirtualHost section and that virtual host runs as the user:group in question. It will work with any Apache module that works with mpm_prefork. In my experiment with mpm_itk I first tried running with a different UID for each site, but that conflicted with the pagespeed module [2]. The pagespeed module optimises HTML and CSS files to improve performance and it has a directory tree where it stores cached versions of some of the files. It doesn’t like working with copies of itself under different UIDs writing to that tree. This isn’t a real problem; setting up the different PHP files with database passwords to be read by the desired group is easy enough. So I just ran each site with a different GID but used the same UID for all of them.

The first problem with mpm_itk is that the mpm_prefork code it’s based on is the slowest MPM available, and it is also incompatible with HTTP/2. A minor issue of mpm_itk is that it makes Apache take ages to stop or restart; I don’t know why and can’t be certain it’s not a configuration error on my part. As an aside, here is a site for testing your server’s support for HTTP/2 [3]. To enable HTTP/2 you have to be running mpm_event and enable the “http2” module. Then for every virtual host that is to support it (generally all https virtual hosts) put the line “Protocols h2 h2c http/1.1” in the virtual host configuration.

A good feature of mpm_itk is that it has everything for the site running under the same UID, all Apache modules and Apache itself. So there’s no issue of one thing getting access to a file and another not getting access.

After a trial I decided not to keep using mpm_itk because I want HTTP/2 support.

php-fpm Pools

The Apache PHP module depends on mpm_prefork so it also has the issues of not working with HTTP/2 and of causing the web server to be slow. The solution is php-fpm, a separate server for running PHP code that uses the fastcgi protocol to talk to Apache. Here’s a link to the upstream documentation for php-fpm [4]. In Debian this is in the php7.3-fpm package.

In Debian the directory /etc/php/7.3/fpm/pool.d has the configuration for “pools”. Below is an example of a configuration file for a pool:

# cat /etc/php/7.3/fpm/pool.d/example.com.conf
[example.com]
user = example.com
group = example.com
listen = /run/php/php7.3-example.com.sock
listen.owner = www-data
listen.group = www-data
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Here is the upstream documentation for fpm configuration [5].
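One prerequisite implied above: Apache talks to php-fpm via the proxy_fcgi module, so that has to be enabled first. A sketch for Debian systems:

# enable the fastcgi proxy module and restart Apache
a2enmod proxy_fcgi
systemctl restart apache2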

Then for the Apache configuration for the site in question you could have something like the following:

ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-example.com.sock|fcgi://localhost/usr/share/wordpress/"

The “|fcgi://localhost” part is just part of the way of specifying a Unix domain socket. From the Apache Wiki it appears that the method for configuring TCP connections is more obvious [6]. I chose Unix domain sockets because they allow putting the domain name in the socket address. Matching domains for the web server to port numbers is something that’s likely to be error-prone, while matching based on domain names is easier to check and also easier to put in Apache configuration macros.
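To illustrate that last point, with mod_macro the per-site boilerplate can be collapsed into a single definition. This is a sketch with the macro name and document root invented for the example, not a drop-in config:

# define once: wire a virtual host to the php-fpm pool socket named after its domain
<Macro PHPSite $domain>
    ProxyPassMatch "^/(.*\.php(/.*)?)$" "unix:/run/php/php7.3-$domain.sock|fcgi://localhost/var/www/$domain/"
</Macro>

# then, per site:
Use PHPSite example.com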

There was some additional hassle with getting Apache to read the files created by PHP processes (the options include running PHP scripts with the www-data group, having SETGID directories for storing files, and having world-readable files). But this got things basically working.

Nginx

My Google searches for running multiple PHP sites under different UIDs didn’t turn up any good hits. It was only after I found the DigitalOcean page on doing this with Nginx [7] that I knew what to search for to find the way of doing it in Apache.

02 July, 2020 02:12AM by etbe

July 01, 2020

François Marier

Automated MythTV-related maintenance tasks

Here is the daily/weekly cronjob I put together over the years to perform MythTV-related maintenance tasks on my backend server.

The first part performs a database backup:

5 1 * * *  mythtv  /usr/share/mythtv/mythconverg_backup.pl

which I previously configured by putting the following in /home/mythtv/.mythtv/backuprc:

DBBackupDirectory=/var/backups/mythtv

and creating a new directory for it:

mkdir /var/backups/mythtv
chown mythtv:mythtv /var/backups/mythtv
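A backup is only useful if it can be restored, and MythTV ships a matching mythconverg_restore.pl contrib script. A sketch with a made-up dump name; I'm assuming the --filename option here, so check the script's --help first:

/usr/share/mythtv/mythconverg_restore.pl --filename mythconverg-1234-20200701120000.sql.gz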

The second part of /etc/cron.d/mythtv-maintenance runs a contrib script to optimize the database tables:

10 1 * * *  mythtv  /usr/bin/chronic /usr/share/doc/mythtv-backend/contrib/maintenance/optimize_mythdb.pl

once a day. It requires the libmythtv-perl and libxml-simple-perl packages to be installed on Debian-based systems.

It is quickly followed by a check of the recordings and automatic repair of the seektable (when possible):

20 1 * * *  mythtv  /usr/bin/chronic /usr/bin/mythutil --checkrecordings --fixseektable

Next, I force a scan of the music and video databases to pick up anything new that may have been added externally via NFS mounts:

30 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanvideos
31 1 * * *  mythtv  /usr/bin/mythutil --quiet --scanmusic

Finally, I defragment the XFS partition for two hours every day except Friday:

45 1 * * 1-4,6-7  root  /usr/sbin/xfs_fsr

and resync the RAID-1 arrays once a week to ensure that they stay consistent and error-free:

15 3 * * 2  root  /usr/local/sbin/raid_parity_check md0
15 3 * * 4  root  /usr/local/sbin/raid_parity_check md2

using a trivial script.

In addition to that cronjob, I also have smartmontools run daily short and weekly long SMART tests via this blurb in /etc/smartd.conf:

/dev/sda -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)
/dev/sdb -a -d ata -o on -S on -s (S/../.././04|L/../../6/05)

If there are any other automated maintenance tasks you do on your MythTV server, please leave a comment!

01 July, 2020 06:07PM

Joachim Breitner

Template Haskell recompilation

I was wondering: What happens if I have a Haskell module with Template Haskell that embeds some information from the environment (time, environment variables)? Will such a module be reliably recompiled? And what if it gets recompiled, but the source code produced by Template Haskell is actually unchanged (e.g., because the environment variable has not changed), will all depending modules be recompiled (which would be bad)?

Here is a quick experiment, using GHC-8.8:

/tmp/th-recom-test $ cat Foo.hs
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -fforce-recomp #-}
module Foo where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import System.Process

theMinute :: String
theMinute = $(runIO (readProcess "date" ["+%M"] "") >>= stringE)
[jojo@kirk:2] Mi, der 01.07.2020 um 17:18 Uhr ☺
/tmp/th-recom-test $ cat Main.hs
import Foo
main = putStrLn theMinute

Note that I had to set {-# OPTIONS_GHC -fforce-recomp #-} – by default, GHC will not recompile a module, even if it uses Template Haskell and runIO. If you are reading from a file you can use addDependentFile to tell the compiler about that dependency, but that does not help with reading from the environment.
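For completeness, the file-reading variant would look something like this (a sketch in the style of the module above, with config.txt a made-up file name):

theConfig :: String
theConfig = $(do
  addDependentFile "config.txt"  -- recompile this module whenever the file changes
  runIO (readFile "config.txt") >>= stringE)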

So here is the test, and we get the desired behaviour: The Foo module is recompiled every time, but unless the minute has changed (see my prompt), Main is not recompiled:

/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
Linking test ...
[jojo@kirk:2] Mi, der 01.07.2020 um 17:20 Uhr ☺
/tmp/th-recom-test $ ghc --make -O2 Main.hs -o test
[1 of 2] Compiling Foo              ( Foo.hs, Foo.o )
[2 of 2] Compiling Main             ( Main.hs, Main.o ) [Foo changed]
Linking test ...

So all well!

Update: It seems that while this works with ghc --make, the -fforce-recomp does not cause cabal build to rebuild the module. That’s unfortunate.

01 July, 2020 03:16PM by Joachim Breitner (mail@joachim-breitner.de)

Sylvain Beucler

Debian LTS and ELTS - June 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In June, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 5.25h for ELTS (out of 20 max; all done).

While LTS is part of the Debian project, fellow contributors sometimes surprise me: a suggestion to vote for sponsors-funded projects with Condorcet was only met with overhead concerns, and there were requests for executive / business owner decisions (we're currently heading towards a consultative vote); I heard concerns about discussing non-technical issues publicly (IRC team meetings are public though); the private mail infrastructure was moved from self-hosting straight to Google; when some got an issue with Debian Social for our first video conference, there were immediate suggestions to move to Zoom...
Well, we do need some people to make those LTS firmware updates in non-free :)

Also this was the last month before shifting suites: goodbye to Jessie LTS and Wheezy ELTS, welcome Stretch LTS and Jessie ELTS.

ELTS - Wheezy

  • mysql-connector-java: improve testsuite setup; prepare wheezy/jessie/stretch triple builds; coordinate versioning scheme with security-team; security upload ELA 234-1
  • ntp: wheezy+jessie triage: 1 ignored (too intrusive to backport); 1 postponed (hard to exploit, no patch)
  • Clean-up (ditch) wheezy VMs :)

LTS - Jessie

  • mysql-connector-java: see common work in ELTS
  • mysql-connector-java: security uploads DLA 2245-1 (LTS) and DSA 4703 (oldstable)
  • ntp: wheezy+jessie triage (see ELTS)
  • rails: global triage, backport 2 patches, security upload DLA 2251-1
  • rails: global security: prepare stretch/oldstable update
  • rails: new important CVE on unmaintained 4.x, fixes introduce several regressions, propose new fix to upstream, update stretch proposed update [and jessie, but rails will turn out unsupported in ELTS]
  • python3.4: prepare update to fix all pending non-critical issues, 5/6 ready
  • private video^W^Wpublic IRC team meeting

Documentation/Scripts

01 July, 2020 02:19PM

Junichi Uekawa

Already July and still stuck at home.

Already July and still stuck at home.

01 July, 2020 08:58AM by Junichi Uekawa

Paul Wise

FLOSS Activities June 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian BTS: usertags QA
  • Debian IRC channels: fixed a channel mode lock
  • Debian wiki: unblock IP addresses, approve accounts, ping folks with bouncing email

Communication

  • Respond to queries from Debian users and developers on the mailing lists and IRC

Sponsors

The ifenslave and apt-listchanges work was sponsored by my employer. All other work was done on a volunteer basis.

01 July, 2020 01:17AM

June 30, 2020

Chris Lamb

Free software activities in June 2020

Here is my monthly update covering what I have been doing in the free software world during June 2020 (previous month):

  • Opened two pull requests against the Ghostwriter distraction-free Markdown editor to:

    • Persist whether "focus mode" is enabled between sessions. (#522)
    • Correct the ordering of the MarkdownAST::toString() debugging output. (#520)
  • Will McGugan's "Rich" is a Python library to output formatted text, tables, syntax etc. to the terminal. I filed a pull request in order to allow for easy enabling and disabling of displaying the file path in Rich's logging handler. (#115)

  • As part of my duties of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc.

  • Filed a pull request against the PyQtGraph Scientific Graphics and graphical user interface library to make the documentation build reproducibly. (#1265)

  • Reviewed and merged a large number of changes by Pavel Dolecek to my Strava Enhancement Suite, a Chrome extension to improve the user experience on the Strava athletic tracker.

For Lintian, the static analysis tool for Debian packages:

  • Don't emit breakout-link for architecture-independent .jar files under /usr/lib. (#963939)
  • Correct a reference to override_dh_ in the long description of the excessive-debhelper-overrides tag. [...]
  • Update data/fields/perl-provides for Perl 5.030003. [...]
  • Check for execute_after and execute_before spelling mistakes just like override_*. [...]

§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


Elsewhere in our tooling, I made the following changes to diffoscope including preparing and uploading versions 147, 148 and 149 to Debian:

  • New features:

    • Add output from strings(1) to ELF binaries. (#148)
    • Allow user to mask/filter diff output via --diff-mask=REGEX. (!51)
    • Dump PE32+ executables (such as EFI applications) using objdump(1). (#181)
    • Add support for Zsh shell completion. (#158)
  • Bug fixes:

    • Prevent a traceback when comparing PDF documents that did not contain metadata (i.e. a PDF /Info stanza). (#150)
    • Fix compatibility with jsondiff version 1.2.0. (#159)
    • Fix an issue in GnuPG keybox file handling that left filenames in the diff. [...]
    • Correct detection of JSON files due to missing call to File.recognizes that checks candidates against file(1). [...]
  • Output improvements:

    • Use the CSS word-break property over manually adding U+200B zero-width spaces as these were making copy-pasting cumbersome. (!53)
    • Downgrade the tlsh warning message to an "info" level warning. (#29)
  • Logging improvements:

  • Testsuite improvements:

    • Update tests for file(1) version 5.39. (#179)
    • Drop accidentally-duplicated copy of the --diff-mask tests. [...]
    • Don't mask an existing test. [...]
  • Codebase improvements:

    • Replace obscure references to "WF" with "Wagner-Fischer" for clarity. [...]
    • Use a semantic AbstractMissingType type instead of remembering to check for both types of "missing" files. [...]
    • Add a comment regarding potential security issue in the .changes, .dsc and .buildinfo comparators. [...]
    • Drop a large number of unused imports. [...][...][...][...][...]
    • Make many code sections more Pythonic. [...][...][...][...]
    • Prevent some variable aliasing issues. [...][...][...]
    • Use some tactical f-strings to tidy up code [...][...] and remove explicit u"unicode" strings [...].
    • Refactor a large number of routines for clarity. [...][...][...][...]

trydiffoscope is the web-based version of diffoscope. This month, I specified a location for the celerybeat scheduler to ensure that the clean/tidy tasks are actually called which had caused an accidental resource exhaustion. (#12)


§


Debian

I filed three bugs against:

  • cmark: Please update the homepage URI. (#962576)
  • petitboot: Please update Vcs-Git urls. (#963123)
  • python-pauvre: FTBFS if the DISPLAY environment variable is exported. (#962698)

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 5¼ hours on its sister Extended LTS project.

  • Investigated and triaged angular.js [...], icinga2 [...], intel-microcode [...], jquery [...], pdns-recursor [...], unbound [...] & wordpress [...].

  • Frontdesk duties, including responding to user/developer questions, reviewing others' packages, participating in mailing list discussions as well as attending our contributor meeting.

  • Issued DLA 2233-1 to fix two issues in the Django web development framework in order to fix a potential data leakage via malformed memcached keys (CVE-2020-13254) and to prevent a cross-site scripting attack in the Django administration system (CVE-2020-13596). This was followed by DLA 2233-2 to address a regression as well as uploads to Debian stretch (1.10.7-2+deb9u9) and buster (1.11.29-1~deb10u1). (More info)

  • Issued DLA 2235-1 to prevent a file descriptor leak in the D-Bus message bus (CVE-2020-12049).

  • Issued DLA 2239-1 for a security module for using the TACACS+ authentication service to prevent an issue where shared secrets such as private server keys were being added in plaintext to various logs.

  • Issued DLA 2244-1 to address an escaping issue in PHPMailer, an email generation utility class for the PHP programming language.

  • Issued DLA 2252-1 for the ngircd IRC server as it was discovered that there was an out-of-bounds access vulnerability in the server-to-server protocol.

  • Issued DLA 2253-1 to resolve a vulnerability in the Lynis a security auditing tool because a shared secret could be obtained by simple observation of the process list when a data upload is being performed.

You can find out more about the project via the following video:

Uploads

30 June, 2020 10:18PM

Emmanuel Kasper

Learning openshift: a good moment to revisit awk too

I can’t believe I spent all these years using only grep.

Most of us know how to use awk to print the nth column of a file:

$ awk '{print $1}' /etc/hosts

will print all IP addresses from /etc/hosts

But you can also do filtering before printing the chosen column:

$ awk '$5 >= 2 {print $2}' /path/to/file

will print the second column of all lines where the 5th column is greater than 2. That would have been hard with grep.

Now I can use that to find all deployments on my openshift cluster where the number of current replicas is greater than 2.

$ oc get deployments --all-namespaces | awk '$5 >= 2 {print $2}'
NAME
oauth-openshift
console
downloads
router-default
etcd-quorum-guard
prometheus-adapter
thanos-querier
packageserver

I know that openshift/kubernetes both have a powerful query selector syntax, but for the moment awk will do.
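One wrinkle visible in the output above: the header row sneaks through, because awk compares the string AVAILABLE in $5 as text rather than as a number. Skipping the first line fixes that:

$ oc get deployments --all-namespaces | awk 'NR > 1 && $5 >= 2 {print $2}'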

30 June, 2020 11:29AM by Emmanuel Kasper (noreply@blogger.com)

Russ Allbery

Review: The Fifth Risk

Review: The Fifth Risk, by Michael Lewis

Publisher: W.W. Norton
Copyright: 2018
Printing: 2019
ISBN: 0-393-35745-7
Format: Kindle
Pages: 254

The Fifth Risk starts with the presidential transition. Max Stier, the first person profiled by Lewis in this book, is the founder of the Partnership for Public Service. That foundation helped push through laws to provide more resources and structure for the transition of the United States executive branch from one president to the next. The goal was to fight wasted effort, unnecessary churn, and pointless disruption in the face of each administration's skepticism about everyone who worked for the previous administration.

"It's Groundhog Day," said Max. "The new people come in and think that the previous administration and the civil service are lazy or stupid. Then they actually get to know the place they are managing. And when they leave, they say, 'This was a really hard job, and those are the best people I've ever worked with.' This happens over and over and over."

By 2016, Stier saw vast improvements, despite his frustration with other actions of the Obama administration. He believed their transition briefings were one of the best courses ever produced on how the federal government works. Then that transition process ran into Donald Trump.

Or, to be more accurate, that transition did not run into Donald Trump, because neither he nor anyone who worked for him were there. We'll never know how good the transition information was because no one ever listened to or read it. Meetings were never scheduled. No one showed up.

This book is not truly about the presidential transition, though, despite its presence as a continuing theme. The Fifth Risk is, at its heart, an examination of government work, the people who do it, why it matters, and why you should care about it. It's a study of the surprising and misunderstood responsibilities of the departments of the United States federal government. And it's a series of profiles of the people who choose this work as a career, not in the upper offices of political appointees, but deep in the civil service, attempting to keep that system running.

I will warn now that I am far too happy that this book exists to be entirely objective about it. The United States desperately needs basic education about the government at all levels, but particularly the federal civil service. The public impression of government employees is skewed heavily towards the small number of public-facing positions and towards paperwork frustrations, over which the agency usually has no control because they have been sabotaged by Congress (mostly by Republicans, although the Democrats get involved occasionally). Mental images of who works for the government are weirdly selective. The Coast Guard could say "I'm from the government and I'm here to help" every day, to the immense gratitude of the people they rescue, but Reagan was still able to use that as a cheap applause line in his attack on government programs.

Other countries have more functional and realistic social attitudes towards their government workers. The United States is trapped in a politically-fueled cycle of contempt and ignorance. It has to stop. And one way to help stop it is someone with Michael Lewis's story-telling skills writing a different narrative.

The Fifth Risk is divided into a prologue about presidential transitions, three main parts, and an afterword (added in current editions) about a remarkable government worker whom you likely otherwise would never hear about. Each of the main parts talks about a different federal department: the Department of Energy, the Department of Agriculture, and the Department of Commerce. In keeping with the theme of the book, the people Lewis profiles do not do what you might expect from the names of those departments.

Lewis's title comes from his discussion with John MacWilliams, a former Goldman Sachs banker who quit the industry in search of more personally meaningful work and became the chief risk officer for the Department of Energy. Lewis asks him for the top five risks he sees, and if you know that the DOE is responsible for safeguarding nuclear weapons, you will be able to guess several of them: nuclear weapons accidents, North Korea, and Iran. If you work in computer security, you may share his worry about the safety of the electrical grid. But his fifth risk was project management. Can the government follow through on long-term hazardous waste safety and cleanup projects, despite constant political turnover? Can it attract new scientists to the work of nuclear non-proliferation before everyone with the needed skills retires? Can it continue to lay the groundwork with basic science for innovation that we'll need in twenty or fifty years? This is what the Department of Energy is trying to do.

Lewis's profiles of other departments are similarly illuminating. The Department of Agriculture is responsible for food stamps, the most effective anti-poverty program in the United States with the possible exception of Social Security. The section on the Department of Commerce is about weather forecasting, specifically about NOAA (the National Oceanic and Atmospheric Administration). If you didn't know that all of the raw data and many of the forecasts you get from weather apps and web sites are the work of government employees, and that AccuWeather has lobbied Congress persistently for years to prohibit the NOAA from making their weather forecasts public so that AccuWeather can charge you more for data your taxes already paid for, you should read this book. The story of American contempt for government work is partly about ignorance, but it's also partly about corporations who claim all of the credit while selling taxpayer-funded resources back to you at absurd markups.

The afterword I'll leave for you to read for yourself, but it's the story of Art Allen, a government employee you likely have never heard of but whose work for the Coast Guard has saved more lives than we are able to measure. I found it deeply moving.

If you, like me, are a regular reader of long-form journalism and watch for new Michael Lewis essays in particular, you've probably already read long sections of this book. By the time I sat down with it, I think I'd read about a third in other forms on-line. But the profiles that I had already read were so good that I was happy to read them again, and the additional stories and elaboration around previously published material were more than worth the cost and time investment in the full book.

It was never obvious to me that anyone would want to read what had interested me about the United States government. Doug Stumpf, my magazine editor for the past decade, persuaded me that, at this strange moment in American history, others might share my enthusiasm.

I'll join Michael Lewis in thanking Doug Stumpf.

The Fifth Risk is not a proposal for how to fix government, or politics, or polarization. It's not even truly a book about the Trump presidency or about the transition. Lewis's goal is more basic: The United States government is full of hard-working people who are doing good and important work. They have effectively no public relations department. Achievements that would result in internal and external press releases in corporations, not to mention bonuses and promotions, go unnoticed and uncelebrated. If you are a United States citizen, this is your government and it does important work that you should care about. It deserves the respect of understanding and thoughtful engagement, both from the citizenry and from the politicians we elect.

Rating: 10 out of 10

30 June, 2020 06:02AM

Craig Sanders

Fuck Grey Text

fuck grey text on white backgrounds
fuck grey text on black backgrounds
fuck thin, spindly fonts
fuck 10px text
fuck any size of anything in px
fuck font-weight 300
fuck unreadable web pages
fuck themes that implement this unreadable idiocy
fuck sites that don’t work without javascript
fuck reactjs and everything like it

thank fuck for Stylus. and uBlock Origin. and uMatrix.


30 June, 2020 02:33AM by cas


Norbert Preining

TeX Live Debian update 20200629

More than a month has passed since the last update of TeX Live packages in Debian, so here is a new checkout!

All arch:all packages have been updated to the tlnet state as of 2020-06-29; see the detailed update list below.

Enjoy.

New packages

akshar, beamertheme-pure-minimalistic, biblatex-unified, biblatex-vancouver, bookshelf, commutative-diagrams, conditext, courierten, ektype-tanka, hvarabic, kpfonts-otf, marathi, menucard, namedef, pgf-pie, pwebmac, qrbill, semantex, shtthesis, tikz-lake-fig, tile-graphic, utf8add.

Updated packages

abnt, achemso, algolrevived, amiri, amscls, animate, antanilipsum, apa7, babel, bangtex, baskervillef, beamerappendixnote, beamerswitch, beamertheme-focus, bengali, bib2gls, biblatex-apa, biblatex-philosophy, biblatex-phys, biblatex-software, biblatex-swiss-legal, bibleref, bookshelf, bxjscls, caption, ccool, cellprops, changes, chemfig, circuitikz, cloze, cnltx, cochineal, commutative-diagrams, comprehensive, context, context-vim, cquthesis, crop, crossword, ctex, cweb, denisbdoc, dijkstra, doclicense, domitian, dps, draftwatermark, dvipdfmx, ebong, ellipsis, emoji, endofproofwd, eqexam, erewhon, erewhon-math, erw-l3, etbb, euflag, examplep, fancyvrb, fbb, fbox, fei, fira, fontools, fontsetup, fontsize, forest-quickstart, gbt7714, genealogytree, haranoaji, haranoaji-extra, hitszthesis, hvarabic, hyperxmp, icon-appr, kpfonts, kpfonts-otf, l3backend, l3build, l3experimental, l3kernel, latex-amsmath-dev, latexbangla, latex-base-dev, latexdemo, latexdiff, latex-graphics-dev, latexindent, latex-make, latexmp, latex-mr, latex-tools-dev, libertinus-fonts, libertinust1math, lion-msc, listings, logix, lshort-czech, lshort-german, lshort-polish, lshort-portuguese, lshort-russian, lshort-slovenian, lshort-thai, lshort-ukr, lshort-vietnamese, luamesh, lua-uca, luavlna, lwarp, marathi, memoir, mnras, moderntimeline, na-position, newcomputermodern, newpx, nicematrix, nodetree, ocgx2, oldstandard, optex, parskip, pdfcrop, pdfpc, pdftexcmds, pdfxup, pgf, pgfornament, pgf-pie, pgf-umlcd, pgf-umlsd, pict2e, plautopatch, poemscol, pst-circ, pst-eucl, pst-func, pstricks, pwebmac, pxjahyper, quran, rec-thy, reledmac, rest-api, sanskrit, sanskrit-t1, scholax, semantex, showexpl, shtthesis, suftesi, svg, tcolorbox, tex4ht, texinfo, thesis-ekf, thuthesis, tkz-doc, tlshell, toptesi, tuda-ci, tudscr, twemoji-colr, univie-ling, updmap-map, vancouver, velthuis, witharrows, wtref, xecjk, xepersian-hm, xetex-itrans, xfakebold, xindex, xindy, xltabular, yathesis, ydoc, yquant, zref.

30 June, 2020 01:24AM by Norbert Preining

June 29, 2020

Cinnamon 4.6 for Debian

After a few rounds of testing in experimental, I have uploaded Cinnamon 4.6 packages to Debian/unstable. Nothing spectacularly new besides the usual stream of fixes. Enjoy the new Cinnamon!

29 June, 2020 01:37AM by Norbert Preining


Holger Levsen

20200629-developers-reference-challenge

developers-reference challenge: get the bug count down to 0

As of now, the BTS counts 52 bugs for src:developers-reference and I'd like to get this down to zero, because src:developers-reference is just documenting best practices and this should be easy and done well.

So, I've been trying to keep the habit of fixing at least one bug per month, and I also try to do one upload per month. And I very much welcome your patches, commits, MRs and bug reports, because otherwise it will take more than five years. Though I'm hopeful: src:developers-reference is maintained by the debian group on salsa, which has something like a thousand members.

Please help and join the fun and responsibility. Even if only once every three months (or less) ;-) And thank you, it's awesome to be in this together.
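
If you want to join in, getting the source is one command; a minimal sketch — the salsa path follows from the debian group mentioned above, so treat it as an assumption:

# grab the source; patches, commits and MRs are welcome as described above
git clone https://salsa.debian.org/debian/developers-reference.git
# the open bug list lives at https://bugs.debian.org/src:developers-reference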

29 June, 2020 01:13AM

June 28, 2020

Russ Allbery

PGP::Sign 1.00

This is the first new release of PGP::Sign in 13 years, so it's long overdue. I have finally updated it in preparation for creating a new, more modern signing key for the Big Eight Usenet hierarchies and issuing control messages with both the old and new keys, using GnuPG v2 for the new key.

The biggest change in this release is that it drops support for all OpenPGP implementations other than GnuPG, and adds support for GnuPG v2. I think some of the other PGP implementations are still around, but I haven't seen them in years and have no way to test against them, so it didn't seem worthwhile to continue to support them. GnuPG v2 support is obviously long overdue, given that we're getting close to the point where GnuPG v1 will start disappearing from distributions. The default backend is now GnuPG v2, although the module can be configured to use GnuPG v1 instead.

This release also adds a new object-oriented API. When I first wrote this module, it was common in the Perl community to have functional APIs configured with global variables. Subsequently we've learned this is a bad idea for a host of reasons, and I finally got around to redoing the API. It's still not perfect (in particular, the return value of the verify method is still a little silly), but it's much nicer. The old API is still supported, implemented as a shim in front of the new API.

A few other, more minor changes in this release:

  • Status output from GnuPG is now kept separate from human-readable log and error output for more reliable parsing.

  • Pass --allow-weak-digest-algos to GnuPG so it can use old keys and verify signatures from old keys, such as those created with PGP 2.6.2. (A command-line sketch of this flag follows the list.)

  • pgp_sign, when called in array context, now always returns "GnuPG" as the version string, and the version passed into pgp_verify is always ignored. Including the OpenPGP implementation version information in signatures is obsolete; GnuPG no longer does it by default and it serves no useful purpose.

  • When calling pgp_sign multiple times in the same process with whitespace munging enabled, trailing whitespace without a newline could have leaked into the next invocation of pgp_sign, resulting in an invalid signature. Clear any remembered whitespace between pgp_sign invocations.
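
To make the --allow-weak-digest-algos item above more concrete, here is a minimal sketch of what the module now asks GnuPG to do; the file names are illustrative:

# verify a detached signature made with an old PGP 2.6.2 key; without
# --allow-weak-digest-algos, GnuPG would reject the weak MD5 digest outright
gpg --allow-weak-digest-algos --verify message.asc message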

It also now uses IPC::Run and File::Temp instead of hand-rolling equivalent functionality, and the module build system is now Module::Build.

You can get the latest version from CPAN or from the PGP::Sign distribution page.

28 June, 2020 02:00AM

June 27, 2020


Steinar H. Gunderson

Two Chinese video encoders

ENC1 and UHE265-1-Mini

I started a project recently (now on hold) that would involve streaming from mobile cameras over 4G. Since I wanted something with better picture quality than mobile phones on gimbals, the obvious question became: How can you take in HDMI (or SDI) and stream it out over SRT?

We looked at various SBCs coupled with video capture cards, but it didn't really work out; H.264 software encoding of 720p60/1080p60 (or even 1080i60 or 1080p30) is a tad too intensive for even the fastest ARM SoCs currently, USB3 is a pain on many of them, and the x86 SoCs are pretty expensive. Many of them have hardware encoding, but it's amazingly crappy quality-wise; borderline unusable even at 5 Mbit/sec. (Intel's Quick Sync is heaven compared to what the Raspberry Pi 4 has!)

So I started looking for all-in-one cheap encoder boxes; ideally with 4G or 802.11 out, but an Ethernet jack is OK to couple with a phone and a USB OTG adapter. I found three manufacturers that all make cheap gear, namely LinkPi, URayTech and Unisheen. Long story short, I ended up ordering one from each of the first two, namely their base HEVC models: the LinkPi ENC1 and URayTech's UHE265-1-Mini. (I wanted HEVC to offset some of the quality loss of going with hardware encoding; my general sentiment is that the advantages of HEVC are somewhat more pronounced in cheap hardware than when realtime-encoding in software.)

The ENC1 is ostensibly 599 yuan (~$85, ~€75), but good luck actually getting it for that price. I spent an hour or so fiddling with their official store at Taobao (sort of like Aliexpress for Chinese domestic users), only to figure out in the very last step that they didn't ship from there outside of China. They have an international reseller, but they charge $130 plus shipping, on very shaky grounds (“we need to employ more people late at night to be able to respond to international support inquiries fast enough”). It's market segmentation, of course. There are supposedly companies that will help you buy from Taobao and bounce the goods on to you, but I didn't try them; I eventually went to eBay and got one for €120 (+VAT) with free (very slow!) shipping.

UHE265-1-Mini was pricier but easier; I bought one directly off of URayTech's Aliexpress store for $184.80, and it was shipped by FedEx in about five days.

Externally, they look fairly similar; the ENC1 (on top) is a bit thicker but generally smaller. The ENC1 also has a handy little OLED display that shows IP address, video input and a few other things (it's a few degrees uneven, though; true Chinese quality!). There are also two buttons, but they are fairly useless (you can't map them to anything except start/stop streaming and recording).

Both boxes can do up to 1080p60 input, but somehow, the ENC1 can only do 1080p30 or 720p60 encoding; it needs to either scale or decimate. (If you ask it to do 1080p60, it will make a valiant attempt even though it throws up a warning box, but it ends up lagging behind by a lot, so it seems to be useless.) The URayTech box, however, happily trots along with no issue. This is pretty bizarre when you consider that both are based on the same HiSilicon SoC (the HI3520DV400). On pure visual inspection on very limited test data, the HEVC output from the URayTech looks a bit better than the ENC1, too, but I might just be fooling myself here.

Both have web interfaces; LinkPi's looks a bit more polished, while URayTech's has a tiny bit more low-level control, such as the ability to upload an EDID. (In particular, after a bug fix, I can set SRT stream IDs on the URayTech, which is fairly important to me.) They let you change codecs, stream endpoints, protocols, bit rates, set up multiple independent encodes etc. easily enough.

Both of them also have Telnet interfaces (and the UHE265-1-Mini has SSH, too); neither included any information on what the username and password was, though. I asked URayTech's support what the password was, and their answer was that they didn't know (!) but I could try “unisheen” (!!). And lo and behold, “unisheen” didn't work, but “neworangetech”, Unisheen's default password, worked and gave me a HiLinux root shell with kernel 3.8.20. No matter the reason, I suppose this means I can assume the Unisheen devices are fairly similar…

LinkPi's support was less helpful; they refused to give me the password and suggested I talk to the seller. However, one of their update packages (which requires registration on Gitee, a Chinese GitHub clone—but at least in the open, unlike URayTech's, which you can only get from their support) contained an /etc/passwd with some DES password hashes. Three of them were simply the empty password, while the root password fell to some Hashcat brute-forcing after a few hours. It's “linkpi.c” (DES passwords are truncated after 8 characters, so I suppose they intended to have “linkpi.cn”.) It also presents a HiLinux root shell with kernel 3.8.20, but rather different userspace and 256 MB flash instead of 32 MB or so.
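
For the curious, brute-forcing a traditional DES crypt hash like that is a standard Hashcat job; a sketch, with the hash file and mask being illustrative:

# descrypt is Hashcat mode 1500; -a 3 is a mask attack, and since DES crypt
# truncates passwords at 8 characters, an 8-position all-printable mask
# (?a per position) covers the whole keyspace
hashcat -m 1500 -a 3 hashes.txt '?a?a?a?a?a?a?a?a'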

GPL violations are rampant, of course. Both ship Linux with no license text (even though UHE265-1-Mini comes with a paper manual) and no source in sight. Both ship FFmpeg, although URayTech has configured it in LGPL mode, unlike LinkPi, which has a full-on GPL build and links to it from various proprietary libraries. (URayTech ships busybox; I don't think LinkPi does, but I could be mistaken.) LinkPi's FFmpeg build also includes FDK-AAC, which is GPL-incompatible, so the entire shebang is non-redistributable. So both are bad, but the ENC1 is definitely worse. I've sent a request to support for source code, but I have zero belief that I'll receive any.

Both manufacturers have fancier, more expensive boxes, but they seem to diverge a bit. LinkPi is more oriented towards rack gear and also development boards (their main business seems to be selling chips to camera manufacturers; the ENC1 and similar look most of all like a demo). URayTech, on the other hand, sells a bewildering amount of upgrades; they have variants that are battery-powered (using standard Sony camera battery form factors) and coldshoe mount, variants with Wi-Fi, variants with LTE, dual-battery options… (They also have some rack options.) I believe Unisheen also goes in this direction. OTOH, the ENC1 has a USB port, which can seemingly be used fairly flexibly; you can attach a hard drive for recording (or video playback), a specific Wi-Fi dongle and even the ubiquitous Huawei E3276 LTE stick for 4G connectivity. I haven't tried any of this, but it's certainly a nice touch. It also has some IP input support, for transcoding, and HDMI out. I don't think there's much IPv6 support on either, sadly, although the kernel does support it on both, and perhaps some push support leaks in through FFmpeg (I haven't tested much).

Finally, both support various overlays, mixing of multiple sources and such. I haven't looked a lot at this.

So, which one is better? I guess that on paper, the ENC1 is smaller, cheaper (especially if you can get it at the Chinese price!), has the nice OLED display and more features. But if the price doesn't matter as much, the UHE265-1-Mini wins with the 1080p60 support, and I really ought to look more closely at what's going on with that quality difference (i.e., whether it exists at all). It somehow just gives me better vibes, and I'm not really sure why; perhaps it's just because I received it so much earlier. Both are very interesting devices, and I'd love to be able to actually take them into the field some day.

27 June, 2020 11:02PM


June 26, 2020


Dirk Eddelbuettel

littler 0.3.11: docopt updates


The twelfth release of littler as a CRAN package is now available, following the fourteen-ish year history of a package started by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only started to do in recent years.

littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH.

A few examples are highlighted at the Github repo, as well as in the examples vignette.

This release mostly responds to the recent docopt release 0.7.0, which brought a breaking change for quoted arguments. In short, it is for the better: an option such as --as-cran is now parsed as opt$as_cran, which is easier than the earlier form where we needed to back-tick protect as-cran because of its hyphen. We also added a new portmanteau-ish option to roxy.r.

The NEWS file entry is below.

Changes in littler version 0.3.11 (2020-06-26)

  • Changes in examples

    • Scripts check.r and rcc.r updated to reflect updated docopt 0.7.0 behaviour of quoted arguments

    • The roxy.r script has a new ease-of-use option -f | --full regrouping two other options.

CRANberries provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 June, 2020 10:00PM


Chris Lamb

On the pleasure of hating

People love to tell you that they "don't watch sports" but the story of Lance Armstrong provides a fascinating lens through which to observe our culture at large.

For example, even granting all that he did and all the context in which he did it, why do sports cheats act like a lightning rod for such an instinctive hatred? After all, the sheer level of distaste directed at people such as Lance eludes countless other criminals in our society, many of whom have taken a lot more with far fewer scruples. The question is not one of logic or rationality, but of proportionality.

In some ways it should be unsurprising. In all areas of life, we instinctively prefer binary judgements to moral ambiguities and the sports cheat is a cliché of moral bankruptcy — cheating at something so seemingly trivial as a sport actually makes it more, not less, offensive to us. But we then find ourselves strangely enthralled by them, drawn together in admiration of their outlaw-like tenacity, placing them strangely close to criminal folk heroes. Clearly, sport is not as unimportant as we like to claim it is. In Lance's case in particular though, there is undeniably a Shakespearean quality to the story and we are forced to let go of our strict ideas of right and wrong and appreciate all the nuance.


§


There is a lot of this nuance in Marina Zenovich's new documentary. In fact, there's a lot of everything. At just under four hours, ESPN's Lance combines the duration of a Tour de France stage with the depth of the peloton — an endurance event compared to the bite-sized hagiography of Michael Jordan's The Last Dance.

Even for those who follow Armstrong's story like a mini-sport in itself, Lance reveals new sides to this man for all seasons. For me, not only was this captured in his clumsy approximations at being a father figure but also in him being asked something I had not read in countless tell-all books: did his earlier experiments in drug-taking contribute to his cancer?

But even in 2020 there are questions that remain unanswered. By needlessly returning to the sport in 2009, did Lance subconsciously want to get caught? Why does he not admit he confessed to Betsy Andreu back in 1999 but will happily apologise to her today for slurring her publicly on this very point? And why does he remain so vindictive towards former teammate Floyd Landis? In all of Armstrong's evasions and masterful control of the narrative, there is the gnawing feeling that we don't even know what questions we should be asking. As ever, the questions are more interesting than the answers.


§


Lance also reminded me of professional cycling's obsession with national identity. Although I was intuitively aware of it to some degree, I had not fully grasped how much this kind of stereotyping runs through the veins of the sport itself, just like the drugs themselves. Journalist Daniel Friebe first offers us the portrait of:

Spaniards tend to be modest, very humble. Very unpretentious. And the Italians are loud, vain and outrageous showmen.

Former directeur sportif Johan Bruyneel then asserts that "Belgians are hard workers... they are ambitious to a certain point, but not overly ambitious", and cyclist Jörg Jaksche concludes with:

The Germans are very organised and very structured. And then the French, now I have to be very careful because I am German, but the French are slightly superior.

This kind of lazy caricature is nothing new, especially for those brought up on a solid diet of Tintin and Asterix, but although all these examples are seemingly harmless, why does the underlying idea of ascribing moral, social or political significance to genetic lineage remain so durable in today's age of anti-racism? To be sure, culture is not quite the same thing as race, but being judged by the character of one's ancestors rather than the actions of an individual is, at its core, one of the many conflations at the heart of racism. There is certainly a large amount of cognitive dissonance at work, especially when Friebe elaborates:

East German athletes were like incredible robotic figures, fallen off a production line somewhere behind the Iron Curtain...

... but then Übermensch Jan Ullrich is immediately described as "emotional" and "struggled to live the life of a professional cyclist 365 days a year". We see the habit of stereotyping is so ingrained that even in the face of this obvious contradiction, Friebe unironically excuses Ullrich's failure to live up to his German roots due to him actually being "Mediterranean".


§


I mention all this as I am known within my circles for remarking on these national characters, even collecting stereotypical examples of Italians 'being Italian' and the French 'being French' at times. Contrary to evidence, I don't believe in this kind of innate quality but what I do suspect is that people generally behave how they think they ought to behave, perhaps out of sheer imitation or the simple pleasure of conformity. As the novelist Will Self put it:

It's quite a complicated collective imposture, people pretending to be British and people pretending to be French, and then they get really angry with each other over what they're pretending to be.

The really remarkable thing about this tendency is that even if we consciously notice it there is seemingly no escape — even I could not help but smirk when I considered that a brash Texan winning the Tour de France actually combines two of America's cherished obsessions: winning... and annoying the French.

26 June, 2020 08:58PM


Norbert Preining

KDE/Plasma 5.19.2 for Debian

After the long wait (or should I say The Long Dark), Qt 5.14 has finally arrived in Debian/unstable, and thus the doors for KDE/Plasma 5.19 are open.

I have been preparing this release for quite some time, but with unstable still on Qt 5.12 I could only test it in a virtual machine using Debian/experimental. But now, finally, a full upgrade to Plasma 5.19(.2) has arrived.

Unfortunately, it turned out that the OBS build servers are either overloaded, incapable, or broken; in any case, they do not properly build the necessary packages. Thus, I make the Plasma 5.19.2 (and framework) packages available via my server, for amd64. Please use the following apt line:

deb https://www.preining.info/debian/ unstable kde

(The source packages are also there, just in case you want to rebuild them for i386).

Note that this requires my GPG key to be imported; best is to put it into /etc/apt/trusted.gpg.d/preining-norbert.asc.
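
A minimal sketch of that key import — the download URL is hypothetical, only the destination path is the one given above:

# fetch the signing key and place it where apt looks for trusted keys
curl -fsSL https://www.preining.info/preining-norbert.asc | sudo tee /etc/apt/trusted.gpg.d/preining-norbert.asc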

For KDE Applications please still use

deb https://download.opensuse.org/repositories/home:/npreining:/debian-kde:/apps/Debian_Unstable/ ./

but I might sooner or later move that to my server, too. The OBS servers seem to be too broken to handle considerably big rebuild groups.

So, enjoy your shiny new KDE/Plasma desktop on Debian, too!

26 June, 2020 01:04PM by Norbert Preining


Jonathan Dowland

Our Study

This is a fairly self-indulgent post, sorry!

Encouraged by Evgeni, Michael and others, given I'm spending a lot more time at my desk in my home office, here's a picture of it:

Fisheye shot of my home office


Near enough everything in the study is a work in progress.

The KALLAX behind my desk is a recent addition (under duress) because we have nowhere else to put it. Sadly I can't see it going away any time soon. On the up-side, since Christmas I've had my record player and collection in and on top of it, so I can listen to records whilst working.

The arm chair is a recent move from another room. It's a nice place to take some work calls, serving as a change of scene, and I hope to add a reading light sometime. The desk chair is some Ikea model I can't recall, which is sufficient, and just fits into the desk cavity. (I'm fairly sure my Aeron, inaccessible elsewhere, would not fit.)

I've had this old mahogany, leather-topped desk since I was a teenager and it's a blessing and a curse. Mostly a blessing: It's a lovely big desk. The main drawback is it's very much not height adjustable. At the back is a custom made, full-width monitor stand/shelf, a recent gift produced to specification by my Dad, a retired carpenter.

On the top: my work Thinkpad T470s laptop, geared more towards portable than powerful (my normal preference), although for the foreseeable future it's going to remain in the exact same place; an Ikea desk lamp (I can't recall the model); a 27" 4K LG monitor, the cheapest such I could find when I bought it; an old analog LCD TV, fantastic for vintage consoles and the like.

Underneath: An Alesis Micron 2½ octave analog-modelling synthesizer; various hubs and similar things; My Amiga 500.

Like Evgeni, my normal keyboard is a ThinkPad Compact USB Keyboard with TrackPoint. I've been using different generations of these styles of keyboards for a long time now, initially because I loved the trackpoint pointer. I'm very curious about trying out mechanical keyboards, as I have very fond memories of my first IBM Model M buckled-spring keyboard, but I haven't dipped my toes into that money-trap just yet. The Thinkpad keyboard is rubber-dome, but it's a good one.

Wedged between the right-hand bookcases are a stack of IT-related things: new printer; deprecated printer; old/spare/play laptops, docks and chargers; managed network switch; NAS.

26 June, 2020 09:32AM


Martin Michlmayr

ledger2beancount 2.3 released

I released version 2.3 of ledger2beancount, a ledger to beancount converter.

There are three notable changes with this release:

  1. Performance has significantly improved. One large, real-world test case has gone from around 160 seconds to 33 seconds. A smaller test case has gone from 11 seconds to ~3.5 seconds.
  2. The documentation is available online now (via Read the Docs).
  3. The repository has moved to the beancount GitHub organization.

Here are the changes in 2.3:

  • Improve speed of ledger2beancount significantly
  • Improve parsing of postings for accuracy and speed
  • Improve support for inline math
  • Handle lots without cost
  • Fix parsing of lot notes followed by a virtual price
  • Add support for lot value expressions
  • Make parsing of numbers more strict
  • Fix behaviour of dates without year
  • Accept default ledger date formats without configuration
  • Fix implicit conversions with negative prices
  • Convert implicit conversions in a more idiomatic way
  • Avoid introducing trailing whitespace with hledger input
  • Fix loading of config file
  • Skip ledger directive import
  • Convert documentation to mkdocs

Thanks to Colin Dean for some feedback. Thanks to Stefano Zacchiroli for prompting me into investigating performance issues (and thanks to the developers of the Devel::NYTProf profiler).

You can get ledger2beancount from GitHub.
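
If you haven't used it before, conversion is a single command; a sketch, with file names being illustrative:

# read a ledger journal and emit the beancount equivalent on stdout
ledger2beancount example.ledger > example.beancount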

26 June, 2020 03:45AM by Martin Michlmayr

June 25, 2020


Dirk Eddelbuettel

RcppSimdJson 0.0.6: New Upstream, New Features!

A very exciting RcppSimdJson release with the updated upstream simdjson release 0.4.0 as well as a first set of new JSON parsing functions just hit CRAN. RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it results in parsing gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle use per byte parsed; see the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The very recent 0.4.0 release further improves the already impressive speed.

And this release brings a first set of actually user-facing functions thanks to Brendan, who put in a series of PRs! The full NEWS entry follows.

Changes in version 0.0.6 (2020-06-25)

  • Created C++ integer-handling utilities for safe downcasting and integer return (Brendan in #16 closing #13).

  • New JSON functions .deserialize_json and .load_json (Brendan in #16, #17, #20, #21).

  • Upgrade Travis CI to 'bionic', extract package and version from DESCRIPTION (Dirk in #23).

  • Upgraded to simdjson 0.4.0 (Dirk in #25 closing #24).

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 June, 2020 10:33PM

Russell Coker

How Will the Pandemic Change Things?

The Bulwark has an interesting article on why they can’t “Reopen America” [1]. I wonder how many changes will be long term. According to the Wikipedia List of Epidemics [2] Covid-19 so far hasn’t had a high death toll when compared to other pandemics of the last 100 years. People’s reactions to this vary from doing nothing to significant isolation; the question is what changes in attitudes will be significant enough to change society.

Transport

One thing that has been happening recently is a transition in transport. It’s obvious that we need to reduce CO2, and while electric cars will address the transport part of the problem in the long term, changing to electric public transport is the cheaper and faster way to do it in the short term. Before Covid-19 the peak hour public transport in my city was ridiculously overcrowded; having people unable to board trams due to overcrowding was really common. If the economy returns to its previous state then I predict fewer people on public transport, more traffic jams, and many more cars idling and polluting the atmosphere.

Can we have mass public transport that doesn’t give a significant disease risk? Maybe if we had significantly more trains and trams and better ventilation with more airflow designed to suck contaminated air out. But that would require significant engineering work to design new trams, trains, and buses as well as expense in refitting or replacing old ones.

Uber and similar companies have been taking over from taxi companies; one major feature of those companies is that the vehicles are not dedicated as taxis. Dedicated taxis could easily be designed to reduce the spread of disease: the famed Black Cab AKA Hackney Carriage [3] design in the UK has a separate compartment for passengers with little air flow to/from the driver compartment. It would be easy to design such taxis to have entirely separate airflow and, if set up to only take EFTPOS and credit card payment, to avoid all contact between the driver and passengers. I would prefer to have a Hackney Carriage design of vehicle instead of a regular taxi or Uber.

Autonomous cars have been shown to basically work. There are some concerns about safety issues as there are currently corner cases that car computers don’t handle as well as people, but of course there are also things computers do better than people. Having an autonomous taxi would be a benefit for anyone who wants to avoid other people. Maybe approval could be rushed through for autonomous cars that are limited to 40Km/h (the maximum collision speed at which a pedestrian is unlikely to die), in central city areas and inner suburbs you aren’t likely to drive much faster than that anyway.

Car share services have been becoming popular, for many people they are significantly cheaper than owning a car due to the costs of regular maintenance, insurance, and depreciation. As the full costs of car ownership aren’t obvious people may focus on the disease risk and keep buying cars.

Passenger jets are ridiculously cheap. But this relies on the airline companies being able to consistently fill the planes. If they were to add measures to reduce cross contamination between passengers which slightly reduces the capacity of planes then they need to increase ticket prices accordingly which then reduces demand. If passengers are just scared of flying in close proximity and they can’t fill planes then they will have to increase prices which again reduces demand and could lead to a death spiral. If in the long term there aren’t enough passengers to sustain the current number of planes in service then airline companies will have significant financial problems, planes are expensive assets that are expected to last for a long time, if they can’t use them all and can’t sell them then airline companies will go bankrupt.

It’s not reasonable to expect that the same number of people will be travelling internationally for years (if ever). Due to relying on economies of scale to provide low prices I don’t think it’s possible to keep prices the same no matter what they do. A new economic balance of flights costing 2-3 times more than we are used to while having significantly less passengers seems likely. Governments need to spend significant amounts of money to improve trains to take over from flights that are cancelled or too expensive.

Entertainment

The article on The Bulwark mentions Las Vegas as a city that will be hurt a lot by reductions in travel and crowds, the same thing will happen to tourist regions all around the world. Australia has a significant tourist industry that will be hurt a lot. But the mention of Las Vegas makes me wonder what will happen to the gambling in general. Will people avoid casinos and play poker with friends and relatives at home? It seems that small stakes poker games among friends will be much less socially damaging than casinos, will this be good for society?

The article also mentions cinemas which have been on the way out since the video rental stores all closed down. There’s lots of prime real estate used for cinemas and little potential for them to make enough money to cover the rent. Should we just assume that most uses of cinemas will be replaced by Netflix and other streaming services? What about teenage dates, will kissing in the back rows of cinemas be replaced by “Netflix and chill”? What will happen to all the prime real estate used by cinemas?

Professional sporting matches have been played for a TV-only audience during the pandemic. There’s no reason that they couldn’t make a return to live stadium audiences when there is a vaccine for the disease or the disease has been extinguished by social distancing. But I wonder if some fans will start to appreciate the merits of small groups watching large TVs and not want to go back to stadiums, can this change the typical behaviour of groups?

Restaurants and cafes are going to do really badly. I previously wrote about my experience running an Internet Cafe and why reopening businesses soon is a bad idea [4]. The question is how long this will go for and whether social norms about personal space will change things. If in the long term people expect 25% more space in a cafe or restaurant that’s enough to make a significant impact on profitability for many small businesses.

When I was young the standard thing was for people to have dinner at friends homes. Meeting friends for dinner at a restaurant was uncommon. Recently it seemed to be the most common practice for people to meet friends at a restaurant. There are real benefits to meeting at a restaurant in terms of effort and location. Maybe meeting friends at their home for a delivered dinner will become a common compromise, avoiding the effort of cooking while avoiding the extra expense and disease risk of eating out. Food delivery services will do well in the long term, it’s one of the few industry segments which might do better after the pandemic than before.

Work

Many companies are discovering the benefits of teleworking, getting it going effectively has required investing in faster Internet connections and hardware for employees. When we have a vaccine the equipment needed for teleworking will still be there and we will have a discussion about whether it should be used on a more routine basis. When employees spend more than 2 hours per day travelling to and from work (which is very common for people who work in major cities) that will obviously limit the amount of time per day that they can spend working. For the more enthusiastic permanent employees there seems to be a benefit to the employer to allow working from home. It’s obvious that some portion of the companies that were forced to try teleworking will find it effective enough to continue in some degree.

One company that I work for has quit their coworking space in part because they were concerned that the coworking company might go bankrupt due to the pandemic. They seem to have become a 100% work from home company for the office part of the work (only on site installation and stock management is done at corporate locations). Companies running coworking spaces and other shared offices will suffer first as their clients have short term leases. But all companies renting out office space in major cities will suffer due to teleworking. I wonder how this will affect the companies providing services to the office workers, the cafes and restaurants etc. Will there end up being so much unused space in central city areas that it’s not worth converting the city cinemas into useful space?

There’s been a lot of news about Zoom and similar technologies. Lots of other companies are trying to get into that business. One thing that isn’t getting much notice is remote access technologies for desktop support. If the IT people can’t visit your desk because you are working from home then they need to be able to remotely access it to fix things. When people make working from home a large part of their work time the issue of who owns peripherals and how they are tracked will get interesting. In a previous blog post I suggested that keyboards and mice not be treated as assets [5]. But what about monitors, 4G/Wifi access points, etc?

Some people have suggested that there will be business sectors benefiting from the pandemic, such as telecoms and e-commerce. If you have a bunch of people forced to stay home who aren’t broke (IE a large portion of the middle class in Australia) they will probably order delivery of stuff for entertainment. But in the long term e-commerce seems unlikely to change much, people will spend less due to economic uncertainty so while they may shift some purchasing to e-commerce apart from home delivery of groceries e-commerce probably won’t go up overall. Generally telecoms won’t gain anything from teleworking, the Internet access you need for good Netflix viewing is generally greater than that needed for good video-conferencing.

Money

I previously wrote about a Basic Income for Australia [6]. One of the most cited reasons for a Basic Income is to deal with robots replacing people. Now we are at the start of what could be a long term economic contraction caused by the pandemic which could reduce the scale of the economy by a similar degree while also improving the economic case for a robotic workforce. We should implement a Universal Basic Income now.

I previously wrote about the make-work jobs and how we could optimise society to achieve the worthwhile things with less work [7]. My ideas about optimising public transport and using more car share services may not work so well after the pandemic, but the rest should work well.

Business

There are a number of big companies that are not aiming for profitability in the short term. WeWork and Uber are well documented examples. Some of those companies will hopefully go bankrupt and make room for more responsible companies.

The co-working thing was always a precarious business. The companies renting out office space usually did so on a monthly basis as flexibility was one of their selling points, but they presumably rented buildings on an annual basis. As the profit margins weren’t particularly high, having to pay rent on mostly empty buildings for a few months will hurt them badly. The long term trend in co-working spaces might be some sort of collaborative arrangement between the people who run them and the landlords, similar to the way some of the hotel chains have profit sharing agreements with land owners to avoid both the capital outlay for buying land and the risk involved in renting. Also, city hotels are very well equipped to run office space: they have the staff and the procedures for running such a business, and most hotels also make significant profits from conventions and conferences.

The way the economy has been working in first world countries has been about being as competitive as possible. Just in time delivery to avoid using storage space and machines to package things in exactly the way that customers need and no more machines than needed for regular capacity. This means that there’s no spare capacity when things go wrong. A few years ago a company making bolts for the car industry went bankrupt because the car companies forced the prices down, then car manufacture stopped due to lack of bolts – this could have been a wake up call but was ignored. Now we have had problems with toilet paper shortages due to it being packaged in wholesale quantities for offices and schools not retail quantities for home use. Food was destroyed because it was created for restaurant packaging and couldn’t be packaged for home use in a reasonable amount of time.

Farmer’s markets alleviate some of the problems with packaging food etc. But they aren’t a good option when there’s a pandemic as disease risk makes them less appealing to customers and therefore less profitable for vendors.

Religion

Many religious groups have supported social distancing. Could this be the start of more decentralised religion? Maybe have people read the holy book of their religion and pray at home instead of being programmed at church? We can always hope.

25 June, 2020 03:27AM by etbe

June 24, 2020

Ian Jackson

Renaming the primary git branch to "trunk"

I have been convinced by the arguments that it's not nice to keep using the word master for the default git branch. Regardless of the etymology (which is unclear), some people say they have negative associations for this word. Changing this upstream in git is complicated on a technical level and, sadly, contested.

But git is flexible enough that I can make this change in my own repositories. Doing so is not even so difficult.

So:

Announcement

I intend to rename master to trunk in all repositories owned by my personal hat. To avoid making things very complicated for myself I will just delete refs/heads/master when I make this change. So there may be a little disruption to downstreams.

I intend to make this change everywhere eventually. But rather than front-loading the effort, I'm going to do this to repositories as I come across them anyway. That will allow me to update all the docs references, any automation, etc., at a point when I have those things in mind anyway. Also, doing it this way will allow me to focus my effort on the most active projects, and avoids me committing to a sudden large pile of fiddly clerical work. But: if you have an interest in any repository in particular that you want updated, please let me know so I can prioritise it.
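
For anyone wanting to do the same to their own repositories, here is a minimal sketch, assuming a single conventional origin remote and a self-hosted bare repository (hosting platforms may instead need their default branch changed in a web UI):

# in a local clone: rename the branch and publish it
git branch -m master trunk
git push origin trunk

# in the bare server-side repository: make HEAD point at trunk
git symbolic-ref HEAD refs/heads/trunk

# back in the clone: delete the old branch, as announced above
git push origin --delete master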

Bikeshed

Why "trunk"? "Main" has been suggested elswewhere, and it is often a good replacement for "master" (for example, we can talk very sensibly about a disk's Main Boot Record, MBR). But "main" isn't quite right for the VCS case; for example a "main" branch ought to have better quality than is typical for the primary development branch.

Conversely, there is much precedent for "trunk". "Trunk" was used to refer to this concept by at least SVN, CVS, RCS and CSSC (and therefore probably SCCS) - at least in the documentation, although in some of these cases the command line API didn't have a name for it.

So "trunk" it is.

Aside: two other words - passlist, blocklist

People are (finally!) starting to replace "blacklist" and "whitelist". Seriously, why has it taken everyone this long?

I have been using "blocklist" and "passlist" for these concepts for some time. They are drop-in replacements. I have also heard "allowlist" and "denylist" suggested, but they are cumbersome and cacophonous.

Also "allow" and "deny" seem to more strongly imply an access control function than merely "pass" and "block", and the usefulness of passlists and blocklists extends well beyond access control: protocol compatibility and ABI filtering are a couple of other use cases.




24 June, 2020 05:38PM


Raphaël Hertzog

Freexian’s report about Debian Long Term Support, May 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work on LTS in May and has now reported 5h of work in April (out of 17.25h assigned plus 46h from April), thus carrying over 58.25h to June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).

Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC; logs and minutes are available online. Then we also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we'd like to use this opportunity to point out that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we'd like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or failed to act), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.


24 June, 2020 01:03PM by Raphaël Hertzog

June 23, 2020

Russell Coker

Squirrelmail vs Roundcube

For some years I’ve had SquirrelMail running on one of my servers for the people who like such things. It seems that the upstream support for SquirrelMail has ended (according to the SquirrelMail Wikipedia page there will be no new releases just Subversion updates to fix bugs). One problem with SquirrelMail that seems unlikely to get fixed is the lack of support for base64 encoded From and Subject fields which are becoming increasingly popular nowadays as people who’s names don’t fit US-ASCII are encoding them in their preferred manner.

I’ve recently installed Roundcube to provide an alternative. Of course one of the few important users of webmail didn’t like it (apparently it doesn’t display well on a recent Samsung Galaxy Note), so now I have to support two webmail systems.

Below is a little Perl script to convert a SquirrelMail abook file into the CSV format used for importing a Roundcube contact list.

#!/usr/bin/perl
use strict;
use warnings;

# SquirrelMail .abook lines are pipe-separated:
#   nickname|firstname|lastname|email|extra
print "First Name,Last Name,Display Name,E-mail Address\n";
while(<STDIN>)
{
  chomp;
  my @fields = split(/\|/, $_);
  # Roundcube wants: first name, last name, display name, e-mail;
  # the display name is built from the nickname and the extra field.
  printf("%s,%s,%s %s,%s\n", $fields[1], $fields[2], $fields[0], $fields[4], $fields[3]);
}
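
Usage is a straightforward pipe; a sketch, with the script name and paths being illustrative:

# convert an existing SquirrelMail address book for import into Roundcube
perl abook2csv.pl < /var/lib/squirrelmail/data/user.abook > user-contacts.csv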

23 June, 2020 02:52AM by etbe

June 22, 2020


Dirk Eddelbuettel

#27: R and CRAN Binaries for Ubuntu

Welcome to the 27th post in the rationally regularized R revelations series, or R4 for short. This is an edited / updated version of yesterday’s T^4 post #7, as it really fits the R4 series as well as it fits the T4 series.

The new video is both part of our T^4 series of video lightning talks with tips, tricks, tools, and toys and a video in the R^4 series, as it revisits a topic previously covered in the latter: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.
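
One of those ways is the classic route of adding CRAN's own Ubuntu repository; a sketch for Ubuntu 20.04 ("focal") and the R 4.0 builds, assuming sudo access:

# add the key the CRAN Ubuntu binaries are signed with
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
# register the repository and install R from it
echo "deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/" | sudo tee /etc/apt/sources.list.d/cran.list
sudo apt update && sudo apt install -y r-base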

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

Thanks to Iñaki Ucar who followed up on twitter with a Fedora version. We exchanged some more messages, and concluded that a complete comparison (from an empty Ubuntu or Fedora container) to a full system takes about the same time on either system.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 June, 2020 11:44PM

Andrew Cater

And a nice quick plug - Apple WWDC keynote

Demonstrating virtualisation on the new Apple machines to be powered by Apple silicon and ARM. Parallels virtualisation: Linux gets mentioned - two clear shots, one of Debian 9 and one of Debian 10. That's good publicity, any which way you cut it, with a worldwide tech audience hanging on every word from the presenters.

Now - developer hardware is due out this week - Apple A12Z chip inside Mac mini format. Can we run Debian natively there?


22 June, 2020 07:40PM by Andrew Cater


Evgeni Golov

mass-migrating modules inside an Ansible Collection

In the Foreman project, we've been maintaining a collection of Ansible modules to manage Foreman installations since 2017. That is, 2 years before Ansible had the concept of collections at all.

For that you had to set library (and later module_utils and doc_fragment_plugins) in ansible.cfg and effectively inject our modules, their helpers and documentation fragments into the main Ansible namespace. Not the cleanest solution, but it worked quite well for us.

When Ansible started introducing Collections, we quickly joined, as the idea of namespaced, easily distributable and usable content units was great and exactly matched what we had in mind.

However, collections are only usable in Ansible 2.8 — or rather 2.9: 2.8 can consume them, but the tooling around building and installing them is lacking. Because of that we've been keeping our modules usable outside of a collection.

Until recently, when we decided it's time to move on, drop that compatibility (which cost us a few headaches over time) and release a shiny 1.0.0.

One of the changes we wanted for 1.0.0 is renaming a few modules. Historically we had the module names prefixed with foreman_ and katello_, depending on whether they were designed to work with Foreman (and plugins) or Katello (which is technically a Foreman plugin, but has a way more complicated deployment and currently can't be easily added to an existing Foreman setup). This made sense as long as we were injecting into the main Ansible namespace, but with collections the names became theforeman.foreman.foreman_<something> and while we all love Foreman, that was a bit too much. So we wanted to drop that prefix. And while at it, also change some other names (like ptable, which became partition_table) to be more readable.

But how? There is no tooling that would rename all files accordingly, adjust examples and tests. Well, bash to the rescue! I'm usually not a big fan of bash scripts, but renaming files, searching and replacing strings? That perfectly fits!

First of all we need a way to map the old name to the new name. In most cases it's just "drop the prefix", for the others you can have some if/elif/fi:

prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
if [[ ${old_name} == 'foreman_environment' ]]; then
  new_name='puppet_environment'
elif [[ ${old_name} == 'katello_sync' ]]; then
  new_name='repository_sync'
elif [[ ${old_name} == 'katello_upload' ]]; then
  new_name='content_upload'
elif [[ ${old_name} == 'foreman_ptable' ]]; then
  new_name='partition_table'
elif [[ ${old_name} == 'foreman_search_facts' ]]; then
  new_name='resource_info'
elif [[ ${old_name} == 'katello_manifest' ]]; then
  new_name='subscription_manifest'
elif [[ ${old_name} == 'foreman_model' ]]; then
  new_name='hardware_model'
else
  new_name=${prefixless_name}
fi

That defined, we need to actually have a ${old_name}. Well, that's a for loop over the modules, right?

for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  …
done

While we're looping over files, let's rename them and all the files that are associated with the module:

# rename the module
git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
# rename the tests and test fixtures
git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
  git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
done

Now comes the really tricky part: search and replace. Let's see where we need to replace first:

  1. in the module file
    1. module key of the DOCUMENTATION stanza (e.g. module: foreman_example)
    2. all examples (e.g. foreman_example: …)
  2. in all test playbooks (e.g. foreman_example: …)
  3. in pytest's conftest.py and other files related to test execution
  4. in documentation

sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md

You've probably noticed I used ${BASE} and ${TESTS} and never defined them… Lazy me.

But here is the full script, defining the variables and looping over all the modules.

#!/bin/bash
BASE=plugins/modules
TESTS=tests/test_playbooks
RUNTIME=meta/runtime.yml
echo "plugin_routing:" > ${RUNTIME}
echo "  modules:" >> ${RUNTIME}
for module in ${BASE}/foreman_*py ${BASE}/katello_*py; do
  old_name=$(basename ${module} .py)
  prefixless_name=$(echo ${old_name}| sed -E 's/^(foreman|katello)_//')
  if [[ ${old_name} == 'foreman_environment' ]]; then
    new_name='puppet_environment'
  elif [[ ${old_name} == 'katello_sync' ]]; then
    new_name='repository_sync'
  elif [[ ${old_name} == 'katello_upload' ]]; then
    new_name='content_upload'
  elif [[ ${old_name} == 'foreman_ptable' ]]; then
    new_name='partition_table'
  elif [[ ${old_name} == 'foreman_search_facts' ]]; then
    new_name='resource_info'
  elif [[ ${old_name} == 'katello_manifest' ]]; then
    new_name='subscription_manifest'
  elif [[ ${old_name} == 'foreman_model' ]]; then
    new_name='hardware_model'
  else
    new_name=${prefixless_name}
  fi
  echo "renaming ${old_name} to ${new_name}"
  git mv ${BASE}/${old_name}.py ${BASE}/${new_name}.py
  git mv ${TESTS}/${old_name}.yml ${TESTS}/${new_name}.yml
  git mv tests/fixtures/apidoc/${old_name}.json tests/fixtures/apidoc/${new_name}.json
  for testfile in ${TESTS}/fixtures/${old_name}-*.yml; do
    git mv ${testfile} $(echo ${testfile}| sed "s/${old_name}/${new_name}/")
  done
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" ${BASE}/*.py
  sed -E -i "/^(\s+${old_name}|module):/ s/${old_name}/${new_name}/g" tests/test_playbooks/tasks/*.yml tests/test_playbooks/*.yml
  sed -E -i "/'${old_name}'/ s/${old_name}/${new_name}/" tests/conftest.py tests/test_crud.py
  sed -E -i "/\`${old_name}\`/ s/${old_name}/${new_name}/g" README.md docs/*.md
  echo "    ${old_name}:" >> ${RUNTIME}
  echo "      redirect: ${new_name}" >> ${RUNTIME}
  git commit -m "rename ${old_name} to ${new_name}" ${BASE} tests/ README.md docs/ ${RUNTIME}
done

As a bonus, the script will also generate a meta/runtime.yml which can be used by Ansible 2.10+ to automatically use the new module names if the playbook contains the old ones.
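
For example, for the ptable module the generated file would contain an entry like this (taken straight from the echo lines in the script):

plugin_routing:
  modules:
    foreman_ptable:
      redirect: partition_table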

Oh, and yes, this is probably not the nicest script you'll read this year. Maybe not even today. But it got the job done nicely, and I don't expect to need it again anyway.

22 June, 2020 07:31PM by evgeni

June 21, 2020

Enrico Zini

Forced italianisation of Südtirol

Italianization (Italian: Italianizzazione; Croatian: talijanizacija; Slovene: poitaljančevanje; German: Italianisierung; Greek: Ιταλοποίηση) is the spread of Italian culture and language, either by integration or assimilation.[1][2]
In 1919, at the time of its annexation, the middle part of the County of Tyrol which is today called South Tyrol (in Italian Alto Adige) was inhabited by almost 90% German speakers.[1] Under the 1939 South Tyrol Option Agreement, Adolf Hitler and Benito Mussolini determined the status of the German and Ladin (Rhaeto-Romanic) ethnic groups living in the region. They could emigrate to Germany, or stay in Italy and accept their complete Italianization. As a consequence of this, the society of South Tyrol was deeply riven. Those who wanted to stay, the so-called Dableiber, were condemned as traitors while those who left (Optanten) were defamed as Nazis. Because of the outbreak of World War II, this agreement was never fully implemented. Illegal Katakombenschulen ("Catacomb schools") were set up to teach children the German language.
The Prontuario dei nomi locali dell'Alto Adige (Italian for Reference Work of Place Names of Alto Adige) is a list of Italianized toponyms for mostly German place names in South Tyrol (Alto Adige in Italian) which was published in 1916 by the Royal Italian Geographic Society (Reale Società Geografica Italiana). The list was called the Prontuario in short and later formed an important part of the Italianization campaign initiated by the fascist regime, as it became the basis for the official place and district names in the Italian-annexed southern part of the County of Tyrol.
Ettore Tolomei (16 August 1865, in Rovereto – 25 May 1952, in Rome) was an Italian nationalist and fascist. He was designated a Member of the Italian Senate in 1923, and ennobled as Conte della Vetta in 1937.
The South Tyrol Option Agreement (German: Option in Südtirol; Italian: Opzioni in Alto Adige) was an agreement in effect between 1939 and 1943, when the native German speaking people in South Tyrol and three communes in the province of Belluno were given the option of either emigrating to neighboring Nazi Germany (of which Austria was a part after the 1938 Anschluss) or remaining in Fascist Italy and being forcibly integrated into the mainstream Italian culture, losing their language and cultural heritage. Over 80% opted to move to Germany.

21 June, 2020 10:00PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

T^4 #7 and R^4 #5: R and CRAN Binaries for Ubuntu

A new video in our T^4 series of video lightning talks with tips, tricks, tools, and toys is also a video in the R^4 series, as it revisits a topic previously covered in the latter: how to (more easily) get (binary) packages onto your Ubuntu system. In fact, we show it in three different ways.

The slides are here.

This repo at GitHub supports the series: use it to open issues for comments, criticism, suggestions, or feedback.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 June, 2020 09:48PM

Molly de Blanc

Fire

The world is on fire.

I know many of you are either my parents’ friends or here for the free software thoughts, but rather than listen to me, I want you to listen to Black voices in these fields.

If you’re not Black, it’s your job to educate yourself on issues that affect Black people all over the world and the way systems are designed to benefit White Supremacy. It is our job to acknowledge that racism is a problem — whether it appears as White Supremacy, Colonialism, or something as seemingly banal as pay gaps.

We must make space for Black voices. We must make space for Black Women. We must make space for Black trans lives. We must do this in technology. We must build equity. We must listen.

I know I have a platform. It’s one I value highly because I’ve worked so hard for the past twelve years to build it against sexist attitudes in my field (and the world). However, it is time for me to use that platform for the voices of others.

Please pay attention to Black voices in tech and FOSS. Do not just expect them to explain diversity and inclusion, but go to them for their expertise. Pay them respectful salaries. Mentor Black youth and Black people who want to be involved. Volunteer or donate to groups like Black Girls Code, The Last Mile, and Resilient Coders.

If you’re looking for mentorship, especially around things like writing, speaking, community organizing, or getting your career going in open source, don’t hesitate to reach out to me. Mentorship can be a lasting relationship, or a brief exchange around a specific issue or event. If I can’t help you, I’ll try to connect you to someone who can.

We cannot build the techno-utopia unless everyone is involved.

21 June, 2020 04:32PM by mollydb

June 20, 2020

François Marier

Backing up to a GnuBee PC 2

After installing Debian buster on my GnuBee, I set it up for receiving backups from my other computers.

Software setup

I started by configuring it like a typical server, but without a few packages that either take a lot of memory or CPU.

I changed the default hostname:

  • /etc/hostname: foobar
  • /etc/mailname: foobar.example.com
  • /etc/hosts: 127.0.0.1 foobar.example.com foobar localhost

and then installed the avahi-daemon package to be able to reach this box using foobar.local.

I noticed the presence of a world-writable directory and so I tightened the security of some of the default mount points by putting the following in /etc/rc.local:

chmod 755 /etc/network
exit 0

Hardware setup

My OS drive (/dev/sda) is a small SSD so that the GnuBee can run silently when the spinning disks aren't needed. To hold the backup data, on the other hand, I got three 4 TB drives, which I set up in a RAID-5 array. If the data were valuable, I'd use RAID-6 instead, since it can survive two drives failing at the same time. But since this array only holds backups, I'd have to lose the original machine at the same time as two of the three drives, a very unlikely scenario.

I created new GPT partition tables on /dev/sdb, /dev/sdc and /dev/sdd, and used fdisk to create a single partition of type 29 (Linux RAID) on each of them.

Then I created the RAID array:

mdadm /dev/md127 --create -n 3 --level=raid5 -a /dev/sdb1 /dev/sdc1 /dev/sdd1

and waited more than 24 hours for that operation to finish. Next, I formatted the array:

mkfs.ext4 -m 0 /dev/md127

and added the following to /etc/fstab:

/dev/md127 /mnt/data/ ext4 noatime,nodiratime 0 2
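
To activate that entry without rebooting, the obvious step (not spelled out in my notes) is:

mkdir -p /mnt/data
mount /mnt/data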

To reduce unnecessary noise and reduce power consumption, I also installed hdparm:

apt install hdparm

and configured all spinning drives to spin down after being idle for 10 minutes (the spindown_time value is in units of 5 seconds, so 120 means 600 seconds) by putting the following in /etc/hdparm.conf:

/dev/sdb {
       spindown_time = 120
}

/dev/sdc {
       spindown_time = 120
}

/dev/sdd {
       spindown_time = 120
}

and then reloaded the configuration:

 /usr/lib/pm-utils/power.d/95hdparm-apm resume

Finally I setup smartmontools by putting the following in /etc/smartd.conf:

/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdc -a -o on -S on -s (S/../.././02|L/../../6/03)
/dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03)

and restarting the daemon:

systemctl restart smartd.service

Backup setup

I started by using duplicity since I have been using that tool for many years, but a 190GB backup took around 15 hours on the GnuBee with gigabit ethernet.

After a friend suggested it, I took a look at restic and I have to say that I am impressed. The same backup finished in about half the time.

User and ssh setup

After hardening the ssh setup as I usually do, I created a user account for each machine needing to back up onto the GnuBee:

adduser machine1
adduser machine1 sshuser
adduser machine1 sftponly
chsh machine1 -s /bin/false

and then matching directories under /mnt/data/home/:

mkdir /mnt/data/home/machine1
chown machine1:machine1 /mnt/data/home/machine1
chmod 700 /mnt/data/home/machine1

Then I created a custom ssh key for each machine:

ssh-keygen -f /root/.ssh/foobar_backups -t ed25519

and placed it in /home/machine1/.ssh/authorized_keys on the GnuBee.

On each machine, I added the following to /root/.ssh/config:

Host foobar.local
    User machine1
    Compression no
    Ciphers aes128-ctr
    IdentityFile /root/backup/foobar_backups
    IdentitiesOnly yes
    ServerAliveInterval 60
    ServerAliveCountMax 240

The reason for setting the ssh cipher and disabling compression is to speed up the ssh connection as much as possible, given that the GnuBee has very limited RAM bandwidth.

Another performance-related change I made on the GnuBee was switching to the internal sftp server by putting the following in /etc/ssh/sshd_config:

Subsystem      sftp    internal-sftp

Restic script

After reading through the excellent restic documentation, I wrote the following backup script, based on my old duplicity script, to reuse on all of my computers:

#!/bin/bash
# Configure for each host
PASSWORD="XXXX"  # use `pwgen -s 64` to generate a good random password
BACKUP_HOME="/root/backup"
REMOTE_URL="sftp:foobar.local:"
RETENTION_POLICY="--keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 2"

# Internal variables
SSH_IDENTITY="IdentityFile=$BACKUP_HOME/foobar_backups"
EXCLUDE_FILE="$BACKUP_HOME/exclude"
PKG_FILE="$BACKUP_HOME/dpkg-selections"
PARTITION_FILE="$BACKUP_HOME/partitions"

# If the list of files has been requested, only do that
if [ "$1" = "--list-current-files" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL ls latest
    exit 0

# Show list of available snapshots
elif [ "$1" = "--list-snapshots" ]; then
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL snapshots
    exit 0

# Restore the given file
elif [ "$1" = "--file-to-restore" ]; then
    if [ "$2" = "" ]; then
        echo "You must specify a file to restore"
        exit 2
    fi
    RESTORE_DIR="$(mktemp -d ./restored_XXXXXXXX)"
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL restore latest --target "$RESTORE_DIR" --include "$2" || exit 1
    echo "$2 was restored to $RESTORE_DIR"
    exit 0

# Delete old backups
elif [ "$1" = "--prune" ]; then
    # Expire old backups
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL forget $RETENTION_POLICY

    # Delete files which are no longer necessary (slow)
    RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL prune
    exit 0

# Catch invalid arguments
elif [ "$1" != "" ]; then
    echo "Invalid argument: $1"
    exit 1
fi

# Check the integrity of existing backups
RESTIC_PASSWORD=$PASSWORD restic --quiet -r $REMOTE_URL check || exit 1

# Dump list of Debian packages
dpkg --get-selections > $PKG_FILE

# Dump partition tables from harddrives
/sbin/fdisk -l /dev/sda > $PARTITION_FILE
/sbin/fdisk -l /dev/sdb >> $PARTITION_FILE  # append, don't clobber sda's table

# Do the actual backup
RESTIC_PASSWORD=$PASSWORD restic --quiet --cleanup-cache -r $REMOTE_URL backup / --exclude-file $EXCLUDE_FILE

I run it with the following cronjob in /etc/cron.d/backups:

30 8 * * *    root  ionice nice nocache /root/backup/backup-machine1-to-foobar
30 2 * * Sun  root  ionice nice nocache /root/backup/backup-machine1-to-foobar --prune

in a way that doesn't impact the rest of the system too much.

Finally, I printed a copy of each of my backup scripts, using enscript, to stash in a safe place:

enscript --highlight=bash --style=emacs --output=- backup-machine1-to-foobar | ps2pdf - > foobar.pdf

This is actually a pretty important step since without the password, you won't be able to decrypt and restore what's on the GnuBee.

20 June, 2020 08:46PM

Dima Kogan

OpenCV C API transition. A rant.

I just went through a debugging exercise that was so ridiculous, I just had to write it up. Some of this probably should go into a bug report instead of a rant, but I'm tired. And clearly I don't care anymore.

OK, so I'm doing computer vision work. OpenCV has been providing basic functions in this area, so I have been using them for a while. Just for really, really basic stuff, like projection. The C API was kinda weird, and their error handling is a bit ridiculous (if you give it arguments it doesn't like, it asserts!), but it has been working fine for a while.

At some point (around OpenCV 3.0) somebody over there decided that they didn't like their C API, and that this was now a C++ library. Except the docs still documented the C API, and the website said it supported C, and the code wasn't actually removed. They just kinda stopped testing it and thinking about it. So it would mostly continue to work, except some poor saps would see weird failures; like this and this, for instance. OpenCV 3.2 was the last version where it was mostly possible to keep using the old C code, even when compiling without optimizations. So I was doing that for years.

So now, in 2020, Debian is finally shipping a version of OpenCV that definitively does not work with the old code, so I had to do something. Over time I stopped using everything about OpenCV, except a few cvProjectPoints2() calls. So I decided to just write a small C++ shim to call the new version of that function, expose that with extern "C" to the rest of my world, and I'd be done. And normally I would be, but this is OpenCV we're talking about. I wrote the shim, and it didn't work. The code built and ran, but the results were wrong. After some pointless debugging, I boiled the problem down to this test program:

#include <opencv2/calib3d.hpp>
#include <stdio.h>

int main(void)
{
    double fx = 1000.0;
    double fy = 1000.0;
    double cx = 1000.0;
    double cy = 1000.0;
    double _camera_matrix[] =
        { fx,  0, cx,
          0,  fy, cy,
          0,   0,  1 };
    cv::Mat camera_matrix(3,3, CV_64FC1, _camera_matrix);

    double pp[3] = {1., 2., 10.};
    double qq[2] = {444, 555};

    int N=1;
    cv::Mat object_points(N,3, CV_64FC1, pp);
    cv::Mat image_points (N,2, CV_64FC1, qq);

    // rvec,tvec
    double _zero3[3] = {};
    cv::Mat zero3(1,3,CV_64FC1, _zero3);

    cv::projectPoints( object_points,
                       zero3,zero3,
                       camera_matrix,
                       cv::noArray(),
                       image_points,
                       cv::noArray(), 0.0);

    fprintf(stderr, "manually-projected no-distortion: %f %f\n",
            pp[0]/pp[2] * fx + cx,
            pp[1]/pp[2] * fy + cy);
    fprintf(stderr, "opencv says: %f %f\n", qq[0], qq[1]);

    return 0;
}

This is as trivial as it gets. I project one point through a pinhole camera, and print out the right answer (that I can easily compute, since this is trivial), and what OpenCV reports:

$ g++ -I/usr/include/opencv4 -o tst tst.cc -lopencv_calib3d -lopencv_core && ./tst

manually-projected no-distortion: 1100.000000 1200.000000
opencv says: 444.000000 555.000000

Well that's no good. The answer is wrong, but it looks like it didn't even write anything into the output array. Since this is supposed to be a thin shim to C code, I want this thing to be filling in C arrays, which is what I'm doing here:

double qq[2] = {444, 555};
int N=1;
cv::Mat image_points (N,2, CV_64FC1, qq);

This is how the C API has worked forever, and their C++ API works the same way, I thought. Nothing barfed, not at build time, or run time. Fine. So I went to figure this out. In the true spirit of C++, the new API is inscrutable. I'm passing in cv::Mat, but the API wants cv::InputArray for some arguments and cv::OutputArray for others. Clearly cv::Mat can be coerced into either of those types (and that's what you're supposed to do), but the details are not meant to be understood. You can read the snazzy C++-style documentation. Clicking on "OutputArray" in the doxygen gets you here. Then I guess you're supposed to click on "_OutputArray", and you get here. Understand what's going on now? Me neither.

Stepping through the code revealed the problem. cv::projectPoints() looks like this:

void cv::projectPoints( InputArray _opoints,
                        InputArray _rvec,
                        InputArray _tvec,
                        InputArray _cameraMatrix,
                        InputArray _distCoeffs,
                        OutputArray _ipoints,
                        OutputArray _jacobian,
                        double aspectRatio )
{
    ....
    _ipoints.create(npoints, 1, CV_MAKETYPE(depth, 2), -1, true);
    ....

I.e. they're allocating a new data buffer for the output, and giving it back to me via the OutputArray object. This object already had a buffer, and that's where I was expecting the output to go. Instead it went to the brand-new buffer I didn't want. Issues:

  • The OutputArray object knows it already has a buffer, and they could just use it instead of allocating a new one
  • If for some reason my buffer smells bad, they could complain to tell me they're ignoring it to save me the trouble of debugging, and then bitching about it on the internet
  • I think dynamic memory allocation smells bad
  • Doing it this way means the new function isn't a drop-in replacement for the old function

Well that's just super. I can call the C++ function, copy the data into the place it's supposed to go to, and then deallocate the extra buffer. Or I can pull out the meat of the function I want into my project, and then I can drop the OpenCV dependency entirely. Clearly that's the way to go.
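
Before moving on: for anyone who'd rather take the first option, a minimal sketch of such a copy-back shim could look like this (the function name is mine, and distortions and gradients are ignored):

#include <opencv2/calib3d.hpp>
#include <string.h>

extern "C"
void project_shim(double*       q,             // out: 2*N doubles, caller-allocated
                  const double* p,             // in:  3*N doubles
                  int           N,
                  const double* camera_matrix) // in:  3x3, row-major
{
    cv::Mat _object_points(N,3, CV_64FC1, (void*)p);
    cv::Mat _camera_matrix(3,3, CV_64FC1, (void*)camera_matrix);
    double _zero3[3] = {};
    cv::Mat zero3(1,3, CV_64FC1, _zero3);

    // let cv::projectPoints() allocate the buffer it insists on
    cv::Mat image_points;
    cv::projectPoints(_object_points, zero3, zero3, _camera_matrix,
                      cv::noArray(), image_points);

    // copy the data into the place it was supposed to go all along; the
    // freshly-allocated Mat is contiguous, so a single memcpy works
    memcpy(q, image_points.ptr<double>(0), 2*N*sizeof(double));
    // image_points frees its buffer when it goes out of scope
}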

So I go poking back into their code to grab what I need, and here's what I see:

static void cvProjectPoints2Internal( const CvMat* objectPoints,
                  const CvMat* r_vec,
                  const CvMat* t_vec,
                  const CvMat* A,
                  const CvMat* distCoeffs,
                  CvMat* imagePoints, CvMat* dpdr CV_DEFAULT(NULL),
                  CvMat* dpdt CV_DEFAULT(NULL), CvMat* dpdf CV_DEFAULT(NULL),
                  CvMat* dpdc CV_DEFAULT(NULL), CvMat* dpdk CV_DEFAULT(NULL),
                  CvMat* dpdo CV_DEFAULT(NULL),
                  double aspectRatio CV_DEFAULT(0) )
{
...
}

Looks familiar? It should. Because this is the original C-API function they replaced. So in their quest to move to C++, they left the original code intact, C API and everything, un-exposed it so you couldn't call it anymore, and made a new, shitty C++ wrapper for people to call instead. CvMat is still there. I have no words.

Yes, this is a massive library, and maybe other parts of it indeed did make some sort of non-token transition, but this thing is ridiculous. In the end, here's the function I ended up with (licensed as OpenCV; see the comment)

// The implementation of project_opencv is based on opencv. The sources have
// been heavily modified, but the opencv logic remains. This function is a
// cut-down cvProjectPoints2Internal() to keep only the functionality I want and
// to use my interfaces. Putting this here allows me to drop the C dependency on
// opencv. Which is a good thing, since opencv dropped their C API
//
// from opencv-4.2.0+dfsg/modules/calib3d/src/calibration.cpp
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
typedef union
{
    struct
    {
        double x,y;
    };
    double xy[2];
} point2_t;
typedef union
{
    struct
    {
        double x,y,z;
    };
    double xyz[3];
} point3_t;
void project_opencv( // outputs
                     point2_t* q,
                     point3_t* dq_dp,               // may be NULL
                     double* dq_dintrinsics_nocore, // may be NULL

                     // inputs
                     const point3_t* p,
                     int N,
                     const double* intrinsics,
                     int Nintrinsics)
{
    const double fx = intrinsics[0];
    const double fy = intrinsics[1];
    const double cx = intrinsics[2];
    const double cy = intrinsics[3];

    double k[12] = {};
    for(int i=0; i<Nintrinsics-4; i++)
        k[i] = intrinsics[i+4];

    for( int i = 0; i < N; i++ )
    {
        double z_recip = 1./p[i].z;
        double x = p[i].x * z_recip;
        double y = p[i].y * z_recip;

        double r2      = x*x + y*y;
        double r4      = r2*r2;
        double r6      = r4*r2;
        double a1      = 2*x*y;
        double a2      = r2 + 2*x*x;
        double a3      = r2 + 2*y*y;
        double cdist   = 1 + k[0]*r2 + k[1]*r4 + k[4]*r6;
        double icdist2 = 1./(1 + k[5]*r2 + k[6]*r4 + k[7]*r6);
        double xd      = x*cdist*icdist2 + k[2]*a1 + k[3]*a2 + k[8]*r2+k[9]*r4;
        double yd      = y*cdist*icdist2 + k[2]*a3 + k[3]*a1 + k[10]*r2+k[11]*r4;

        q[i].x = xd*fx + cx;
        q[i].y = yd*fy + cy;


        if( dq_dp )
        {
            double dx_dp[] = { z_recip, 0,       -x*z_recip };
            double dy_dp[] = { 0,       z_recip, -y*z_recip };
            for( int j = 0; j < 3; j++ )
            {
                double dr2_dp = 2*x*dx_dp[j] + 2*y*dy_dp[j];
                double dcdist_dp = k[0]*dr2_dp + 2*k[1]*r2*dr2_dp + 3*k[4]*r4*dr2_dp;
                double dicdist2_dp = -icdist2*icdist2*(k[5]*dr2_dp + 2*k[6]*r2*dr2_dp + 3*k[7]*r4*dr2_dp);
                double da1_dp = 2*(x*dy_dp[j] + y*dx_dp[j]);
                double dmx_dp = (dx_dp[j]*cdist*icdist2 + x*dcdist_dp*icdist2 + x*cdist*dicdist2_dp +
                                k[2]*da1_dp + k[3]*(dr2_dp + 4*x*dx_dp[j]) + k[8]*dr2_dp + 2*r2*k[9]*dr2_dp);
                double dmy_dp = (dy_dp[j]*cdist*icdist2 + y*dcdist_dp*icdist2 + y*cdist*dicdist2_dp +
                                k[2]*(dr2_dp + 4*y*dy_dp[j]) + k[3]*da1_dp + k[10]*dr2_dp + 2*r2*k[11]*dr2_dp);
                dq_dp[i*2 + 0].xyz[j] = fx*dmx_dp;
                dq_dp[i*2 + 1].xyz[j] = fy*dmy_dp;
            }
        }
        if( dq_dintrinsics_nocore )
        {
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 0] = fx*x*icdist2*r2;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 0] = fy*(y*icdist2*r2);

            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 1] = fx*x*icdist2*r4;
            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 1] = fy*y*icdist2*r4;

            if( Nintrinsics-4 > 2 )
            {
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 2] = fx*a1;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 2] = fy*a3;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 3] = fx*a2;
                dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 3] = fy*a1;
                if( Nintrinsics-4 > 4 )
                {
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 4] = fx*x*icdist2*r6;
                    dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 4] = fy*y*icdist2*r6;

                    if( Nintrinsics-4 > 5 )
                    {
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 5] = fx*x*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 5] = fy*y*cdist*(-icdist2)*icdist2*r2;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 6] = fx*x*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 6] = fy*y*cdist*(-icdist2)*icdist2*r4;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 7] = fx*x*cdist*(-icdist2)*icdist2*r6;
                        dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 7] = fy*y*cdist*(-icdist2)*icdist2*r6;
                        if( Nintrinsics-4 > 8 )
                        {
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 8] = fx*r2; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 8] = fy*0; //s1
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 9] = fx*r4; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 9] = fy*0; //s2
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 10] = fx*0;//s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 10] = fy*r2; //s3
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 0) + 11] = fx*0;//s4
                            dq_dintrinsics_nocore[(Nintrinsics-4)*(2*i + 1) + 11] = fy*r4; //s4
                        }
                    }
                }
            }
        }
    }
}

This does only the stuff I need: projection only (no geometric transformation), and gradients with respect to the point coordinates and distortions only. Gradients with respect to fxy and cxy are trivial, and I don't bother reporting them.

So now I don't compile or link against OpenCV, my code builds and runs on Debian/sid and (surprisingly) it runs much faster than before. Apparently there was a lot of pointless overhead happening.

Alright. Rant over.

20 June, 2020 07:42AM by Dima Kogan

June 19, 2020

Ingo Juergensmann

Jitsi Meet and ejabberd

Since the more or less global lockdown caused by Covid-19 there has been a lot of talk about video conferencing solutions that can be used by, e.g., those people trying to coordinate with coworkers while in home office. One of the solutions is Jitsi Meet, which is NOT packaged in Debian. But there are Debian packages provided by Jitsi itself.

Jitsi relies on an XMPP server. You can see the network overview in the docs. Jitsi itself uses Prosody as its XMPP server, and the docs only cover that one, but it's basically irrelevant which XMPP server you want to use. The only catch is that you can't follow the official Jitsi documentation when you are not using Prosody but, e.g., ejabberd instead. As always, it's sometimes difficult to find the correct/best non-official documentation or how-to, so I'll try to describe what helped me in configuring Jitsi Meet with ejabberd as XMPP server and my own coturn STUN/TURN server...

This is not a step-by-step description, but anyway... here we go with some links:

One of the first issues I stumbled across was that my Java was too old, but this can be quickly solved with update-alternatives:

$ update-alternatives --config java
There are 3 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1081      manual mode
  3            /usr/lib/jvm/jre-7-oracle-x64/bin/java           316       manual mode

It was set to jre-7, but I guess this was from years ago when I ran OpenFire as XMPP server.

If something is not working with Jitsi Meet, it helps to not only watch the log files, but also to open the debug console in your web browser. That way I caught some XMPP failures and saw that it tried to connect to some components where the DNS records were missing:

meet IN A yourIP
chat.meet IN A yourIP
focus.meet IN A yourIP
pubsub.meet IN A yourIP

Of course you also need to add some config to your ejabberd.yml:

host_config:
  domain.net:
    auth_password_format: scram
  meet.domain.net:
    ## Disable s2s to prevent spam
    s2s_access: none
    auth_method: anonymous
    allow_multiple_connections: true
    anonymous_protocol: both
    modules:
      mod_bosh: {}
      mod_caps: {}
      mod_carboncopy: {}
      #mod_disco: {}
      mod_stun_disco:
        secret: "YOURSECRETTURNCREDENTIALS"
        services:
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: udp
          -
            host: yourIP # Your coturn's public address.
            type: stun
            transport: tcp
          -
            host: yourIP # Your coturn's public address.
            type: turn
            transport: udp
      mod_muc:
        access: all
        access_create: local
        access_persistent: local
        access_admin: admin
        host: "chat.@HOST@"
      mod_muc_admin: {}
      mod_ping: {}
      mod_pubsub:
        access_createnode: local
        db_type: sql
        host: "pubsub.@HOST@"
        ignore_pep_from_offline: false
        last_item_cache: true
        max_items_node: 5000 # For Jappix this must be set to 1000000
        plugins:
          - "flat"
          - "pep" # requires mod_caps
        force_node_config:
          "eu.siacs.conversations.axolotl.*":
            access_model: open
          ## Avoid buggy clients to make their bookmarks public
          "storage:bookmarks":
            access_model: whitelist

There is more config that needs to be done, but one of the XMPP failures I spotted in the debug console in Firefox was that it tried to connect to conference.domain.net, while I prefer to use chat.domain.net for its brevity. If you prefer conference instead of chat, then you shouldn't be affected by this. Just make sure that your config is consistent with what Jitsi Meet wants to connect to. Maybe not all of the above lines are necessary, but this works for me.

Jitsi config is like this for me with short domains (/etc/jitsi/meet/meet.domain.net-config.js):

var config = {

    hosts: {
        domain: 'domain.net',
        anonymousdomain: 'meet.domain.net',
        authdomain: 'meet.domain.net',
        focus: 'focus.meet.domain.net',
        muc: 'chat.meet.domain.net'
    },

    bosh: '//meet.domain.net:5280/http-bind',
    //websocket: 'wss://meet.domain.net/ws',
    clientNode: 'http://jitsi.org/jitsimeet',
    focusUserJid: 'focus@domain.net',

    useStunTurn: true,

    p2p: {
        // Enables peer to peer mode. When enabled the system will try to
        // establish a direct connection when there are exactly 2 participants
        // in the room. If that succeeds the conference will stop sending data
        // through the JVB and use the peer to peer connection instead. When a
        // 3rd participant joins the conference will be moved back to the JVB
        // connection.
        enabled: true,

        // Use XEP-0215 to fetch STUN and TURN servers.
        useStunTurn: true,

        // The STUN servers that will be used in the peer to peer connections
        stunServers: [
            //{ urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' },
            //{ urls: 'stun:stun.l.google.com:19302' },
            //{ urls: 'stun:stun1.l.google.com:19302' },
            //{ urls: 'stun:stun2.l.google.com:19302' },
            { urls: 'stun:turn.domain.net:5349' },
            { urls: 'stun:turn.domain.net:3478' }
        ],

....

In the above config snippet, one of the issues solved was adding the port to the bosh setting. Of course you can instead make sure that your bosh requests get forwarded by your webserver acting as a reverse proxy. Using the port in the config might be a quick way to test whether or not your config is working: it's easier to solve one issue after the other and make one config change at a time than to make changes in several places.
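
If you go the reverse proxy route instead, a minimal snippet could look like this (assuming nginx is your webserver; adjust for whatever you run):

location /http-bind {
    proxy_pass http://127.0.0.1:5280/http-bind;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
}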

/etc/jitsi/jicofo/sip-communicator.properties:

org.jitsi.jicofo.auth.URL=XMPP:meet.domain.net
org.jitsi.jicofo.BRIDGE_MUC=jvbbrewery@chat.meet.domain.net

/etc/jitsi/videobridge/sip-communicator.properties:

org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
org.jitsi.videobridge.STATISTICS_INTERVAL=5000

org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=localhost
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=domain.net
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=SECRET
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@chat.meet.domain.net
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=videobridge1

org.jitsi.videobridge.DISABLE_TCP_HARVESTER=true
org.jitsi.videobridge.TCP_HARVESTER_PORT=4443
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=yourIP
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=yourIP
org.ice4j.ice.harvest.DISABLE_AWS_HARVESTER=true
org.ice4j.ice.harvest.STUN_MAPPING_HARVESTER_ADDRESSES=turn.domain.net:3478
org.ice4j.ice.harvest.ALLOWED_INTERFACES=eth0

Sometimes there might be stupid errors like (in my case) wrong hostnames like "chat.meet..domain.net" (a double dot in the domain), but you can spot those easily in the debug console of your browser.

There are tons of config options where you can easily make mistakes, but watching your logs and your debug console should really help you in sorting out these kinds of errors. The other URLs above are helpful as well, and more elaborate than my few lines here. Especially Mike Kuketz has some advanced configuration tips, like disabling third-party requests with "disableThirdPartyRequests: true" or limiting the number of video streams and such.

Hope this helps...

19 June, 2020 05:26PM by ij

June 18, 2020

Enrico Zini

Missing Qt5 designer library in cross-build development

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The problem

While testing the cross-compiler, we noticed that the designer library was not being built.

The designer library is needed to build designer plugins, which allow loading, dynamically at runtime, .ui interface files that use custom widgets.

The error the customer got at runtime is: QFormBuilder was unable to create a custom widget of the class '…'; defaulting to base class 'QWidget'.

The library with the custom widget implementation was correctly linked, and indeed the same custom widget was used by the application in other parts of its interface not loaded via .ui files.

It turns out that this is not sufficient: to load custom widgets automatically, QUiLoader wants to read their metadata from plugin libraries containing objects that implement the QDesignerCustomWidgetInterface interface.

Sadly, building such a library requires using QT += designer, and the designer library was not being built by Qt5's build system. This looks very much like a Qt5 bug.

A workaround would be to subclass QUiLoader, extending createWidget to teach it how to create the custom widgets we need. Unfortunately, the customer has many custom widgets.

The investigation

To find out why designer was not being built, I added -d to the qmake invocation at the end of qtbase/configure, and trawled through the 3.1G build output.

The needle in the haystack seems to be here:

DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:18: SUBDIRS := uiplugin uitools lib components designer plugins
DEBUG 1: /home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/src.pro:23: calling qtNomakeTools(lib components designer plugins)

As far as I can understand, qtNomakeTools seems to be intended to disable building those components if QT_BUILD_PARTS doesn't contain tools. For cross-building, QT_BUILD_PARTS is libs examples, so designer does not get built.

However, designer contains the library part needed for QDesignerCustomWidgetInterface and that really needs to be built. I assume that part should really be built as part of libs, not tools.

The fixes/workarounds

I tried removing designer from the qtNomakeTools invocation at qttools/src/designer/src/src.pro:23, to see if qttools/src/designer/src/designer/ would get built.

It did get built, but then build failed with designer/src/designer and designer/src/uitools both claiming the designer plugin.

I tried editing qttools/src/designer/src/uitools/uitools.pro not to claim the designer plugin when tools is not a build part.

I added the tweaks to the Qt5 build system as debian/patches.

2 hours of build time later...

make check is broken:

make[6]: Leaving directory '/home/build/armhf/qt-everywhere-src-5.15.0/qttools/src/designer/src/uitools'
make[5]: *** No rule to make target 'sub-components-check', needed by 'sub-designer-check'.  Stop.

But since make check doesn't do anything in this build, we can simply override dh_auto_test to skip that step.

Finally, this patch builds a new executable, of an architecture that makes dh_shlibdeps struggle:

dpkg-shlibdeps: error: cannot find library libQt5DesignerComponentssystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')
dpkg-shlibdeps: error: cannot find library libQt5Designersystem.so.5 needed by debian/qtbase5system-armhf-dev/opt/qt5system-armhf/bin/designer (ELF format: 'elf32-little' abi: '0101002800000000'; RPATH: '')

And we can just skip running dh_shlibdeps on the designer executable.
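
In debhelper terms, both workarounds amount to a couple of lines in debian/rules (a sketch, assuming a standard dh-style rules file; the actual packaging is in the repository below):

override_dh_auto_test:
	# make check is broken and doesn't do anything in this build anyway
	true

override_dh_shlibdeps:
	# skip the designer executable that dpkg-shlibdeps cannot analyse
	dh_shlibdeps -Xbin/designer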

The result is in the qt5custom git repository.

18 June, 2020 01:30PM

Ian Jackson

BountySource have turned evil - alternatives ?

I need an alternative to BountySource, who have done an evil thing. Please post recommendations in the comments.

From: Ian Jackson <*****>
To: support@bountysource.com
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 16:26:46 +0100

Bountysource writes ("Update to our Terms of Service"):
> You are receiving this email because we are updating the Bountysource Terms of
> Service, effective 1st July 2020.
>
> What's changing?
> We have added a Time-Out clause to the Bounties section of the agreement:
>
> 2.13 Bounty Time-Out.
> If no Solution is accepted within two years after a Bounty is posted, then the
> Bounty will be withdrawn and the amount posted for the Bounty will be retained
> by Bountysource. For Bounties posted before June 30, 2018, the Backer may
> redeploy their Bounty to a new Issue by contacting support@bountysource.com
> before July 1, 2020. If the Backer does not redeploy their Bounty by the
> deadline, the Bounty will be withdrawn and the amount posted for the Bounty
> will be retained by Bountysource.
>
> You can read the full Terms of Service here
>
> What do I need to do?
> If you agree to the new terms, you don't have to do anything.
>
> If you have a bounty posted prior to June 30, 2018 that is not currently being
> solved, email us at support@bountysource.com to redeploy your bounty.  Or, if
> you do not agree with the new terms, please discontinue using Bountysource.

I do not agree to this change to the Terms and Conditions.
Accordingly, I will not post any more bounties on BountySource.

I currently have one outstanding bounty of $200 on
   https://www.bountysource.com/issues/86138921-rfe-add-a-frontend-for-the-rust-programming-language

That was posted in December 2019.  It is not clear now whether that
bounty will be claimed within your 2-year timeout period.

Since I have not accepted the T&C change, please can you confirm that

(i) My bounty will not be retained by BountySource even if no solution
    is accepted by December 2021.

(ii) As a backer, you will permit me to vote on acceptance of that
    bounty should a solution be proposed before then.

I suspect that you intend to rely on the term in the previous T&C
giving you unlimited ability to modify the terms and conditions.  Of
course such a term is an unfair contract term, because if it were
effective it would give you the power to do whatever you like.  So it
is not binding on me.

I look forward to hearing from you by the 8th of July.  If I do not
hear from you I will take the matter up with my credit card company.

Thank you for your attention.

Ian.

They will try to say "oh it's all governed by US law" but of course section 75 of the Consumer Credit Act makes the card company jointly liable for Bountysource's breach of contract and a UK court will apply UK consumer protection law even to a contract which says it is to be governed by US law - because you can't contract out of consumer protection. So the card company are on the hook and I can use them as a lever.

Update - BountySource have changed their mind

From: Bountysource <support@bountysource.com>
To: *****
Subject: Re: Update to our Terms of Service
Date: Wed, 17 Jun 2020 18:51:11 -0700

Hi Ian

The new terms of service has with withdrawn.

This is not the end of the matter, I'm sure. They will want to get long-unclaimed bounties off their books (and having the cash sat forever at BountySource is not ideal for backers either). Hopefully they can engage in a dialogue and find a way that is fair, and that doesn't incentivise BountySource to sabotage bounty claims(!) I think that means that whatever it is, BountySource mustn't keep the money. There are established ways of dealing with similar problems (eg ancient charitable trusts; unclaimed bank accounts).

I remain wary. That BountySource is now owned by a cryptocurrency company is not encouraging. That they would even try what they just did is a really bad sign.

Edited 2020-06-17 16:28 for a typo in Bountysource's email address
Update added 2020-06-18 11:40 for BountySource's change of mind.




18 June, 2020 10:44AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

On masters and slaves, whitelists and blacklists...

LWN published today yet another great piece of writing, Loaded terms in free software. I am sorry, the content will not be immediately available to anybody following at home, as LWN is based on a subscription model — But a week from now, the article will be open for anybody to read. Or you can ask me (you most likely can find my contact addresses, as they are basically everywhere) for a subscriber link, I will happily provide it.

In consonance with the current mood that started with the killing of George Floyd and sparked worldwide revolts against police brutality, racism (mostly related to police and law enforcement forces, but social as well) and the like, the debate that already started some months ago in technical communities has re-sparked:

We have many terms that come with long histories attached to them, and we are usually oblivious to their obvious meaning. We? Yes, we, the main users and creators of technology. I never felt using master and slave to refer to different points of a protocol, bus, clock or whatever (do refer to the Wikipedia article for a fuller explanation) had any negative connotations — but then again, those terms have never tainted my personal family. That is, I understand I speak from a position of privilege.

A similar –although less heated– issue goes around the blacklist and whitelist terms, or other uses that use white to refer to good, law-abiding citizens, and black to refer to somewhat antisocial uses (i.e. the white hat and black hat hackers).

For several years, this debate has been sparking and dying off. Some important changes have been made — Particularly, in 2017 the Internet Systems Consortium started recommending Primary and Secondary, Python dropped master/slave pairs after a quite thorough and deep review throughout 2018, GitHub changed the default branch from master to main earlier this week. The Internet Engineering Task Force has a draft (that lapsed and thus sadly didn’t become an RFC, but still, is archived), Terminology, Power and Oppressive Language that lists suggested alternatives:

There are also many other relationships that can be used as metaphors, Eglash’s research calls into question the accuracy of the master-slave metaphor. Fortunately, there are ample alternatives for the master-slave relationship. Several options are suggested here and should be chosen based on the pairing that is most clear in context:

  • Primary-secondary
  • Leader-follower
  • Active-standby
  • Primary-replica
  • Writer-reader
  • Coordinator-worker
  • Parent-helper

I’ll add that I think we Spanish-speakers are not fully aware of the issue’s importance, because the most common translation I have seen for master/slave is maestro/esclavo: Maestro is the word for teacher (although we do keep our slaves in place). But think whether it sounds any worse if you refer to device pairs, or members of a database high-availability cluster, or whatever as Amo and Esclavo. It does sound much worse…

I cannot add much of value to this debate. I am just happy issues like this are being recognized and dealt with. If the topic interests you, do refer to the LWN article! Some excerpts:

I consider the following to be the core of Jonathan Corbet’s writeup:

Recent events, though, have made it clear — even to those of us who were happy to not question this view — that the story of slavery and the wider racist systems around it is not yet finished. There are many people who are still living in the middle of it, and it is not a nice place to be. We are not so enlightened as we like to think we are.

If there is no other lesson from the events of the last few weeks, we should certainly take to heart the point that we need to be listening to the people who have been saying, for many years, that they are still suffering. If there are people who are telling us that terms like “slave” or “blacklist” are a hurtful reminder of the inequities that persist in our society, we need to accept that as the truth and act upon it. Etymological discussions on what, say, “master” really means may be interesting, but they miss the point and are irrelevant to this discussion.

Part of a comment by user yokem_55:

Often, it seems to me that the replacement words are much more descriptive and precise than the old language. ‘Allowlist’ is far more obviously a list of explicitly authorized entities than ‘whitelist’. ‘Mainline’ has a more obvious meaning of a core stream of development than ‘master’.

The benefit of moving past this language is more than just changing cultural norms, it’s better, more precise communication across the board.

Another spot-on comment, by user alan:

From my perspective as a Black American male, I think that it’s nice to see people willing to see and address racism in various spheres. I am concerned that some of these steps will be more performative than substantial. Terminology changes in software so as to be more welcoming is a nice thing. Ensuring that oppressed minorities have access to the tools and resources to help reduce inequity and ensuring equal protection under the laws is better. We’ll get there one day I’m sure. The current ask is much simpler, its just to stop randomly killing and terrorizing us. Please and thank you.

So… Maybe the protests of this year caught special notoriety because the society is reacting after (or during, for many of us) the lockdown. In any case, I hope for their success in changing the planet’s culture of oppression.

Comments

Tomas Janousek 2020-06-19 10:04:32 +0200

In the blog post “On masters and slaves, whitelists and blacklists…” you claim that “GitHub changed the default branch from master to main earlier this week” but I don’t think that change is in effect yet. When you create a repo, the default branch is still named “master”.

Gunnar Wolf 2020-06-19 11:52:30 -0500

Umh, seems you are right. Well, what can I say? I’m reporting only what I have been able to find / read…

Now, given that said master branch does not carry any Git-specific meaning and is just a commonly used configuration… I hope people start picking it up.

No, I have not renamed master branches in any of my repos… but intend to do so soonish.

Tomas Janousek 2020-06-19 20:01:52 +0200

Yeah, don’t worry. I just find it sad that so much inaccurate news is spreading from a single CEO tweet, and I wanted to help stop that. I’m sure some change will happen eventually, but until it does, we shouldn’t speak about it in the past tense. :-)

18 June, 2020 07:00AM

Antoine Beaupré

Font changes

I have worked a bit on the fonts I use recently. From the main font I use every day in my text editor and terminals to this very website, I did a major and (hopefully) thoughtful overhaul of my typography, in the hope of making things easier to use and, to be honest, just prettier.

Editor and Terminal: Fira mono

This all started when I found out about the Jetbrains Mono font. I found the idea of ligatures fascinating: the result is truly beautiful. So I did what I often do (sometimes to the despair of some fellow Debian members) and filed an RFP to document my research.

As it turns out, Jetbrains Mono is not free enough to be packaged in Debian, because it requires proprietary tools to build. I nevertheless figured I could try another font so I looked at other monospace alternatives. I found the following packages in debian:

Those are also "programmer fonts" that caught my interest but somehow didn't land in Debian yet:

Because Fira Code had ligatures, I ended up giving it a shot. I really like the originality of the font. See, for example, how the @ sign looks when compared to my previous font, Liberation Mono:

A dark terminal window showing the prompt '[SIGPIPE]anarcat@angela:~(master)$' in the Liberation Mono font
Liberation Mono

A dark terminal window showing the prompt '[SIGPIPE]anarcat@angela:~(master)$' in the Fira mono font
Fira Mono

Interestingly, a colleague (thanks ahf!) pointed me to the Practical Typography post "Ligatures in programming fonts: hell no", which makes the very convincing argument that ligatures are downright dangerous to programming environments. In my experience with the fonts, they also did not always give the result I would expect. I also remembered that the Emacs Haskell mode would have this tendency of inserting crazy syntactic sugar like this in source code without being asked, which I found extremely frustrating.

Besides, Emacs doesn't support ligatures, unless you count such horrendous hacks which hack at display time. That's because Emacs' display layer is not based on a modern rendering library like Pango, but on some scary legacy code that very few people understand. On top of the complexity of the codebase, there is also resistance to including a modern font.

So I ended up using Fira mono everywhere I use fixed-width fonts, even though it's not packaged in Debian. That involves the following configuration in my .Xresources (no, I haven't switched to Wayland):

! font settings
Emacs*font: Fira mono
rofi*font: Fira mono 12
! Symbola is to get emojis to display correctly, apt install fonts-symbola
!URxvt*font: xft:Monospace,xft:Symbola
URxvt*font: xft:Fira mono,xft:Symbola

I also dropped this script in ~/.local/share/fonts/download.sh, for lack of a better way, to download all the fonts I'm interested in:

#!/bin/sh

set -e
set -u

wget --no-clobber --continue https://practicaltypography.com/fonts/charter.zip
unzip -u charter.zip
wget --no-clobber --continue https://github.com/adobe-fonts/source-serif-pro/releases/download/3.001R/source-serif-pro-3.001R.zip
unzip -u source-serif-pro-3.001R.zip
wget --no-clobber --continue https://github.com/adobe-fonts/source-sans-pro/releases/download/3.006R/source-sans-pro-3.006R.zip
unzip -u source-sans-pro-3.006R.zip
wget --no-clobber --continue https://github.com/adobe-fonts/source-code-pro/releases/download/2.030R-ro%2F1.050R-it/source-code-pro-2.030R-ro-1.050R-it.zip
unzip -u source-code-pro-2.030R-ro-1.050R-it.zip
wget --no-clobber --continue -O Fira-4.202.zip https://github.com/mozilla/Fira/archive/4.202.zip || true
unzip -u Fira-4.202.zip

Update: I forgot to mention one motivation behind this was to work around a change in the freetype interpreter, discussed in bug 866685 and my upgrades documentation.

Website: Charter

That "hell no" article got me interested in the Practical Typography web book, which I read over the weekend. It was an eye opener and I realized I already had some of those concepts implanted; in fact it's probably after reading the Typography in ten minutes guide that I ended up trying Fira Sans a few years ago. I have removed that font since then, however, after realising it was taking up an order of magnitude more bandwidth than the actual page content.

I really loved the book, so much that I actually bought it. I liked the concept of it, the look, and the fact that it's a living document. There's a lot of typography work I would need to do on this site to catchup with the recommendations from Matthew Butterick. Switching fonts is only one part of this, but it's something I was excited to work on. So I sat down and reviewed the free fonts Butterick recommends and tried out a few. I ended up settling on Charter, a relatively old (in terms of computing) font designed by Matthew Carter (of Verdana fame) in 1987.

Charter really looks great and is surprisingly small. While a single version of Fira varies between 96KiB (Fira Sans Condensed) and 308KiB (Fira Sans Medium Italic), Charter is only 28KiB! While it's still about as large as most of my articles, I found it was a better compromise and decided to make the jump. This site is now in serif, which is a huge change for me.

The change was done with the following CSS:

h1, h2, h3, h4, h5, body {
    /* 
     * Charter: Butterick's favorite, freely available, found on https://practicaltypography.com/free-fonts.html
     * Palatino: Mac OS
     * Palatino Linotype: Windows
     * Noto serif: Android, packaged in Debian
     * Liberation serif: Linux fallback
     * Serif: fallback
     */
    font-family: Charter, Palatino, "Palatino Linotype", "Noto serif", "Liberation serif", serif;
    /* Charter is available from https://practicaltypography.com/charter.html under the liberal Bitstream license */
}

I have also decided to set headings apart by slanting them, using the beautiful italic version of Charter:

h1, h2, h3, h4, h5 {
    font-style: italic;
}

I also made the point size larger, if the display is large enough:

/* enlarge body point size for charter for larger displays */
@media (min-device-width: 750px) {
    body {
        font-size: 20px;
        line-height: 1.3; /* default in FF is ~1.48 */
    }
}

Modern displays (and the fonts rendered on them) have a much higher resolution, and the point size on my website was really getting too small to be readable. This, in turn, required a change to the max-width:

#content {
    /* about 2.5 alphabets with charter */
    max-width: 35em;
}

I have kept the footers and "UI" elements in sans-serif though, and kept those aligned on operating system defaults or "system fonts":

/* some hacking at typefaces to get some fresh zest in here
 * fallbacks from:
 * https://en.wikipedia.org/wiki/List_of_typefaces_included_with_Microsoft_Windows
 * https://en.wikipedia.org/wiki/List_of_typefaces_included_with_macOS
 */
.navbar, .footer {
    /*
     * Avenir: Mac, quite different but should still be pretty
     * Gill sans: Mac, Windows, somewhat similar to Avenir (indulge me)
     * Noto sans: Android, packaged in Debian
     * Open sans: Texlive extras AKA Linux, packaged in Debian
     * Fira sans: Mozilla's Firefox OS
     * Liberation sans: Linux fallback
     * Helvetica: general fallback
     * Sans-serif: fallback
     */
    font-family: Avenir, "Gill sans", "Noto sans", "Open sans", "Fira sans", "Liberation sans", Helvetica, sans-serif;
    /* Fira is available from https://github.com/mozilla/Fira/ under the SIL Open Font License */
    /* alternatively, just use system fonts for "controls" instead:
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", sans-serif;
    gitlab uses: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Noto Sans", Ubuntu, Cantarell, "Helvetica Neue", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji"
    */
}

I've only done some preliminary testing of how this will look. Although I tested on a few devices (my phone, e-book tablet, an iPad, and of course my laptop), I fully expect things to break on your device. Do let me know if things look better or worse. For future comparison, my site is well indexed in the Internet Wayback Machine and can be used to look at the site before the change. For example, compare the previous article here with its earlier style.

The changes to the theme are of course available in my custom ikiwiki bootstrap theme (see in particular commits 0bca0fb7 and d1901fb8), as usual.

Enjoy, and let me know what you think!

PS: I considered just naming the Charter font in CSS without adding it as a @font-face. I'm still considering that option and might do so if the performance cost is too big. The Fira mono font is actually set like this for the preformatted sections of the site: because it's more commonly installed (and too big), I haven't added it as a @font-face. You might want to download the font locally to benefit from the full experience as well.

PPS: As it turns out, an earlier version of this post featured exactly that: a non-webfont version of Charter, which works fine if you have a good Charter font available. But it looks absolutely terrible if, like many Linux users, you have the nasty bitmap font shipped with xfonts-100dpi and xfonts-75dpi. So I restored the webfont; without it, it's unlikely this site would render reasonably well on Linux until those packages are removed or bitmap font rendering is disabled.

18 June, 2020 12:07AM

June 17, 2020

hackergotchi for Ulrike Uhlig

Ulrike Uhlig

On Language

Language is a tool of power

In school, we read the philologist Victor Klemperer's diary about the changes in the German language during the Third Reich, LTI - Lingua Tertii Imperii, a book which makes it clear that the use of language is political, creates realities, and in turn shapes the concepts of an entire society. Language was one of the tools that supported Nazism by insidiously pervading all parts of society.

Language shapes our concepts of society

Around the same time, a friend of mine proposed to read Egalia's daughters by Gerd Brantenberg, a book in which gendered words were reversed: so that human becomes huwim, for example. This book made me take notice of gendered concepts that often go unnoticed.

Language shapes the way we think and feel

I spent a large part of my adult life in France, which confronted me with the realization that a language provides its speakers with certain concepts. If a concept does not exist in a language, people cannot easily feel or imagine this concept either.

Back then (roughly 20 years ago), even though I was aware of gender inequality, I hated using gender-neutral language because in German and French it felt unnatural, and, or so I thought, we were all alike. One day, at a party, we played a game that consisted of guessing people's professions by asking them Yes/No questions. It turned out that we were unable to guess that the woman we were talking with was a doctor, because we simply could not imagine this profession for a young woman. In French, docteur is male, and almost nobody would use the word doctoresse, or femme docteur.

Just as hard to imagine are the concepts behind German words that have no equivalent in French, or vice versa:

  • Sehnsucht, composed of longing (sich sehnen) and obsession (Sucht). In English, this word is translated as longing. In French it is translated as nostalgie (nostalgia), but nostalgia is directed towards the past, while Sehnsucht in German can designate a longing for people, places, food, even feelings, and can be used in all temporal directions. There are other approximate translations into French, for example aspiration.
  • Das Unheimliche is a German word and the title of an essay by Sigmund Freud from 1919. The translation of the title was subject to a lot of debate before the text was published in French under the name "L'inquiétante étrangeté", something that would translate to English as worrisome unfamiliarity. In English, there is a word for unheimlich, which is uncanny; however, canny does not carry the German concept of heimlich, which is related to home, familiarity, and secrecy.
  • Dépaysement. This French word is a negation of feeling at home in one's country (pays): it describes the generally positively connoted feelings one experiences when changing habits or environment.

Or, to make all this a bit less serious, Italian has the word gattara (female) or gattaro (male), which one could translate to English roughly as cat person, most often designating old women who feed stray cats.

But really, the way language shapes our concepts and ideas goes much further, as well explained by Lera Boroditsky in a talk in which she explains how language influences concepts of space, time, and blame, among other things.

Building new models

This quote by Buckminster Fuller is pinned on the wall over my desk:

You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.

A change in language is such a new model: it can make oppression and inequalities visible. Words do not only describe our world, they are a vehicle of ideas, and utopias. Analyzing and criticizing our use of language means paving the way for ideas and concepts of inclusion, equality, and unity.

You might be guessing where I am going with this… Right: I am in favor of acknowledging past mistakes and replacing oppressive metaphors in computing. As noted in the IETF draft about Terminology, Power and Oppressive Language, by Niels Ten Oever and Mallory Knodel, the metaphors "master/slave" and "blacklist/whitelist" associate "white with good and black with evil [which] is known as the 'bad is black effect'", all the while being technically inaccurate.

I acknowledge that this will take time. There is a lot of work to do.

17 June, 2020 03:30PM by ulrike

hackergotchi for Wouter Verhelst

Wouter Verhelst

Software available through Extrepo

Just over 7 months ago, I blogged about extrepo, my answer to the "how do you safely install software on Debian without downloading random scripts off the Internet and running them as root" question. I also held a talk during the recent "MiniDebConf Online" that was held, well, online.

The most important part of extrepo is "what can you install through it". If the number of available repositories is too low, there's really no reason to use it. So, I thought, let's look at what we have after 7 months...

To cut to the chase, there's a bunch of interesting content there, although not all of it has a "main" policy. Each of these can be enabled by installing extrepo, and then running extrepo enable <reponame>, where <reponame> is the name of the repository.
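
As a concrete example, the entire flow for one of the repositories listed below looks something like this (the package name inside the repository is an assumption on my part; check the repository itself):

apt install extrepo
extrepo enable vscodium
apt update
apt install codium   # package name assumed; see the repository's own documentation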

Note that the list is not exhaustive, but I intend to show that even though we're nowhere near complete, extrepo is already quite useful in its current state:

Free software

  • The debian_official, debian_backports, and debian_experimental repositories contain Debian's official, backports, and experimental repositories, respectively. These shouldn't have to be managed through extrepo, but then again it might be useful for someone, so I decided to just add them anyway. The config here uses the deb.debian.org alias for CDN-backed package mirrors.
  • The belgium_eid repository contains the Belgian eID software. Obviously this is added, since I'm upstream for eID, and as such it was a large motivating factor for me to actually write extrepo in the first place.
  • elastic: the elasticsearch software.
  • Some repositories, such as dovecot, winehq, and bareos, contain upstream versions of their respective software. These repositories contain software that is available in Debian, too; but their upstreams package their most recent releases independently, and some people might prefer to run those instead.
  • The sury, fai, and postgresql repositories, as well as a number of repositories such as openstack_rocky, openstack_train, haproxy-1.5 and haproxy-2.0 (there are more), contain more recent versions of software already packaged in Debian, provided by the same maintainer. For the sury repository, that is PHP; for the others, the name should give it away.

    The difference between these repositories and the ones above is that it is the official Debian maintainer for the same software who maintains the repository, which is not the case for the others.

  • The vscodium repository contains the unencumbered version of Microsoft's Visual Studio Code; i.e., codium is to code as the chromium browser is to chrome: it is a build of the same software, but without the non-free bits that make code not entirely Free Software.
  • While Debian ships with at least two browsers (Firefox and Chromium), additional browsers are available through extrepo, too. The iridiumbrowser repository contains a Chromium-based browser that focuses on privacy.
  • Speaking of privacy, perhaps you might want to try out the torproject repository.
  • For those who want to do Cloud Computing on Debian in ways that aren't covered by Openstack, there is a kubernetes repository that contains the Kubernetes stack, as well as the google_cloud one containing the Google Cloud SDK.

Non-free software

While these are available to be installed through extrepo, please note that non-free and contrib repositories are disabled by default. Before enabling such a repository, you must first allow its policy; this can be accomplished through /etc/extrepo/config.yaml.
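
A minimal sketch of what that looks like, assuming the relevant setting is the policy list in that file (verify the exact key name against the comments in your copy):

sudoedit /etc/extrepo/config.yaml   # add contrib and non-free to the enabled policy list
extrepo enable vscode               # enabling a non-free repository now succeeds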

  • In case you don't care about freedom and want the official build of Visual Studio Code, the vscode repository contains it.
  • While we're on the subject of Microsoft, there's also Microsoft Teams available in the msteams repository. And, hey, skype.
  • For those who are not satisfied with the free browsers in Debian or any of the free repositories, there's opera and google_chrome.
  • The docker-ce repository contains the official build of Docker CE. While this is the free "community edition" that should have free licenses, I could not find a licensing statement anywhere, and therefore I'm not 100% sure whether this repository is actually free software. For that reason, it is currently marked as a non-free one. Merge Requests for rectifying that from someone with more information on the actual licensing situation of Docker CE would be welcome...
  • For gamers, there's Valve's steam repository.

Again, the above lists are not meant to be exhaustive.

Special thanks go out to Russ Allbery, Kim Alvefur, Vincent Bernat, Nick Black, Arnaud Ferraris, Thorsten Glaser, Thomas Goirand, Juri Grabowski, Paolo Greppi, and Josh Triplett, for helping me build the current list of repositories.

Is your favourite repository not listed? Create a configuration based on template.yaml, and file a merge request!

17 June, 2020 10:02AM

Russ Allbery

Review: Network Effect

Review: Network Effect, by Martha Wells

Series: Murderbot Diaries #5
Publisher: Tor
Copyright: May 2020
ISBN: 1-250-22984-7
Format: Kindle
Pages: 351

Network Effect is the first Murderbot novel, although the fifth story of the series. The previous stories, beginning with All Systems Red, were novellas. Under no circumstances should you start reading the series here. Network Effect builds significantly on the story arc that ended with Exit Strategy and resolves some important loose ends from Artificial Condition. It's meant to be read in series order.

I believe this is the first time in my life that I've started reading a book on the night of its release. I was looking forward to this novel that much, and it does not disappoint.

I'll try not to spoil the previous books too much in this review, but at this point it's a challenge. Just go read them. They're great.

The big question I had about the first Murderbot novel was how it would change the plot dynamic of the series. All of the novellas followed roughly the same plot structure: Murderbot would encounter some humans who need help, somewhat grudgingly help them while pursuing its own agenda, snark heavily about human behavior in the process, once again prove its competence, and do a little bit of processing of its feelings and a lot of avoiding them. This formula works great at short length. Would Wells change it at novel length, and if not, would it get tedious or strained?

The answer is that Wells added in quite a bit more emotional processing and relationship management to flesh out the core of the book and created a plot with more layers and complexity than the novella plots, and the whole construction works wonderfully. This is exactly the book I was hoping for when I heard there would be a Murderbot novel. If you like the series, you'll like this, and should feel free to read it now without reading the rest of the review.

Overse added, "Just remember you're not alone here."

I never know what to say to that. I am actually alone in my head, and that's where 90 plus percent of my problems are.

Many of the loose ends in the novellas were tied up in the final one, Exit Strategy. The biggest one that wasn't, at least in my opinion, was ART, the research transport who helped Murderbot considerably in Artificial Condition and clearly was more than it appeared to be. That is exactly the loose end that Wells resolves here, to great effect. I liked the dynamic between ART and Murderbot before, but it's so much better with an audience to riff off of (and yet better still when there are two audiences, one who already knew Murderbot and one who already knew ART). I like ART almost as much as Murderbot, and that's saying a lot.

The emotional loose end of the whole series has been how Murderbot will decide to interact with other humans. I think that's not quite resolved by the end of the novel, but we and Murderbot have both learned considerably more. The novellas, except for the first, are mostly solo missions even when Murderbot is protecting clients. This is something more complicated; the interpersonal dynamics hearken back to the first novella and then go much deeper, particularly in the story-justified flashbacks. Wells uses Murderbot's irritated avoidance to keep some emotional dynamics underplayed and indirect, letting the reader discover them at opportune moments, and this worked beautifully for me. And Murderbot's dynamic with Amena is just wonderful, mostly because of how smart, matter-of-fact, trusting, and perceptive Amena is.

That's one place where the novel length helps: Wells has more room to expand the characterization of characters other than Murderbot, something that's usually limited in the novellas to a character or two. And these characters are great. Murderbot is clearly the center of the story, but the other characters aren't just furniture for it to react to. They have their own story arcs, they're thoughtful, they learn, and it's a delight to watch them slot Murderbot into various roles, change their minds, adjust, and occasionally surprise it in quite touching ways, all through Murderbot's eyes.

Thiago had said he felt like he should apologize and talk to me more about it. Ratthi had said, "I think you should let it go for a while, at least until we get ourselves out of this situation. SecUnit is a very private person, it doesn't like to discuss its feelings."

This is why Ratthi is my friend.

I have some minor quibbles. The targetSomething naming convention Murderbot comes up with and then is stuck with because it develops too much momentum is entertaining but confusing. A few of the action sequences were just a little on the long side; I find the emotional processing much more interesting. There's also a subplot with a character with memory holes and confusion that I thought dragged on too long, mostly because I found the character intensely irritating for some reason. But these are just quibbles. Network Effect is on par with the best of the novellas that precede it, and that's a high bar indeed.

In this series, Wells has merged the long-running science fiction thread of artificial intelligences and the humanity of robots with the sarcastic and introspective first-person narration of urban fantasy, gotten the internal sensation of emotional avoidance note-perfect without making it irritating (that's some deep magic right there), and added in some top-tier negotiation of friendship and relationships without losing the action and excitement of a great action movie. It's a truly impressive feat and the novel is the best installment so far. I will be stunned if Network Effect doesn't make most of the award lists next year.

Followed by Fugitive Telemetry, due out in April of 2021. You can believe that I have already preordered it.

Rating: 9 out of 10

17 June, 2020 04:22AM

June 16, 2020

Hideki Yamane

excitement kills thinking

"master is wrong word!!! Stop to use it in tech world!!!"

Oh, such activity reminds me of the Great Proletarian Cultural Revolution (无产阶级文化大革命).

Just changing the words does not solve the problems in the real world, IMHO (of course, this is my opinion; it may differ from yours).

16 June, 2020 10:47AM by Hideki Yamane (noreply@blogger.com)

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Kodi PS3 BD Remote

Setting up a Sony PS3 Blu-Ray Disc Remote Controller with Kodi

TLDR; Since most of the articles on the internet were either obsolete or broken, I've chosen to write these notes down in the form of a blog post so that they help me now and in the future, and hopefully others too.

Raspberry Pi

All this time, I have been using the Raspberry Pi for my HTPC needs. I acquired my first RPi in 2014 and I have been very happy with the amount of support in the community and the quality of the HTPC experience it offers. I also appreciate the RPi's form factor and its low power consumption. And then, to add more sugar to it, it uses a derivative of Debian, Raspbian, which is very familiar and feels good to me.

Raspberry Pi Issues

So primarily, I use my RPi with Kodi. There are a bunch of other (daemon) services, but the primary use case is HTPC only. RPi + Kodi has a very annoying issue wherein the audio level fades during video playback. The loss is so bad that the audio becomes barely audible. The workaround is to seek the video playback either way, after which the audio comes back to its actual level, only to fade again in a while.

My suspicion was that it may be a problem with Kodi. Or at least, Kodi would have a workaround in software. But unfortunately, I wasted a lot of time in dealing with my suspicion with no fruitful result.

This became more of a PITA over time. And it seems the issue is with the hardware itself, because after I moved my setup to a regular laptop, the audio loss was gone.

Laptop with Kodi

Since I had my old Lenovo Yoga 2 13 lying around, powered on all the time, it made sense to get some more use out of it as the HTPC. This machine comes with a Micro-HDMI Out port, so it felt ideal for my High Definition video rendering needs.

It comes stock with Intel HD graphics with good driver support in Linux, so it was quite quick and easy getting Kodi up and running on it. And as I mentioned above, the sound issues are not seen on this setup.

Some added benefits are that I get to run stock Debian on this machine. And I must say a big THANK YOU to the Debian Multimedia Maintainers, who’ve done a pretty good job maintaining Kodi under Debian.

HDMI CEC

Only after I decommissioned my RPi did I come to notice how convenient the HDMI CEC functionality is. It turns out standard laptops don't ship with CEC functionality; even my laptop, with its Micro-HDMI Out port, has no CEC capabilities. As far as I know, the RPi came with the Pulse-Eight CEC module, so the obvious first thought was to opt for a compatible external module from the same vendor; but it comes with a hefty price tag that I wasn't willing to pay.

WiFi Remotes

Kodi has a very well implemented network interface for almost all its features. One could take the Yatse or Music Pump Kodi Remote Android applications, which work very well with Kodi.

But wifi can be flaky at times. In particular, my experience with Realtek network devices hasn't been very good. The driver support in Linux is okay, but there are many firmware bugs to deal with. In my case, the machine loses its wifi network every once in a while. And it turns out that, for this machine with this network device, I'm not the only one running into such problems.

And to add to that, this is an UltraBook, which means it doesn't have an Ethernet port. So I haven't had much choice other than to live with it.

The WiFi chip also provides the Bluetooth module, which so far I had not used much. All this time, the relevant BT modules had been blacklisted in my /etc/modprobe.d/blacklist-memstick.conf (now commented out):

rrs@lenovo:~$ cat /etc/modprobe.d/blacklist-memstick.conf 
blacklist memstick
blacklist rtsx_usb_ms

# And bluetooth too
#blacklist btusb
#blacklist btrtl
#blacklist btbcm
#blacklist btintel
#blacklist bluetooth
21:21 ♒♒♒   ☺ 😄    

Also keep in mind that the driver for my card gives a very misleading kernel message, which is one of the many reasons for this blog post: so that I don't forget it a couple of months later. The missing firmware error message is okay to ignore, as per this upstream comment.

Jun 14 17:17:08 lenovo kernel: usbcore: registered new interface driver btusb
Jun 14 17:17:08 lenovo systemd[1]: Mounted /boot/efi.
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: examining hci_ver=06 hci_rev=000b lmp_ver=06 lmp_subver=8723
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: rom_version status=0 version=1
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: loading rtl_bt/rtl8723b_config.bin
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: firmware: failed to load rtl_bt/rtl8723b_config.bin (-2)
Jun 14 17:17:08 lenovo kernel: firmware_class: See https://wiki.debian.org/Firmware for information about missing firmware
Jun 14 17:17:08 lenovo kernel: bluetooth hci0: Direct firmware load for rtl_bt/rtl8723b_config.bin failed with error -2
Jun 14 17:17:08 lenovo kernel: Bluetooth: hci0: RTL: cfg_sz -2, total sz 22496

This device’s network + bt are on the same chip.

01:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8723BE PCIe Wireless Network Adapter

And then, when the btusb module is initialized (along with the misleading driver message), you'll get the following in your USB device listing:

Bus 002 Device 005: ID 0bda:b728 Realtek Semiconductor Corp. Bluetooth Radio

Sony PlayStation 3 BD Remote

Almost 10 years ago, I bought the PS3 and many of its accessories. The remote had just been rotting on the shelf. It had rusted so badly that it is better described with these pics.

Rusted inside

Rusted inside and cover

Rusted spring

The rust was so much that the battery holding spring gave up.

A little bit scrubbing and cleaning has gotten it working. I hope it lasts for some time before I find time to open it up and give it a full clean-up.

Pairing the BD Remote to laptop

Honestly, given the condition of the hardware and software on both ends, I did not have much hope of getting this to work. In all my years of computer usage, I can hardly recall many days when I've made use of BT, probably because the full BT stack wasn't that well integrated in Linux earlier. And I mostly used to disable it in hardware and software to save on battery.

All the results the internet yielded talked about tools/scripts that were either not working, pointed to broken links, etc.

These days, bluez comes with a nice utility, bluetoothctl. It was a nice experience using it.

First, start your bluetooth service and ensure that the device talks well with the kernel

rrs@lenovo:~$ systemctl status bluetooth                                                                                                          
● bluetooth.service - Bluetooth service
     Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; vendor preset: enabled)                                                      
     Active: active (running) since Mon 2020-06-15 12:54:58 IST; 3s ago                                                                           
       Docs: man:bluetoothd(8)                                                                                                                    
   Main PID: 310197 (bluetoothd)                                                                                                                  
     Status: "Running"                                                                                                                            
      Tasks: 1 (limit: 9424)                                                                                                                      
     Memory: 1.3M                                                                                                                                 
     CGroup: /system.slice/bluetooth.service                                                                                                      
             └─310197 /usr/lib/bluetooth/bluetoothd                                                                                               
                                                                                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Starting Bluetooth service...                                                                                  
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth daemon 5.50                                                                                  
Jun 15 12:54:58 lenovo systemd[1]: Started Bluetooth service.                                                                                     
Jun 15 12:54:58 lenovo bluetoothd[310197]: Starting SDP server                                                                                    
Jun 15 12:54:58 lenovo bluetoothd[310197]: Bluetooth management interface 1.15 initialized                                                        
Jun 15 12:54:58 lenovo bluetoothd[310197]: Sap driver initialization failed.                                                                      
Jun 15 12:54:58 lenovo bluetoothd[310197]: sap-server: Operation not permitted (1)                                                                
12:55 ♒♒♒   ☺ 😄                                                                                                                               

Next is to discover and connect to your device:

rrs@lenovo:~$ bluetoothctl 
Agent registered
[bluetooth]# devices
Device E6:3A:32:A4:31:8F MI Band 2
Device D4:B8:FF:43:AB:47 MI RC
Device 00:1E:3D:10:29:0F BD Remote Control
[CHG] Device 00:1E:3D:10:29:0F Connected: yes

[BD Remote Control]# info 00:1E:3D:10:29:0F
Device 00:1E:3D:10:29:0F (public)
        Name: BD Remote Control
        Alias: BD Remote Control
        Class: 0x0000250c
        Paired: no
        Trusted: yes
        Blocked: no
        Connected: yes
        LegacyPairing: no
        UUID: Human Interface Device... (00001124-0000-1000-8000-00805f9b34fb)
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v054Cp0306d0100
[bluetooth]# 

In case of the Sony BD Remote, there's no need to pair. In fact, trying to pair fails: it prompts for the PIN code, but neither 0000 nor 1234 is accepted.

So, the working steps so far are to Trust the device and then Connect the device.
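
In bluetoothctl, that amounts to something like the following, using the device address from the listing above:

[bluetooth]# trust 00:1E:3D:10:29:0F
[bluetooth]# connect 00:1E:3D:10:29:0F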

For the sake of future use, I also populated /etc/bluetooth/input.conf based on suggestions from the internet. Note: the advertised keymappings in this config file do not work; I'm only using it for the power saving measure of instructing the BT connection to sleep after 3 minutes.

rrs@priyasi:/tmp$ cat input.conf 
# Configuration file for the input service

# This section contains options which are not specific to any
# particular interface
[General]

# Set idle timeout (in minutes) before the connection will
# be disconnect (defaults to 0 for no timeout)
IdleTimeout=3

# Enable HID protocol handling in userspace input profile
# Defaults to false (HIDP handled in HIDP kernel module)
#UserspaceHID=true

# Limit HID connections to bonded devices
# The HID Profile does not specify that devices must be bonded, however some
# platforms may want to make sure that input connections only come from bonded
# device connections. Several older mice have been known for not supporting
# pairing/encryption.
# Defaults to false to maximize device compatibility.
#ClassicBondedOnly=true

# LE upgrade security
# Enables upgrades of security automatically if required.
# Defaults to true to maximize device compatibility.
#LEAutoSecurity=true
#

#[00:1E:3D:10:29:0F]
[2c:33:7a:8e:d6:30]

[PS3 Remote Map]
# When the 'OverlayBuiltin' option is TRUE (the default), the keymap uses
# the built-in keymap as a starting point.  When FALSE, an empty keymap is
# the starting point.
#OverlayBuiltin = TRUE
#buttoncode = keypress    # Button label = action with default key mappings
#OverlayBuiltin = FALSE
0x16 = KEY_ESC            # EJECT = exit
0x64 = KEY_MINUS          # AUDIO = cycle audio tracks
0x65 = KEY_W              # ANGLE = cycle zoom mode
0x63 = KEY_T              # SUBTITLE = toggle subtitles
0x0f = KEY_DELETE         # CLEAR = delete key
0x28 = KEY_F8             # /TIME = toggle through sleep
0x00 = KEY_1              # NUM-1
0x01 = KEY_2              # NUM-2
0x02 = KEY_3              # NUM-3
0x03 = KEY_4              # NUM-4
0x04 = KEY_5              # NUM-5
0x05 = KEY_6              # NUM-6
0x06 = KEY_7              # NUM-7
0x07 = KEY_8              # NUM-8
0x08 = KEY_9              # NUM-9
0x09 = KEY_0              # NUM-0
0x81 = KEY_F2             # RED = red
0x82 = KEY_F3             # GREEN = green
0x80 = KEY_F4             # BLUE = blue
0x83 = KEY_F5             # YELLOW = yellow
0x70 = KEY_I              # DISPLAY = show information
0x1a = KEY_S              # TOP MENU = show guide
0x40 = KEY_M              # POP UP/MENU = menu
0x0e = KEY_ESC            # RETURN = back/escape/cancel
0x5c = KEY_R              # TRIANGLE/OPTIONS = cycle through recording options
0x5d = KEY_ESC            # CIRCLE/BACK = back/escape/cancel
0x5f = KEY_A              # SQUARE/VIEW = Adjust Playback timestretch
0x5e = KEY_ENTER          # CROSS = select
0x54 = KEY_UP             # UP = Up/Skip forward 10 minutes
0x56 = KEY_DOWN           # DOWN = Down/Skip back 10 minutes
0x57 = KEY_LEFT           # LEFT = Left/Skip back 5 seconds
0x55 = KEY_RIGHT          # RIGHT = Right/Skip forward 30 seconds
0x0b = KEY_ENTER          # ENTER = select
0x5a = KEY_F10            # L1 = volume down
0x58 = KEY_J              # L2 = decrease the play speed
0x51 = KEY_HOME           # L3 = commercial skip previous
0x5b = KEY_F11            # R1 = volume up
0x59 = KEY_U              # R2 = increase the play speed
0x52 = KEY_END            # R3 = commercial skip next
0x43 = KEY_F9             # PS button = mute
0x50 = KEY_M              # SELECT = menu (as per PS convention)
0x53 = KEY_ENTER          # START = select / Enter (matches terminology in mythwelcome)
0x30 = KEY_PAGEUP         # PREV = jump back (default 10 minutes)
0x76 = KEY_J              # INSTANT BACK (newer RCs only) = decrease the play speed
0x75 = KEY_U              # INSTANT FORWARD (newer RCs only) = increase the play speed
0x31 = KEY_PAGEDOWN       # NEXT = jump forward (default 10 minutes)
0x33 = KEY_COMMA          # SCAN BACK =  decrease scan forward speed / play
0x32 = KEY_P              # PLAY = play/pause
0x34 = KEY_DOT            # SCAN FORWARD decrease scan backard speed / increase playback speed; 3x, 5, 10, 20, 30, 60, 120, 180
0x60 = KEY_LEFT           # FRAMEBACK = Left/Skip back 5 seconds/rewind one frame
0x39 = KEY_P              # PAUSE = play/pause
0x38 = KEY_P              # STOP = play/pause
0x61 = KEY_RIGHT          # FRAMEFORWARD = Right/Skip forward 30 seconds/advance one frame
0xff = KEY_MAX
21:48 ♒ � ♅ ⛢  ☺ 😄    

I have not spent much time finding out why not all the key presses work, especially given that most places on the internet mention these mappings. For me, some of the key scan codes aren't even reported: for keys like L1, L2, L3, R1, R2, R3, Next_Item, and Prev_Item, no codes are generated in the kernel.

If anyone has suggestions, ideas or fixes, I’d appreciate if you can drop a comment or email me privately.

But given my limited goal of getting a simple remote ready to use with Kodi, I was content with only some of the keys working.

Mapping the keys in Kodi

With the limited number of keys detected, the next step was to map those keys to functions Kodi could use. Kodi has a very nice and easy-to-use add-on, Keymap Editor. It is very simple to use for mapping detected keys to the functionality you want. With it, I was able to get a functioning remote for my Kodi HTPC setup.

Update: Wed Jun 17 11:38:20 2020

One annoying problem that breaks the overall experience is the following bug on the driver side, that results in connections not being established instantly.

Once the device goes into sleep mode, waking up and re-establishing a BT connection can, in random attempts, be a multi-poll affair. This can last from a couple of seconds to well over a minute.

Random suggestions on the internet mention disabling the autosuspend functionality for the device in the driver with btusb.enable_autosuspend=n, but that did not help in this case.

Given that this device is enumerated over the USB bus, it probably needs this feature applied to the whole USB tree of the device's chain. Something to investigate over the weekend.

Jun 16 20:41:23 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 7
Jun 16 20:41:43 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 8
Jun 16 20:41:59 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 9
Jun 16 20:42:18 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:10/0005:054C:030>
Jun 16 20:42:18 lenovo kernel: sony 0005:054C:0306.0006: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 20:51:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:11/0005:054C:030>
Jun 16 20:51:59 lenovo kernel: sony 0005:054C:0306.0007: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 3 threads of 1 processes of 1 users.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Successfully made thread 32747 of process 1646 owned by '1000' RT at priority 5.
Jun 16 21:05:55 lenovo rtkit-daemon[1723]: Supervising 4 threads of 1 processes of 1 users.
Jun 16 21:05:56 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 12
Jun 16 21:06:12 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 1
Jun 16 21:06:34 lenovo kernel: Bluetooth: hci0: ACL packet for unknown connection handle 2
Jun 16 21:06:59 lenovo kernel: input: BD Remote Control as /devices/pci0000:00/0000:00:14.0/usb1/1-7/1-7:1.0/bluetooth/hci0/hci0:3/0005:054C:0306>
Jun 16 21:06:59 lenovo kernel: sony 0005:054C:0306.0008: input,hidraw1: BLUETOOTH HID v1.00 Gamepad [BD Remote Control] on 2c:33:7a:8e:d6:30
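
When I do find the time, the experiment will probably be a sketch along these lines: disable runtime power management on the remote's USB parent device via sysfs (the 1-7 port number is taken from the kernel logs above; adjust it for your device):

echo on | sudo tee /sys/bus/usb/devices/1-7/power/control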

Others

There’s a package, kodi-eventclients-ps3, which can be used to talk to the BD Remote. Unfortunately, it isn’t up-to-date. When trying to make use of it, I ran into a couple of problems.

First, the easy one is:

rrs@lenovo:~/ps3pair$ kodi-ps3remote localhost 9777
usr/share/pixmaps/kodi//bluetooth.png
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 220, in <module>
  File "/usr/bin/kodi-ps3remote", line 208, in main
    xbmc.connect(host, port)
    packet = PacketHELO(self.name, self.icon_type, self.icon_file)
  File "/usr/lib/python3/dist-packages/kodi/xbmcclient.py", line 285, in __init__
    with open(icon_file, 'rb') as f:
11:16 ♒♒♒    ☹ 😟=> 1  

This one was simple as it was just a broken path.

The second issue with the tool is a leftover from python2 to python3 conversion.

rrs@lenovo:/etc/bluetooth$ kodi-ps3remote localhost
/usr/share/pixmaps/kodi//bluetooth.png
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Redmi (E8:5A:8B:73:57:44) in range
Living Room TV (E4:DB:6D:24:23:E9) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
Living Room TV (E4:DB:6D:24:23:E9) in range
Redmi (E8:5A:8B:73:57:44) in range
Could not find BD Remote Control. Trying again...
Searching for BD Remote Control
(Hold Start + Enter on remote to make it discoverable)
BD Remote Control (00:1E:3D:10:29:0F) in range
Found BD Remote Control with address 00:1E:3D:10:29:0F
Attempting to pair with remote
Remote Paired.
Traceback (most recent call last):
  File "/usr/bin/kodi-ps3remote", line 221, in <module>
    main()
  File "/usr/bin/kodi-ps3remote", line 212, in main
    if process_keys(remote, xbmc):
  File "/usr/bin/kodi-ps3remote", line 164, in process_keys
    keycode = data.encode("hex")[10:12]
AttributeError: 'bytes' object has no attribute 'encode'
11:24 ♒♒♒    ☹ 😟=> 1  
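
For what it's worth, the fix itself is a one-liner: in Python 3, bytes objects have a .hex() method that produces the same hex string the old encode("hex") idiom did. Something like:

# hypothetical one-line patch for the python2 leftover
sudo sed -i 's/data\.encode("hex")/data.hex()/' /usr/bin/kodi-ps3remote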

Fixing that too did not give me the desired result of using the BD Remote the way I want. So eventually, I gave up and used Kodi's Keymap Editor instead.

Next

Next in line, when I can manage to get some free time, is to improve the Kodi Video Scraper to have a fallback mode. Currently, for files where it cannot determine the content, it rejects the file, resulting in those files not showing up in the collection at all. A better approach would be to fall back, when the scraper cannot determine the content, to the filename scraper.

16 June, 2020 05:38AM by Ritesh Raj Sarraf (rrs@researchut.com)

June 15, 2020

Enrico Zini

Qt5 custom build of Qt Creator

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

These are instructions for building Qt Creator with the custom Qt, so that it can load and use Designer plugins built with it.

Sadly, because we need to be able to load Designer plugins, and to compile custom widgets with the custom Qt and use them in the Designer, we also need to rebuild Qt Creator.

The resulting packaging is at https://github.com/Truelite/qt5custom.

Set up sources

Open the source tarball, and add the Qt Creator packaging:

tar axf qt-creator-enterprise-src-4.12.2.tar.xz
cp -a debian-qtcreator qt-creator-enterprise-src-4.12.2/debian
ln -s qt-creator-enterprise-src-4.12.2.tar.xz qt-creator-enterprise-src_4.12.2.orig.tar.xz

If needed, install the Qt license:

cp qt-license.txt ~/.qt-license

Install build dependencies

You can use apt build-dep to install dependencies manually:

cd qt-creator-enterprise-src-4.12.2
apt build-dep .

Alternatively, you can create an installable .deb metapackage that depends on the build dependencies:

apt install devscripts
mk-build-deps debian-qtcreator/control
apt -f install qt-creator-enterprise-src-build-deps_4.12.2-1_all.deb

Package build

The package is built by debian/rules, based on the excellent work done by the Debian Qt5 maintainers.

After installing the build dependencies, you can build like this:

cd qt-creator-enterprise-src-4.12.2
debuild -us -uc -rfakeroot

In debian/rules you can configure NUMJOBS with the number of available CPUs in the machine, to have parallel builds.

debian/rules automatically picks qt5custom as the Qt version to use for the build.

NOTE: Qt Creator 4.12.2 will NOT build if qtbase5custom-armhf-dev is installed. One needs to make sure to have qtbase5custom-dev installed, but NOT qtbase5custom-armhf-dev. Despite quite a bit of investigation, I have been unable to understand why, if both are installed, Qt Creator's build chooses the wrong one, and fails the build.

Build output

Building sources generates 4 packages:

  • qtcreator-qt5custom: the program
  • qtcreator-qt5custom-data: program data
  • qtcreator-qt5custom-doc: documentation
  • qtcreator-qt5custom-dbgsym: debugging symbols

Using the custom Qt Creator Enterprise

The packages are built with qt5custom and install their content in /opt/qt5custom.

The packages are coinstallable with the version of Qt Creator packaged in Debian.

The custom Qt Creator executable is installed as /opt/qt5custom/bin/qtcreator, which is not in $PATH by default. To run it, you can explicitly use /opt/qt5custom/bin/qtcreator; qtcreator run without an explicit path runs the standard Debian version.
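
If you use the custom build often, one option (a suggestion of mine, not something the packaging does for you) is to put its directory at the front of $PATH for a shell session:

export PATH=/opt/qt5custom/bin:$PATH
qtcreator    # now runs the custom build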

Installing Designer plugins

Designer plugins can be compiled with qt5custom and installed in /opt/qt5custom/plugins/designer/.

Cross-building with Qt Creator

Once the cross-build Qt5 packages are installed, one should see it appear in the Qt Creator kit configuration, where it can be selected and used normally.

If one sets device type to "Generic Linux Device", chooses a compiler for "arm 32bit" and sets Qt Version to "qt5custom-armhf", one can smoothly cross-compile and execute and debug the built program directly on the device.

15 June, 2020 01:00PM

Mark Brown

Book Club: Zettelkasten

Recently I was part of a call with Daniel and Lars to discuss Zettelkasten, a system for building up a cross-referenced archive of notes to help with research and study that has been getting a lot of discussion recently, the key thing being the building of links between ideas. Tomas Vik provided an overview of the process that we all found very helpful, and the information vs knowledge picture in Eugene Yan's blog on the topic (by @gapingvoid) really helped us crystallize the goals. It's not at all new and, as Lars noted, it has a lot of similarities with wikis in terms of what it produces, but it couples this with an emphasis on the process and the constant generation of new entries, which Daniel found similar to some of the Getting Things Done recommendations. We all liked the emphasis on constant practice and how that can help build skills around effective note taking, clear writing, and building links between ideas.

Both Daniel and Lars already have note taking practices that they find useful, combinations of journalling and building up collections of notes of learnings over time, and felt that there could be value in integrating aspects of Zettelkasten into these practices, so we talked quite a bit about how that could be done. There was a consensus that journalling is useful, so the main idea we had was to keep maintaining the journal, using it as an inbox and setting aside time to write entries into a Zettelkasten. This is also a useful way to approach recording things when away from a computer: take notes and write them up later. Daniel suggested that one way to migrate existing notes might be to simply start anew, moving things over from old notes as required, and then after a suitably long period (for example a year) review and migrate anything that was left.

We were all concerned about the idea of using any of the non-free solutions for something that is intended to be used long term, especially where the database isn’t in an easily understood format. Fortunately there are free software tools like Zettlr which seem to address these concerns well.

This was a really useful discussion, it really helps to bounce ideas off each other and this was certainly an interesting topic to learn about with some good ideas which will hopefully be helpful to us.

15 June, 2020 12:52PM by broonie

hackergotchi for Bits from Debian

Bits from Debian

Report of the Debian Perl Sprint 2020

Eight members of the Debian Perl team met online between May 15 and May 17 2020, in lieu of a planned physical sprint meeting. Work focussed on preparations for bullseye, and continued maintenance of the large number of perl modules maintained by the team.

Whilst an online sprint cannot fully replace an in-person sprint in terms of focussing attention, the weekend was still very productive, and progress was made on a range of topics including:

  • Reducing technical debt by removing unmaintained packages
  • Beginning packaging and QA for the next major release of perl, 5.32
  • Deciding on a team policy for hardening flags
  • Addressing concerns with Alien::*, a set of packages designed to download source code
  • Developing a proposal for debian/NEWS.Developer, to complement debian/NEWS
  • Developing a plan to enable SSL verification in HTTP::Tiny by default

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank OpusVL for providing the Jitsi instance for the weekend.

15 June, 2020 11:40AM by Dominic Hargreaves

Arturo Borrero González

A better Toolforge: a technical deep dive

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

In the previous post, we shared the context on the recent Kubernetes upgrade that we introduced in the Toolforge service. Today we would like to dive a bit more in the technical details.

Custom admission controllers

One of the key components of Toolforge Kubernetes are our custom admission controllers. We use them to validate and enforce that the usage of the service is what we intended. Basically, we have 2 of them: one for Ingress objects and one for the docker registry.

The source code is written in Golang, which is pretty convenient for natively working in a Kubernetes environment. Both code repositories include extensive documentation: how to develop, test, use, and deploy them. We decided to go with custom admission controllers because we couldn’t find any native (or built-in) Kubernetes mechanism to accomplish the same sort of checks on user activity.

With the Ingress controller, we want to ensure that Ingress objects only handle traffic to our internal domains, which, at the time of this writing, are toolforge.org (our new domain) and tools.wmflabs.org (legacy). We safe-list the kube-system namespace and the tool-fourohfour namespace because both need special consideration. More on the Ingress setup later.

The registry controller is pretty simple as well. It ensures that only our internal docker registry is used for user-scheduled containers running in Kubernetes. Again, we exclude from the checks containers running in the kube-system namespace (those used by Kubernetes itself). Other than that, the validation itself is pretty easy. For some extra containers we run (like those related to Prometheus metrics), what we do is simply upload those docker images to our internal registry. The controls provided by this admission controller help us validate that only FLOSS software is run in our environment, which is one of the core rules of Toolforge.

RBAC and Pod Security Policy setup

I would like to comment next on our RBAC and Pod Security Policy setup. Using Pod Security Policies (or PSPs) we establish a set of constraints on what containers can and can't do in our cluster. We have several PSPs configured in our setup:

  • Privileged policy: used by Kubernetes containers themselves—basically a very relaxed set of constraints that are required for the system itself to work.
  • Default policy: a bit more restricted than the privileged policy; it is intended for admins to deploy services, but it isn't currently in use.
  • Toolforge user policies: this applies to user-scheduled containers, and there are some obvious restrictions here: we only allow unprivileged pods, we control which HostPath is available for pods, use only default Linux capabilities, etc.

Each user can interact with their own namespace (this is how we achieve multi-tenancy in the cluster). Kubernetes knows about each user by means of TLS certs, and for that we have RBAC. Each user has a rolebinding to a shared cluster-role that defines how Toolforge tools can use the Kubernetes API. The following diagram shows the design of our RBAC and PSP in our cluster:

RBAC and PSP for Toolforge, original image in wikitech

I mentioned that we know about each user by means of TLS certificates. This is true, and in fact, there is a key component in our setup called maintain-kubeusers. This custom piece of Python software is run as a pod inside the cluster and is responsible for reading our external user database (LDAP) and generating the required credentials, namespaces, and other configuration bits for them. With the TLS cert, we basically create a kubeconfig file that is then written into the homes NFS share, so each Toolforge user has it in their shell home directory.
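
From the user's point of view, the result is that plain kubectl simply works from the bastion shell. A minimal sketch (the tool name is hypothetical; per-tool namespaces follow the tool-* pattern mentioned above):

kubectl get pods                  # credentials come from the kubeconfig written by maintain-kubeusers
kubectl -n tool-mytool get all    # inspect the tool's own namespace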

Networking and Ingress setup

With the basic security controls in place, we can move on to explaining our networking and Ingress setup. Yes, the Ingress word might be a bit overloaded already, but we refer here to Ingress as the path that end-users follow from their web browser in their local machine to a webservice running in the Toolforge cluster.

Some additional context here. Toolforge is not only Kubernetes, but we also have a Son of GridEngine deployment, a job scheduler that covers some features not available in Kubernetes. The grid can also run webservices, although we are encouraging users to migrate them to Kubernetes. For compatibility reasons, we needed to adapt our Ingress setup to accommodate the old web grid. Deciding the layout of the network and Ingress was definitely something that took us some time to figure out because there is not a single way to do it right.

The following diagram can be used to explain the different steps involved in serving a web service running in the new Toolforge Kubernetes.

Toolforge k8s network topology, original image in Wikitech

The end-user HTTP/HTTPs request first hits our front proxy in (1). Running here is NGINX with a custom piece of LUA code that is able to decide whether to contact the web grid or the new Kubernetes cluster. TLS termination happens here as well, for both domains (toolforge.org and tools.wmflabs.org). Note this proxy is reachable from the internet, as it uses a public IPv4 address, a floating IP from CloudVPS, the infrastructure service we provide based on Openstack. Remember that our Kubernetes is directly built in virtual machines–a bare-metal type deployment.

If the request is directed to a webservice running in Kubernetes, the request now reaches haproxy in (2), which knows the cluster nodes that are available for Ingress. The original 80/TCP packet is now translated to 30000/TCP; this is the TCP port we use internally for the Ingress traffic. This haproxy instance also provides load-balancing for the Kubernetes API, using 6443/TCP. It's worth mentioning that unlike the Ingress, the API is only reachable from within the cluster and not from the internet.

We have a NGINX-Ingress NodePort service listening on 30000/TCP on every Kubernetes worker node in (3); this helps the request eventually reach the actual NGINX-Ingress pod in (4), which is listening on 8080/TCP. You can see in the diagram how in the API server (5) we hook the Ingress admission controller (6) to validate Kubernetes Ingress configuration objects before allowing them in for processing by NGINX-Ingress (7).

The NGINX-Ingress process knows which tools webservices are online and how to contact them by means of an intermediate Service object in (8). This last Service object means the request finally reaches the actual tool pod in (9). At this point, it is worth noting that our Kubernetes cluster uses internally kube-proxy and Calico, both using Netfilter components to handle traffic.

tools-webservice

Most user-facing operations are simplified by means of another custom piece of Python code: tools-webservice. This package provides users with the webservice command line utility in our shell bastion hosts. Typical usage is to just run webservice start|stop|status. This utility creates all the required Kubernetes objects on-demand like Deployment, ReplicaSet, Ingress and Service to ease deploying web apps in Toolforge. Of course, advanced users can interact directly with Kubernetes API and create their custom configuration objects. This utility is just a wrapper, a shortcut.
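
A minimal session sketch (the tool name is hypothetical, and the exact webservice types available depend on the deployment):

become mytool                                    # switch to the tool account on the bastion
webservice --backend=kubernetes python3.7 start  # create the Deployment, Service, Ingress...
webservice status
webservice stop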

tool-fourohfour and tool-k8s-status

The last couple of custom components we would like to mention are the tool-fourohfour and tool-k8s-status web services. These two utilities run inside the cluster as if they were any other user-created tool. The fourohfour tool allows for a controlled handling of HTTP 404 errors, and it works as the default NGINX-Ingress backend. The k8s-status tool shows plenty of information about the cluster itself and each tool running in the cluster, including links to the Server Admin Log, an auto-generated grafana dashboard for metrics, and more.

For metrics, we use an external Prometheus server that contacts the Kubernetes cluster to scrape metrics. We created a custom metrics namespace in which we deploy all the different components we use to observe the behavior of the system:

  • metrics-server: used by some utilities like kubectl top.
  • kube-state-metrics: provides advanced metrics about the state of the cluster.
  • cadvisor: obtains fine-grained metrics about pods, deployments, nodes, etc.
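
As an example of what this buys us, per-namespace CPU usage can be pulled out of Prometheus through its standard HTTP API. A throwaway sketch, with a hypothetical server address:

import requests

PROM = "http://prometheus.example.org:9090"  # hypothetical address
QUERY = "sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)"

resp = requests.get(PROM + "/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
for item in resp.json()["data"]["result"]:
    print(item["metric"].get("namespace", "?"), item["value"][1])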

All the Prometheus data we collect is used in several different Grafana dashboards, some aimed at users, like the ones linked from the k8s-status tool, and others meant for us engineers. The latter are for internal use but are still public, like the Ingress-specific dashboard or the cluster state dashboard. Working publicly, in a transparent way, is key to the success of CloudVPS in general and Toolforge in particular. As we mentioned in the previous post, all the engineering work done here was shared with community members.

By the community, for the community

We think this post sheds some light on how the Toolforge Kubernetes service works, and we hope it can inspire others trying to build similar services or, even better, help us improve Toolforge itself. Since this was first put into production some months ago, we have already identified room for improvement in a couple of the components. As with many other engineering products, we will follow an iterative approach to evolving the service. Note that Toolforge is maintained by the Wikimedia Foundation, but you can think of it as a service by the community, for the community. We will keep an eye on it, and we maintain a list of feature requests and things to improve for the future. We are looking forward to it!

This post was originally published in the Wikimedia Tech blog, and is authored by Arturo Borrero Gonzalez and Brooke Storm.

15 June, 2020 09:00AM

Russ Allbery

Radical haul

Along with the normal selection of science fiction and fantasy, a few radical publishers have done book giveaways due to the current political crisis in the United States. I've been feeling for a while like I've not done my homework on diverse political theory, so I downloaded those. (That's the easy part; making time to read them is the hard part, and we'll see how that goes.)

Yarimar Bonilla & Marisol LeBrón (ed.) — Aftershocks of Disaster (non-fiction anthology)
Jordan T. Camp & Christina Heatherton (ed.) — Policing the Planet (non-fiction anthology)
Zachary D. Carter — The Price of Peace (non-fiction)
Justin Akers Chacón & Mike Davis — No One is Illegal (non-fiction)
Grace Chang — Disposable Domestics (non-fiction)
Suzanne Collins — The Ballad of Songbirds and Snakes (sff)
Angela Y. Davis — Freedom is a Constant Struggle (non-fiction)
Danny Katch — Socialism... Seriously (non-fiction)
Naomi Klein — The Battle for Paradise (non-fiction)
Naomi Klein — No is Not Enough (non-fiction)
Naomi Kritzer — Catfishing on CatNet (sff)
Derek Künsken — The Quantum Magician (sff)
Rob Larson — Bit Tyrants (non-fiction)
Michael Löwy — Ecosocialism (non-fiction)
Joe Macaré, Maya Schenwar, et al. (ed.) — Who Do You Serve, Who Do You Protect? (non-fiction anthology)
Tochi Onyebuchi — Riot Baby (sff)
Sarah Pinsker — A Song for a New Day (sff)
Lina Rather — Sisters of the Vast Black (sff)
Marta Russell — Capitalism and Disability (non-fiction)
Keeanga-Yamahtta Taylor — From #BlackLivesMatter to Black Liberation (non-fiction)
Keeanga-Yamahtta Taylor (ed.) — How We Get Free (non-fiction anthology)
Linda Tirado — Hand to Mouth (non-fiction)
Alex S. Vitale — The End of Policing (non-fiction)
C.M. Waggoner — Unnatural Magic (sff)
Martha Wells — Network Effect (sff)
Kai Ashante Wilson — Sorcerer of the Wildeeps (sff)

15 June, 2020 02:50AM

June 14, 2020

Steve Kemp

Writing a brainfuck compiler.

So last night I had the idea that it might be fun to write a Brainfuck compiler, to convert BF programs into assembly language, where they'd run nice and quickly.

I figured I could allocate a day to do the work, and it would be a pleasant distraction on a Sunday afternoon. As it happened it only took me three hours from start to finish.

There are only a few instructions involved in Brainfuck (a small interpreter sketch follows the list):

  • >
    • increment the data pointer (to point to the next cell to the right).
  • <
    • decrement the data pointer (to point to the next cell to the left).
  • +
    • increment (increase by one) the byte at the data pointer.
  • -
    • decrement (decrease by one) the byte at the data pointer.
  • .
    • output the byte at the data pointer.
  • ,
    • accept one byte of input, storing its value in the byte at the data pointer.
  • [
    • if the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command.
  • ]
    • if the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command.
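
To make the semantics concrete, here is a minimal interpreter in Python, a sketch for reference rather than the compiler described in this post. It assumes the usual 30,000-cell tape of wrapping bytes:

import sys

def run(program):
    # Pre-compute the matching bracket positions for [ and ].
    stack, match = [], {}
    for pos, ch in enumerate(program):
        if ch == "[":
            stack.append(pos)
        elif ch == "]":
            open_pos = stack.pop()
            match[open_pos], match[pos] = pos, open_pos

    tape = [0] * 30000
    ptr = ip = 0
    while ip < len(program):
        ch = program[ip]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            sys.stdout.write(chr(tape[ptr]))
        elif ch == ",":
            data = sys.stdin.buffer.read(1)
            tape[ptr] = data[0] if data else 0
        elif ch == "[" and tape[ptr] == 0:
            ip = match[ip]          # will step past the matching ]
        elif ch == "]" and tape[ptr] != 0:
            ip = match[ip]          # will step past the matching [
        ip += 1

run("++++++++[>++++++++<-]>+.")     # prints "A"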

The Wikipedia link earlier shows how you can convert cleanly to a C implementation, so my first version just did that (a rough sketch of such a translator follows the list):

  • Read a BF program.
  • Convert to a temporary C-source file.
  • Compile with gcc.
  • End result: an executable which you can run.
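
Something along these lines, as a rough sketch only (a real compiler needs more care around I/O and error handling):

import os
import subprocess
import sys
import tempfile

# Each Brainfuck instruction maps to one line of C.
C_FOR = {
    ">": "idx++;",
    "<": "idx--;",
    "+": "array[idx]++;",
    "-": "array[idx]--;",
    ".": "putchar(array[idx]);",
    ",": "array[idx] = getchar();",
    "[": "while (array[idx]) {",
    "]": "}",
}

def compile_bf(source, output="a.out"):
    body = "\n".join(C_FOR[ch] for ch in source if ch in C_FOR)
    c_code = ("#include <stdio.h>\n"
              "char array[30000]; int idx = 0;\n"
              "int main(void) {\n" + body + "\nreturn 0;\n}\n")
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as tmp:
        tmp.write(c_code)
        path = tmp.name
    subprocess.run(["gcc", "-O3", "-o", output, path], check=True)
    os.unlink(path)

compile_bf(open(sys.argv[1]).read())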

The second step was just as simple (a sketch of the loop-label handling follows the list):

  • Read a BF program.
  • Convert to a temporary assembly language file.
  • Compile with nasm, link with ld.
  • End result: an executable which you can run.
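
The only subtle part is pairing each [ with its matching ], which a stack of labels handles. A rough sketch of that generation step (with the . and , instructions omitted for brevity):

# Map the simple instructions straight to nasm; r8 is the data pointer.
ASM_FOR = {
    ">": "  add r8, 1",
    "<": "  sub r8, 1",
    "+": "  add byte [r8], 1",
    "-": "  sub byte [r8], 1",
}

def generate(source):
    out = ["global _start", "section .text", "_start:", "  mov r8, stack"]
    stack = []
    for pos, ch in enumerate(source):
        if ch in ASM_FOR:
            out.append(ASM_FOR[ch])
        elif ch == "[":
            # Open a loop: test the current cell, skip past if zero.
            stack.append(pos)
            out += ["label_loop_%d:" % pos,
                    "  cmp byte [r8], 0",
                    "  je close_loop_%d" % pos]
        elif ch == "]":
            # Close the innermost open loop.
            open_pos = stack.pop()
            out += ["  jmp label_loop_%d" % open_pos,
                    "close_loop_%d:" % open_pos]
    out += ["  mov rax, 60", "  mov rdi, 0", "  syscall",  # exit(0)
            "section .bss", "stack: resb 300000"]
    return "\n".join(out)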

The following program writes the string "Hello World!" to the console:

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

My C-compiler converted that to the program:

extern int putchar(int);
extern int getchar(void);

char array[30000];
int idx = 0;

int main (int argc, char *argv[]) {
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;
  array[idx]++;

  while (array[idx]) {
    idx++;
    array[idx]++;
    array[idx]++;
    array[idx]++;
    array[idx]++;

    while (array[idx]) {
      idx++;
      array[idx]++;
      array[idx]++;
      idx++;
      array[idx]++;
      array[idx]++;
      array[idx]++;
    ..
    ..

The assembly language version is even longer:


global _start
section .text

_start:
  mov r8, stack
  add byte [r8], 8
label_loop_8:
  cmp byte [r8], 0
  je close_loop_8
  add r8, 1
  add byte [r8], 4
label_loop_14:
  cmp byte [r8], 0
  je close_loop_14
  add r8, 1
  add byte [r8], 2
  add r8, 1
  add byte [r8], 3
  add r8, 1
  add byte [r8], 3
  add r8, 1
  add byte [r8], 1
  sub r8, 4
  sub byte [r8], 1
  jmp label_loop_14
close_loop_14:
  add r8, 1
  add byte [r8], 1
  ..

  mov rax, 60
  mov rdi, 0
  syscall
section .bss
stack: resb 300000

Annoyingly the assembly language version ran slower than the C-version, which I was sneakily compiling with "gcc -O3 .." to ensure it was optimized.

The first thing I did was convert the compiler to fold adjacent instructions. Instead of generating three separate increment instructions for ">>>", it now generates a single "add r8, 3". That didn't help quite as much as I'd hoped, but it was still a solid win.
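
The folding pass is tiny in outline. A sketch of the idea in Python, not the actual implementation:

from itertools import groupby

def fold(source):
    """Collapse runs of > < + - into (instruction, count) pairs."""
    ops = [c for c in source if c in "><+-.,[]"]
    for ch, run in groupby(ops):
        n = len(list(run))
        if ch in "><+-":
            yield (ch, n)              # one folded op, e.g. add r8, 3
        else:
            yield from [(ch, 1)] * n   # I/O and loops are never folded

list(fold(">>>+++<"))                  # [('>', 3), ('+', 3), ('<', 1)]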

After that I made a minor tweak to the way that loops are handled, comparing at the end of the loop as well as at the start so that a finished loop falls straight through rather than jumping back up just to fail the test, and that shaved off a fraction of a second.

As things stand I think I'm "done". It might be nice to convert the generated assembly language to something gcc can handle, to drop the dependency on nasm, but I don't feel a pressing need for that yet.

It was a fun learning experience, and I think I'll come back to optimization later.

14 June, 2020 04:59PM