April 10, 2020

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.4.6: Bug fix interim version

rcpp logo

Rcpp 1.0.4 was released on March 17, following the usual sequence of fairly involved reverse-depends checks along with a call for community testing issued weeks before the release. In that email I specifically pleaded with folks to pretty-please test non-standard setups:

It would be particularly beneficial if those with “unusual” build dependencies tested it as we would increase overall coverage beyond what I get from testing against 1800+ CRAN packages. BioConductor would also be welcome.

Alas, you can’t always get what you want. Shortly after the release we were made aware that the two (large) pull requests at the bookends of the 1.0.3 to 1.0.4 release period created trouble. Of these two, the earliest PR in the 1.0.4 release upset older-than-CRAN-tested installations, i.e. R 3.3.0 or before. (Why you’d want to run R 3.3.* when R 3.6.3 is current is something I will never understand, but so be it.) This got addressed in two new PRs. And the matching last PR had a bit of sloppiness which left just about everyone alone, but not all those macbook-wearing data scientists using newer macOS SDKs not used by CRAN. In other words, “unusual” setups. But boy, do those folks have an ability to complain. Again, two quick PRs later that was addressed. Along came a minor PR with two more Rcpp::Shield<> uses (as life is too short to manually count PROTECT and UNPROTECT). And then a real issue between R 4.0.0 and Rcpp, first noticed with RcppParallel builds on Windows but then also affecting RcppArmadillo. Another quickly issued fix. So by now the count is up to six, and we arrived at Rcpp 1.0.4.6.

Which is now on CRAN, after having sat there for nearly a full week, and of course with no reason given. Because the powers that be move in mysterious ways. And don’t answer to earthlings like us.

As may transpire here, I am a little tired from all this. I think we can do better, and I think we damn well should, or I may as well throw in the towel and just release to the drat repo where each of the six interim versions was available for all to take as soon as it materialized.

Anyway, here is the state of things. Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 1897 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 191 in BioConductor. And per the (partial) logs of CRAN downloads, we are running steady at one million downloads per month.

The changes for this interim version are summarized below.

Changes in Rcpp patch release version 1.0.4.6 (2020-04-02)

  • Changes in Rcpp API:

    • The exception handler code in #1043 was updated to ensure proper include behavior (Kevin in #1047 fixing #1046).

    • A missing Rcpp_list6 definition was added to support R 3.3.* builds (Davis Vaughan in #1049 fixing #1048).

    • Missing Rcpp_list{2,3,4,5} definitions were added to the Rcpp namespace (Dirk in #1054 fixing #1053).

    • A further update corrected the header include and provided a missing else branch (Mattias Ellert in #1055).

    • Two more assignments are protected with Rcpp::Shield (Dirk in #1059).

  • Changes in Rcpp Attributes:

    • Empty strings are not passed to R CMD SHLIB which was seen with R 4.0.0 on Windows (Kevin in #1062 fixing #1061).

  • Changes in Rcpp Deployment:

    • Travis CI unit tests now run a matrix over the versions of R also tested at CRAN (rel/dev/oldrel/oldoldrel), and coverage runs in parallel for a net speed-up (Dirk in #1056 and #1057).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2356 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 April, 2020 01:37AM

April 09, 2020

Antoine Beaupré

Mumble dreams

With everyone switching to remote tools for social distancing, I've been using Mumble more and more. That's partly by choice -- I don't like videoconferencing much, frankly -- and partly by necessity: sometimes my web browser fails and Mumble is generally more reliable.

A friend on a mailing list recently asked "shouldn't we make Mumble better?" and opened the door for me to go on a long "can I get a pony?" email. Because I doubt anyone on that mailing list has the time or capacity to actually fix those issues, I figured I would copy this to a broader audience in the hope that someone else would pick it up.

Why Mumble rocks

Before I go on with the UI critique, I should show why I care: Mumble is awesome.

When you do manage to configure it correctly, Mumble just works; it's highly reliable. It uses little CPU, both on the client and the server side, and can have rooms with tens if not hundreds of participants. The server can be easily installed and configured: there's a Debian package and resource requirements are minimal. It's basically network-bound. There are at least three server implementations, the official one called Murmur, the minimalist umurmur and Grumble, a Go rewrite.

It has great quality: echo canceling, when correctly configured, is solid and latency is minimal. It has "overlays" so you can use it while gaming or demo'ing in full screen while still having an idea of who's talking. It also supports positional audio for gaming that integrates with popular games like Counterstrike or Half-Life.

It's moderately secure: it doesn't support end-to-end encryption, but client/server communication is encrypted with TLS. It supports a server password and some moderation mechanisms.

UI improvements

Mumble should be smarter about a bunch of things. Having all those settings is nice for geeky control freaks, but it makes the configuration absolutely unusable for most people. Hide most settings by default, and make better defaults.

Specifically, those should be on by default:

  • RNNoise
  • echo cancellation (the proper "monitor" channels)
  • pre-configured shortcut for PTT (Push To Talk) -- right-shift is my favorite
  • "double-PTT" to hold it enabled
  • be more silent by default (I understand why it would want to do voice synthesis, but it would need to be much better at it before being on by default)

The echo test should be more accessible, one or two clicks away from the main UI. I only found out about that feature when someone told me where to find it. This basically means taking it out of the settings page and into its own dialog.

The basic UI should be much simpler. It could look something like Jitsi: just one giant mute button with a list of speakers. Basically:

  1. Take that status bar and make it use the entire space of the main window

  2. Push the chat and room list to separate, optional dialogs (e.g. the room list could be a popup on login, but we don't need to continuously see the damn thing)

  3. Show the name of the person talking in the main UI, along with other speakers (Big Blue Button does this well: just a label that fades away with time after a person talks)

Some features could be better explained. For example, the "overlay" feature makes no sense at all for most users. It only makes sense when you're a gamer and use Mumble alongside another full-screen program, to show you who's talking.

Improved authentication. The current authentication systems in Mumble are somewhat limited: the server can have a shared password to get access to it, and from there it's pretty much free-for-all. There are client certificates, but those are hard to understand and the most common usage scenario is that someone manages to configure it once, forgets about it and then cannot log in again with the same username.

It should be easier to get the audio right. Now, to be fair, this is hard to do in any setup, and Mumble is only a part of it. There are way too many moving parts in Linux for this to be easy: between your hardware, ALSA drivers, Pulseaudio mixers and Mumble, too many things can go wrong. So this is a problem when doing multimedia in general, and in the Linux ecosystem in particular, but Mumble is especially hard to configure.

Improved speaker stats. When you right-click on a user in Mumble, you get detailed stats about the user: packet loss, latency, bandwidth, codecs... It's pretty neat. But that is hard to parse for a user. Jitsi, in contrast, shows a neat little "bar graph" (similar to what you get on a cell phone) with a color code to show network conditions for that user. Then you can drill down to show more information. Having that info for each user would be really useful to figure out which user is causing that echo or latency. Heck, while I'm dreaming, we could do the same thing Jitsi does and tell the user when we detect too much noise on their side and suggest muting!

There are probably more UI issues, but at that point you have basically rebuilt the entire user interface. This problem is hard to fix because UX people are unlikely to have the skills required to hack at an (old) Qt app, and C++ hackers are unlikely to have the best UX skills...

Missing features

Video. It has been on the roadmap since 2011, so I'm not holding my breath. It is, obviously, the key feature missing from the software when compared to other conferencing tools and it's nice to see they are considering it. Screensharing and whiteboarding would also be a nice addition. Unfortunately, all that is a huge undertaking and it's unlikely to happen in the short term. And even if it does, it's possible hard-core Mumble users would be really upset at the change...

A good web app -- a major blocker to the adoption of Mumble is the need for that complex app. If users could join just with a web browser, adoption would be much easier. There is a web app called mumble-web out there, but it seems to work only for listening as there are numerous problems with recording: quality issues, audio glitches, voice activation... The CCC seems to be using that app to stream talk translation, so that part supposedly works correctly.

Dial-in -- allow plain old telephones to call into conferences. There seems to be a program called mumsi that can do this, but it's unmaintained and it's unclear if any of the forks work at all. Update: according to samba, mumsi works, but sometimes freezes and needs to be restarted. Each SIP account shows up as a bot that comes up when someone calls the number. It supports multiple callers, although apparently mumsi crashes after a while with 4 callers. A comment here also mentioned there's a fork that supports using a "pin" as well for dialing in.

Caveats

Now the above will probably not happen soon. Unfortunately, Mumble has had trouble with their release process recently. It took them a long time to even agree on releasing 1.3, and when they did agree, it took them a long time again to actually do the release. There has been much more activity on the Mumble client and web app recently, so hopefully I will be proven wrong. The 1.3.1 release is actually being worked on (correction: I first wrote that it had already come out), which is encouraging.

All in all, Mumble has some deeply ingrained UI limitations. It's built like an app from the 1990s, all the way down to the menu system and "status bar" buttons. It's definitely not intuitive for a new user, and while there's an audio wizard that can help you get started, it doesn't always work and can be confusing in itself.

I understand that I'm just this guy saying "please make this for me ktxbye". I'm not writing this as a critic of Mumble: I love the little guy, the underdog. Mumble has been around forever and it kicks ass. I'm writing this in a spirit of solidarity, in the hope the feedback can be useful and to provide useful guidelines on how things could be improved. I wish I had the time to do this myself and actually help the project beyond just writing, but unfortunately the reality is I'm a poor UI designer and I have little time to contribute to more software projects.

So hopefully someone could take those ideas and make Mumble even greater. And if not, we'll just have to live with it.

Thanks to all the Mumble developers who, over all those years, managed to make and maintain such an awesome product. You rock!

09 April, 2020 07:56PM

hackergotchi for David Bremner

David Bremner

Tangling multiple files

I have lately been using org-mode literate programming to generate example code and beamer slides from the same source. I hit a wall trying to re-use functions in multiple files, so I came up with the following hack. Thanks 'ngz' on #emacs and Charles Berry on the org-mode list for suggestions and discussion.

(defun db-extract-tangle-includes ()
  "Return the values of all #+TANGLE_INCLUDE keywords in the current buffer."
  (goto-char (point-min))
  (let ((case-fold-search t)
        (retval nil))
    ;; Collect the value of every #+TANGLE_INCLUDE: keyword line.
    (while (re-search-forward "^#[+]TANGLE_INCLUDE:" nil t)
      (let ((element (org-element-at-point)))
        (when (eq (org-element-type element) 'keyword)
          (push (org-element-property :value element) retval))))
    retval))

(defun db-ob-tangle-hook ()
  "Ingest each file named by a #+TANGLE_INCLUDE keyword before tangling."
  (let ((includes (db-extract-tangle-includes)))
    (mapc #'org-babel-lob-ingest includes)))

(add-hook 'org-babel-pre-tangle-hook #'db-ob-tangle-hook t)

Use involves something like the following in your org-file.

#+SETUPFILE: presentation-settings.org
#+SETUPFILE: tangle-settings.org
#+TANGLE_INCLUDE: lecture21.org
#+TITLE: GC V: Mark & Sweep with free list

For batch export with make, I do something like

%.tangle-stamp: %.org
    emacs --batch --quick  -l org  -l ${HOME}/.emacs.d/org-settings.el --eval "(org-babel-tangle-file \"$<\")"
    touch $@

09 April, 2020 11:17AM

Elana Hashman

Repack Zoom .debs to remove the `ibus` dependency

For whatever reason, Zoom distributes .debs that have a dependency on ibus. ibus is the "intelligent input bus" package and as far as I'm aware, might be used for emoji input in chat or something?? But is otherwise not actually a dependency of the Zoom package. I've tested this extensively... the client works fine without it.

I noticed when I installed ibus along with the Zoom package that ibus would frequently eat an entire core of CPU. I'm sure this is a bug in the ibus package or service, but I have no energy to try to get that fixed. If it's not a hard dependency, Zoom shouldn't depend on it in the first place.

Anyways, here's how you can repack a Zoom .deb to remove the ibus dependency:

scratch=$(mktemp -d)

# Extract package contents
dpkg -x zoom_amd64.deb $scratch

# Extract package control information
dpkg -e zoom_amd64.deb $scratch/DEBIAN

# Remove the ibus dependency
sed -i -E 's/(ibus, |, ibus)//' $scratch/DEBIAN/control

# Rebuild the .deb
dpkg -b $scratch patched_zoom_amd64.deb

Now you can install the patched .deb with

dpkg -i patched_zoom_amd64.deb

The upstream fix would be for Zoom to move the ibus "Dependency" to a "Recommends", but they have been unwilling to do this for over a year.
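Before installing, it is worth sanity-checking what the sed expression actually does to a Depends field. The package names below are hypothetical stand-ins, not the real contents of Zoom's control file; the point is only that the expression removes ibus whether it sits at the start, middle, or end of the list:

```shell
# Exercise the sed expression on sample Depends lines (hypothetical neighbours).
strip_ibus() { sed -E 's/(ibus, |, ibus)//'; }

echo 'Depends: ibus, libglib2.0-0'           | strip_ibus
echo 'Depends: libglib2.0-0, ibus, libx11-6' | strip_ibus
echo 'Depends: libglib2.0-0, ibus'           | strip_ibus
```

All three print the line with ibus and its separating comma gone, and every other package left untouched.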

But wait, what version even is my package?

By the way, you may have also noticed that the Zoom client downloads do not conform to the standard Debian package naming scheme (i.e. including the version in the filename). If you're not sure what version a zoom_amd64.deb package you've downloaded is, you can quickly extract that information with dpkg-deb:

dpkg-deb -I zoom_amd64.deb | grep Version
# Version: 3.5.383291.0407

09 April, 2020 04:00AM by Elana Hashman

April 08, 2020

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Using Jitsi Meet with Puppet for self-hosted video conferencing

Here's a blog post I wrote for the puppet.com blog. Many thanks to Ben Ford and all their team!

With everything that is currently happening around the world, many of us IT folks have had to solve complex problems in a very short amount of time. Pretty quickly at work, I was tasked with finding a way to make virtual meetings easy, private and secure.

Whereas many would have turned to a SaaS offering, we decided to use Jitsi Meet, a modern and fully on-premise FOSS videoconferencing solution. Jitsi works on all platforms by running in a browser and comes with nifty Android and iOS applications.

We've been using our instance quite a bit, and so far everyone from technical to non-technical users have been pretty happy with it.

Jitsi Meet is powered by WebRTC and can be broken into multiple parts across multiple machines if needed. In addition to the webserver running the Jitsi Meet JavaScript code, the base configuration uses the Videobridge to manage users' video feeds, Jicofo as a conference focus to manage media sessions and the Prosody XMPP server to tie it all together.

Here's a network diagram I took from their documentation to show how those applications interact:

A network diagram that shows how the different bits of jitsi meet work together

Getting started with the Jitsi Puppet module

First of all, you'll need a valid domain name and a server with decent bandwidth. Jitsi has published a performance evaluation of the Videobridge to help you spec your instance appropriately. You will also need to open TCP ports 443, 4443 and UDP port 10000 in your firewall. The puppetlabs/firewall module could come in handy here.
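For the firewall part, a manifest using puppetlabs/firewall could look something like the sketch below. This is not from the jitsimeet module's documentation: the rule names and numbering are arbitrary, and the parameters assume the module versions current at the time (which used `action` rather than the later `jump`).

```puppet
# Open the ports Jitsi Meet needs (sketch; rule names are arbitrary)
firewall { '100 allow Jitsi Meet HTTPS and videobridge TCP fallback':
  dport  => [443, 4443],
  proto  => 'tcp',
  action => 'accept',
}

firewall { '101 allow Jitsi Meet media over UDP':
  dport  => 10000,
  proto  => 'udp',
  action => 'accept',
}
```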

Once that is done, you can use the smash/jitsimeet Puppet module on a Debian 10 (Buster) server to spin up an instance. A basic configuration would look like this:

  class { 'jitsimeet':
    fqdn                 => 'jitsi.example.com',
    repo_key             => 'puppet:///files/apt/jitsimeet.gpg',
    manage_certs         => true,
    jitsi_vhost_ssl_key  => '/etc/letsencrypt/live/jitsi.example.com/privkey.pem',
    jitsi_vhost_ssl_cert => '/etc/letsencrypt/live/jitsi.example.com/cert.pem',
    auth_vhost_ssl_key   => '/etc/letsencrypt/live/auth.jitsi.example.com/privkey.pem',
    auth_vhost_ssl_cert  => '/etc/letsencrypt/live/auth.jitsi.example.com/cert.pem',
    jvb_secret           => 'mysupersecretstring',
    focus_secret         => 'anothersupersecretstring',
    focus_user_password  => 'yetanothersecret',
    meet_custom_options  => {
      'enableWelcomePage'         => true,
      'disableThirdPartyRequests' => true,
    },
  }

The jitsimeet module is still pretty young: it clearly isn't perfect and some external help would be very appreciated. If you have some time, here are a few things that would be nice to work on:

  • Tests using puppet-rspec
  • Support for other OSes (only Debian 10 at the moment)
  • Integration with the Apache and Nginx modules

If you use this module to manage your Jitsi Meet instance, please send patches and bug reports our way!

Learn more

08 April, 2020 08:51PM by Louis-Philippe Véronneau

April 07, 2020

hackergotchi for Shirish Agarwal

Shirish Agarwal

GMRT 2020 and lots of stories

First of all, congratulations to all those who got us the 2022 DebConf, so we will finally have a DebConf in India. There is, of course, a lot of work to be done between now and then. For those who would be looking forward to visiting India and especially Kochi, I would suggest you listen to this enriching tale –

I am sorry I used a youtube link but it is too good a podcast not to be shared. Those who don’t want youtube can use the invidio.us link for the same as shared below.

https://www.invidio.us/watch?v=BvjgKuKmnQ4

I am sure there are a lot more details, questions, answers etc., but I would direct them gently to Praveen, Shruti, Balasankar and the rest who are from Kochi to answer if you have any questions about that history.

National Science Day, GMRT 2020

First, as always, we are and were grateful to both NCRA as well as GMRT for taking such good care of us. Even though Akshat was not around, probably getting engaged, a few of us were there: about 6-7 from Mozilla Nasik while the rest represented the FOSS community. Here is a small picture which commemorates the event –

National Science Day, GMRT 2020

There is and was a lot to share about the event. For example, Akshay had brought an RPi Zero as well as an RPi 2 (Raspberry Pis) and showed some things. He had also brought a Debian stable live drive with persistence, although the glare from the sun was so strong that we couldn’t show it clearly to students. This was also the case with the RPis, but still we shared what and how much we could. Maybe next year we either ask them to have double screens or give us a dark room so we can showcase things much better. We did try playing with contrast and all but it didn’t have much of an effect 😦 . Of course, in another stall a few students had used RPis as part of their projects, so at times we did tell some of the newbies to go to those stalls to see and ask about those projects so they would have a much wider experience of things. The Mozilla people were pushing VR as well as Firefox Lite, the browser for mobile.

We also gossiped quite a bit. I shared about indicatelts, a third-party certificate extension, although I dunno if I should file a wnpp about it or not. We had a bad experience earlier when I had put up an RFP (Request for Package), which was accepted, for an extension with similar functionality that we later came to know was calling home and sharing both the URL and the IP address of the sites people were using the extension on. Sadly, it didn’t leave a good taste in the mouth 😦

Delhi Riots

One thing I have been disappointed with is the lack of general awareness about things, especially in the youth. We have people who didn’t know that, for example in the Delhi riots which happened recently, law and order (the Police) lies with the Home Minister of India, Amit Shah. This is perhaps the only capital in the world which has its own Chief Minister but doesn’t have any say on its law and order. And this has been the case for the last 70 years, i.e. since independence. The closest I know so far is the UK, but they too changed their tune in 2012. India, and especially Delhi, seems to be in a time-capsule which, while being dysfunctional, somehow is made to work. In many ways it’s a three-body situation, a body split into three personalities, which often makes governance a messy issue, but that probably is a topic for another day. In fact, Scroll had written a beautiful editorial that full statehood for Delhi was not only Arvind Kejriwal’s (AAP) call but also something that both the BJP as well as the Congress had asked for in the past. In fact, nothing about the policing is in AAP’s power. All salaries, postings and transfers of police personnel are done by the Home Ministry, so if any blame has to be given, it has to be given to the Home Ministry.

American Capitalism and Ventilators

America has had a long history of high-cost healthcare, as can be seen in this edition of USA Today from 2017. The Affordable Care Act was signed into law by President Obama in 2010, which Mr. Trump curtailed when he came into power a couple of years back. An estimated 80,000 people died due to seasonal flu in 2018-19. Similarly, anywhere between 24,000 and 63,000 are supposed to have died from last October to February-March this year. So the richest country can’t take care of a population which is a third of the population of this country, even though the United States has thrice the area that India has. I am sharing this as seasonal flu also strikes the elderly as well as young children more than adults. So in one sense the vulnerable groups overlap, although from some of the recent stats, for Covid-19 even those who are 20+ are also vulnerable, but that’s another story altogether.

If you see the CDC graph of the seasonal flu, it is clear that American health experts knew about it. One other common factor joining both the seasonal flu and Covid is that both need ventilators for the most serious cases. So in 2007 it was decided that the number of ventilators needed to be ramped up; they had approximately 62k ventilators at that point in time all over the U.S. The U.S. in 2010 asked for bids and got one from a small Californian company called Newport Medical Instruments. The going price of ventilators was approximately INR 700,000 at 2010 prices, while Newport said they would be able to mass-produce them at INR 200,000 at 2010 prices. The company got the order and started designing the model, which needed to be certified by the FDA. By 2011 they had the product ready, when a big company called Covidien bought Newport Medical and shut down the project. This was shared in a press release in 2012. The whole story was broken by the New York Times again just a few days ago, which highlighted how America’s capitalism rode roughshod over public health and put people’s lives unnecessarily in jeopardy. If those new-age ventilators had become a reality then not just the U.S. but India and many other countries would have bought them, as every country has the same or similar needs but is unable to pay the high cost, which in many cases would be passed on to citizens either as the price of service, or by raising taxes, or a mixture of both, with the public being none the wiser. Due to the dearth of ventilators, of specialized people to operate them, and of space, there is a possibility that many countries including India may have to make tough choices like the Italian doctors had to make as to whom to give a ventilator, and bear the mental and emotional guilt associated with the choices made.

Some science coverage about diseases in The Wire and other publications

Since Covid coverage broke out, The Wire has been bringing various reports of India’s handling of various epidemics and mysteries, some solved, some still remaining unsolved due to lack of interest or funding or both. The Nipah virus has been amply discussed in the movie Virus (2019), which I shared in the last blog post, and how easily Kerala could have been similar to Italy. Thankfully, only 24 people including a nurse succumbed to that outbreak, as shared in the movie. I had shared about Kerala nurses' professionalism when I was in hospital a couple of years back. It’s no wonder that their understanding of hygiene and nursing procedures is a cut above the rest, hence they are sought after not just in India but world-over, including in the US, the UK and the Middle East. Another study on respiratory illness was brought to my attention by my friend Pavithran.

Possibility of extended lockdown in India

There was talk in the media of an extended lockdown, or better put, an environment is being created so that an extended lockdown can be done. This is probably in part due to a mathematical model and its derivatives shared about a week back by two Indian-origin Cambridge scholars, who predict that a minimum 49-day lockdown may be necessary to flatten the Covid curve.

Predictions of the outcome of the current 21-day lockdown (Source: Rajesh Singh, R. Adhikari, Cambridge University)
Alternative lockdown strategies suggested by the Cambridge model (Source: Rajesh Singh, R. Adhikari, Cambridge University)

India caving to US pressure on Hydroxychloroquine

While there has been a lot of speculation in the U.S. about Hydroxychloroquine as the wonder cure, last night Mr. Trump, in response to a reporter, threatened that there may be retaliation if Mr. Modi says no to Hydroxychloroquine.

As shared before, if youtube is not your cup of tea you can see the same on invidio.us

https://www.invidio.us/watch?v=YP-ewgoJPLw

Now, there have been several instances in the past of the U.S. trying to bully India, going all the way back to 1954. In recent memory, there were sanctions on India by the US under the Atal Bihari Vajpayee government (BJP) in 1998, but he didn’t buckle under the pressure; now we see our current PM taking down our own notification from a day ago and sharing not just Hydroxychloroquine but also Paracetamol with other countries, so it would look as if India is sharing with other countries. Keep in mind that India and Brazil haven’t seen eye to eye on trade agreements of late, and Paracetamol prices have risen in India. The price rise has been because the APIs (Active Pharmaceutical Ingredients) for the same come from China, where the supply chain will take time to be fixed, and we would also have to open up, although should we or should we not is another question altogether. I talk about supply chains as lean supply chains have been the talk since the late 90’s, when the Japanese introduced just-in-time manufacturing, which led to lean supply chains as well as a lot of outsourcing as a consequence. Of course the companies saved money, but at the cost of flexibility, and how this model was perhaps flawed was shared by a series of articles in The Economist as early as 2004, when there were a lot of shocks to that model; it has only been exacerbated since then. There have been frequent shocks to these fragile ecosystems, more so since the 2008 financial meltdown, and this would put more companies out of business than ever before.

The MSME sector in India had already been severely impacted, first by demonetization and then by the horrendous implementation of GST, whose cries can be heard from all sectors. Also, the frequent changing of GST rates has made markets jumpy and investors unsure. Judgements such as retrospective taxes, AGR (Adjusted Gross Revenue) etc. have made not only international investors scared, but also domestic investors. The flight of capital has been noticeable. This I had shared before, when the Indian Government shared the LRS report, which it hasn’t since then. In fact, Outlook Business had an interesting article about it where, incidentally, it talked about localcircles, a community networking platform where you get to know of a lot of things and of which I am also a member.

At the very end, I apologize for not sharing the blog post earlier; I was feeling down, but then I’m not the only one.

07 April, 2020 10:22PM by shirishag75

Reproducible Builds

Reproducible Builds in March 2020

Welcome to the March 2020 report from the Reproducible Builds project. In our reports we outline the most important things that we have been up to over the past month and some plans for the future.

What are reproducible builds?

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security.

However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.


News

The report from our recent summit in Marrakesh was published and is now available in both PDF and HTML formats. A sincere thank you to all of the Reproducible Builds community for the input to the event, and a sincere thank you to Aspiration for preparing and collating this report.

Hartmut Schorrig published a detailed document on how to compile Java applications in such a way that the .jar build artefact is reproducible across builds. A practical and hands-on guide, it details how to avoid unnecessary differences between builds by explicitly declaring an encoding (as the default value differs between Linux and MS Windows systems), ensuring that the generated .jar (a variant of a .zip archive) does not embed any nondeterministic filesystem metadata, and so on.
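The underlying ideas apply beyond Java. As a small illustration (not taken from Schorrig's guide; it uses GNU tar as a stand-in for the archive-creation step, and assumes a GNU userland), pinning file ordering, timestamps and ownership is enough to make two independent runs produce bit-identical archives:

```shell
# Build the same archive twice; any embedded nondeterminism would make them differ.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p build
printf 'hello\n' > build/Main.txt

for i in 1 2; do
  # --sort=name        : fixed member ordering, independent of readdir() order
  # --mtime='@...'     : fixed timestamp instead of the build time
  # --owner/--group=0  : strip the building user's uid/gid
  tar --sort=name --mtime='@1577836800' \
      --owner=0 --group=0 --numeric-owner \
      -cf "out$i.tar" build
done

cmp -s out1.tar out2.tar && echo reproducible
```

Drop any one of those flags and the two archives will generally differ, which is exactly the kind of nondeterminism the .jar guide works to eliminate.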

Janneke gave a quick presentation on GNU Mes and reproducible builds during the lightning talk session at LibrePlanet 2020.

Vagrant Cascadian presented There and Back Again, Reproducibly! (video) at SCaLE 18x in Pasadena, California, which generated some attention on Twitter.

Hervé Boutemy mentioned on our mailing list in a thread titled Rebuilding and checking Reproducible Builds from Maven Central repository that since the update of a central build script (the “parent POM”) every Apache project using the Maven build system should build reproducibly. A follow-up discussion regarding how to perform such rebuilds was also started on the Apache mailing list.

The Telegram instant-messaging platform announced that they had updated their iOS and Android applications and claim that they are reproducible according to their full instructions, verifying that the original source code is exactly the code used to build the versions available on the Apple App Store and Google Play distribution platforms respectively.

Hervé Boutemy also reported about a new project called reproducible-central which aims to allow anyone to rebuild a component from the Maven Central Repository that is expected to be reproducible and check that the result is as expected.

In last month’s report we detailed Omar Navarro Leija’s work in and around an academic paper titled Reproducible Containers, which describes in detail the workings of a user-space container tool called dettrace (PDF). Since then, the PhD student from the University of Pennsylvania presented this tool at the ASPLOS 2020 conference in Lausanne, Switzerland. Furthermore, there were contributions to dettrace from the Reproducible Builds community itself. [][]


Distribution work

openSUSE

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update and also made the following changes within the distribution itself:

Debian

Chris Lamb further refined his merge request for the debian-installer component to allow all arguments from sources.list files (such as “[check-valid-until=no]”) so that we can test the reproducibility of the installer images on the Reproducible Builds project’s own testing infrastructure. (#13)

Holger Levsen filed a number of bug reports against the debrebuild tool that attempts to rebuild a Debian package given a .buildinfo file as input, including:

48 reviews of Debian packages were added, 17 were updated and 34 were removed this month, adding to our knowledge about identified issues. Many issue types were noticed, categorised and updated by Chris Lamb, including:

Finally, Holger opened a bug report against the software running tracker.debian.org, a service for Debian Developers to follow the evolution of packages via web and email interfaces to request that they integrate information from buildinfos.debian.net (#955434) and Chris Lamb kept isdebianreproducibleyet.com up to date. []


Software development

diffoscope

Chris Lamb made the following changes to diffoscope, the Reproducible Builds project’s in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading version 138 to Debian:

  • Improvements:

    • Don’t allow errors with “R” script deserialisation to cause the entire operation to fail, for example if an external library cannot be loaded. (#91)
    • Experiment with memoising output from expensive external commands, e.g. readelf. (#93)
    • Use dumppdf from the python3-pdfminer package if we do not see any other differences from pdftext, etc. (#92)
    • Prevent a traceback when comparing two R .rdx files directly as the get_member method will return a file even if the file is missing. []
  • Reporting:

    • Display the supported file formats in the package long description. (#90)
    • Print a potentially-helpful message if the PyPDF2 module is not installed. []
    • Remove any duplicate comparator descriptions when formatting in the --help output or in the package long description. []
    • Weaken “Install the X package to get a better output” message to “… may produce a better output” as the former is not actually guaranteed. []
  • Misc:

    • Ensure we only parse the recommended packages from --list-debian-substvars when we want them for debian/tests/control generation. []
    • Add upstream metadata file [] and add a Lintian override for upstream-metadata-in-native-source as “we” are upstream. []
    • Inline the RequiredToolNotFound.get_package method’s functionality as it is only used once. []
    • Drop the deprecated “py36 = [..]” argument in the pyproject.toml file. []

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 138 [], as well as updating reprotest — our end-user tool to build the same source code twice in widely differing environments and then check the binaries produced by each build for any differences — to version 0.7.14 [].

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month we wrote a large number of such patches, including:

Project documentation

There was further work performed on our documentation and website this month, including Alex Wilson adding a section regarding using Gradle for reproducible builds in JVM projects [] and Holger Levsen adding the report from our recent summit in Marrakesh [][].

In addition, Chris Lamb made a number of changes, including correcting the syntax of some CSS class formatting [], improving some “filed against” copy [] and correcting a reference to the calendar.monthrange Python method in a utility function. []

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org that, amongst many other tasks, tracks the status of our reproducibility efforts as well as identifies any regressions that have been introduced.

This month, Chris Lamb reworked the web-based package rescheduling tool to:

  • Require an HTTP POST method in the web-based scheduler; not only should HTTP GET requests be idempotent, but this will also allow many future improvements in the user interface. [][][]
  • Improve the authentication error message in said rescheduler to suggest that the developer’s SSL certificate may have expired. []

In addition, Holger Levsen made the following changes:

  • Add a new ath97 subtarget for the OpenWrt distribution.
  • Revisit ordering of Debian suites; sort the experimental distribution last and reverse the ordering of suites to prioritise the suites in development. [][][]
  • Schedule Debian buster and bullseye a little less in order to allow unstable to catch up on the i386 architecture. [][]
  • Various cosmetic changes to the web-based scheduler. [][][][]
  • Improve wordings in the node health maintenance output. []

Lastly, Vagrant Cascadian updated a link to the (formerly) weekly news to our reports page [] and kpcyrd fixed the escaping in an Alpine Linux inline patch []. The usual build nodes maintenance was performed by Holger Levsen [][], Mattia Rizzolo [] and Vagrant Cascadian [][].


If you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:


This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

07 April, 2020 09:30AM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Morphite

Further Switch game recommendations…

Morphite is a first-person space exploration game, with a very distinctive aesthetic, which reminds me a little bit of No Man's Sky. This is a fairly child-friendly game of exploration and discovery. It also reminds me a little bit of Frontier: First Encounters, the second sequel to Elite.

[pic]

It's currently discounted in the Nintendo Switch eShop by 83%, to an all-time low price of £2.29, until 5th May. I've barely scratched the surface of this one, so I don't know how deep the game goes, but it looks promising. Certainly worth sacrificing one Flat White for.

[pic]

07 April, 2020 09:13AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

For real…

Our good friend, Octavio Méndez «Octagesimal», passed away due to complications derived from COVID-19.

A long-time free software supporter, very well known for his craft – and for his teaching – with Blender. A great systems administrator. 45 years old, father of two small girls, husband of our dear friend Claudia.

We are all broken. We will miss you.

For real, those that can still do it: Stay safe. Stay home.

07 April, 2020 07:00AM by Gunnar Wolf

hackergotchi for Steve Kemp

Steve Kemp

A busy few days

Over the past few weeks things have been pretty hectic. Since I'm not working at the moment I'm mostly doing childcare instead. I need a break, now and again, so I've been sending our child to päiväkoti (daycare) two days a week, with him home the rest of the time.

I love taking care of the child, because he's seriously awesome, but it's a hell of a lot of work when most of our usual escapes are unavailable. For example we can't go to the (awesome) Helsinki Central Library as that is closed.

Instead of doing things outdoors we've been baking bread together, painting, listening to music and similar. He's a big fan of any music with drums and shouting, so we've been listening to Rammstein, The Prodigy, and as much Queen as I can slip in without him complaining ("more bang bang!").

I've also signed up for some courses at the Helsinki open university, including Devops with Docker so perhaps I have a future career working with computers? I'm hazy.

Finally I saw a fun post the other day on reddit asking about the creation of a DSL for server-setup. I wrote a reply which basically said two things:

  • First of all you need to define the minimum set of primitives you can execute.
    • (Creating a file, fetching a package, reloading services when a configuration file changes, etc.)
  • Then you need to define a syntax for expressing those rules.
    • Not using YAML. Because Ansible fucked up bigtime with that.
    • It needs to be easy to explain, it needs to be consistent, and you need to decide before you begin if you want "toy syntax" or "programming syntax".
    • Because adding on conditionals, loops, and similar, will ruin everything if you add it once you've started with the wrong syntax. Again, see Ansible.

Anyway I had an idea of just expressing things in a simple fashion, borrowing Puppet syntax (which I guess is just Ruby hash literals). So a module to do stuff with files would just look like this:

file { name   => "This is my rule",
       target => "/tmp/blah",
       ensure => "absent" }

The next thing to do is to allow that to notify another rule, when it results in a change. So you add in:

notify => "Name of rule"

# or
notify => [ "Name of rule", "Name of another rule" ]

You could also express dependencies the other way round:

shell { name => "Do stuff",
        command => "wc -l /etc/passwd > /tmp/foo",
        requires => [ "Rule 1", "Rule 2"] }

Anyway the end result is a simple syntax which allows you to do things; I wrote a file to allow me to take a clean system and configure it to run a simple golang application in an hour or so.

The downside? Well the obvious one is that there's no support for setting up cron jobs, setting up docker images, MySQL usernames/passwords, etc. Just a core set of primitives.

Adding new things is easy, but also an endless job. So I added the ability to run external/binary plugins stored outside the project. Supporting that is simple with the syntax we have:

  • We pass the parameters, as JSON, to STDIN of the binary.
  • We read the result from STDOUT
    • Did the rule result in a change to the system?
    • Or was it a NOP?

All good. People can write modules, if they like, and they can do that in any language they like.

Fun times.

We'll call it marionette, since it's all puppet-inspired.

And that concludes this irregular update.

07 April, 2020 05:53AM

hackergotchi for Norbert Preining

Norbert Preining

QOwnNotes for Debian (update)

Some time ago I posted about QOwnNotes for Debian. My recent experience with the openSUSE Build System has convinced me to move also the QOwnNotes packages there, which allows me to provide builds for Debian/Buster, Debian/testing, and Debian/sid, all for both i386 and amd64 architectures.

To repeat a bit about QOwnNotes: it is a cross-platform plain text and markdown note taking application. By itself, it wouldn’t be something to talk about; we have vim and emacs and everything in between. But QOwnNotes integrates nicely with the Notes application from NextCloud and OwnCloud, as well as providing useful NextCloud integration like old versions of notes, access to deleted files, watching changes, etc.

The new locations for binary packages for both amd64 and i386 architectures are as follows below. To make these repositories work out of the box, you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.

Debian/buster:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-qownnotes/Debian_10  ./

Debian/testing:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-qownnotes/Debian_Testing  ./

Debian/unstable:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-qownnotes/Debian_Unstable  ./

The source can be obtained from either the git repository or the OBS project debian-qownnotes.

Enjoy.

07 April, 2020 04:19AM by Norbert Preining

April 06, 2020

hackergotchi for Joachim Breitner

Joachim Breitner

A Telegram bot in Haskell on Amazon Lambda

I just had a weekend full of very successful serious geekery. On a whim I thought: “Wouldn't it be nice if people could interact with my game Kaleidogen also via a Telegram bot?” This led me to learn how to write a Telegram bot in Haskell and how to deploy such a Haskell program to Amazon Lambda. In particular the latter bit might be interesting to some of my readers, so here is how I went about it.

Kaleidogen

Kaleidogen is a little contemplative game (or toy) where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository. BTW, I am looking for help turning it into an Android app!

KaleidogenBot in action


Amazon Lambda

Amazon Lambda is the “Function as a service” offering of Amazon Web Services. The idea is that you don’t rent a server, where you have to deal with managing the whole system and that you are paying for constantly, but you just upload the code that responds to outside requests, and AWS takes care of the rest: Starting and stopping instances, providing a secure base system etc. When nobody is using the service, no cost occurs.

This sounds ideal for hosting a toy Telegram bot: Most of the time nobody will be using it, and I really don't want to have to babysit yet another service on my server. On Amazon Lambda, I can probably just forget about it.

But Haskell is not one of the officially supported languages on Amazon Lambda. So to run Haskell on Lambda, one has to solve two problems:

  • how to invoke the Haskell code on the server, and
  • how to build Haskell so that it runs on the Amazon Linux distribution

A Haskell runtime for Lambda

For the first we need a custom runtime. While this sounds complicated, it is actually a pretty simple concept: A runtime is an executable called bootstrap that queries the Lambda Runtime Interface for the next request to handle. The Lambda documentation is phrased as if this runtime has to be a dispatcher that calls the separate function’s handler. But it could just do all the things directly.

I found the Haskell package aws-lambda-haskell-runtime which provides precisely that: A function

runLambda :: (LambdaOptions -> IO (Either String LambdaResult)) -> IO ()

that talks to the Lambda Runtime API and invokes its argument on each message. The package also provides Template Haskell magic to collect “handlers” of any JSON-able type and generates a dispatcher, like you might expect from other, more dynamic languages. But that was too much magic for me, so I ignored that and just wrote the handler manually:

main :: IO ()
main = runLambda run
  where
   run ::  LambdaOptions -> IO (Either String LambdaResult)
   run opts = do
    result <- handler (decodeObj (eventObject opts)) (decodeObj (contextObject opts))
    either (pure . Left . encodeObj) (pure . Right . LambdaResult . encodeObj) result

data Event = Event
  { path :: T.Text
  , body :: Maybe T.Text
  } deriving (Generic, FromJSON)

data Response = Response
  { statusCode :: Int
  , headers :: Value
  , body :: T.Text
  , isBase64Encoded :: Bool
  } deriving (Generic, ToJSON)

handler :: Event -> Context -> IO (Either String Response)
handler Event{body, path} context =

I expose my Lambda function to the world via Amazon’s API Gateway, configured to just proxy the HTTP requests. This means that my code receives a JSON data structure describing the HTTP request (here called Event, listing only the fields I care about), and it will respond with a Response, again as JSON.

The handler can then simply pattern-match on the path to decide what to do. For example this code handles URLs like /img/CAFFEEFACE.png, and responds with an image.

handler :: Event -> Context -> IO (Either String Response)
handler Event{body, path} context
    | Just bytes <- isImgPath path >>= T.decodeHex = do
        let pngData = genPurePNG bytes
        pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("image/png" :: String) ]
            , isBase64Encoded = True
            , body = T.decodeUtf8 $ LBS.toStrict $ Base64.encode pngData
            }
    …

isImgPath :: T.Text -> Maybe T.Text
isImgPath  = T.stripPrefix "/img/" >=> T.stripSuffix ".png"

If this program were to grow, then one should probably use something more structured for routing here; maybe servant, or bridging towards wai apps (almost like wai-lamda, but that still assumes an existing runtime, instead of simply being the runtime). But for my purposes, no extra layers of indirection or abstraction are needed!

Deploying Haskell to Lambda

Building Haskell locally and deploying to different machines is notoriously tricky; you often end up depending on a shared library that is not available on the other platform. The aws-lambda-haskell-runtime package, and similar projects like serverless-haskell, solve this using stack and Docker – two technologies that are probably great, but I never warmed up to them.

So instead of adding layers and complexity, can I solve this by making things simpler? If I compile my bootstrap into a static Linux binary, it should run on any Linux, including Amazon Linux.

Unfortunately, building Haskell programs statically is also notoriously tricky. But it is made much simpler by the work of Niklas Hambüchen and others in the context of the Nix package manager, coordinated in the static-haskell-nix project. The promise here is that once you have set up building your project with Nix, then getting a static version is just one flag away. The support is not completely upstreamed into nixpkgs proper yet, but their repository has a nix file that contains a nixpkgs set with their patches:

let pkgs = (import (sources.nixpkgs-static + "/survey/default.nix") {}).pkgs; in

This, plus a fairly standard nix setup to build the package, yields what I was hoping for:

$ nix-build -A kaleidogen
/nix/store/ppwyq4d964ahd6k56wsklh93vzw07ln0-kaleidogen-0.1.0.0
$ file result/bin/kaleidogen-amazon-lambda
result/bin/kaleidogen-amazon-lambda: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, stripped
$ ls -sh result/bin/kaleidogen-amazon-lambda
6,7M result/bin/kaleidogen-amazon-lambda

If we put this file, named bootstrap, into a zip file and upload it to Amazon Lambda, then it just works! Creating the zip file is easily scripted using nix:

  function-zip = pkgs.runCommandNoCC "kaleidogen-lambda" {
    buildInputs = [ pkgs.zip ];
  } ''
    mkdir -p $out
    cp ${kaleidogen}/bin/kaleidogen-amazon-lambda bootstrap
    zip $out/function.zip bootstrap
  '';

So to upload this, I use this one-liner (line-wrapped for your convenience):

nix-build -A function-zip &&
aws lambda update-function-code --function-name kaleidogen \
  --zip-file fileb://result/function.zip

Thanks to how Nix pins all dependencies, I am fairly confident that I can return to this project in 4 months and still be able to build it.

Of course, I want continuous integration and deployment. So I build the project with GitHub Actions, using a cachix nix cache to significantly speed up the build, and auto-deploy to Lambda using aws-lambda-deploy; see my workflow file for details.

The Telegram part

The above allows me to run basically any stateless service, and a Telegram bot is nothing else: When configured to act as a WebHook, Telegram will send a request with a message to our Lambda function, where we can react on it.

The telegram-api package provides bindings for the Telegram Bot API (although I had to use the repository version, as the version on Hackage has some bitrot). Slightly simplified, I can write a handler for an Update:

handleUpdate :: Update -> TelegramClient ()
handleUpdate Update{ message = Just m } = do
  let c = ChatId (chat_id (chat m))
  liftIO $ printf "message from %s: %s\n" (maybe "?" user_first_name (from m)) (maybe "" T.unpack (text m))
  if "/start" `T.isPrefixOf` fromMaybe "" (text m)
  then do
    rm <- sendMessageM $ sendMessageRequest c "Hi! I am @KaleidogenBot. …"
    return ()
  else do
    m1 <- sendMessageM $ sendMessageRequest c "One moment…"
    withPNGFile  $ \pngFN -> do
      m2 <- uploadPhotoM $ uploadPhotoRequest c
        (FileUpload (Just "image/png") (FileUploadFile pngFN))
      return ()
handleUpdate u =
  liftIO $ putStrLn $ "Unhandled message: " ++ show u

and call this from the handler that I wrote above:

    …
    | path == "/telegram" =
      case eitherDecode (LBS.fromStrict (T.encodeUtf8 (fromMaybe "" body))) of
        Left err -> …
        Right update -> do
          runTelegramClient token manager $ handleUpdate update
          pure $ Right Response
            { statusCode = 200
            , headers = object [ "Content-Type" .= ("text/plain" :: String) ]
            , isBase64Encoded = False
            , body = "Done"
            }
    …

Note that the Lambda code receives the request as JSON data structure with a body that contains the original HTTP request body. Which, in this case, is itself JSON, so we have to decode that.

All that is left to do is to tell Telegram where this code lives:

curl --request POST \
  --url https://api.telegram.org/bot<token>/setWebhook \
  --header 'content-type: application/json' \
  --data '{"url": "https://api.kaleidogen.nomeata.de/telegram"}'

As a little add on, I also created a Telegram game for Kaleidogen. A Telegram game is nothing but a webpage that runs inside Telegram, so it wasn’t much work to wrap the Web version of Kaleidogen that way, but the resulting Telegram game (which you can access via https://core.telegram.org/bots/games) still looks pretty neat.

No /dev/dri/renderD128

I am mostly happy with this setup: My game is now available to more people in more ways. I don’t have to maintain any infrastructure. When nobody is using this bot no resources are wasted, and the costs of the service are negligible -- this is unlikely to go beyond the free tier, and even if it did, the cost per generated image is roughly USD 0.000021.

There is one slight disappointment, though. What I find most interesting about Kaleidogen from a technical point of view is that when you play it in the browser, the images are not generated by my code. Instead, my code creates a WebGL shader program on the fly, and that program generates the image on your graphics card.

I even managed to make the GL rendering code work headlessly, i.e. from a command line program, using EGL and libgbm and a helper written in C. But it needs access to a graphics card via /dev/dri/renderD128. Amazon does not provide that to Lambda code, and neither do the other big Function-as-a-service providers. So I had to swallow my pride and reimplement the rendering in pure Haskell.

So if you think the bot is kinda slow, then that’s why. Despite properly optimizing the pure implementation (the inner loop does not do allocations and deals only with unboxed Double# values), the GL shader version is still three times as fast. Maybe in a few years GPU access will be so ubiquitous that it’s even on Amazon Lambda; then I can easily use that.

06 April, 2020 08:40PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Jonathan Carter

Jonathan Carter

Free Software Activities for 2020-03

DPL Campaign 2020

On the 12th of March, I posted my self-nomination for the Debian Project Leader election. This is the second time I’m running for DPL, and you can read my platform here. The campaign period covered the second half of the month, where I answered a bunch of questions on the debian-vote list. The voting period is currently open and ends on 18 April.

Debian Social

This month we finally announced the Debian Social project, a project that hosts a few websites with the goal of improving communication and collaboration within the Debian project, improving visibility of the work that people do, and making it easier for general users to interact with the community and feel part of the project.

Some History

This has been a long time in the making. From my side I’ve been looking at better ways to share/play our huge DebConf video archives for the last 3 years or so. Initially I was considering either some sort of script or small server side app that combined the archives and the metadata into a player, or using something like MediaDrop (which I was using on my highvoltage.tv website for a while). I ran into a lot of MediaDrop’s limitations early on. It was fine for a very small site but I don’t think it would ever be the right solution for a Debian-wide video hosting platform, and it didn’t seem all that actively maintained either. Wouter went ahead and implemented a web player option for the video archives. His solution is good because it doesn’t rely on any server side software, so it’s easy to mirror and someone who lives on an island could download it and view it offline in that player. It still didn’t solve all our problems though. Popular videos (by either views or likes) weren’t easily discoverable, and the site itself isn’t that easy to discover.

Then PeerTube came along. PeerTube provides a similar type of interface to MediaDrop or YouTube that gives you likes, viewcount and comments. But what really set it apart from previous things that we looked at was that it’s a federated service. Not only does it federate with other PeerTube instances, but the protocols it uses mean that it can connect to all kinds of other services that make up an interconnected platform called the Fediverse. This was especially great since independent video sites tend to become these lonely islands on the web that become isolated and forgotten. With PeerTube, video sites can subscribe to similar sites on the Fediverse, which makes videos and other video sites significantly more discoverable and attracts more eyeballs.

At DebConf19 I wanted to ramp up the efforts to make a Debian PeerTube instance a reality. I spoke to many people about this and discovered that some Debianites are already making all kinds of Debian videos in many different languages. Some were even distributing them locally on DVD and have never uploaded them. I thought that the Debian PeerTube instance could not only be a good platform for DebConf videos, but it could be a good home for many free software content creators, especially if they create Debian specific content. I spoke to Rhonda about it, who’s generally interested in the Fediverse and wanted to host instances of Pleroma (a microblogging service) and PixelFed (a free image hosting service that resembles Instagram), but needed a place to host them. We decided to combine efforts, and since a very large number of fediverse services end with .social in their domain names, we ended up calling this project Debian Social. We’re also hosting some non-fediverse services like a WordPress multisite and a Jitsi instance for video chatting.

Current Status

Currently, we have a few services in a beta/testing state. I think we have most of the kinks sorted out to get them to a phase where they’re ready for wider use. Authentication is a bit of a pain point right now. We don’t really have a single sign-on service in Debian that guest users can use, or that all these services integrate with. So for now, if you’re a Debian Developer who wants an account on one of these services, you can request a new account by creating a ticket on salsa.debian.org and selecting the “New account” template. Not all services support having dashes (or even any punctuation in the username whatsoever), so to keep it consistent we’re currently appending just “guest” to salsa usernames for guest users, and “team” at the end of any Debian team accounts or official accounts using these services.

Stefano finished uploading all the Debconf videos to the PeerTube instance. Even though it’s largely automated, it ended up being quite a big job fixing up some old videos, their metadata and adding support for PeerTube to the DebConf video scripts. This also includes some videos from sprints and MiniDebConfs that had video coverage, currently totaling 1359 videos.

Future plans

This is still a very early phase for the project. Here are just some ideas that might develop over time on the Debian Social sites:

  • Team accounts. Some Debian teams already have accounts on a myriad of other platforms. For example, the Debian Med team has a blog on blogspot and the Debian Publicity team has an account on framapiaf.org. I’d really like to make our Debian Social platforms (like our WordPress multisite instance and Pleroma) places that Debian teams can trust to host their updates. It would also be nice to have more teams use these that don’t have a particularly big online presence right now, like Debian women or a DPL team account.
  • Developer demos. I enjoy the videos that the GNOME project makes that demo the new features in every release, as they’ve done for the 3.36 release. I think it would be great if people in Debian could make some small videos to demo the things that they’ve been working on. It doesn’t have to be as flashy or elaborate as the GNOME video I’ve linked to, but sometimes just a minute long demo can be really useful to convey a new idea or feature or to show progress that has been made.
  • User participation. YouTube is full of videos that review Debian or demo how to customise it. It would be great if we could get users to post such videos to PeerTube. For Pixelfed, I’d like to try out projects like users posting pictures of their computers with freshly installed Debian systems with a hashtag like #WeInstallDebian, then at the end of the year we could build a nice big mosaic that contains these images. Might make a cool poster for events too.
  • DebConf and other Debian events. We used to use a Gallery instance to host DebConf photos, but it’s always been a bit cumbersome managing photos there and Gallery hasn’t updated its UI much over the years, causing it to fall a bit out of favour with attendees at these events. As a result, photos end up getting lost in WhatsApp/Telegram/Signal groups, Twitter, Facebook, etc. I hope that we could get enough users signed up on the Pixelfed instance that it could become the de facto standard for posting Debian event photos to. Having a known central place to post these makes them easier to find as well.

If you’d like to join this initiative and help out, please join #debian-social on oftc. We’re also looking for people who can help moderate posts on these sites.

Debian packaging

I had the sense that there were fewer upstream releases this month. I suspect that everyone was busy figuring out how to cope with the Covid-19 lockdowns taking place all over the world.

2020-03-02: Upload package calamares (3.2.10-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-dash-to-panel (29-1) to Debian unstable.

2020-03-10: Upload package gnome-shell-extension-draw-on-your-screen (5.1-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-dash-to-panel (31-1) to Debian unstable.

2020-03-28: Upload package gnome-shell-extension-draw-on-your-screen (6-1) to Debian unstable.

2020-03-28: Update python3-flask-autoindexing packaging; not releasing due to a licensing change that needs further clarification (GitHub issue #55).

2020-03-28: Upload package gamemode (1.5.1-1) to Debian unstable.

2020-03-28: Upload package calamares (3.2.21-1) to Debian unstable.

Debian mentoring

2020-03-03: Sponsor package python-jaraco.functools (3.0.0-1) (Python team request).

2020-03-03: Review python-ftputil (3.4-1) (Needs some more work) (Python team request).

2020-03-04: Sponsor package pythonmagick (0.9.19-6) for Debian unstable (Python team request).

2020-03-23: Sponsor package bitwise (0.41-1) for Debian unstable (Email request).

2020-03-23: Sponsor package gpxpy (1.4.0-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package gpxpy (1.4.0-2) for Debian unstable (Python team request).

2020-03-28: Sponsor package celery (4.4.2-1) for Debian unstable (Python team request).

2020-03-28: Sponsor package buildbot (2.7.0-1) for Debian unstable (Python team request).

06 April, 2020 05:00PM by jonathan

hackergotchi for Martin Michlmayr

Martin Michlmayr

ledger2beancount 2.1 released

I released version 2.1 of ledger2beancount, a ledger to beancount converter.

Here are the changes in 2.1:

  • Handle postings with posting dates and comments but no amount
  • Show transactions with only one posting (without bucket)
  • Add spacing between automatic declarations
  • Preserve preliminary info at the top

You can get ledger2beancount from GitHub.

Thanks to Thierry (thdox) for reporting a bug and for fixing some typos in the documentation. Thanks to Stefano Zacchiroli for some good feedback.

06 April, 2020 11:38AM by Martin Michlmayr

Russ Allbery

Review: Thick

Review: Thick, by Tressie McMillan Cottom

Publisher: The New Press
Copyright: 2019
ISBN: 1-62097-437-1
Format: Kindle
Pages: 247

Tressie McMillan Cottom is an associate professor of sociology at Virginia Commonwealth University. I first became aware of her via retweets and recommendations from other people I follow on Twitter, and she is indeed one of the best writers on that site. Thick: And Other Essays is an essay collection focused primarily on how American culture treats black women.

I will be honest here, in part because I think much of the regular audience for my book reviews is similar to me (white, well-off from working in tech, and leftist but privileged) and therefore may identify with my experience. This is the sort of book that I always want to read and then struggle to start because I find it intimidating. It received a huge amount of praise on release, including being named as a finalist for the National Book Award, and that praise focused on its incisiveness, its truth-telling, and its depth and complexity. Complex and incisive books about racism are often hard for me to read; they're painful, depressing, and infuriating, and I have to fight my tendency to come away from them feeling more cynical and despairing. (Despite loving his essays, I'm still procrastinating reading Ta-Nehisi Coates's books.) I want to learn and understand but am not good at doing anything with the information, so this reading can feel like homework.

If that's also your reaction, read this book. I regret having waited as long as I did.

Thick is still, at times, painful, depressing, and infuriating. It's also brilliantly written in a way that makes the knowledge being conveyed easier to absorb. Rather than a relentless onslaught of bearing witness (for which, I should stress, there is an important place), it is a scalpel. Each essay lays open the heart of a subject in a few deft strokes, points out important features that the reader has previously missed, and then steps aside, leaving you alone with your thoughts to come to terms with what you've just learned. I needed this book to be an essay collection, with each thought just long enough to have an impact and not so long that I became numb. It's the type of collection that demands a pause at the end of each essay, a moment of mental readjustment, and perhaps a paging back through the essay again to remember the sharpest points.

The essays often start with seeds of the personal, drawing directly on McMillan Cottom's own life to wrap context around their point. In the first essay, "Thick," she uses advice given her younger self against writing too many first-person essays to talk about the writing form, its critics, and how the backlash against it has become part of systematic discrimination because black women are not allowed to write any other sort of authoritative essay. She then draws a distinction between her own writing and personal essays, not because she thinks less of that genre but because that genre does not work for her as a writer. The essays in Thick do this repeatedly. They appear to head in one direction, then deepen and shift with the added context of precise sociological analysis, defying predictability and reaching a more interesting conclusion than the reader had expected. And, despite those shifts, McMillan Cottom never lost me in a turn. This is a book that is not only comfortable with complexity and nuance, but helps the reader become comfortable with that complexity as well.

The second essay, "In the Name of Beauty," is perhaps my favorite of the book. Its spark was backlash against an essay McMillan Cottom wrote about Miley Cyrus, but the topic of the essay wasn't what sparked the backlash.

What many black women were angry about was how I located myself in what I'd written. I said, blithely as a matter of observable fact, that I am unattractive. Because I am unattractive, the argument went, I have a particular kind of experience of beauty, race, racism, and interacting with what we might call the white gaze. I thought nothing of it at the time I was writing it, which is unusual. I can usually pinpoint what I have said, written, or done that will piss people off and which people will be pissed off. I missed this one entirely.

What follows is one of the best essays on the social construction of beauty I've ever read. It barely pauses at the typical discussion of unrealistic beauty standards as a feminist issue, instead diving directly into beauty as whiteness, distinguishing between beauty standards that change with generations and the more lasting rules that instead police the bounds between white and not white. McMillan Cottom then goes on to explain how beauty is a form of capital, a poor and problematic one but nonetheless one of the few forms of capital women have access to, and therefore why black women have fought to be included in beauty despite all of the problems with judging people by beauty standards. And the essay deepens from there into a trenchant critique of both capitalism and white feminism that is both precise and illuminating.

When I say that I am unattractive or ugly, I am not internalizing the dominant culture's assessment of me. I am naming what has been done to me. And signaling who did it. I am glad that doing so unsettles folks, including the many white women who wrote to me with impassioned cases for how beautiful I am. They offered me neoliberal self-help nonsense that borders on the religious. They need me to believe beauty is both achievable and individual, because the alternative makes them vulnerable.

I could go on. Every essay in this book deserves similar attention. I want to quote from all of them. These essays are about racism, feminism, capitalism, and economics, all at the same time. They're about power, and how it functions in society, and what it does to people. There is an essay about Obama that contains the most concise explanation for his appeal to white voters that I've read. There is a fascinating essay about the difference between ethnic black and black-black in U.S. culture. There is so much more.

We do not share much in the U.S. culture of individualism except our delusions about meritocracy. God help my people, but I can talk to hundreds of black folks who have been systematically separated from their money, citizenship, and personhood and hear at least eighty stories about how no one is to blame but themselves. That is not about black people being black but about people being American. That is what we do. If my work is about anything it is about making plain precisely how prestige, money, and power structure our so-called democratic institutions so that most of us will always fail.

I, like many other people in my profession, was always more comfortable with the technical and scientific classes in college. I liked math and equations and rules, dreaded essay courses, and struggled to engage with the mandatory humanities courses. Something that I'm still learning, two decades later, is the extent to which this was because the humanities are harder work than the sciences and I wasn't yet up to the challenge of learning them properly. The problems are messier and more fluid. The context required is broader. It's harder to be clear and precise. And disciplines like sociology deal with our everyday lived experience, which means that we all think we're entitled to an opinion.

Books like this, which can offer me a hand up and a grounding in the intellectual rigor while simultaneously being engaging and easy to read, are a treasure. They help me fill in the gaps in my education and help me recognize and appreciate the depth of thought in disciplines that don't come as naturally to me.

This book was homework, but the good kind, the kind that exposes gaps in my understanding, introduces topics I hadn't considered, and makes the time fly until I come up for air, awed and thinking hard. Highly recommended.

Rating: 9 out of 10

06 April, 2020 04:21AM

April 05, 2020

Enrico Zini

Vincent Bernat

Safer SSH agent forwarding

ssh-agent is a program to hold in memory the private keys used by SSH for public-key authentication. When the agent is running, ssh forwards to it the signature requests from the server. The agent performs the private key operations and returns the results to ssh. It is useful if you keep your private keys encrypted on disk and you don’t want to type the password at each connection. Keeping the agent secure is critical: someone able to communicate with the agent can authenticate on your behalf on remote servers.

ssh also provides the ability to forward the agent to a remote server. From this remote server, you can authenticate to another server using your local agent, without copying your private key on the intermediate server. As stated in the manual page, this is dangerous!

Agent forwarding should be enabled with caution. Users with the ability to bypass file permissions on the remote host (for the agent’s UNIX-domain socket) can access the local agent through the forwarded connection. An attacker cannot obtain key material from the agent, however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent. A safer alternative may be to use a jump host (see -J).

As mentioned, a better alternative is to use the jump host feature: the SSH connection to the target host is tunneled through the SSH connection to the jump host. See the manual page and this blog post for more details.
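For illustration, the jump-host alternative can be captured once in ~/.ssh/config; the host names below are placeholders, not from the article:

```
# ~/.ssh/config — "jump.example.org" and "target.internal" are placeholders
Host target.internal
    ProxyJump jump.example.org
```

With this in place, a plain ssh target.internal is tunneled through the jump host; the one-off command-line equivalent is ssh -J jump.example.org target.internal.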


If you really need to use SSH agent forwarding, you can secure it a bit through a dedicated agent with two main attributes:

  • it holds only the private key to connect to the target host, and
  • it asks confirmation for each requested signature.

The following alias around the ssh command will spawn such an ephemeral agent:

alias assh="ssh-agent ssh -o AddKeysToAgent=confirm -o ForwardAgent=yes"

With the -o AddKeysToAgent=confirm directive, ssh adds the unencrypted private key to the agent, but each use must be confirmed.1 Once connected, you get a confirmation prompt for each signature request:2

[Screenshot: ssh-agent confirmation prompt showing the key fingerprint and yes/no buttons, asking whether the agent may use the specified private key]

But, again, avoid using agent forwarding! ☠️

Update (2020-04)

In a previous version of this article, the wrapper around the ssh command was a more complex function. Alexandre Oliva was kind enough to point me to the simpler solution above.

Update (2020-04)

Guardian Agent is an even safer alternative: it shows and ensures the usage (target and command) of the requested signature. There is also a wide range of alternative solutions to this problem. See for example SSH-Ident, Wikimedia solution and solo-agent.


  1. Alternatively, you can add the keys with ssh-add -c↩︎

  2. Unfortunately, the dialog box default answer is “Yes.” ↩︎

05 April, 2020 03:50PM by Vincent Bernat

Hideki Yamane

Zoom: You should hire an appropriate package maintainer

Through my daily job, I sometimes have to use Zoom for meetings and webinars, but several resources indicate that they haven't put enough effort into the security of their product, so I decided to remove it from my laptop. However, I found a weird message while doing so.
The following packages will be REMOVED:
  zoom*
0 upgraded, 0 newly installed, 1 to remove and 45 not upgraded.
After this operation, 269 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 362466 files and directories currently installed.)
Removing zoom (3.5.374815.0324) ...
run post uninstall script, action is remove ...
current home is /root
Processing triggers for mime-support (3.64) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for shared-mime-info (1.15-1) ...
Processing triggers for desktop-file-utils (0.24-1) ...
(Reading database ... 361169 files and directories currently installed.)
Purging configuration files for zoom (3.5.374815.0324) ...
run post uninstall script, action is purge ...
current home is /root
Wait. "current home is /root"? What did you do? Then I extracted the package (ar -x zoom_amd64.deb; tar xvf control.tar.xz; view post*)
#!/bin/bash
# Program:
#       script to be run after package installation

echo "run post install script, action is $1..."

#ln -s -f /opt/zoom/ZoomLauncher /usr/bin/zoom

#$1 folder path
function remove_folder
{
        if [ -d $1 ]; then
                rm -rf $1
        fi
}

echo current home is $HOME
remove_folder "$HOME/.cache/zoom"
(snip)
Ouch. When I run apt with sudo, $HOME is /root, so their maintainer script tried to remove files under /root! Did they do any tests? Even if it worked, touching users' files under $HOME is NOT a good idea...

And it seems that this affects not only the .deb package but also the .rpm package.
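A defensive pattern for this kind of cleanup (a sketch of mine, not Zoom's actual script) is to refuse to touch anything under root's home and to operate on an explicit home directory rather than $HOME:

```shell
#!/bin/sh
# Sketch of a safer per-user cache cleanup for a maintainer script.
# Under "sudo apt purge", $HOME is /root, so blindly removing
# "$HOME/.cache/zoom" would hit the administrator's files.
remove_zoom_cache() {
    # $1: home directory of the user whose Zoom cache should be removed
    case "$1" in
        /root|/root/*)
            # never touch the administrator's home from a package script
            echo "skipping $1"
            return 0
            ;;
    esac
    rm -rf "$1/.cache/zoom"
    echo "cleaned $1"
}
```

A real script would iterate over the homes of actual users (or, better, leave per-user state alone entirely and let users clean their own caches).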



05 April, 2020 02:19PM by Hideki Yamane (noreply@blogger.com)

François Marier

Installing Debian buster on a GnuBee PC 2

Here is how I installed Debian 10 / buster on my GnuBee Personal Cloud 2, a free hardware device designed as a network file server / NAS.

Flashing the LibreCMC firmware with Debian support

Before we can install Debian, we need a firmware that includes all of the necessary tools.

On another machine, do the following:

  1. Download the latest librecmc-ramips-mt7621-gb-pc1-squashfs-sysupgrade_*.bin.
  2. Mount a vfat-formatted USB stick.
  3. Copy the file onto it and rename it to gnubee.bin.
  4. Unmount the USB stick.

Then plug a network cable between your laptop and the black network port and plug the USB stick into the GnuBee before rebooting the GnuBee via ssh:

ssh 192.168.10.1
reboot

If you have a USB serial cable, you can use it to monitor the flashing process:

screen /dev/ttyUSB0 57600

otherwise keep an eye on the LEDs and wait until they are fully done flashing.

Getting ssh access to LibreCMC

Once the firmware has been updated, turn off the GnuBee manually using the power switch and turn it back on.

Now enable SSH access via the built-in LibreCMC firmware:

  1. Plug a network cable between your laptop and the black network port.
  2. Open the web-based admin panel at http://192.168.10.1.
  3. Go to System | Administration.
  4. Set a root password.
  5. Disable ssh password auth and root password logins.
  6. Paste in your RSA ssh public key.
  7. Click Save & Apply.
  8. Go to Network | Firewall.
  9. Select "accept" for WAN Input.
  10. Click Save & Apply.

Finally, go to Network | Interfaces and note the IPv4 address of the WAN port, since it will be needed in the next step.

Installing Debian

The first step is to install Debian jessie on the GnuBee.

Connect the blue network port into your router/switch and ssh into the GnuBee using the IP address you noted earlier:

ssh root@192.168.1.xxx

and the root password you set in the previous section.

Then use fdisk /dev/sda to create the following partition layout on the first drive:

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   8390655   8388608     4G Linux swap
/dev/sda2  8390656 234441614 226050959 107.8G Linux filesystem

Note that I used a 120GB solid-state drive as the system drive in order to minimize noise levels.
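As a quick sanity check (my own arithmetic, assuming the usual 512-byte sectors), the sector counts in the table convert to the sizes shown:

```shell
#!/bin/sh
# Sanity check (not from the article): convert the sector counts from the
# fdisk table above into GiB, assuming 512-byte sectors.
swap_sectors=8388608      # /dev/sda1
root_sectors=226050959    # /dev/sda2
swap_gib=$(( swap_sectors * 512 / 1024 / 1024 / 1024 ))
root_gib=$(( root_sectors * 512 / 1024 / 1024 / 1024 ))
echo "swap: ${swap_gib} GiB"   # 4 GiB, matching the 4G column
echo "root: ${root_gib} GiB"   # 107 GiB (107.8G before truncation)
```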

Then format the swap partition:

mkswap /dev/sda1

and download the latest version of the jessie installer:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/GnuBee_Docs/master/GB-PCx/scripts/jessie_3.10.14/debian-jessie-install

(Yes, the --no-check-certificate is really unfortunate. Please leave a comment if you find a way to work around it.)

The stock installer fails to bring up the correct networking configuration on my network and so I have modified the install script by changing the eth0.1 blurb to:

auto eth0.1
iface eth0.1 inet static
    address 192.168.10.1
    netmask 255.255.255.0

Then you should be able to run the installer successfully:

sh ./debian-jessie-install

and reboot:

reboot

Restore ssh access in Debian jessie

Once the GnuBee has finished booting, login using the serial console:

  • username: root
  • password: GnuBee

and change the root password using passwd.

Look for the IPv4 address of eth0.2 in the output of the ip addr command and then ssh into the GnuBee from your desktop computer:

ssh root@192.168.1.xxx  # type password set above
mkdir .ssh
vim .ssh/authorized_keys  # paste your ed25519 ssh pubkey

Finish the jessie installation

With this in place, you should be able to ssh into the GnuBee using your public key:

ssh root@192.168.1.172

and then finish the jessie installation:

wget --no-check-certificate https://raw.githubusercontent.com/gnubee-git/gnubee-git.github.io/master/debian/debian-modules-install
bash ./debian-modules-install
reboot

After rebooting, I made a few tweaks to make the system more pleasant to use:

update-alternatives --config editor  # choose vim.basic
dpkg-reconfigure locales  # enable the locale that your desktop is using

Upgrade to stretch and then buster

To upgrade to stretch, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian stretch main
deb http://httpredir.debian.org/debian stretch-updates main
deb http://security.debian.org/ stretch/updates main

Then upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

To upgrade to buster, put this in /etc/apt/sources.list:

deb http://httpredir.debian.org/debian buster main
deb http://httpredir.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

and upgrade the packages:

apt update
apt full-upgrade
apt autoremove
reboot

At this point, my GnuBee is running the latest version of Debian stable; however, there are two remaining issues to fix:

  1. openssh-server doesn't work and I am forced to access the GnuBee via the serial interface.

  2. The firmware is running an outdated version of the Linux kernel.

Both of these issues can be resolved by upgrading the firmware to a recent version of Linux.

Upgrading the firmware

In order to move to the firmware that Neil Brown has been working on for a while, I prepared a USB stick:

$ sudo fdisk -l /dev/sdc
Disk /dev/sdc: 3.77 GiB, 4027580416 bytes, 7866368 sectors
Disk model: USB Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x6cb65e6c

Device     Boot Start     End Sectors  Size Id Type
/dev/sdc1        2048 7866367 7864320  3.8G  c W95 FAT32 (LBA)

$ sudo mkfs.vfat /dev/sdc1

using a dos partition table and a W95 FAT32 (LBA) partition.

Then I grabbed the latest gnubee-*-gbpc2.bin file from https://neil.brown.name/gnubee/ and copied it onto the USB stick with the appropriate name:

cp gnubee-5.4.14-gbpc2.bin /media/usbdisk/GNUBEE.BIN

I plugged the stick into the GnuBee and rebooted it to upgrade the firmware, watching the process using the serial console.

Once booted, I had to delete /etc/udev/rules.d/70-persistent-net.rules in order to fix a long timeout while bringing up the network interfaces during boot.

If you want to see the boot messages to ensure there are no errors, run journalctl -b.

Finally, I cleaned up a deprecated and no-longer-needed package:

apt purge ntpdate

and removed its invocation from /etc/rc.local.

05 April, 2020 12:44AM

April 04, 2020

hackergotchi for Joey Hess

Joey Hess

solar powered waterfall controlled by a GPIO port

This waterfall is beside my yard. When it's running, I know my water tanks are full and the spring is not dry.

Also it's computer controlled, for times when I don't want to hear it. I'll also use the computer control later on to avoid running the pump excessively and wearing it out, and for some safety features like not running when the water is frozen.

This is a whole hillside of pipes, water tanks, pumps, and solar panels, all controlled by a GPIO port. Easy enough; the pump controller has a float switch input, and the GPIO drives a 4n35 optoisolator to open or close that circuit. The hard part will be burying all the cable to the pump. And then all the landscaping around the waterfall.
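For illustration, the GPIO side can be sketched with the legacy sysfs interface; the pin number and the GPIO_ROOT indirection are my assumptions, not details from the post:

```shell
#!/bin/sh
# Hedged sketch: drive the optoisolator (and thus the pump controller's
# float-switch input) from a sysfs GPIO. PIN is a hypothetical pin number;
# GPIO_ROOT is a variable so the logic can be exercised against a fake
# directory tree without hardware.
GPIO_ROOT=${GPIO_ROOT:-/sys/class/gpio}
PIN=${PIN:-17}   # hypothetical pin wired to the 4n35's LED side

waterfall() {
    # $1 is 1 to close the float-switch circuit (pump runs), 0 to open it
    if [ ! -d "$GPIO_ROOT/gpio$PIN" ]; then
        echo "$PIN" > "$GPIO_ROOT/export"
    fi
    echo out > "$GPIO_ROOT/gpio$PIN/direction"
    echo "$1" > "$GPIO_ROOT/gpio$PIN/value"
}
```

A real setup would likely use libgpiod's gpioset on newer kernels instead of sysfs, and add the frozen-water and pump-wear safeguards mentioned above.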

There's a bit of lag to turning it on and off. It can take over an hour for it to start flowing, and around half an hour to stop. The water level has to get high enough in the water tanks to overcome some airlocks and complicated hydrodynamic flow stuff. Then when it stops, all that excess water has to drain back down.

Anyway, enjoy my soothing afternoon project and/or massive rube goldberg machine, I certainly am.

04 April, 2020 08:58PM

Thorsten Alteholz

My Debian Activities in March 2020

FTP master

This month I accepted 156 packages and rejected 26. The overall number of packages that got accepted was 203.

Debian LTS

This was my sixty-ninth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 30h. During that time I did LTS uploads of:

  • [DLA 2156-1] e2fsprogs security update for one CVE
  • [DLA 2157-1] weechat security update for three CVEs
  • [DLA 2160-1] php5 security update for two CVEs
  • [DLA 2164-1] gst-plugins-bad0.10 security update for four CVEs
  • [DLA 2165-1] apng2gif security update for one CVE

Also my work on graphicsmagick was accepted, which resulted in:

  • [DSA 4640-1] graphicsmagick security update in Buster and Stretch for 16 CVEs

Further I sent debdiffs of weechat/stretch, weechat/buster, e2fsprogs/stretch to the corresponding maintainers but got no feedback yet.

As there have been lots of no-dsa-CVEs accumulated for wireshark, I started to work on them but could not upload yet.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-first ELTS month.

During my allocated time I uploaded:

  • ELA-218-1 for e2fsprogs
  • ELA-220-1 for php5
  • ELA-221-1 for nss

I also did some days of frontdesk duties.

Other stuff

Unfortunately this month again strange things happened outside Debian and the discussions within Debian did not stop. Nonetheless I got some stuff done.

I improved packaging of …

I sponsored uploads of …

  • … ocf-spec-core
  • … theme-d-gnome

Sorry to all people who also requested sponsoring, but sometimes things happen and your upload might be delayed.

I uploaded new upstream versions of …

On my Go challenge I uploaded:
golang-github-dreamitgetit-statuscake, golang-github-ensighten-udnssdk, golang-github-apparentlymart-go-dump, golang-github-suapapa-go-eddystone, golang-github-joyent-gosdc, golang-github-nrdcg-goinwx, golang-github-bmatcuk-doublestar, golang-github-go-xorm-core, golang-github-svanharmelen-jsonapi, golang-github-goji-httpauth, golang-github-phpdave11-gofpdi

04 April, 2020 04:02PM by alteholz

April 03, 2020

hackergotchi for Sean Whitton

Sean Whitton

Manifest to run Debian pre-upload tests on builds.sr.ht

Before uploading stuff to Debian, I build in a clean chroot, and then run piuparts, autopkgtest and lintian. For some of my packages this can take around an hour on my laptop, which is fairly old. Normally I don’t mind waiting, but sometimes I want to put my laptop away, and then it would be good for things to be faster. It occurred to me that I could make use of my builds.sr.ht account to run these tests on more powerful hardware.

This build manifest seems to work:

# BEGIN CONFIGURABLE
sources:
  - https://salsa.debian.org/perl-team/modules/packages/libgit-annex-perl.git
environment:
  source: libgit-annex-perl
  quilt:  auto
# END CONFIGURABLE

image: debian/unstable
packages:
  - autopkgtest
  - devscripts
  - dgit
  - lintian
  - piuparts
  - sbuild
tasks:
  - setup: |
      cd $source
      source_version=$(dpkg-parsechangelog -SVersion)
      echo "source_version=$source_version" >>~/.buildenv
      git deborig || origtargz
      sudo sbuild-createchroot --command-prefix=eatmydata --include=eatmydata unstable /srv/chroot/unstable-amd64-sbuild
      sudo sbuild-adduser $USER
  - build: |
      cd $source
      dgit --quilt=$quilt sbuild -d unstable --no-run-lintian
  - lintian: |
      lintian ${source}_${source_version}_multi.changes
  - piuparts: |
      sudo piuparts --no-eatmydata --schroot unstable-amd64-sbuild ${source}_${source_version}_multi.changes
  - autopkgtest: |
      autopkgtest ${source}_${source_version}_multi.changes -- schroot unstable-amd64-sbuild

And here’s my script.

03 April, 2020 11:02PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

More Switch games

Sonic Mania

Sonic Mania is a really lovely homage to the classic 90s Sonic the Hedgehog platform games. Featuring more or less the classic gameplay, and expanded versions of the original levels, with lots of secrets, surprises and easter eggs for fans of the original. On my recommendation a friend of mine bought it for her daughter's birthday recently but her daughter will now have to prise her mum off it! Currently on sale at 30% off (£11.19). The one complaint I have about it is the lack of females in the roster of 5 playable characters.

Butcher is a very violent side-scrolling shooter/platformer with a Doom-esque aesthetic, currently on sale at 70% off (just £2.69, the price of a coffee). I've played it for about 10 minutes during coffee breaks and it's fun, hard, and pretty intense. The soundtrack is great, and available to buy separately, but only if you own or buy the original game from the same store, which is a strange restriction. It's also on Spotify.

03 April, 2020 03:44PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSimdJson 0.0.4: Even Faster Upstream!

A new (upstream) simdjson release was announced by Daniel Lemire earlier this week, and my Twitter mentions have been running red-hot ever since, as he was kind enough to tag me. Do look at that blog post; there is some impressive work in there. We wrapped the (still very simple) RcppSimdJson around it last night and shipped it this morning.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. For illustration, I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk). The best-case performance is ‘faster than CPU speed’ as the use of parallel SIMD instructions and careful branch avoidance can lead to less than one CPU cycle used per byte parsed.

This release brings upstream 0.3 (and 0.3.1) plus a minor tweak (also shipped back upstream). Our full NEWS entry follows.

Changes in version 0.0.4 (2020-04-03)

  • Upgraded to new upstream releases 0.3 and 0.3.1 (Dirk in #9 closing #8)

  • Updated example validateJSON to API changes.

But because Daniel is such a fantastic upstream developer to collaborate with, he even filed a feature request (‘maybe you can consider upgrading’) as issue #8 at our repo containing the fully detailed list of changes. As it is so impressive I will simply quote the upper half of just the major changes:

Highlights

  • Multi-Document Parsing: Read a bundle of JSON documents (ndjson) 2-4x faster than doing it individually. API docs / Design Details
  • Simplified API: The API has been completely revamped for ease of use, including a new JSON navigation API and fluent support for error code and exception styles of error handling with a single API. Docs
  • Exact Float Parsing: Now simdjson parses floats flawlessly without any performance loss (https://github.com/simdjson/simdjson/pull/558). Blog Post
  • Even Faster: The fastest parser got faster! With a shiny new UTF-8 validator and meticulously refactored SIMD core, simdjson 0.3 is 15% faster than before, running at 2.5 GB/s (where 0.2 ran at 2.2 GB/s).

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

03 April, 2020 03:15PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Opinionated IkiWiki

For various personal projects and things, past and present (including my personal site) I use IkiWiki, which (by modern standards) is a bit of a pain to set up and maintain. For that reason I find it hard to recommend to people. It would be nice to fire up a snapshot of an existing IkiWiki instance to test what the outcome of some changes might be. That's cumbersome enough at the moment that I haven't bothered to do it more than once. Separately, some months ago I did a routine upgrade of Debian for the web server running this site, and my IkiWiki installation broke for the first time in ten years. I've never had issues like this before.

For all of these reasons I've just dusted off an old experiment of mine, now renamed Opinionated IkiWiki. It's IkiWiki in a container, configured to be usable out-of-the-box, with some opinionated configuration decisions made for you. The intention is that you should be able to fire up this container and immediately have a useful IkiWiki instance to work from. It should hopefully be easier to clone an existing wiki—content, configuration and all—for experimentation.

You can check out the source at GitHub, and grab container images from quay.io. Or fire one up immediately at http://127.0.0.1:8080 with something like

podman run --rm -ti -p 8080:8080 \
quay.io/jdowland/opinionated-ikiwiki:latest

This was a good excuse to learn about multi-stage container builds and explore quay.io.

Feedback gratefully received: As GitHub issues, comments here, or mail.

03 April, 2020 02:10PM

Norbert Preining

KDE/Plasma updates for Debian sid/testing

I have written before about getting updated packages for KDE/Plasma on Debian. In the meantime I have moved all package building to the openSUSE Build Service, and thus I am able to provide builds for Debian/testing, for both the i386 and amd64 architectures.

For those in a hurry: new binary packages that can be used on both Debian/testing and Debian/sid can be obtained for both i386 and amd64 archs here:

Debian/testing:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Testing  ./

Debian/unstable:

deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Unstable  ./

To make these repositories work out of the box, you need to import my OBS gpg key: obs-npreining.asc, best to download it and put the file into /etc/apt/trusted.gpg.d/obs-npreining.asc.
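Putting these steps together, enabling e.g. the unstable repository might look like the following sketch. It assumes the key file has already been downloaded from the link above; the name of the sources.list.d file is arbitrary.

```shell
# Sketch only: install the already-downloaded repository key where apt
# will trust it, add the Debian/unstable repository line from above,
# and refresh the package lists (run as root).
install -m 0644 obs-npreining.asc /etc/apt/trusted.gpg.d/obs-npreining.asc
echo 'deb http://download.opensuse.org/repositories/home:/npreining:/debian-plasma/Debian_Unstable ./' \
    > /etc/apt/sources.list.d/npreining-plasma.list
apt update
```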

The sources for the above binaries are available at the OBS site for the debian-plasma sub-project, but I will also try to keep them apt-get-able on my server as before:

deb-src https://www.preining.info/debian unstable kde

I have chosen the openSUSE Build Service because of how easy it makes pushing new packages, and because of its automatic resolution of package dependencies within the same repository. No need to compile the packages myself, nor to search for the correct build order. I have also added a few new packages and updates (dolphin, umbrello, kwalletmanager, kompare,…); at the moment we are at 131 packages that got updated. If you have requests for updates, drop me an email!

Enjoy

Norbert

03 April, 2020 12:07AM by Norbert Preining

April 02, 2020

Dirk Eddelbuettel

RQuantLib 0.4.12: Small QuantLib 1.18 update

A new release 0.4.12 of RQuantLib arrived on CRAN today, and was uploaded to Debian as well.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does relatively little. When QuantLib 1.18 came out, I immediately did my usual bit of packaging it for Debian, as well as creating binaries via my Ubuntu PPA so that I could test the package against it. A few calls from RQuantLib are now hitting interface functions marked as ‘deprecated’, leading to compiler nags. So I fixed that in PR #146. And today CRAN sent me email asking me to please fix this in the released version, so I rolled this up as 0.4.12. No other changes.

Changes in RQuantLib version 0.4.12 (2020-04-01)

  • Changes in RQuantLib code:

    • Calls deprecated-in-QuantLib 1.18 were updated (Dirk in #146).

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

02 April, 2020 09:57PM

Sven Hoexter

New TLDs and Automatic link detection was a bad idea

Update: This seems to be a Firefox-specific bug in the Slack web application; it works in Chrome and in the Slack Electron application as it should. Tested with Firefox ESR on Debian/buster and Firefox 74 on OS X.

Ah, I like it that we now have so many TLDs, but matching on them seems to go bad more often now. The latest occasion is Slack (which I think is a pile of shit written by morons, but that is a different story), which somehow does not properly match on .co domains, leading to this auto linking:

nsswitch.conf link

Now I'm not sure if someone encountered the same issue, or if people just registered random domains because they could. I found registrations for

  • resolv.co
  • pam.co
  • sysctl.co
  • so.co (ld.so.co would've been really cute)

I've a few more .conf files in /etc which could be interesting in an IT environment, but for the sake of playing with it I registered nsswitch.co at godaddy. I do not want to endorse them in any way, but for the first year it's only 13.08EUR right now, which is okay to pay for a stupid demo. So if you feel like it, you can probably register something stupid for yourself to play around with. I do not intend to renew this domain next year, so be aware of what happens then with the next owner.

02 April, 2020 03:12PM

Ulrike Uhlig

Breaking the chain reaction of reactions to reactions

Sometimes, in our day-to-day-interactions, communication becomes disruptive, resembling a chain of reactions to reactions to reactions. Sometimes we lose the capacity to express our ideas and feelings. Sometimes communication just gets stuck, maybe conflict breaks out. When we see these same patterns over and over again, this might be due to the ever same roles that we adopt and play. Learnt in childhood, these roles are deeply ingrained in our adult selves, and acted out as unconscious scripts. Until we notice and work on them.

This is a post inspired by contents from my mediation training.

In the 1960s, Stephen Karpman devised a model of human communication that maps the destructive interactions which occur between people. This map is known as the drama triangle.

Karpman defined three roles that interact with each other. We can play one role at work, and a different one at home, and another one with our children. Or we can switch from one role to the other in just one conversation. The three roles are:

  • The Persecutor. I'm right. It's all your fault. The Persecutor acts out criticism, accusation, and condemnation. Their behavior is controlling, blaming, shaming, oppressive, hurtful, angry, authoritarian, superior. They know everything better, they laugh about others, bully, shame, or belittle them. The Persecutor discounts others' value, looking down on them. Persecutor's thought: I'm okay, you're not okay.
  • The Victim. I'm blameless. Poor me. The Victim feels not accepted by others, oppressed, helpless, hopeless, powerless, ashamed, inferior. The Victim thinks they are unable or not good enough to solve problems on their own. The Victim discounts themselves. Victim's thought: I'm not okay, you're okay.
  • The Rescuer. I'm good. Let me help you! The Rescuer is a person who has unsolicited and unlimited advice concerning the Victim's problems. They think for the Victim, and comfort them, generally without having been asked to do so. The Rescuer acts seemingly to help the Victim but rescuing mostly helps them to feel better themselves, as it allows them to ignore their own anxieties, worries, or shortcomings. The Rescuer needs a Victim to rescue, effectively keeping the Victim powerless. The Rescuer discounts others' abilities to think and act for themselves, looking down on them. Rescuer's thought: I'm okay, you're not okay.

Does this sound familiar?

"Involvement in an unhealthy drama triangle is not something another person is doing to you. It's something you are doing with another person or persons." Well, to be more precise, it's something that we are all doing to each other: "Drama triangles form when participants who are predispositioned to adopt the roles of a drama triangle come together over an issue." (quoted from: Escaping conflict and the Karpman Drama Triangle.)

People act out these roles to meet personal (often unconscious) needs. But each of these roles is toxic in that it sees others as problems to react to. In not being able to see that we take on these roles, we keep the triangle going, like in a dispute in which one word provokes another until someone leaves, slamming the door. This is drama. When we are stuck in the drama triangle, no one wins because all three roles "cause pain", "perpetuate shame [and] guilt", and "keep people caught in dysfunctional behavior" (quoted from Lynne Namka: The Drama Triangle, Three Roles of Victim-hood).

How to get out of the drama triangle

Awareness. To get out of the triangle, it is foremost suggested to be aware of its existence. I agree, it helps. I see it everywhere now.

Identifying one's role and starting to act differently. While we switch roles, we generally take on a preferred role that we act out most of the time, and that was learnt in childhood. (I found a test to identify one's common primary role — in German.)

But how do we act differently? We need to take another look at that uncanny triangle.

From the drama triangle to the winner triangle

I found it insightful to ask what benefit each role could potentially bring into the interaction.
In 1990, Acey Choy created the Winner Triangle as an attempt to transform social interactions away from drama. Her Winner Triangle shifts our perception of the roles: the Victim becomes the Vulnerable, the Rescuer becomes the Caring, and the Persecutor becomes the Assertive.

Persecutor            Rescuer       Assertive              Caring
I'm right.            I'm good.     I have needs.          I'm listening.
    ----------------------              ----------------------
    \                    /              \                    /
     \                  /                \                  /
      \                /                  \                /
       \              /                    \              /
        \            /                      \            /
         \          /                        \          /
          \        /                          \        /
           \      /                            \      /
            \    /                              \    /
             \  /                                \  /
              \/                                  \/
            Victim                            Vulnerable
            I'm blameless.                    I'm struggling.

Karpman Dreaded Drama Triangle            Choy's Winner Triangle

The Assertive "I have needs." has a calling, aims at change, initiates, and gives feedback. Skills to learn: The Assertive needs to learn to identify their needs, communicate them, and negotiate with others as equals, without shaming, punishing, or belittling them. The Assertive needs to learn to give constructive feedback, without dismissing others. (In the workplace, it could be helpful to have a space for this.) The Assertive could benefit from learning to use I-Statements.

The Caring "I'm listening." shows good will and sensitivity, cares, is empathic and supportive. Skills to learn: The Caring needs to learn to respect the boundaries of others: trusting their abilities to think, problem solve and talk for themselves. Therefore, the Caring could benefit from improving their active listening skills. Furthermore the Caring needs to learn to identify and respect their own boundaries and not to do things only because it makes them feel better about themselves.

The Vulnerable "I'm struggling." has the skill of seeing and naming problems. Skills to learn: The Vulnerable needs to learn to acknowledge their feelings and needs, practice self-awareness, and self-compassion. They need to untie their self-esteem from the validation of other people. They need to learn to take care of themselves, and to strengthen their problem solving and decision making skills.

What has this got to do with autonomy and power structures?

Each of these interactions is embedded in larger society, and, as said above, we learn these roles from childhood. Therefore, we perpetually reproduce power structures, and learnt behavior. I doubt that fixing this on an individual level is sufficient to transform our interactions outside of small groups, families or work places. Although that would be a good start.

We can see that the triangle holds together because the Victim, seemingly devoid of a way to handle their own needs, transfers care of their needs to the Rescuer, thereby giving up on their autonomy. The Rescuer is provided by the Victim with a sense of autonomy, knowledge, and power, that only works while denying the Victim their autonomy. At the same time, the Persecutor denies everyone else's needs and autonomy, and feels powerful by dismissing others. I've recently mentioned the importance of autonomy in order to avoid burnout, and as a means to control one's own life. If the Rescuer can acknowledge being in the triangle, and give the Victim autonomy, by supporting them with compassion, empathy, and guidance, and at the same time respecting their own boundaries, we could find even more ways to escape the drama triangle.

Notes

My description of the roles was heavily inspired by the article Escaping Conflict and the Karpman Drama Triangle, which has a lot more detail on how to escape the triangle, and on how to recognize when we're moving into one of the roles. While the article is aimed at families living with a person on the Borderline Personality Disorder spectrum, the content applies to any dysfunctional interaction.

02 April, 2020 07:00AM by ulrike

Mike Gabriel

Q: RoamingProfiles under GNU/Linux? What's your Best Practice?

This post is an open question to the wide range of GNU/Linux site admins out there. Possibly some of you have the joy of maintaining GNU/Linux also on user endpoint devices (i.e. user workstations, user notebooks, etc.), not only on corporate servers.

TL;DR: In the context of a customer project, I am researching ways of mimicking (or inventing anew) a feature well known (and sometimes also well hated) from the MS Windows world: Roaming User Profiles. If anyone has any input on that, please contact me (OFTC/Freenode IRC, Telegram, email). I am curious what your solution may be.

The Use Case Scenario

In my use case, all user machines shall be mobile (notebooks, convertibles, etc.). The machines may be on-site most of the time, but they need offline capabilities so that the users can transparently move off-site and continue their work. At the same time, a copy of the home directory (or the home directory itself) shall be stored on some backend fileservers (for central backups, as well as for providing the possibility for the user to log into another machine and be up-and-running +/- out-of-the-box).

The Vision

Initial Login

Ideally, I'd like to have a low level file system feature for this that handles it all. On corporate user logon (which must take place on-site and uses some LDAP database as backend), the user credentials get cached locally (and get re-mapped and re-cached with every on-site login later on), and the home directory gets mounted from a remote server at first.

Shortly after having logged in everything in the user's home gets sync'ed to a local cache in the background without the user noticing. At the end of the sync a GUI user notification would be nice, e.g. like "All user data has been cached locally, you are good to go and leave off-site now with this machine."

Moving Off-Site

A day later, the user may be travelling or such, the user logs into the machine again, the machine senses being offline or on some alien (not corporate) network, but the user can just continue their work, all in local cache.

Several days later, the same user with the same machine returns to the office, logs into the machine again, and immediately after login all cached data gets synced back to the user's server filespace.

Possible Conflict Policies

Now there might be cases where the user has been working locally for a while and the profile data has received slight changes. The user might have been able to log into other corporate servers from the alien network they are on, and with those logins some user profile files will probably have changed.

Regarding client-server sync policies, one could now enforce a client-always-wins policy that leads to changes being dropped server-side once the user's mobile workstation returns on-site. One could also set up a bi-directional sync policy for normal data files, but a client-always-wins policy for configuration files (.files and .folders). And so on.
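As a toy illustration of the client-always-wins case, using plain POSIX tools and hypothetical file names (a real implementation would more likely use rsync, unison, or a caching filesystem):

```shell
#!/bin/sh
# Toy demo of a client-always-wins sync policy: when the machine comes
# back on-site, whatever is in the local cache overwrites the server
# copy. The two directories are stand-ins created just for the demo.
set -e
cache=$(mktemp -d)    # stands in for the notebook's local cache
server=$(mktemp -d)   # stands in for the user's server filespace
printf 'changed on a corporate server\n' > "$server/.examplerc"
printf 'changed off-site in the cache\n' > "$cache/.examplerc"
# client always wins: the cached copy clobbers the server-side change
cp -a "$cache/." "$server/"
cat "$server/.examplerc"
```

A bi-directional policy would instead have to compare timestamps (or keep a journal of changes) per file, which is exactly where the conflict questions arise.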

Request for Feedback and Comments

I could go on further and further with making up edges and corner cases of all this. We had a little discussion on this some days ago on the #debian-devel IRC channel already. Thanks to all contributors to that discussion.

And again, if you have solved the above riddle on your site and are corporate-wise allowed to share the concept, I'd be happy about your feedback.

Please get in touch!

light+love
Mike (aka sunweaver on the Fediverse and in Debian)

02 April, 2020 06:36AM by sunweaver

April 01, 2020

Free Software Fellowship

Malaysia de-mystifies tone policing

When the leaders of free software organizations want to avoid answering questions about money and conflicts of interest, one of their most popular fudges is to have some sidekick come in and complain about the tone of the question. These are the tone police. Beware.

What, then, is the correct tone for women and volunteers to use when asking husbands and leaders about money?

The Malaysian Government has provided an insight: try to sound like the cartoon character Doraemon. Doraemon is a robotic cat without ears.

The Malaysians have gone a lot further, creating a complete Code of Conduct for women to observe during the Coronavirus lockdown:

  • Put on your make-up
  • Wear a skirt and high heels (see the picture in the advertisement below)
  • Avoid nagging your husband when he is comfortable on the sofa

What happens if you have a Code of Conduct issue? Well, most Codes of Conduct have a reporting procedure. In many free software organizations, it involves sending a report to the leader or the event organizer. If you look around the real world, you'll notice that in many cases the most serious Code of Conduct abuses are committed by people in positions of authority. Therefore, if free software organizations designate their leaders and close allies to handle CoC complaints, they make it impossible for the most serious complaints to be investigated.

The marital home provides an opportunity for us to understand this: if a Malaysian woman has a Code of Conduct problem, what is she going to do, put on her best Doraemon voice and ask permission to complain? Sadly, that is exactly what the brochure instructs.

In her infamous talk about enforcement at FOSDEM 2019, OSI president Molly de Blanc insists that it is necessary to follow through on community guidelines. She even gives a horrendous picture of a cat behind bars, how would Doraemon feel looking at that?

This is no laughing matter unfortunately. A recent survey found one in five women still believe husbands deserve to beat ‘disobedient’ wives as they enforce Codes of Conduct in the home.

As we read that, we couldn't help wondering if the rate of domestic homicides will increase in 2020 and if so, is the Code of Conduct to blame for that?

While the wording of this Code of Conduct varies significantly from those used in free software organizations, the principle is the same: trying to justify a situation where some people are more equal than others.

01 April, 2020 10:05PM

Ben Hutchings

Debian LTS work, March 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, and carried over 0.75 hours from February. I only worked 12.25 hours this month, so I will carry over 8.5 hours to April.

I issued DLA 2114-1 for the update to linux-4.9.

I continued preparing and testing the next update to Linux 3.16. This includes a number of filesystem fixes that require running the "xfstests" test suite.

I also replied to questions from LTS contributors and users, sent to me personally or on the public mailing list.

01 April, 2020 09:34PM

Joachim Breitner

30 years of Haskell

Vitaly Bragilevsky, in a mail to the GHC Steering Committee, reminded me that the first version of the Haskell programming language was released exactly 30 years ago. On April 1st. So that raises the question: Was Haskell just an April fool's joke that was never retracted?

The cover of the 1.0 Haskell report

My own first exposure to Haskell was in April 2005; the oldest piece of Haskell I could find on my machine is this part of a university assignment from April:

> pascal 1 = [1]
> pascal (n+1) = zipWith (+) (x ++ [0]) (0 : x) where x = pascal n

This means that I now have witnessed half of Haskell's existence. I have never regretted getting into Haskell, and every time I come back from having worked in other languages (which all have their merits too), I greatly enjoy the beauty and elegance of expressing my ideas in a lazy and strictly typed language with a concise syntax.

I am looking forward to witnessing (and, to a very small degree, shaping) the next 15 years of Haskell.

01 April, 2020 06:16PM by Joachim Breitner (mail@joachim-breitner.de)

Sylvain Beucler

Debian LTS and ELTS - March 2020

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In March, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 30h for LTS (out of 30 max; all done) and 20h for ELTS (out of 20 max; I did 0).

Most contributors claimed vulnerabilities by performing early CVE monitoring/triaging on their own, making me question the relevance of the Front-Desk role. It could be due to a transient combination of higher hours volume and lower open vulnerabilities.

Working as a collective of hourly paid freelancers makes it more likely to work in silos, resulting in little interaction when raising workflow topics on the mailing list. Maybe we're reaching a point where regular team meetings will be beneficial.

As previously mentioned, I structure my work keeping the global Debian security in mind. It can be stressful though, and I believe current communication practices may deter such initiatives.

ELTS - Wheezy

  • No work. ELTS has few sponsors right now and few vulnerabilities to fix, which is why I could not work on it this month. I gave back my hours at the end of the month.

LTS - Jessie

  • lua-cgi: global triage: CVE-2014-10399,CVE-2014-10400/lua-cgi not-affected, CVE-2014-2875/lua-cgi referenced in BTS
  • libpcap: global triage: request CVE-2018-16301 rejection as upstream failed to; got MITRE to reject (not "dispute") a CVE for the first time!
  • nfs-utils: suites harmonization: CVE-2019-3689: ping upstream again, locate upstream'd commit, reference it at BTS and MITRE; close MR which had been ignored and now redone following said referencing
  • slurm-llnl: re-add; create CVE-2019-12838 reproducer, test abhijith's pending upload; reference patches; witness regression in CVE-2019-19728, get denied access to upstream bug, triage as ignored (minor issue + regression); security upload DLA 2143-1
  • xerces-c: global triage progress: investigate ABI-(in)compatibility of hle's patch direction; initiate discussion at upstream and RedHat; mark postponed
  • nethack: jessie triage fix: mark end-of-life
  • tor: global triage fix: CVE-2020-10592,CVE-2020-10593: fix upstream BTS links, fix DSA reference
  • php7.3: embedded copies: removed from unstable (replaced with php7.4); checked whether libonig is still bundled (no, now properly unbundled at upstream level); jessie still not-affected
  • okular: CVE-2020-9359: reference PoC, security upload DLA 2159-1

Documentation/Scripts

  • data/dla-needed.txt: tidy/refresh pending packages status
  • LTS/Development: DLA regression numbering when a past DLA affects a different package
  • LTS/FAQ: document past LTS releases archive location following a user request; trickier than expected, 3 contributors required to find the answer ;)
  • Question aggressive package claims; little feedback
  • embedded-copies: libvncserver: reference various state of embedded copies in italc/ssvnc/tightvnc/veyon/vncsnapshot; builds on initial research from sunweaver
  • Attempt to progress on libvncserver embedded copies triaging; technical topic not answered, organizational topic ignored
  • phppgadmin: provide feedback on CVE-2019-10784
  • Answer general workflow question about vulnerability severity
  • Answer GPAC CVE information request from a PhD student at CEA, following my large security update

01 April, 2020 02:26PM

Joey Hess

DIN distractions

My offgrid house has an industrial automation panel.

A row of electrical devices, mounted on a metal rail. Many wires neatly extend from it above and below, disappearing into wire gutters.

I started building this in February, before covid-19 was impacting us here, when lots of mail orders were no big problem, and getting an unusual 3D-printed DIN rail bracket for an SSD was just a couple clicks.

I finished a month later, deep into social isolation and quarantine, scrounging around the house for scrap wire, scavenging screws from unused stuff and cutting them to size, and hoping I would not end up in a "need just one more part that I can't get" situation.

It got rather elaborate, and working on it was often a welcome distraction from the news when I couldn't concentrate on my usual work. I'm posting this now because people sometimes tell me they like hearing about my offgrid stuff, and perhaps you could use a distraction too.

The panel has my house's computer on it, as well as both AC and DC power distribution, breakers, and switching. Since the house is offgrid, the panel is designed to let every non-essential power drain be turned off, from my offgrid fridge to the 20 terabytes of offline storage to the inverter and satellite dish, the spring pump for my gravity flow water system, and even the power outlet by the kitchen sink.

Saving power is part of why I'm using old-school relays and stuff and not IoT devices; the other reason is of course that IoT devices are horrible dystopian e-waste. I'm taking the utopian Star Trek approach, where I can command "full power to the vacuum cleaner!"

Two circuit boards, connected by numerous ribbon cables, and clearly hand-soldered. The smaller board is suspended above the larger. An electrical schematic, of moderate complexity.

At the core of the panel, next to the cubietruck arm board, is a custom IO daughterboard. Designed and built by hand to fit into a DIN mount case, it uses every GPIO pin on the cubietruck's main GPIO header. Making this board took 40+ hours, and was about half the project. It got pretty tight in there.

This was my first foray into DIN rail mount, and it really is industrial lego -- a whole universe of parts that all fit together and are immensely flexible. Often priced more than seems reasonable for a little bit of plastic and metal, until you look at the spec sheets and the ratings. (Total cost for my panel was $400.) It's odd that it's not more used outside its niche -- I came of age in the Bay Area, surrounded by rack mount equipment, but no DIN mount equipment. Hacking the hardware in a rack is unusual, but DIN invites hacking.

Admittedly, this is a second system kind of project, replacing some unsightly shelves full of gear and wires everywhere with something kind of overdone. But should be worth it in the long run as new gear gets clipped into place and it evolves for changing needs.

Also, wire gutters, where have you been all my life?

A cramped utility room with an entire wall covered with electronic gear, including the DIN rail, which is surrounded by wire gutters Detail of a wire gutter with the cover removed. Numerous large and small wires run along it and exit here and there.

Finally, if you'd like to know what everything on the DIN rail is, from left to right: Ground block, 24v DC disconnect, fridge GFI, spare GFI, USB hub switch, computer switch, +24v block, -24v block, IO daughterboard, 1tb SSD, arm board, modem, 3 USB hubs, 5 relays, AC hot block, AC neutral block, DC-DC power converters, humidity sensor.

Full width of DIN rail.

01 April, 2020 02:12PM

Mike Gabriel

My Work on Debian LTS (March 2020)

In March 2020, I have worked on the Debian LTS project for 10.25 hours (of 10.25 hours planned).

LTS Work

  • Frontdesk: CVE Bug Triaging for Debian jessie LTS: libpam-krb5, symfony, edk2 (EOL), icu, twisted, yubikey-val, netkit-telnet(-ssl), libperlspeak-perl (new EOL), and glibc.
  • Upload to jessie-security: tinyproxy (DLA-2163-1 [1], 1 CVE, 1 severe bug [2]).
  • Revisit CVE-2015-9541 in jessie's qtbase-opensource-src and agree with Dmitry Shachnev from Debian's KDE/Qt Team about tagging this CVE '<ignored>' in Debian's security tracker. The proposed upstream patch uses an API not available in jessie's Qt5 version (the QStringView API), and the series of patches to be applied would be quite invasive.
  • Prepare upload of libpam-krb5 4.6-3+deb8u1 (1 CVE) (will be uploaded during the day).
  • Look closer into CVE-2019-17177 for FreeRDP v1.1 (and decide to ignore it, as patchwork would have to be applied all over the code).

Other security related work for Debian

  • Upload to stretch: libvncserver 0.9.11+dfsg-1.3~deb9u4 (1 CVE)
  • Upload to buster: libvncserver 0.9.11+dfsg-1.3+deb10u3 (1 CVE)
  • Upload to stretch: tinyproxy 1.8.4-3~deb9u2 (1 CVE, 1 severe bug [2])
  • Upload to buster: tinyproxy 1.10.0-2+deb10u1 (1 severe bug)
  • Study the code of x11vnc (regarding Debian bug #672435 [3], which currently has a temp-CVE), apply upstream's fix (which did not work) and ping upstream about possible other required patches in x11vnc and/or libVNC.

Credits

A very big thanks goes to Utkarsh Gupta, a colleague from the Debian LTS team, who sponsored all my uploads and who sent the DLA mails on my behalf, while I was (and still am) in self-induced GPG lockdown (I forgot to update my GPG public key in Debian's GPG keyring). Thanks, Utkarsh!

References

01 April, 2020 09:41AM by sunweaver

Russ Allbery

Review: A Grand and Bold Thing

Review: A Grand and Bold Thing, by Ann Finkbeiner

Publisher: Free Press
Copyright: August 2010
ISBN: 1-4391-9647-8
Format: Kindle
Pages: 200

With the (somewhat excessively long) subtitle of An Extraordinary New Map of the Universe Ushering In a New Era of Discovery, this is a history of the Sloan Digital Sky Survey. It's structured as a mostly chronological history of the project with background profiles on key project members, particularly James Gunn.

Those who follow my blog will know that I recently started a new job at Vera C. Rubin Observatory (formerly the Large Synoptic Survey Telescope). Our goal is to take a complete survey of the night sky several times a week for ten years. That project is the direct successor of the Sloan Digital Sky Survey, and its project team includes many people who formerly worked on Sloan. This book (and another one, Giant Telescopes) was recommended to me as a way to come up to speed on the history of this branch of astronomy.

Before reading this book, I hadn't understood how deeply the ready availability of the Sloan sky survey data had changed astronomy. Prior to the availability of that survey data, astronomers would develop theories and then try to book telescope time to make observations to test those theories. That telescope time was precious and in high demand, so was not readily available, and was vulnerable to poor weather conditions (like overcast skies) once the allocated time finally arrived.

The Sloan project changed all of that. Its output was a comprehensive sky survey available digitally whenever and wherever an astronomer needed it. One could develop a theory and then search the Sloan Digital Sky Survey for relevant data and, for at least some types of theories, test that theory against the data without needing precious telescope time or new observations. It was a transformational change in astronomy, made possible by the radical decision, early in the project, to release all of the data instead of keeping it private to a specific research project.

The shape of that change is one takeaway from this book. The other is how many problems the project ran into trying to achieve that goal. About a third of the way into this book, I started wondering if the project was cursed. So many things went wrong, from institutional politics through equipment failures to software bugs and manufacturing problems with the telescope mirror. That makes it all the more impressive how much impact the project eventually had. It's also remarkable just how many bad things can happen to a telescope mirror without making the telescope unusable.

Finkbeiner provides the most relevant astronomical background as she tells the story so that the unfamiliar reader can get an idea of what questions the Sloan survey originally set out to answer (particularly about quasars), but this is more of a project history than a popular astronomy book. There's enough astronomy here for context, but not enough to satisfy curiosity. If you're like me, expect to have your curiosity piqued, possibly resulting in buying popular surveys of current astronomy research. (At least one review is coming soon.)

Obviously this book is of special interest to me because of my new field of work, my background at a research university, and because it features some of my co-workers. I'm not sure how interesting it will be to someone without that background and personal connection. But if you've ever been adjacent to or curious about how large-scale science projects are done, this is a fascinating story. Both the failures and problems and the way they were eventually solved are different from how the more common stories of successful or failed companies are told. (It helps, at least for me, that the shared goal was to do science, rather than to make money for a corporation whose fortunes are loosely connected to those of the people doing the work.)

Recommended if this topic sounds at all interesting.

Rating: 7 out of 10

01 April, 2020 03:43AM

Paul Wise

FLOSS Activities March 2020

Changes

Issues

Review

Administration

  • Debian wiki: approve accounts

Communication

Sponsors

The dh-make-perl feature requests, file bug report, File::Libmagic changes, autoconf-archive change, libpst work and the purple-discord upload were sponsored by my employer. All other work was done on a volunteer basis.

01 April, 2020 02:38AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

After the snow cherry blossoms fell.

After the snow cherry blossoms fell. It's already April.

01 April, 2020 12:52AM by Junichi Uekawa

Jonathan Wiltshire

neuraldak

We are proud to announce that dak, the Debian Archive Kit, has been replaced by a neural network for processing package uploads and other archive maintenance. All FTP masters and assistants have been re-deployed to concentrate on managing neuraldak.

neuraldak is an advanced machine learning algorithm which has been taught about appropriate uploads, can write to maintainers about their bugs and can automatically make an evaluation about suitable licenses and code quality. Any uploads which do not meet its standards will be rejected with prejudice.

We anticipate that neuraldak will also monitor social media for discontent about package uploads, and train itself to do better with its decisions.

In terms of licensing, neuraldak has been seeded only with the GPL license. This we consider the gold standard of licenses, and its clauses will be the basis for neuraldak evaluating other licenses as it is exposed to them.

Over the course of the next few weeks, neuraldak will also learn to manage the testing suite. Once it is established, we expect to be able to make a full stable release of Debian approximately every six weeks. We have therefore also re-purposed Janelle Shane’s cat name algorithm to invent suitable release names, since the list of Toy Story names is likely to be exhausted before 2021.

neuraldak is an independent software project. Rumours of it being derived from Skynet are entirely unfounded.

The post neuraldak appeared first on jwiltshire.org.uk.

01 April, 2020 12:50AM by Jon

Debian Community News

Donald Trump resigns, releases Non-Platform for 2020 election

Donald Trump's attack on a reporter was remarkably similar to Debian Project Leader (DPL) Sam Hartman's recent attacks on volunteers:

Trump has announced he will not contest the 2020 election.

Inspired by Sam Hartman's Non-Platform email, President Trump released the following statement:

TL;DR: Overall, being US President has been incredibly rewarding (for my companies and family). I have enjoyed trampling on you all. I hope to be US president again some year, but 2020 is the wrong year for me and for the Republican party. So I will not nominate myself this year, but hope to do so some future year when we control both houses and we don't have to negotiate about anything with the Democrats. A monoculture is my natural environment and safe space.

Consensus And Summaries: Space Force and Tax Cuts
===================================

In my platform I wrote that I wanted to start the US Space Force and I did that. It just turns out that they weren't much use for fighting real world problems like the Coronavirus pandemic.

I promised you tax cuts. I made some progress on this. My companies are testing the tax cuts and we'll let you know when it's a good idea for the rest of you.

Facilitated GRs
===============

So the House had that vote on Impeachment and it didn't go my way but when it got to the Senate, the voting went to plan.

Delegates and Project Needs
===========================

One area where I was completely naive is in my understanding of how to manage delegations.

So I just turned on Fox News, picked out people who looked like experts and tried to appoint them to the Federal Reserve Board.

Technical Committee
===================

Bluntly, the Supreme Court is an inadequate tool to deal with maintainers who are not cooperating with the larger community.

So I picked some of my ideological buddies to fix it.

Conflict Deescalation
=====================

I've been talking about trying to find better ways to resolve conflicts in the world since they put me on TV.

Being US President gives me the toys to meddle in conflict anywhere. So I had my summit with North Korea and Iran shot down my drone. As we say in The Apprentice, You're Fired!: I killed their General with a drone strike in Iraq. I moved the US Embassy in Israel to Jerusalem and put at risk the lives of uniformed Americans everywhere.

Campaign of Harassment
======================

Throughout my entire term as President, America has been subjected to a campaign of harassment from an inflatable baby Trump.

Conclusions
===========

For me the deciding factor was the lack of other interest from people interested in Conflict Deescalation.

Why couldn't reasonable Americans see the benefits of my policies on Israel?

So I failed to execute my promise to roll back Obamacare but that's OK because now we can use Obama as a scapegoat and blame him for the cost of treating Coronavirus.

Happy April Fool's Day! We're sad to report that we didn't make up anything in the above email forgery. The shocking news is that all of it is fact.

01 April, 2020 12:05AM

March 31, 2020

hackergotchi for Joachim Breitner

Joachim Breitner

Animations in Kaleidogen

A while ago I wrote a little game (or toy) called Kaleidogen. It is a relatively contemplative game where, starting from just unicolored disks, you combine abstract circular patterns to breed more interesting patterns. See my FARM 2019 talk for more details, or check out the source repository.

It has mostly been quiet with this game, but I finally got around to add a little bit of animation: When you have bred one of these patterns, you can animate its genesis, from nothing to a complex patterns, as you can see in this screencast:

Kaleidogen, animated

By the way: I am looking for collaborators who help me to get this into the Play Store properly, so let me know if you want to play around with Haskell, Android, Nix, OpenGL and cross-compilation.

31 March, 2020 09:29PM by Joachim Breitner (mail@joachim-breitner.de)

Antoine Beaupré

Remote presence tools for social distancing

As a technologist, I've been wondering how I can help people with the rapidly spreading coronavirus pandemic. With the world entering the "exponential stage" (e.g. Canada, the USA and basically all of Europe), everyone should take precautions and practice Social Distancing (and not dumbfuckery). But this doesn't mean we should dig ourselves into a hole in our basement: we can still talk to each other on the internet, and there are great, free tools available to do this. As part of my work as a sysadmin, I've had to answer questions about this a few times and I figured it was useful to share this more publicly.

Just say hi using whatever

First off, feel free to use the normal tools you normally use: Signal, Facetime, Skype, Zoom, and Discord can be fine to connect with your folks, and since it doesn't take much to make someone's day please do use those tools to call your close ones and say "hi". People, especially your older folks, will feel alone and maybe scared in those crazy times. Every little bit you can do will help, even if it's just a normal phone call, an impromptu balcony fanfare, a remote workout class, or just a sing-along from your balcony, anything goes.

But if those tools don't work well for some reason, or you want to try something new, or someone doesn't have an iPad, or it's too dang cold to go on your balcony, you should know there are other alternatives that you can use.

Jitsi

We've been suggesting our folks use a tool called "Jitsi". Jitsi is a free software platform to host audio/video conferences. It has a web app which means anyone with a web browser can join a session. It can also do "screen sharing" if you need to work together on a project.

There are many "instances", but here's a subset I know about:

You can connect to those with your web browser directly. If your web browser doesn't work, try switching to another (e.g. if Firefox doesn't work, try Chrome and vice-versa). There are also desktop and mobile apps (F-Droid, Google Play, Apple Store) that will work better than just using your browser.

Jitsi should scale for small meetings up to a dozen people.

Mumble

... but beyond that, you might have trouble doing a full video-conference with a lot of people anyways. If you need to have a large conference with a lot of people, or if you have bandwidth and reliability problems with Jitsi, you can also try Mumble.

Mumble is an audio-only conferencing service, similar to Discord or Teamspeak, but made with free software. It requires users to install an app but there are clients for every platform out there (F-Droid, Google Play, Apple Store). Mumble is harder to setup, but is much more efficient in terms of bandwidth and latency. In other words, it will just scale and sound better.

Mumble ships with a list of known servers, but you can also connect to those trusted ones:

  • mumble.mayfirst.org - Mayfirst (see also their instructions on how to use it), hosted in New York City
  • mumble.riseup.net - Riseup, an autonomous collective, hosted in Seattle; not a public service (ask me if you need their password)
  • talk.systemli.org - systemli, a left-wing network and technics-collective, hosted in Berlin

Live streaming

If for some reason those tools still don't scale, you might have a bigger problem on your hands. If your audience is over 100 people, you will not be able to all join in the same conference together. And besides, maybe you just want to broadcast some news and do not need audio or video feedback from the audience. In this case, you need "live streaming".

Here, proprietary services are Twitch, Livestream.com and Youtube. But the community also provides alternatives to those. This is more complicated to setup, but just to get you started, I'll link to:

For either of those tools, you need an app on your desktop. The Mayfirst instructions use OBS Studio for this, but it might be possible to hotwire VLC to stream video from your computer as well.

Text chat

When all else fails, text should go through. Slack, Twitter and Facebook are the best known alternatives here, obviously. I would warn against spending too much time on those, as they can foment harmful rumors and can spread bullshit like a virus on any given day. The situation does not make that any better. But it can be a good way to keep in touch with your loved ones.

But if you want to have a large meeting with a crazy number of people, text can actually accomplish wonders. Internet Relay Chat, also known as "IRC" (and which oldies might have experienced for a bit as mIRC), is, incredibly, still alive at the venerable age of 30 years old. It is mainly used by free software projects, but can be used by anyone. Here are some networks you can try:

Those are all web interfaces to the IRC networks, but there is also a plenitude of IRC apps you can install on your desktop if you want the full experience.

Whiteboards and screensharing

I decided to add this section later on because it's a frequently mentioned "oh but you forgot..." comment I get from this post.

  • Big Blue Button - seems to check all the boxes: free software, VoIP integration, whiteboarding and screen sharing, works from a web browser
  • CodiMD: collaborative text editor with UML and diagrams support
  • Excalidraw: (collaborative) whiteboard tool that lets you easily sketch diagrams that have a hand-drawn feel

I'll also mention that collaborative editors in general, like Etherpad, are just great for taking minutes, because you don't have a single person stuck with the load of writing down what people are saying, and therefore too busy to talk. Google Docs and Nextcloud have similar functionality, of course.

Update, public Big Blue Button instances:

BBB requires one user to register to start the conference, but once that's done, anyone with the secret URL can join.

Common recommendations

Regardless of the tools you pick, audio and video streaming is a technical challenge. A lot of things happen under the hood when you pick up your phone and dial a number, and, when using a desktop, it can sometimes be difficult to get everything "just right".

Some advice:

  1. get a good microphone and headset: good audio really makes a difference in how pleasing the experience will be, both for you and your peers. good hardware will reduce echo, feedback and other audio problems. (see also my audio docs)

  2. check your audio/video setup before joining the meeting, ideally with another participant on the same platform you will use

  3. find a quiet place to meet: even a good microphone will pick up noises from the environment, so if you reduce this up front, everything will sound better. if you do live streaming and want high quality recording, consider setting up a smaller room to do recording. (tip: i heard of at least one journalist hiding in a closet full of clothes to make recordings, as it dampens the sound!)

  4. mute your microphone when you are not speaking (spacebar in Jitsi, follow the "audio wizard" in Mumble)

If you have questions or need help, feel free to ask! Comment on this blog or just drop me an email (see contact), I'd be happy to answer your questions.

Other ideas

Inevitably, when I write a post like this, someone writes something like "I can't believe you did not mention APL!" Here's a list of tools I have not mentioned here, deliberately or because I forgot:

  • Nextcloud Talk - needs access to a special server, but can be used for small meetings (less than 5, or so i heard)
  • Jabber/XMPP - yes, I know, XMPP can do everything and it's magic. but I've given up a while back, and I don't think setting up audio conferences with multiple people is easy enough to make the cut here
  • Signal - signal is great. i use it every day. it's the primary way I do long distance, international voice calls for free, and the only way I do video-conferencing with family and friends at all. but it's one to one only, and the group (text) chat kind of sucks

Also, all the tools I recommend above are made of free software, which means they can be self-hosted. If things go bad and all those services stop existing, it should be possible for you to run your own instance.

Let me know if I forgot anything, but in a friendly way. And stay safe out there.

Update: a similar article from the good folks at systemli also recommends Mastodon, Ticker, Wikis and Etherpad.

Update 2: same, at SFC, which also mentions Firefox Send and Etherpad (and now I wish I did).

31 March, 2020 06:00PM

hackergotchi for Pau Garcia i Quiles

Pau Garcia i Quiles

Uyuni 2020.03 released — with enhanced Debian support!

Uyuni is a configuration and infrastructure management tool that saves you time and headaches when you have to manage and update tens, hundreds or even thousands of machines.

Uyuni is a fork of Spacewalk that leverages Salt, Cobbler and containers to modernize it. Uyuni is the upstream for SUSE Manager (the main difference is support: with SUSE Manager you get it from SUSE; with Uyuni you get it from the community) and our development and feature discussion is done in the open.

Last week we released Uyuni 2020.03, with much improved Debian support, coming from the community: we have got client tools (both the Salt stack and the traditional stack) for Debian 9 and 10, and bootstrapping support!

In addition to that, Uyuni 2020.03 brings many other new features:

  • Package pre-downloading for Debian and Ubuntu
  • Automatic generation of bootstrap repositories
  • Provisioning API for Salt clients (previously only for traditional clients), which allows you to provision and re-provision Salt minions
  • Recurring actions scheduling, e. g. schedule highstate to happen every so often, repeatedly
  • Content Lifecycle Management filters for RHEL 8 appstreams so that you can convert modular repositories to plain repositories by applying a combination of filters. It will also work on RHEL derivatives, of course: CentOS, Oracle Linux and SLES Expanded Support.
  • Yomi: Yet One More Installer is a Salt-based installer for SUSE and openSUSE operating systems. More architectures (e. g. ARM) and Linux distributions will follow soon!
  • Hub XML-RPC API: the first component of our multi-Server architecture, to support hundreds of thousands of clients
  • SUSE CaaS Platform 4 (SUSE’s Kubernetes distribution) cluster awareness. Nodes in a SUSE CaaSP 4 cluster will by default not install updates, patches, run commands, etc. from the Uyuni Server on the normal schedule, but will instead default to doing that using skuba, CaaSP’s tool in charge of updates and reboots. Further enhancements to this feature are coming soon.

While this version of Uyuni provides a much better experience for Debian sysadmins, we still have a lot of room for improvement:

Do you want to help us with development, or just with feedback? Join our community on IRC, Gitter or the mailing lists. And check our user documentation, developer documentation and presentations.

We are also participating in Google Summer of Code 2020. Hurry up and submit a proposal to provide Uyuni for Debian, and/or enhance Debian support!

31 March, 2020 05:12PM by pgquiles

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in March 2020

Here is my monthly update covering what I have been doing in the free software world during March 2020 (previous month):

  • As part of being on the judging panel of the OpenUK Awards, I am pleased to announce that, after some discussion, nominations are now open until 15th June in five different categories.

  • Merged a number of contributions to my django-cache-toolbox "non-magical" caching library for Django web applications, including caching negative relation lookups locally (#14) and to include the README file in the package long description (#17).

  • Made some small changes to my tickle-me-email library, which implements Getting Things Done (GTD)-like behaviours in IMAP inboxes, to support optionally limiting the number of messages in the send-later functionality. [...]

In addition, I did even more hacking on the Lintian static analysis tool for Debian packages, including:

  • New features:

    • Check for py3versions -i in autopkgtests and debian/rules files. (#954763)
    • Warn when py3versions -s is used without a python3-all dependency. (#954763)
    • Expand possible-missing-colon-in-closes to also check for semicolons used in place of colons. (#954484)
    • Check for new packages that use a date-based versioning scheme (eg. YYYYMMDD-1) without a 0~ suffix. (#953036)
  • Improvements:

  • Misc:

    • Correct reference to build dependencies in the long description of the debian-rules-uses-installed-python-versions tag. [...]
    • Make some cosmetic improvements to the CONTRIBUTING.md file. [...]
    • Correct reference to a bug in a previous debian/changelog entry. [...]
    • Avoid indenting approximately 150 lines by returning early from a subroutine and other code improvements. [...]


Reproducible builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:

  • Filed an issue against IMAP Spam Begone — a script by Louis-Philippe Véronneau (pollo) that makes it easy to process an email inbox using SpamAssassin — to report that a (duplicate) documentation entry includes a nondeterministic value taken from the XDG cache directory (#151), and filed an upstream pull request against the pmemkv key-value data store to make their documentation build reproducibly (#615).

  • Further refined my merge request against the debian-installer component to allow all arguments from sources.list files (such as [check-valid-until=no]) in order that we can test the reproducibility of the installer images on the Reproducible Builds own testing infrastructure. (#13)

  • Submitted the following two patches to fix reproducibility-related toolchain issues within Debian:

    • node-browserify-lite: Please make the output reproducible. (#954409)

    • pdb2pqr: Please make the aconf.py file reproducible. (#955287)

  • Submitted eight patches to fix specific reproducibility issues in beep (caused by a variation between /bin/dash and /bin/bash), cloudkitty (due to a default value being taken from the number of CPUs on the build machine), font-manager (embedding the value of @abs_top_srcdir@ into the resulting binary), gucharmap (due to embedding the absolute build path when generating a comment in a header file), infernal (timestamps are injected into a Python example, which should not be shipped anyway), ndisc6 (embeds the value of CFLAGS into the binary without sanitising any absolute build paths), node-nodedbi (embedded timestamp in binary) & pmemkv (does not respect SOURCE_DATE_EPOCH when populating a YEAR variable).

  • Kept isdebianreproducibleyet.com up to date. [...]

  • Continued collaborative work on an academic paper to be published within the next few months.

  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.

  • Drafted, published and publicised our monthly report.

  • Improved our website, including correcting the syntax of some CSS class formatting [...], improving some "filed against" copy [...] and correcting a reference to the calendar.monthrange Python method. [...]

In our tooling, I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues, including preparing and uploading version 138 to Debian:

  • Improvements:

    • Don't let errors with R script deserialisation cause the entire operation to fail, for example if an external library cannot be loaded. (#91)
    • Experiment with memoising output from expensive external commands, eg. readelf. (#93)
    • Use dumppdf from the python3-pdfminer package if we do not see any other differences from pdftotext, etc. (#92)
    • Prevent a traceback when comparing two R .rdx files directly as the get_member method will return a file even if the file is missing. [...]
  • Reporting:

    • Display the supported file formats in the package long description. (#90)
    • Print a potentially-helpful message if the PyPDF2 module is not installed. [...]
    • Remove any duplicate comparator descriptions when formatting in the --help output or in the package long description. [...]
    • Weaken "Install the X package to get a better output." message to "... may produce a better output." as the former is not guaranteed. [...]
  • Misc:

    • Ensure we only parse the recommended packages from --list-debian-substvars when we want them for debian/tests/control generation. [...]
    • Add upstream metadata file [...] and add a Lintian override for upstream-metadata-in-native-source as "we" are upstream. [...]
    • Inline the RequiredToolNotFound.get_package method's functionality as it is only used once. [...]
    • Drop the deprecated "py36 = [..]" argument in the pyproject.toml file. [...]

The Reproducible Builds project also operates a fully-featured and comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, I reworked the web-based package rescheduling tool to:

  • Require an HTTP POST method in the web-based scheduler: not only should HTTP GET requests be idempotent, but this will also allow many future improvements in the user interface. [...][...][...]

  • Improve the authentication error message in the rescheduler to suggest that the developer's SSL certificate may have expired. [...]


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 8 hours on its sister Extended LTS project.

  • Investigated and triaged glibc (CVE-2020-1751), jackson-databind, libbsd (CVE-2019-20367), libvirt (CVE-2019-20485), netkit-telnet & netkit-telnet-ssl (CVE-2020-10188), pdfresurrect (CVE-2020-9549) & shiro (CVE-2020-1957), etc.

  • In the script that reserves a unique advisory number, don't warn about potential duplicate work when issuing a regression, in order to avoid this message being missed when it does apply. [...]

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, etc.

  • xtrlock versions 2.8+deb9u1 (#949112) and 2.8+deb10u1 (#949113) were accepted into the Debian jessie and buster distributions.

  • Issued DLA 2115-2 to correct a regression in a previous fix (a use-after-free vulnerability) in the ProFTPD FTP server.

  • Issued DLA 2132-1 to fix an issue where incorrect default permissions on a HTTP cookie store could have allowed local attackers to read private credentials in libzypp, the library underpinning package management tools such as YaST, zypper and the openSUSE/SLE implementation of PackageKit.

  • Issued DLA 2134-1 to patch an out-of-bounds write vulnerability in pdfresurrect, a tool for extracting or scrubbing versioning data from PDF documents.

  • Issued DLA 2136-1, addressing an out-of-bounds buffer read vulnerability in libvpx, a library implementing the VP8 & VP9 video codecs.

  • Issued DLA 2142-1. It was discovered that there was a buffer overflow vulnerability in slirp, a SLIP/PPP emulator for using a dial up shell account. This was caused by the incorrect usage of return values from snprintf(3).

  • Issued DLA 2145-1 and DLA 2145-2 for twisted to prevent a large number of HTTP request splitting vulnerabilities in Twisted, a Python event-based framework for building various types of internet applications.

  • Issued ELA-219-1 to address an out-of-bounds read vulnerability during string comparisons in libbsd, a library of functions commonly available on BSD systems but not on others such as GNU.
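
The snprintf(3) pitfall behind the slirp issue above is common enough to be worth a sketch. The following is purely illustrative (the function name safe_append is mine, not slirp's code): snprintf returns the number of bytes that would have been written had the buffer been large enough, so using that return value as an offset without clamping it can walk past the end of the buffer.

```c
#include <stdio.h>

/* Append a string at offset `off` in a fixed-size buffer and return
 * the new offset.  The key point: snprintf(3) returns the number of
 * bytes that WOULD have been written, which can exceed the remaining
 * space, so the return value must be clamped before it is reused. */
size_t safe_append(char *buf, size_t size, size_t off, const char *s)
{
    int n = snprintf(buf + off, size - off, "%s", s);
    if (n < 0)
        return off;           /* encoding error: keep the old offset */
    if ((size_t)n >= size - off)
        return size - 1;      /* output truncated: clamp to buffer end */
    return off + (size_t)n;   /* everything fit: advance normally */
}
```

Code that instead does `off += snprintf(...)` unconditionally is left holding a bogus offset after truncation, and the next write lands outside the buffer, which is exactly the overflow class such advisories describe.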

You can find out more about the Debian LTS project via the following video:


Debian Uploads

For the Debian Privacy Maintainers team I requested that the pyptlib package be removed from the archive (#953429) as well as uploading onionbalance (0.1.8-6) to fix test failures under Pytest 3.x (#953535) and a new upstream release of nautilus-wipe.

Finally, I sponsored an upload of bilibop (0.6.1) on behalf of Yann Amar.

31 March, 2020 03:05PM

hackergotchi for Norbert Preining

Norbert Preining

Fixing the Breeze Dark theme for gtk3 apps

It has now been about two weeks since I switched to KDE/Plasma on all my desktops, and to my big surprise, that went much more smoothly than I thought it would. There are only a few glitches with the gtk3 part of the Breeze Dark theme I am using, which needed fixing up.

Tab distinction

As I already wrote in a previous blog post, the active tab in all kinds of terminal emulators (in fact, in everything that uses the gtk3 notebook widget) is not distinguishable from the other tabs. It turns out that the fix for this is a bit convoluted, but still possible; see the linked post. Just for completeness, here is the CSS code I use in ~/.config/gtk-3.0/gtk.css:

notebook tab {
    /* background-color: #222; */
    padding: 0.4em;
    border: 0;
    border-color: #444;
    border-style: solid;
    border-width: 1px;
}
 
notebook tab:checked {
    /* background-color: #000; */
    background-image: none;
    border-color: #76C802;
}
 
notebook tab:checked label {
    color: #76C802;
    font-weight: 500;
}
 
notebook tab button {
    padding: 0;
    background-color: transparent;
    color: #ccc;
}
 
notebook tab button:hover {
  border: 0;
  background-image: none;
  border-color: #444;
  border-style: solid;
  border-width: 1px;
}

Scroll bars

Another disturbing property of the Breeze theme is the width-changing scroll bar. While not hovered over, it is rather small, but when the mouse moves over it, it expands its width. Now that might sound like a flashy cool idea, but in fact it is nothing but a PITA: when used with a terminal emulator, the line length changes whenever the mouse moves over the vertical scroll bar, so the layout (line breaking) suddenly changes for an instant, which is really, really disturbing. I can’t imagine why developers would ever come up with such a stupid idea. Anyway, the fix is again not that difficult: simply put the following into your ~/.config/gtk-3.0/gtk.css (adjusting the width to your liking) and all will be fine:

.scrollbar.vertical slider, scrollbar.vertical slider {
        min-width: 10px;
}

Not that bad, right?

Other than this I haven’t found any disturbing issue with using the Breeze theme with gtk3 (and gtk2) apps!

Hope that helps

31 March, 2020 10:36AM by Norbert Preining

Russ Allbery

pam-krb5 4.9

This is a security release fixing a one-byte buffer overflow when relaying prompts from the underlying Kerberos library. All users of my pam-krb5 module should upgrade as soon as possible. See the security advisory for more information.

There are also a couple of minor security improvements in this release: the module now rejects passwords as long as or longer than PAM_MAX_RESP_SIZE (normally 512 octets), since they can be used for a denial-of-service attack via the Kerberos string-to-key function, and it uses explicit_bzero where available to clear passwords before releasing memory.

Also in this release, use_pkinit is now supported with MIT Kerberos, the Kerberos prompter function returns more accurate error messages, I fixed an edge-case memory leak in pam_chauthtok, and the module/basic test will run properly with a system krb5.conf file that doesn't specify a realm.

You can get the latest release from the pam-krb5 distribution page. I've also uploaded the new version to Debian unstable and patched security releases with only the security fix to Debian stable and oldstable.

31 March, 2020 02:34AM

March 30, 2020

hackergotchi for Mike Gabriel

Mike Gabriel

UBports: Packaging of Lomiri Operating Environment for Debian (part 02)

Before and during FOSDEM 2020, I agreed with the people (developers, supporters, managers) of the UBports Foundation to package the Unity8 Operating Environment for Debian. Since 27th Feb 2020, Unity8 has been known as Lomiri.

Recent Uploads to Debian related to Lomiri

Over the past 7-8 weeks, packaging progress has been slowed down by other projects I am working on in parallel. However, quite a few things have been achieved:

  • review forks of unity-api, ubuntu-download-manager and unity-app-launch under the names lomiri-api, lomiri-download-manager, lomiri-app-launch.
  • request upstream releases of lomiri-api and lomiri-download-manager
  • package and upload lomiri-api to Debian unstable (unfortunately still in Debian's NEW queue)
  • package and upload lomiri-download-manager to Debian unstable (ditto)
  • package (and with 'package' I mean Debian-policy-compliant packaging) lomiri-app-launch (no upload yet, as there are some strange unit test failures that need more debugging)
  • package and upload qtsystems (under the umbrella of the Debian QT/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)
  • package and upload qtfeedback (under the umbrella of the Debian QT/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)
  • package and (upload) [1] qtpim (under the umbrella of the Debian Qt/KDE Maintainers' team) to Debian unstable (pending review in Debian's NEW queue)

The packages qtsystems, qtfeedback, and qtpim are not official Qt5 components, so I had to package Git snapshots of them, with all the implied consequences regarding ABI and API compatibility, possible Debian-internal library transitions, etc.

Especially packaging qtsystems was pretty tricky due to a number of unit tests failing when the package was built in a clean chroot (as is the case on Debian's buildd infrastructure). I learned a lot about DBus and DBus mocking while working on getting all those unit tests to finally pass in chrooted builds.

Unfortunately, the Lomiri App Launch component still needs more work due to (by now only) one unit test (jobs-systemd) not always passing. Sometimes the test gets stuck and then fails after reaching a timeout. I'll add it to my list of those unreproducible build failures I have recently seen in several GTest-related unit test scenarios. Sigh...

Credits

A great thanks goes to Lisandro Perez Meyer from the Debian KDE/Qt Team for providing an intro and help on Qt Debian packaging and an intro on symbols handling with C++ projects.

Another big thanks goes to Dmitry Shachnev from the Debian KDE/Qt Team for doing a sponsored upload [1] of qtpim (and also a nice package review).

Also a big thanks goes to Marius Gripsgard for his work on forking the first Lomiri components on the UBports upstream side.

Previous Posts about my Debian UBports Team Efforts

References

  • [1] Unfortunately, I missed a crucial element of the GPG key update workflow as a Debian Developer. My GPG key was about to expire at the end of March 2020. I renewed its expiration date and exported its public key to the public PGP/GPG keyservers. However, to be able to upload packages to Debian, one also has to push the public key to Debian's own keyring server. Which I missed. Thus, I won't be able to upload any packages myself before the end of April and will depend on DD colleagues helping out by sponsoring my uploads.

30 March, 2020 08:30PM by sunweaver

hackergotchi for Jonathan Dowland

Jonathan Dowland

ephemeral note-taking wins

Some further thoughts on ephemeral versus preserve-everything note-taking.

Note-taking is about capturing ideas, thoughts, and processes. You want as little friction as possible when doing so: you don't want to be thinking the page is too small, or the paper drying up the ink too quickly so the pen doesn't move smoothly, or similar such things distracting from capturing what you are trying to capture.

I used my PhD notebook as an example of a preserve-everything approach. A serious drawback of the notebook as the sole place to capture work is the risk that it will be damaged or lost. I periodically photograph all the pages and store those photos digitally, alongside other things relating to the work. Those other things include two different private wiki instances that I use to capture notes when I'm working at the computer, as well as several Git repositories (some public, some private) for source code, experiments, drafts of papers, etc. There's also a not-insignificant amount of email correspondence.

There have been several train journeys and several meetings where I've grabbed a cheap, larger-format pad of paper and a box of Pound-shop felt-tip pens to sketch ideas, whiteboard-style. At the time it just seemed easier to capture what we were doing in that way, rather than try to do so into the notebook.

So the notebook is neither canonical nor comprehensive. Ultimately it's really another example of ephemeral note-taking, and so I think the Ephemeral model wins out.

Use whatever notebook, paper, envelope, or window pane is convenient and feels attractive at the time you need to capture something, with the least amount of friction. Digitise that, and store, catalogue, adjust, derive, etc. from it in the digital domain.

30 March, 2020 02:47PM

hackergotchi for Mike Gabriel

Mike Gabriel

Mailman3 - Call for Translations (@Weblate)

TL;DR: please help localize Mailman3 [1]. You can find it on hosted Weblate [2]. The next component releases are planned for 1-2 weeks from now. Thanks for your contribution! If you can't make it now, please consider working on Mailman3 translations at some later point in time. Thanks!

Time has come for Mailman3

Over the last months I have developed an interest in Mailman3. Given the EOL of Python2 in January 2020, and being a heavy Mailman2 provider for various of my own projects as well as for customers, I felt it was time to look at Mailman2's successor: Mailman3 [1].

One great novelty in Mailman3 is the strict split between the backend (Mailman Core) and the frontend components (django-mailman3, Postorius, Hyperkitty). All three frontend components are Django applications. Postorius is the list management web frontend, whereas Hyperkitty is an archive viewer. Unlike in Mailman2, you can also drop list posts into Hyperkitty directly (instead of sending a mail to the list). This makes Hyperkitty also some sort of forum software with a mailing list core in the back. The django-mailman3 module knits the previous two together (and handles account management, the login dialog, profile settings, etc.).

Looking into Mailman3 Upstream Code

Some time back in mid-2019 I decided to deploy Mailman3 at a customer's site and also for my own business (which is still the test installation). Living and working in Germany, my customers often demand a fully localized WebUI. And at that time, Mailman3 could not provide this: many exposed parts of the Mailman3 components were still not localized (or not localizable).

Together with my employee I put some hours of effort into providing merge requests, filing bug reports, requesting better Weblate integration (meaning: hosted Weblate), improving the membership scripting support, etc. It felt a bit like setting the whole i18n thing in motion.

Call for Translations

Over the past months I had to focus on other work, and two days ago I was delighted that Abhilash Raj (one of the Mailman3 upstream maintainers) informed me (via closing one of the related bugs [3]) that Mailman3 is now fully integrated with the hosted Weblate service and a continuous translation workflow is in place.

The current translation status of the Mailman3 components is at ~10%. We can do better than this, I sense.

So, if you are a non-native English speaker and feel like contributing to Mailman3, please visit the hosted Weblate site [2], sign up for an account (if you don't have one already), and chime in on the translation of one of the future mailing list software suites run by many FLOSS projects around the globe. Thanks a lot for your help.

As a side note, if you plan working on translating Mailman Core into your language (and can't find it in the list of supported languages), please request this new language via the Weblate UI. All other components have all available languages enabled by default.

References:

30 March, 2020 07:47AM by sunweaver

hackergotchi for Axel Beckert

Axel Beckert

How do you type on a keyboard with only 46 or even 28 keys?

Some of you might have noticed that I have been into keyboards for a few years now — into mechanical keyboards, to be precise.

Preface

It basically started when the Swiss Mechanical Keyboard Meetup (whose website I started later on) was held in the hackerspace of the CCCZH.

I mostly used TKL keyboards (i.e. keyboards with just the — for me useless — number block missing) and tried to get my hands on more keyboards with TrackPoints (but have failed so far).

At some point a year or two ago, I started looking into smaller keyboards so I'd have a mechanical keyboard with me when travelling. I first bought a Vortex Core at Candykeys. The size was nice, and especially having all layers labelled on the keys was helpful, but nevertheless I soon noticed that the smaller keyboards get, the more important it is that they're properly programmable. The Vortex Core is programmable, but not the keys in the bottom right corner — which are exactly the keys I wanted to change to get a cursor block down there. (Later I found out that there are ways to get this done, either with an alternative firmware and a hack of it, or by desoldering all switches and mounting an alternative PCB called Atom47.)

40% Keyboards

So at some point I ordered a MiniVan keyboard from The Van Keyboards (MiniVan keyboards will soon be available again at The Key Dot Company), here shown with GMK Paperwork (also bought from and designed by The Van Keyboards):

The MiniVan PCBs are fully programmable with the free and open source firmware QMK, and I started to use that keyboard more and more instead of bigger ones.

Layers

With the MiniVan I learned the concept of layers. Layers are similar to what many laptop keyboards do with the “Fn” key, and to some extent also to what the German standard layout does with the “AltGr” key: layers are basically alternative key maps you can switch to with a special key (often called “Fn”, “Fn1”, “Fn2”, etc., or — especially if there are two additional layers — “Raise” and “Lower”).

There are several concepts for how these layers can be reached via these keys:

  • By keeping the Fn key pressed, i.e. the alternative layer is active as long as you hold the Fn key down.
  • One-shot layer switch: After having pressed and released the Fn key, all keys are on the alternative layer for a single key press and then you are back to the default layer.
  • Layer toggle: Pressing the Fn key once switches to the alternative layer and pressing it a second time switches back to the default layer.
  • There are also many variants of the latter, e.g. rotating through the layers upon every press of the Fn key. In that case it seems common to have a second special key which always switches back to the default layer, kind of an Escape key for layer switching.
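In QMK, the firmware mentioned above, these concepts map directly onto dedicated keycodes. A hypothetical fragment for a keymap.c (the layer numbers and the pairing with the descriptions are my own illustration, not an actual layout):

```c
/* QMK keycodes for the layer-access styles described above;
 * an illustrative keymap fragment, not a complete keymap.c. */
MO(1)    /* momentary: layer 1 is active only while the key is held   */
OSL(2)   /* one-shot: only the next single key press uses layer 2     */
TG(3)    /* toggle: press once to enter layer 3, press again to leave */
TO(0)    /* firmly switch to layer 0, the "Escape key" for layers     */
```

These keycodes simply go into the positions of a LAYOUT(...) definition in the keymap.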
My MiniVan Layout

For the MiniVan, two additional layers suffice easily, but since I have a few characters on multiple layers and also have mouse control and media keys crammed in there, I have three additional layers on my MiniVan keyboards:


“TRNS” means transparent, i.e. use the settings from lower layers.
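This transparency rule is easy to model: resolve a key press by starting at the active layer and falling through transparent entries to lower layers. A simplified sketch in C (the keycodes and the tiny keymap are made up for illustration; this is not actual QMK code):

```c
#include <assert.h>

#define KC_TRNS 0  /* transparent: defer to the next lower layer */

enum { LAYERS = 4, KEYS = 4 };

/* A tiny keymap: layer 0 holds letters, the higher layers override
 * only some positions and are transparent everywhere else. */
static const int keymap[LAYERS][KEYS] = {
    { 'a', 'b', 'c', 'd' },                       /* layer 0: default */
    { '1', KC_TRNS, KC_TRNS, KC_TRNS },           /* layer 1 */
    { KC_TRNS, KC_TRNS, '#', KC_TRNS },           /* layer 2 */
    { KC_TRNS, KC_TRNS, KC_TRNS, KC_TRNS },       /* layer 3 */
};

/* Resolve a key press: start on the active layer and fall through
 * transparent entries until a real keycode is found. */
int resolve(int layer, int key) {
    for (; layer > 0; --layer)
        if (keymap[layer][key] != KC_TRNS)
            return keymap[layer][key];
    return keymap[0][key];
}
```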

I also use a feature that allows me to bind different actions to a key depending on whether I just tap the key or hold it. Some also call this “tap dance”. This is especially popular on the usually rather huge spacebar. There, the term “SpaceFn” has been coined, probably after this discussion on Geekhack.
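QMK exposes this tap-or-hold behaviour via dual-role keycodes; two hypothetical examples (illustrative, not my exact keymap):

```c
/* Dual-role keycodes in a QMK keymap (illustrative fragment): */
LT(1, KC_SPC)        /* "SpaceFn": space on tap, layer 1 while held */
MT(MOD_LSFT, KC_A)   /* mod-tap: 'a' on tap, Shift while held       */
```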

I use this for all my layer switching keys:

  • The left spacebar is space on tap and switches to layer 1 if held. The right spacebar is a real spacebar, i.e. it already triggers a space on key press, not only on key release.

    Layer 1 has numbers on the top row and the special characters of the number row in the second row. It also has Home/End and Page Up/Down on the cursor keys.

  • The key between the Enter key and the cursor-right key (medium grey with a light grey caret in the picture) is actually the Slash and Question Mark key, but if held, it switches me to layer 2.

    Layer 2 has function keys on the top row and also the special characters of the number row in the second row. On the cursor keys it has volume up and down as well as the media keys “previous” and “next”.

  • The green key in the picture is actually the Backslash and Pipe key, but if held, it switches me to layer 3.

    On layer 3 I have mouse control.

With this layout I can type English texts as fast as I can type them on a standard or TKL layout.

German umlauts are a bit more difficult because they require 4 to 6 key presses per umlaut, as I use the Compose key functionality (mapped to the Menu key between the spacebars and the cursor block). So to type an Ä on my MiniVan, I have to:

  1. press and release Menu (i.e. Compose); then
  2. press and hold either Shift-Spacebar (i.e. Shift-Fn1) or Slash (i.e. Fn2), then
  3. press N for a double quote (i.e. Shift-Fn1-N or Fn2-N) and then release all keys, and finally
  4. press and release the base character for the umlaut, in this case Shift-A.

And now just take these concepts and reduce the number of keys to 28:

30% and Sub-30% Keyboards

In late 2019 I stumbled upon a nice little keyboard kit “shop” on Etsy — which I (and probably most other people in the mechanical keyboard scene) hadn’t considered a place to look for keyboards — called WorldspawnsKeebs. They offer mostly kits for keyboards of 40% size and below, most of them rather simple and not expensive.

For about 30€ you get a complete sub-30% keyboard kit (without switches and keycaps though, but that’s very common for keyboard kits, as it leaves the choice of switches and keycaps to you) named Alpha28, consisting of a minimal acrylic case and a PCB and electronics set.

This Alpha28 keyboard is, by the way, fully open source: the sources (i.e. design files) for the hardware are published under a free license (the MIT license) on GitHub.

And here’s how my Alpha28 looks with GMK Mitolet (part of the GMK Pulse group-buy) keycaps:

So we only have character keys, Enter (labelled “Data” as there was no 1u Enter key with that row profile in that keycap set; I’ll also call it “Data” for the rest of this posting) and a small spacebar — not even modifier keys.

The Default Alpha28 Layout

The original key layout by the developer of the Alpha28 used the spacebar as Shift on hold and as space if just tapped, and the Data key always switches to the next layer, i.e. it switches the layer permanently on tap, not just on hold. This way that key rotates through all layers. On all other layers, V switches back to the default layer.

I assume that the modifiers on the second layer are also on tap and apply to the next normal key pressed. This has the advantage that you don’t have to bend your fingers for some key combos, but you have to remember which layer you are on at the moment. (IIRC QMK allows you to show that via LEDs or similar.) Kinda just like vi.

My Alpha28 Layout

But maybe because I’m more of an Emacs person, I dislike having to remember states myself, and I don’t mind bending my fingers. So I decided to develop my own layout using tap-or-hold and doing layer switches only by holding down keys:


A triangle means that the settings from lower layers are used, “N/A” means the key does nothing.

It might not be very obvious, but on the default layer, all keys in the bottom row and most keys on the row ends have tap-or-hold configurations.

Basic ideas
  • Use all keys on tap as labelled by default. (Data = Enter as mentioned above)
  • Use different meanings on hold for the whole bottom row and some edge column keys.
  • Have all classic modifiers (Shift, Control, OS/Sys/Win, Alt/Meta) on the first layer twice (always only on hold), so that any key, even those with a modifier on hold, can be used with any modifier. (Example: Shift is on A hold and L hold so that Shift-A is holding L and then pressing A and Shift-L is holding A and then pressing L.)
Bottom row if held
  • Z = Control
  • X = OS/Sys/Win
  • C = Alt/Meta
  • V = Layer 3 (aka Fn3)
  • Space = Layer 1 (aka Fn1)
  • B = Alt/Meta
  • N = OS/Sys/Win
  • M = Control
Other rows if held
  • A = Shift
  • L = Shift
  • Data (Enter) = Layer 2 (aka Fn2)
  • P = Layer 4 (aka Fn4)
How the keys are divided into layers
  • Layer 0 (Default): alphabetic keys, Space, Enter, and (on hold) standard modifiers
  • Layer 1: numbers, special characters (most need Shift, too), and some more common other keys, e.g.
    • Space-Enter = Backspace
    • Space-S = Esc
    • Space-D = Tab
    • Space-F = Menu/Compose
    • Space-K = :
    • Space-L = '
    • Space-B = ,
    • Space-N = .
    • Space-M = /, etc.
  • Layer 2: F-keys and less common other keys, e.g.
    • Enter-K = -
    • Enter-L = =
    • Enter-B = [
    • Enter-N = ]
    • Enter-M = \, etc.
  • Layer 3: cursor movement, scrolling, and mouse movement, e.g.
    • Cursor cross is on V-IJKL (with V-I for Up)
    • V-U and V-O are Home and End
    • V-P and V-Enter are Page Up/Down
    • Mouse movement is on V-WASD
    • V-Q, V-E and V-X are mouse buttons
    • V-F and V-R are scroll wheel up and down
    • V-Z and V-C are scroll wheel left and right
  • Layer 4: Configuring the RGB bling-bling and the QMK reset key:
    • P-Q (i.e. both top corner keys) is QMK reset, to be able to reflash the firmware.
    • The keys on the right half of the keyboard control the modes of the RGB LED strip on the bottom side of the PCB, with the upper two rows usually having keys with some Plus and Minus semantics, e.g. P-I and P-K are brightness up and down.
    • The remaining left half is unused and has no function at all on layer 4.
Using the Alpha28

This layout works surprisingly well for me.

Only for Minus, Equal, Single Quote and Semicolon do I still often have to think or try whether they’re on layer 1 or 2, as on my 40%s (MiniVan, Zlant, etc.) I have them all on layer 1 (and in general one layer less overall). And for really seldom-used keys like Insert, PrintScreen, ScrollLock or Pause, I might have to consult my own documentation. They’re somewhere in the middle of the keyboard, either on layer 1, 2, or 3. ;-)

And of course, typing umlauts takes even two keys more per umlaut than on the MiniVan, since on the one hand Menu is not on the default layer, and on the other hand I don’t have that nice shifted number row and actually have to press Shift, too, to get a double quote. So to type an Ä on my Alpha, I have to:

  1. press and release Space-F (i.e. Fn1-F) for Menu (i.e. Compose); then
  2. press and hold A-Spacebar-L (i.e. Shift-Fn1-L) for getting a double quote, then
  3. press and release the base character for the umlaut, in this case L-A for Shift-A (because I can’t use A for Shift, as I can’t hold a key and then press it again :-).

Conclusion

If the characters on the upper layers are not labelled as they are on the Vortex Core (i.e. especially on all self-made layouts), typing is a bit like playing that old children’s game Memory: as soon as you remember (or your muscle memory knows) where some special characters are, typing gets faster. Otherwise, you start with trial and error or looking at the documentation. Or you give up. ;-)

Nevertheless, typing on a sub-30% keyboard like the Alpha28 is much more difficult and slower than on a 40% keyboard like the MiniVan. So the Alpha28 very likely won’t become my daily driver, while the MiniVan de facto already is my daily driver.

But I like these kinds of challenges, just as others like the game “Memory”. So I ordered three more 30% and sub-30% keyboard kits at WorldspawnsKeebs for soldering on the upcoming weekend during the COVID19 lockdown:

  • A Reviung39 to start a new try on ortholinear layouts.
  • A Jerkin (sold out, waitlist available) to try an Alice-style keyboard layout.
  • A Pain27 (which btw. is also open source under the CC0 license) to try typing with even one key less than the Alpha28 has. ;-)

And if I at some point want to try typing with even fewer keys, I’ll try a Butterstick keyboard with just 20 keys. It’s a chorded keyboard where you have to press multiple keys at the same time to get one character: to get an A from the missing middle row, you press Q and Z simultaneously; to get Escape, Q and W; to get Control, Q, W, Z and X; etc.

And if that’s not even enough, I already bought a keyboard kit named Ginny (or Ginni, the developer can’t seem to decide) with just 10 keys from an acquaintance. I couldn’t resist when he offered his surplus kits. :-) It uses the ASETNIOP layout, which was initially developed for on-screen keyboards on tablets.

30 March, 2020 06:51AM by Axel Beckert (abe+blog@deuxchevaux.org)

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Using Zoom's web client on Linux

TL;DR: The Zoom meeting link you have probably looks like this:

https://zoom.us/j/123456789

To use the web client, use this instead:

https://zoom.us/wc/join/123456789

Avant-propos

Like too many institutions, the school where I teach chose to partner up with Zoom. I wasn't expecting anything else, as my school's IT department is a Windows shop. Well, I guess I'm still a little disappointed.

Although I had vaguely heard of Zoom before, I had never thought I'd be forced to use it. Lucky for me, my employer decided not to force us to use it. To finish the semester, I plan to record myself and talk with my students on a Jitsi Meet instance.

I will still have to attend meetings on Zoom though. I'm well aware of Zoom's bad privacy record and I will not install their desktop application. Zoom does offer a web client. Sadly, on Linux you need to jump through hoops to be able to use it.

Using Zoom's web client on Linux

Zoom's web client apparently works better on Chrome, so I decided to use Chromium.

Without already having the desktop client installed on your machine, the standard procedure to use the web client would be:

  1. Open the link to the meeting in Chromium
  2. Click on the "download & run Zoom" link shown on the page
  3. Click on the "join from your browser" link that then shows up

Sadly, that's not what happens on Linux. When you click on the "download & run Zoom" link, it brings you to a page with instructions on how to install the desktop client on Linux.

You can thwart that stupid behavior by changing your browser's user agent to make it look like you are using Windows. This is the UA string I've been using:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36

With that, when you click on the "download & run Zoom" link, it will try to download a .exe file. Cancel the download and you should now see the infamous "join from your browser" link.

Upon closer inspection, it seems you can get to the web client by changing the meeting's URL. The Zoom meeting link you have probably looks like this:

https://zoom.us/j/123456789

To use the web client, use this instead:

https://zoom.us/wc/join/123456789
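If you have to do this often, the rewrite can be scripted. A minimal sketch in C (the function name and error handling are my own, assuming the /j/-to-/wc/join/ pattern shown above):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Rewrite a https://zoom.us/j/<id> link into the web-client form
 * https://zoom.us/wc/join/<id>. Returns 0 on success, -1 if the
 * link doesn't match the expected pattern or the buffer is too small. */
int zoom_web_link(const char *url, char *out, size_t outlen) {
    const char *marker = "zoom.us/j/";
    const char *pos = strstr(url, marker);
    if (!pos)
        return -1;
    /* Keep everything up to and including "zoom.us/". */
    size_t prefix = (size_t)(pos - url) + strlen("zoom.us/");
    const char *id = pos + strlen(marker);
    int n = snprintf(out, outlen, "%.*swc/join/%s", (int)prefix, url, id);
    return (n >= 0 && (size_t)n < outlen) ? 0 : -1;
}
```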

Jitsi Meet Puppet Module

I've been playing around with Jitsi Meet quite a bit recently and I've written a Puppet module to install and configure an instance! The module certainly isn't perfect, but it should yield a working Jitsi instance.

If you already have a Puppet setup, please give it a go! I'm looking forward to receiving feedback (and patches) to improve it.

30 March, 2020 04:00AM by Louis-Philippe Véronneau

hackergotchi for Shirish Agarwal

Shirish Agarwal

Covid 19 and the Indian response.

There have been a lot of stories about the coronavirus, and with them a lot of political blame-games have been happening. The first step India took, a lockdown, was and is a good one, but it came without a plan for how the poor and the needy, and especially the huge (internal) migrant population India has, would be affected by it. A 2019 World Economic Forum report puts that number at 139 million people. That is a huge number of people, and there is a variety of both push and pull factors which have displaced them. While there have been attempts in the past, and probably will be in the future, they will be hampered unless we have trustworthy data, which is where a lot still needs to be done. In recent years, both the primary and secondary data have generated a lot of controversy within India as well as abroad, so no point in rehashing all of that. Even the definition of who is a 'migrant' needs to be well established, just as who is a 'farmer'. The simplest lacuna in the latter is that those who own land are counted as 'farmers', but tenant farmers and their wives are not, hence the true numbers are never known. Whether this is an India-specific problem or whether similar definition issues exist in the rest of the world, I don't know.

How our Policies fail to reach the poor and the vulnerable

The sad part is that most policies in India are made in castles in the air. An interview by The Wire shows the conundrum of those who are affected versus the policies which are enacted for them (it's a YouTube video, sorry):

If one watches the interview with an open and fresh mind, it becomes clear why there was a huge reverse migration from Indian cities to villages. The poor and marginalized have always seen the Indian state as an extortive force, so it doesn't make sense for them to stay in the cities. The Prime Minister's announcement of food for 3 months was a clear indication to the migrant population that for 3 months they would have no work. Faced with such a scenario, the best option for them was to return to their native places. While videos of huge numbers of migrants were shown of Delhi, this was the scenario in most states and cities, including Pune, my own city. Another interesting point that was made is that most of the policies will need the migrants to be back in the villages. Most of these are tied to accounts which were opened in the villages, so even if they want to have the benefits, they will have to migrate back to the villages in order to use them. Of course, everybody in India knows how leaky the administration is. The late Shri Rajiv Gandhi once famously and infamously remarked how leaky the Public Distribution System and similar systems are: only 10 paise out of a rupee reaches the poor. And he said this about 30 years ago. There have been numerous reports on both IPS (Indian Police Service) reforms and IAS (Indian Administrative Service) reforms over the years; many of the committee reports are in the public domain and were in fact part of the election manifesto of the ruling party in 2014, but no movement has happened on that front. The only thing which has happened is that people from the ruling party have been appointed to various posts, which is the same as under earlier governments.

I was discussing with a friend, a contractor and builder, the construction labour issues which were pointed out in the report, and whether it is true that the migrant labour force is often not counted. While he shared a number of cases he knew of, a more recent case in public memory was when some labourers died while building Amanora Mall, which is perhaps one of the largest malls in India. There were a few accidents while constructing the mall. Apparently, the insurance money which should have gone to the migrant labourers was taken by somebody close to the developers who were building the mall. I have a friend who lives in Jharkhand and is a labour officer. She has shared with me many stories of how labourers are exploited. Keep in mind she is a labour officer appointed by the state and her salary is paid by the state. So she always has to maintain a balance between ensuring workers' rights and the interests of the state, private entities, etc., which are usually in cahoots with the state, and it is possible that a lot of the time the state wins over workers' rights. Again, as a labour officer she doesn't have that much power, and when she was new to the work she was often frustrated; but as she remarked a few months back, she has started taking it easy (routinized), as it wasn't doing her any good anyway. Also, there have been plenty of cases of labour officers being murdered, so it's easy to understand why one tries to retain some sanity while doing that job.

The Indian response and the World Response

The Indian response has been the lockdown and very limited testing. We seem to be following the pattern of the UK and the U.S., which have been slow to respond and slow to test. In the past Kerala showed the way, but this time even that is not enough. At the end of the day we need to test, test and test, just as the WHO chief said. India is trying to create its own cheap test kits with ICMR approval; for example, a firm from my own city Pune, MyLab, has been given approval. We will know how good or bad they are only after they have been field-tested. For ventilators we have asked Mahindra and Mahindra, even though there are companies like Allied Medical and others who have exported to the EU and elsewhere, about which the Govt. is still taking time to think. This is similar to how in the UK some companies close to the Govt. but with no experience in making ventilators have been given orders, while those who have experience and were exporting to Germany and other countries have not. The playbook is eerily similar. In India, we don't have the infrastructure for any new patients, period. Heck, only a couple of states have done something proper for the anganwadi workers. In fact, last year there were massive strikes by anganwadi workers all over India, but only NDTV showed a bit of it, along with some of the news channels from South India. Most mainstream channels chose to ignore it.

On the world stage, how some of the other countries have responded perhaps needs sharing too. For example, I didn't know that Cuba had so many doctors, nor about the politics between it and Brazil. Or the interesting stats shared by Andreas Backhaus, which seem to show how distributed the issue is age-wise, rather than concentrated in just a few groups as has been told in the Indian media. What was surprising for me is the 20-29 age group, which has not been talked about much in the Indian media even though it is the bulk of our population. The HBR article also makes a few key points which I hope both the general public and policymakers, in India as well as elsewhere, take note of.

What is worrying, though, is that people can apparently be infected twice or more, as reports from Singapore, China and elsewhere suggest. I have read enough Robin Cook and Michael Crichton books to be aware that viruses can do whatever. They will mutate over time; how things will play out then is anybody’s guess. What I found interesting is the World Economic Forum article which hypothesizes that it may be two viruses which got together, as well as a research paper recently published in the Journal of Proteome Research. The biggest myth flying around, which even some of my friends have fallen victim to, is that summer will halt or kill the spread. While a part of me wants to believe them, the simple scientific fact is that viruses have probably been around us and evolved over time, just like we have. In fact, there have been cases of people dying from the common cold and other things. Viruses are so prevalent it’s unbelievable. What is and was interesting to note is that bat-borne as well as pangolin-borne viruses had been theorized and written about by Chinese researchers going all the way back to the 90’s. The problem is that even if we killed all the bats in the world, some other virus would take their place for sure. One of the ideas I had, though I don’t know if it’s feasible, is that at least in places like airports we should have some sort of screening and labs working on virology. Of course, this will mean more expenses for flying passengers, but for public health and safety it may be worth doing. In any case, virologists would have a field day cataloguing various viruses, and it would make it harder for viruses to spread as fast as this one has. The spread of the virus also showed a lack of leadership in most of our leaders, who didn’t react fast enough. While one hopes people learn from this, I am afraid the whole thing is far from over. These are unprecedented times, and I hope all are maintaining social distancing and going out only when needed.

30 March, 2020 01:07AM by shirishag75

March 29, 2020

Enrico Zini

Molly de Blanc

Computing Under Quarantine

Under the current climate of lockdowns, self-isolation, shelter-in-place policies, and quarantine, the integral role computers play in our lives is becoming evident to more people. Students are learning entirely online, those who can are working from home, and our personal relationships are being carried largely by technology like video chats, online games, and group messages. When these things have become our only means of socializing with those outside our homes, we begin to realize how important they are and the inequity inherent to many technologies.

Someone was telling me how a neighbor doesn’t have a printer, so they are printing off school assignments for their neighbor. People I know are sharing internet connections with people in their buildings, when possible, to help save on costs with people losing jobs. I worry now even more about people who have limited access to home devices or poor internet connections.

As we are forced into our homes and are increasingly limited in the resources we have available, we find ourselves potentially unable to easily fill material needs and desires. In my neighborhood, it’s hard to find flour. A friend cannot find yeast. A coworker couldn’t find eggs. Someone else is without dish soap. Supply chains are not designed to meet with the demand currently being exerted on the system.

This problem is mimicked in technology. If your computer breaks, it is much harder to fix it, and you lose a lot more than just a machine – you lose your source of connection with the world. If you run out of toner cartridges for your printer – and only one particular brand works – the risk of losing your printer, and your access to school work, becomes a bigger deal. As an increasing number of things in our homes are wired, networked, and only able to function with a prescribed set of proprietary parts, gaps in supply chains become an even bigger issue. When you cannot use whatever is available, and instead need to wait for the particular thing, you find yourself either hoarding or going without. What happens when you can’t get the toothbrush heads for your smart toothbrush due to prioritization and scarcity with online ordering when it’s not so easy to just go to the pharmacy and get a regular toothbrush?

In response to COVID-19 Adobe is offering no-cost access to some of their services. If people allow themselves to rely on these free services, they end up in a bad situation when a cost is re-attached.

Lock-in is always a risk, but when people are desperate, unemployed, and lacking the resources they need to survive, the implications of being trapped in these proprietary systems are much more painful.

What worries me even more than this is the reliance on insecure communication apps. Zoom, which is becoming the default service in many fields right now, offers anti-features like attendee attention tracking and user reporting.

We are now being required to use technologies designed to maximize opportunities for surveillance to learn, work, and socialize. This is worrisome to me for two main reasons: the violation of privacy and the normalization of a surveillance state. It is a violation of privacy, to have our actions tracked. It also gets us used to being watched, which is dangerous as we look towards the future.

29 March, 2020 05:51PM by mollydb

Sven Hoexter

Looking into Envertech Enverbridge EVB 202 SetID tool

Disclaimer: I'm neither an experienced programmer nor proficient in reverse engineering, but I like to at least try to figure out how things work. Sometimes the solution is so easy that even I manage to find it; still, take this with a grain of salt.

I lately witnessed the setup of an Envertech EnverBridge ENB-202, which is a kind of classic Chinese IoT device. Buy it, plug it in, use some strange setup software, and it will report your PV statistics to a web portal. The setup involved downloading a PE32 Windows executable with a UI that basically has two input boxes and a send button. You have to enter the serial number(s) of your inverter boxes and the ID of your EnverBridge. That made me interested in what this setup process really looks like.

The EnverBridge device itself has a power plug on one end, which is also used to communicate with the inverter via some powerline protocol, and a network plug with a classic RJ45 connector you plug into your network. If you power it up it will request an IPv4 address via DHCP. That brings us to the first oddity: the MAC address is in the BC:20:90 prefix, which I could not find in the IEEE lists.

Setting Up the SetID Software

You can download the Windows software as a zip file; once you unpack it you end up with a Nullsoft installer .exe. Since this is a PE32 executable, we have to add i386 as a foreign architecture to install the wine32 package.

dpkg --add-architecture i386
apt update
apt install wine32:i386
wget http://www.envertec.com/uploads/bigfiles/Set%20ID.zip
unzip Set\ ID.zip
wine Set\ ID.exe

The end result is an installation in ~/.wine/drive_c/Program Files/SetID, which reveals that this software is built with Qt5, judging by the shipped DLLs. The tool itself is udpCilentNow.exe, and looks like this: Envertech SetID software

The Network Communication

To my own surprise, the communication is straightforward. A single UDP packet is sent to the broadcast address (255.255.255.255) on port 8765.

Envertech SetID UDP packet in Wireshark

I expected some strange binary protocol, but the payload is just simple numbers: a combination of the serial numbers of the inverter and the ID of the EnverBridge device. One thing I'm not 100% sure about is the inverter serial numbers: there are two of them, but on the inverters I've seen, the two serial numbers are always the same. The payload is assembled like this:

  • ID of the EnverBridge
  • char 9 to 16 of the inverter serial 1
  • char 9 to 16 of the inverter serial 2

If you have more inverters, their serials are just appended in the same way. Another strange thing is that the software does close to no input validation: it only checks that the inverter serials start with CN and then just extracts chars 9 to 16.

The response from the EnverBridge is also a single UDP packet to the broadcast address, on port 8764, with exactly the same content we sent.
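The payload layout described above can be sketched in a few lines of Python. Note that the function names and the duplication of each serial are my own reading of the protocol as observed; treat this as a sketch, not a reference implementation.

```python
import socket

def build_payload(bridge_id, serials):
    # Hypothetical helper: bridge ID, followed by chars 9 to 16 of each
    # inverter serial, appended twice per inverter (the two serials on a
    # given inverter appear to always be identical).
    payload = bridge_id
    for serial in serials:
        if not serial.startswith("CN"):
            raise ValueError("inverter serials are expected to start with CN")
        payload += serial[8:16] * 2  # chars 9 to 16, duplicated
    return payload

def send_setid(bridge_id, serials, port=8765):
    # Broadcast the assembled payload, as the Windows tool does.
    payload = build_payload(bridge_id, serials)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload.encode("ascii"), ("255.255.255.255", port))
    sock.close()
```

For a bridge ID of 90087654 and one inverter serial CN19100912345678, this yields the string `90087654` followed by `12345678` twice.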

Writing a Replacement

My result is probably an insult to all proficient Python coders, but based on a bit of research and some cut&paste programming I could assemble a small script to replicate the function of the Windows binary. I guess the usefulness of this exercise was mostly my personal entertainment, though it might help some non-Windows users to set up this device. Usage is also very simple:

./enverbridge.py -h
Usage: enverbridge.py [options] MIIDs

Options:
  -h, --help         show this help message and exit
  -b BID, --bid=BID  Serial Number of your EnverBridge

./enverbridge.py -b 90087654 CN19100912345678 CN19100912345679

This is basically a 1:1 replication of the behaviour of the Windows binary, though I tried to add a bit more validation than the original and some more error messages. Since I assume the two serial numbers are always the same, I take only one as input and duplicate it for the packet data.

29 March, 2020 05:13PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Nageru 1.9.2 released

Obviously, the Covid-19 outbreak caused some of my streaming events to be cancelled, but that's a small thing in the big picture. However, I've accumulated a fair amount of changes to both Nageru, my video mixer, and Futatabi, my slow motion video server, this winter and spring. I've packaged them up and released 1.9.2. As usual, you can get both at https://nageru.sesse.net/, and they're also on the way up to Debian unstable. The complete changelog follows:

Nageru and Futatabi 1.9.2, March 29th, 2020

  - Support handling white balance directly in Nageru, without themes
    manually inserting a WhiteBalanceEffect or handling set_wb().
    To use it, call scene:add_white_balance() instead of
    scene:add_effect(WhiteBalanceEffect.new()). If using this functionality,
    white balance will be properly propagated to the MJPEG feed and
    through Futatabi, so that replays get the correct white balance.
    Futatabi's UI will still be uncorrected, though.

  - Make it possible to siphon out a single MJPEG stream, for remote
    debugging, single-camera recording, single-camera streaming via
    Kaeru or probably other things. The URL for this is /feeds/N.mp4
    where N is the card index (starting from zero).

  - The theme can now access some audio settings; it can get (not set)
    number of buses and names, get/set fader volume, get/set mute,
    and get/set EQ parameters.

  - In Futatabi, it is now possible to set custom source labels, with
    the parameter --source-label NUM:LABEL (or -l NUM:LABEL).

  - When the playback speed changes in Futatabi, ease into the new speed.
    The easing period is nominally 200 ms, but it will be automatically
    shortened or lengthened (up to as much as two seconds in extreme
    cases, especially involving very slight speed change) if this
    helps getting back into a cadence of hitting the original frames.
    This can mean significant performance improvements when ramping
    from higher speeds back into 100%.

  - Updates for newer versions of CEF (tested with Chrome 80).

  - Various bugfixes and performance improvements.

Enjoy!

29 March, 2020 04:33PM

hackergotchi for Paulo Henrique de Lima Santana

Paulo Henrique de Lima Santana

My free software activities in February 2020


March is ending but I finally wrote my monthly report about activities in Debian and Free Software in general for February.

As I already wrote here, I attended FOSDEM 2020 on February 1st and 2nd in Brussels. It was an amazing experience.

After my return to Curitiba, I felt my energies renewed to start new challenges.

MiniDebConf Maceió 2020

I continued helping to organize the MiniDebConf, and I got positive answers from 4Linux and Globo.com; they are sponsoring the event.

FLISOL 2020

I started to talk with Maristela from IEP - Instituto de Engenharia do Paraná, and after some messages I joined a meeting with her and other members of the Câmara Técnica de Eletrônica, Computação e Ciências de Dados.

I explained about FLISOL in Curitiba to them and they agreed to host the event at IEP. I asked to use three spaces: Auditorium for FLISOL talks, Salão Nobre for meetups from WordPress and PostgreSQL Communities, and the hall for Install Fest.

Besides FLISOL, they would like to host other events and meetups from communities in Curitiba, such as Python, PHP, and so on. At least one per month.

I helped to schedule a PHP Paraná Community meetup in March.

New job

Since the 17th I have been working at Rentcars as an Infrastructure Analyst. I’m very happy to work there because we use a lot of FLOSS, and with nice people.

Ubuntu LTS is the approved OS for desktops but I could install Debian on my laptop :-)

Misc

I signed PGP keys from friends I met in Brussels, and I had my PGP key signed by them.

Finally my MR to the DebConf20 website fixing some texts was accepted.

I have watched videos from FOSDEM. So far, I have seen these great talks:
  • Growing Sustainable Contributions Through Ambassador Networks
  • Building Ethical Software Under Capitalism
  • Cognitive biases, blindspots and inclusion
  • Building a thriving community in company-led open source projects
  • Building Community for your Company’s OSS Projects
  • The Ethics of Open Source
  • Be The Leader You Need in Open Source
  • The next generation of contributors is not on IRC
  • Open Source Won, but Software Freedom Hasn’t Yet
  • Open Source Under Attack
  • Lessons Learned from Cultivating Open Source Projects and Communities

That’s all folks!

29 March, 2020 10:00AM

François Marier

How to get a direct WebRTC connections between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box.

The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. What that means, however, is that in some cases your video/audio connection will need to be relayed (using end-to-end encryption) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might become overloaded at some point and drop or delay packets coming through.

Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality

Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity.

Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:

  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).

  • Firefox: Ensure that network.http.referer.spoofSource is set to false in about:config, which it is by default.

  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have

Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on.

My favorite service at the moment is Whereby (formerly Appear.in), so I'm going to use that to connect from two different computers:

  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address (192.168.1.2) is set as the DMZ host.

Chromium

For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals.

Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:

  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true

Then from the name of that pair (N6cxxnrr_OEpeash in the above example) find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them.

In the case of a direct connection, I saw the following on the remote-candidate:

  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx

and the following on local-candidate:

  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx

These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct.

On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:

  • ip shows an IP address belonging to the TURN server
  • candidateType: relay

and the same information as before on the local-candidate.
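The candidate type is also visible in raw ICE candidate strings, where it follows the literal token `typ` (as specified in RFC 5245). A tiny parser can classify candidates; the sample candidate line below is made up for illustration, using documentation IP addresses:

```python
def candidate_type(sdp_line):
    # RFC 5245 candidate lines carry the type after the token "typ",
    # e.g. "... typ srflx raddr ... rport ..."
    fields = sdp_line.split()
    return fields[fields.index("typ") + 1]

# A made-up server-reflexive candidate, as a browser might report it:
line = ("candidate:842163049 1 udp 1677729535 203.0.113.7 46416 "
        "typ srflx raddr 192.168.1.2 rport 46416")
print(candidate_type(line))  # srflx
```

Here `srflx` (or `prflx`) means the address came from STUN and the media flows directly, while `relay` means the traffic is going through a TURN server.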

Firefox

If you are using Firefox, the debugging page you want to look at is about:webrtc.

Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:

  • ICE State: succeeded
  • Nominated: true
  • Selected: true

then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.

Firewall ports to open to avoid using a relay

In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections.

This isn't great and so I decided to tighten that up in two ways by:

  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.

To get the IP range, start with the external IP address of the machine (I'll use the IP address of my blog in this example: 66.228.46.55) and pass it to the whois command:

$ whois 66.228.46.55 | grep CIDR
CIDR:           66.228.32.0/19
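Before baking that CIDR into a firewall rule, it's worth sanity-checking that the external address really falls inside it. Python's standard ipaddress module makes this a one-liner (a quick sketch; the function name is mine):

```python
import ipaddress

def in_range(addr, cidr):
    # True if the IPv4 address falls inside the CIDR block from whois
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)

print(in_range("66.228.46.55", "66.228.32.0/19"))  # True
print(in_range("66.228.64.1", "66.228.32.0/19"))   # False: outside the /19
```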

To get the list of open UDP ports on siberia, I sshed into it and ran nmap:

$ sudo nmap -sU localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000015s latency).
Not shown: 994 closed ports
PORT      STATE         SERVICE
631/udp   open|filtered ipp
5060/udp  open|filtered sip
5353/udp  open          zeroconf

Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds

I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):

# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s 66.228.32.0/19 -p udp --dport 1024:65535 -j ACCEPT

29 March, 2020 12:03AM