
Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. It also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users that want to learn about them, but still be entertaining for the people who are already pros. The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day.
202: Brokering Bind
We look at an OpenBSD setup on a new laptop, revel in BSDCan trip reports, and visit daemons and friendly ninjas. This episode was brought to you by Headlines OpenBSD and the modern laptop (http://bsdly.blogspot.de/2017/07/openbsd-and-modern-laptop.html) Peter Hansteen has a new blog post about OpenBSD (http://www.openbsd.org/) on laptops: Did you think that OpenBSD is suitable only for firewalls and high-security servers? Think again. Here are my steps to transform a modern mid to high range laptop into a useful Unix workstation with OpenBSD. One thing that never ceases to amaze me is that whenever I'm out and about with my primary laptop at conferences and elsewhere geeks gather, a significant subset of the people I meet have a hard time believing that my laptop runs OpenBSD, and that it's the only system installed. and then it takes a bit of demonstrating that yes, the graphics runs with the best available resolution the hardware can offer, the wireless network is functional, suspend and resume does work, and so forth. And of course, yes, I do use that system when writing books and articles too. Apparently heavy users of other free operating systems do not always run them on their primary workstations. Peter goes on to describe the laptops he’s had over the years (all running OpenBSD) and after BSDCan 2017, he needed a new one due to cracks in the display. So the time came to shop around for a replacement. After a bit of shopping around I came back to Multicom, a small computers and parts supplier outfit in rural Åmli in southern Norway, the same place I had sourced the previous one. One of the things that attracted me to that particular shop and their own-branded offerings is that they will let you buy those computers with no operating system installed. That is of course what you want to do when you source your operating system separately, as we OpenBSD users tend to do. The last time around I had gone for a "Thin and lightweight" 14 inch model (Thickness 20mm, weight 2.0kg) with 16GB RAM, 240GB SSD for system disk and 1TB HD for /home (since swapped out for a same-size SSD, as the dmesg will show). Three years later, the rough equivalent with some added oomph for me to stay comfortable for some years to come ended me with a 13.3 inch model, 18mm and advertised as 1.3kg (but actually weighing in at 1.5kg, possibly due to extra components), 32GB RAM, 512GB SSD and 2TB harddisk. For now the specification can be viewed online here (https://www.multicom.no/systemconfigurator.aspx?q=st:10637291;c:100559;fl:0#4091-10500502-1;4086-10637290-1;4087-8562157-2;4088-9101982-1;4089-9101991-1) (the site language is Norwegian, but product names and units of measure are not in fact different). The OpenBSD installer is a wonder of straightforward, no-nonsense simplicity that simply gets the job done. Even so, if you are not yet familiar with OpenBSD, it is worth spending some time reading the OpenBSD FAQ's installation guidelines and the INSTALL.platform file (in our case, INSTALL.amd64) to familiarize yourself with the procedure. If you're following this article to the letter and will be installing a snapshot, it is worth reading the notes on following -current too. The main hurdle back when I was installing the 2014-vintage 14" model was getting the system to consider the SSD which showed up as sd1 the automatic choice for booting (I solved that by removing the MBR, setting the size of the MBR on the hard drive that showed up as sd0 to 0 and enlarging the OpenBSD part to fill the entire drive). 
+ He goes on to explain the choices he made in the installer and settings made after the reboot to set up his work environment. Peter closes with: If you have any questions on running OpenBSD as a primary working environment, I'm generally happy to answer but in almost all cases I would prefer that you use the mailing lists such as misc@openbsd.org or the OpenBSD Facebook (https://www.facebook.com/groups/2210554563/) group so the question and hopefully useful answers become available to the general public. Browsing the slides for my recent OpenBSD and you (https://home.nuug.no/~peter/openbsd_and_you/) user group talk might be beneficial if you're not yet familiar with the system. And of course, comments on this article are welcome. BSDCan 2017 Trip Report: Roller Angel (https://www.freebsdfoundation.org/blog/2017-bsdcan-trip-report-roller-angel/) We could put this into next week’s show, because we have another trip report already that’s quite long. After dropping off my luggage, I headed straight over to the Goat BoF which took place at The Royal Oak. There were already a number of people there engaged in conversation with food and drink. I sat down at a table and was delighted that the people sitting with me were also into the BSD’s and were happy to talk about it the whole time. I felt right at home from the start as people were very nice to me, and were interested in what I was working on. I honestly didn’t know that I would fit in so well. I had a preconceived notion that people may be a bit hard to approach as they are famous and so technically advanced. At first, people seemed to only be working in smaller circles. Once you get more familiar with the faces, you realize that these circles don’t always contain the same people and that they are just people talking about specific topics. I found that it was easy to participate in the conversation and also found out that people are happy to get your feedback on the subject as well. I was actually surprised how easily I got along with everyone and how included I felt in the activities. I volunteered to help wherever possible and got to work on the video crew that recorded the audio and slides of the talks. The people at BSDCan are incredibly easy to talk to, are actually interested in what you’re doing with BSD, and what they can do to help. It’s nice to feel welcome in the community. It’s like going home. Dan mentioned in his welcome on the first day of BSDCan that the conference is like home for many in the community. The trip report is very detailed and chronicles the two days of the developer summit, and the two days of the conference There was some discussion about a new code of conduct by Benno Rice who mentioned that people are welcome to join a body of people that is forming that helps work out issues related to code of conduct and forwards their recommendations on to core. Next, Allan introduced the idea of creating a process for formally discussing big project changes or similar discussions that is going to be known as FCP or FreeBSD Community Proposal. In Python we have the Python Enhancement Proposal or PEP which is very similar to the idea of FCP. I thought this idea is a great step for FreeBSD to be implementing as it has been a great thing for Python to have. There was some discussion about taking non-code contributions from people and how to recognize those people in the project. 
There was a suggestion to have a FreeBSD Member status created that can be given to people whose non-code contributions are valuable to the project. This idea seemed to be on a lot of people’s minds as something that should be in place soon. The junior jobs on the FreeBSD Wiki were also brought up as a great place to look for ideas on how to get involved in contributing to FreeBSD. Roller wasted no time, and started contributing to EdgeBSD at the conference. On the first day of BSDCan I arrived at the conference early to coordinate with the team that records the talks. We selected the rooms that each of us would be in to do the recording and set up a group chat via WhatsApp for coordination. Thanks to Roller, Patrick McAvoy, Calvin Hendryx-Parker, and all of the others who volunteered their time to run the video and streaming production at BSDCan, as well as all others who volunteered, even if it was just to carry a box. BSDCan couldn’t happen without the army of volunteers. After the doc lounge, I visited the Hacker Lounge. There were already several tables full of people talking and working on various projects. In fact, there was a larger group of people who were collaborating on the new libtrue library that seemed to be having a great time. I did a little socializing and then got on my laptop and did some more work on the documentation using my new skills. I really enjoyed having a hacker lounge to go to at night. I want to give a big thank you to the FreeBSD Foundation for approving my travel grant. It was a great experience to meet the community and participate in discussions. I’m very grateful that I was able to attend my first BSDCan. After visiting the doc lounge a few times, I managed to get comfortable using the tools required to edit the documentation. By the end of the conference, I had submitted two documentation patches to the FreeBSD Bugzilla with several patches still in progress. Prior to the conference I expected that I would be spending a lot of time working on my Onion Omega and Edge Router Lite projects that I had with me, but I actually found that there was always something fun going on that I would rather do or work on. I can always work on those projects at home anyway. I had a good time working with the FreeBSD community and will continue working with them by editing the documentation and working with Bugzilla. One of the things I enjoy about these trip reports is when they help convince other people to make the trip to their first conference. Hopefully by sharing their experience, it will convince you to come to the next conference: vBSDCon in Virginia, USA: Sept 7-9 EuroBSDCon in Paris, France: Sept 21-24 BSDTW in Taipei, Taiwan: November 11-12 (CFP ends July 31st) *** BSDCan 2017 - Trip report double-p (http://undeadly.org/cgi?action=article&sid=20170629150641) Prologue Most overheard in Tokyo was "see you in Ottawaaaaah", so with additional "personal item" being Groff I returned home to plan the trip to BSDCan. Dan was very helpful with getting all the preparations (immigration handling), thanks for that. Before I could start, I had to fix something: the handling of the goat. With a nicely created harness, I could just hang it along my backpack. Done that it went to the airport of Hamburg and check-in for an itinerary of HAM-MUC-YUL. While the feeder leg was a common thing, boarding to YUL was great - cabin-crew likes Groff :) Arriving in Montreal was like entering a Monsoon zone or something, sad! 
After the night the weather was still rain-ish but improving, and I shuttled to Dorval VIARail station to take me to Ottawa (ever avoid AirCanada, right?). The train was late, but the conductor (or so) was nice to talk to - and wanted to know about Groff's facebook page :-P. Picking a cab in Ottawa to take me to "Residence" was easy at first - just that it was the wrong one. Actually my fault, and so I had a "nice, short" walk to the actual one in the rain with wrong directions. Eventually I made it, and after unpacking and a quick refreshment it was time to hit the Goat BOF!
Day 1
Since this was my first BSDCan I didn't exactly know what to expect from this BOF. But it was like, we (Keeper, Dan, Allan, ..) would talk about "who's next" and things like that. How mistaken I was :). Besides the sheer amount of BSD people entering the not-so-yuuge Oak, some Dexter sneaked in camouflage. The name-giver got a proper position to oversee the mess and I was glad I did not leave him behind after almost too many Creemores.
Day 2
Something happened: it's crystal blue on the "roof" and the sun is trying its best to wake me up. To start the day, I pick up breakfast at 'Father+Sons' - I can really recommend that. Very nice home-made fries (almost hashbrowns) and fast delivery! Stuffed up, I trot along to get to phessler's tutorial about BGP-for-sysadmins-and-developers. Peter did a great job, but the "lab" couldn't happen, since - oh surprise - the wifi was sluggish as hell. Must love the first day of a conference every time. Went to the hackroom in U90 afterwards, just to fix stuff "at home". IPsec giving pains again. Time to pick up food+beer afterwards, and since it's so easy to reach, we went to the Oak again. Having a nice backyard patio experience, it was about time to meet new people. Cheers to Tom, Aaron, Nick, Philip and some more; we had an awesome night there. I also invited a not-really-computer local I know by other means, who was completely overwhelmed by what kind of "nerds" gather around BSD. He planned to stay "a beer" - and it was rather some more and six hours. Looks like "we" made some impression on him :).
Day 3
Easy day, no tutorials at hand, so first picking up breakfast at F+S again and moving to the hackroom in U90. Since I promised phessler to help with a localized lab setup, I started to hack on a quick vagrant/ansible setup to mimic his BGP lab and went quickly through most of it. Plus some more IPsec debugging and finally fixing it, we went early in the general direction of the Red Lion to pick up our registration packs. But before that could happen it was decided to have shawarma at 3brothers along the way. Given a tight hangover it wasn't the brightest idea to order a poutine m-(. Might be great another day, but it wasn't for me at the very time and I had to throw away most of it :(. Eventually passing on to the Red Lion I made the next failure by just running into the pub - please stay at the front desk until "seated". I never get used to this concept. So after being "properly" seated, we take our beers and the registration can commence after we had half of it. So I register myself; btw it's a great idea to grant "not needed" stuff to charity. So don't pick "just because"; think about whether you really need this or that gadget. Then I register Groff - he really needs badges - just to have Dru coming back to me some minutes later on to hand me the badge for Henning. That's just "amazing"; I don't know IF I want to break this vicious circle some day, since it's so funny.
Talked to Theo about the ongoing IPsec problems and he taught me about utrace(2), which looks "complicated" but might be the end of the story some day. Also had a nice talk with Peter (H.) about some other ideas along the lines of books. BTW, did I pay for ongoing beers? I think Tom did - what a guy :). Arriving at the Residence, I found my bathroom door locked (special thing).. the crazy thing is they don't have a master key at the venue, but have to call one in from elsewhere. Short night, shortened by another 30 minutes :(.
Day 4
Weather is improving into beach+sun levels - and it's Conference Day! The opening keynote from Geist was very interesting ("citation needed"). Afterwards I went to zfs-over-ssh, nothing really new (sorry Allan). But then Jason had a super interesting talk on how to apply BSD to the health-care system in Australia. I hope I can help him with the last bits (rdomain!) in the end. During lunch I tried to recall my memories about utrace(2) while talking to Theo. Then it was time to present my talk, and I think it was well received. One "not so good" piece of feedback was about not taking the audience more into account. I think I was asking every other five slides or so - but, well. The general feedback (in spoken terms) was quite good. I was a bit "confused" and I likely did a better job in Tokyo, but well. It happened that we ended up in the Oak again.. thanks to mwl, shirkdog, sng, pitrh, kurtm for having me there :)
Day 5
While the weather had to decide "what next", I rushed to the venue just to catch Reyk's talk about vmd(8). Afterwards it was MSTP from Paeps, which was very interesting and we (OpenBSD) should look into it. Then happened the BUG BOF and I invite all "coastal Germans" to cbug.de :) I had to run off for other reasons and came back for Dave's talk, which was AWESOME. Following was Rod's talk.. well. While I see his case, that was very poor. The auction into closing was awesome again, and I spent $50 on a T-shirt. :)
Epilogue
I totally got the exit dates wrong. So first cancel a hotel booking and then rebook the train to YUL. So I have plenty of time "in the morning" to get breakfast with the local guy. After that he drives me to the VIARail station and I dig into "business" discussions. Well, see you in Ottawa - or how about Paris, Taipei?
Bind Broker (http://www.tedunangst.com/flak/post/bind-broker)
Ted Unangst writes about an interesting idea he has. He has a single big server, and lots of users who would like to share it, many of whom want to run web servers. This would be great, but alas, archaic decisions made long ago mean that network sockets aren't really files and there's this weird concept of privileged ports. Maybe we could assign each user a virtual machine and let them do whatever they want, but that seems wasteful. Think of the megabytes! Maybe we could set up nginx.conf to proxy all incoming connections to a process of the user's choosing, but that only works for web sites and we want to be protocol neutral. Maybe we could use iptables, but nobody wants to do that. What we need is a bind broker. At some level, there needs to be some kind of broker that assigns IPs to users and resolves conflicts. It should be possible to build something of this nature given just the existing unix tools we have, instead of changing system design. Then we can deploy our broker to existing systems without upgrading or disrupting their ongoing operation. The bind broker watches a directory for the creation, by users, of unix domain sockets.
Then it binds to the TCP port of the same name, and transfers traffic between them. A more complete problem specification is as follows: a top level directory, which contains subdirectories named after IP addresses. Each user is assigned a subdirectory, which they have write permission to. Inside each subdirectory, the user may create unix sockets named according to the port they wish to bind to. We might assign user alice the IP 10.0.0.5 and the user bob the IP 10.0.0.10. Then alice could run a webserver by binding to net/10.0.0.5/80 and bob could run a mail server by binding to net/10.0.0.10/25. This maps IP ownership (which doesn't really exist in unix) to the filesystem namespace (which does have working permissions). So this will be a bit different than jails. The idea is to use filesystem permissions to control which users can bind to which IP addresses and ports.

The broker is responsible for watching each directory. As new sockets are created, it should respond by binding to the appropriate port. When a socket is deleted, the network side socket should be closed as well. Whenever a connection is accepted on the network side, a matching connection is made on the unix side, and then traffic is copied across. A full set of example code is provided. There's no completely portable way to watch a directory for changes. I'm using a kevent extension. Otherwise we might consider a timeout and polling with fstat, or another system specific interface (or an abstraction layer over such an interface). Otherwise, if one of our mappings is ready to read (accept), we have a new connection to handle. The first half is straightforward. We accept the connection and make a matching connect call to the unix side. Then I broke out the big cheat stick and just spliced the sockets together. In reality, we'd have to set up a read/copy/write loop for each end to copy traffic between them. That's not very interesting to read though. The full code comes in at 232 lines according to wc. Minus includes, blank lines, and lines consisting of nothing but braces, it's 148 lines of stuff that actually gets executed by the computer. Add some error handling, and working read/write code, and 200 lines seems about right.
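To make the permission mapping concrete, here is a minimal sketch of the layout described above, using the article's alice/bob example (the top-level directory and the exact commands are illustrative, not taken from the article):

    # broker operator: one directory per assigned IP, owned by the user it is assigned to
    mkdir -p /net/10.0.0.5 /net/10.0.0.10
    chown alice /net/10.0.0.5
    chown bob /net/10.0.0.10
    # alice's web server then binds a unix socket at /net/10.0.0.5/80;
    # the broker sees it appear, binds TCP 10.0.0.5:80, and copies traffic between the two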
A very interesting idea. I wonder about creating a virtual file system that would implement this and maybe do a bit more to fully flesh out this idea. What do you think? ***

News Roundup
Daemons and friendly Ninjas (https://euroquis.nl/bobulate/?p=1600)
There's quite a lot of software that uses CMake as a (meta-)buildsystem. A quick count in the FreeBSD ports tree shows me 1110 ports (over a thousand) that use it. CMake generates buildsystem files which then direct the actual build — it doesn't do building itself. There are multiple buildsystem-backends available: in regular usage, CMake generates Makefiles (and does a reasonable job of producing Makefiles that work for GNU Make and for BSD Make). But it can generate Ninja, or Visual Studio, and other buildsystem files. It's quite flexible in this regard. Recently, the KDE-FreeBSD team has been working on Qt WebEngine, which is horrible. It contains a complete Chromium and who knows what else. Rebuilding it takes forever. But Tobias (KDE-FreeBSD) and Koos (GNOME-FreeBSD) noticed that building things with the Ninja backend was considerably faster for some packages (e.g. Qt WebEngine, and Evolution data-thingy). Tobias wanted to try to extend the build-time improvements to all of the CMake-based ports in FreeBSD, and over the past few days, this has been a success. Ports builds using CMake now default to using Ninja as buildsystem-backend. Here's a bitty table of build-times. These are one-off build times, so hardly scientifically accurate — but suggestive of a slight improvement in build time.

Name          Size     GMake  Ninja
liblxt        50kB     0:32   0:31
llvm38        1655kB   *      19:43
musescore     47590kB  4:00   3:54
webkit2-gtk3  14652kB  44:29  37:40

Or here's a much more thorough table of results from tcberner@, who did 5 builds of each with and without ninja. I've cut out the raw data; here are just the average-of-five results, showing usually a slight improvement in build time with Ninja.

Name            avg make  avg ninja  Delta   Delta %
compiler-rt     00:08     00:07      -00:01  -14%
openjpeg        00:06     00:07      +00:01  +17%
marble          01:57     01:43      -00:14  -11%
uhd             01:49     01:34      -00:15  -13%
opencascade     04:08     03:23      -00:45  -18%
avidemux        03:01     02:49      -00:12  -6%
kdevelop        01:43     01:33      -00:10  -9%
ring-libclient  00:58     00:53      -00:05  -8%

Not everything builds properly with Ninja. This is usually due to missing dependencies that CMake does not discover; this shows up when foo depends on bar but no rule is generated for it. Depending on build order and speed, bar may be there already by the time foo gets around to being built. Doxygen showed this, where builds on 1 CPU core were all fine, but 8 cores would blow up occasionally. In many cases, we've gone and fixed the missing implicit dependencies in ports and upstreams. But some things are intractable, or just really need GNU Make. For this, the FreeBSD ports infrastructure now has a knob attached to CMake for switching a port build to GNU Make.
Normal: USES=cmake
Out-of-source: USES=cmake:outsource
GNU Make: USES=cmake:noninja gmake
OoS, GMake: USES=cmake:outsource,noninja gmake
Bad: USES=cmake gmake
For the majority of users this has no effect, but for our package-building clusters, and for KDE-FreeBSD developers who build a lot of CMake-buildsystem software in a day, it may add up to an extra coffee break. So I'll raise a shot of espresso to friendship between daemons and ninjas.

Announcing the pkgsrc-2017Q2 release (http://mail-index.netbsd.org/pkgsrc-users/2017/07/10/msg025237.html)
For the 2017Q2 release we welcome the following notable package additions and changes to the pkgsrc collection: Firefox 54, GCC 7.1, MATE 1.18, Ruby 2.4, Ruby on Rails 4.2, TeX Live 2017, Thunderbird 52.1, Xen 4.8. We say goodbye to Ruby 1.8 and Ruby 2.1. The following infrastructure changes were introduced: implement optional new pkgtasks and init infrastructure for pkginstall; various enhancements and fixes for building with ccache; add support to USE_LANGUAGES for newer C++ standards; enhanced support for SSP, FORTIFY, and RELRO. The GitHub mirror has migrated to https://github.com/NetBSD/pkgsrc. In total, 210 packages were added, 43 packages were removed, and 1,780 package updates were processed since the pkgsrc-2017Q1 release. ***

OpenBSD changes of note 624 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-624)
There are a bunch, but here are a few that jump out: Start plugging some leaks. Compile kernels with umask 007. Install them minus read permissions. Pure preprocessor implementation of the roff .ec and .eo requests, though you are warned that very bad things will happen to anybody trying to use these macros in OpenBSD manuals. Random linking for arm64. And octeon. And alpha. And hppa.
There's some variation by platform, because every architecture has the kernel loaded with different flavors of initial physical and virtual mappings. And landisk. And loongson. And sgi. And macppc. And a gap file for sparc64, but nobody yet dares split locore. And arm7. Errata for perl File::Path race condition. Some fixes for potential link attacks against cron. Add pledge violations to acct reporting. Take random linking to the next stage. More about KARL - kernel address randomized link. As noted, a few difficulties with hibernate and such, but the plan is coming together. Add a new function reorder_kernel() that relinks and installs the new kernel in the background on system startup. Add support for the bootblocks to detect hibernate and boot the previous kernel. Remove the poorly described "stuff" from ksh. Replace usage of TIOCSTI in csh using a more common IO loop. Kind of like the stuff in ksh, but part of the default command line editing and parsing code, csh would read too many characters, then send the ones it didn't like back into the terminal. Which is weird, right? Also, more importantly, eliminating the code that uses TIOCSTI to inject characters into ttys means that maybe TIOCSTI can be removed. Revamp some of the authentication logging in ssh. Add a verbose flag to rm so you can panic immediately upon seeing it delete the wrong file instead of waiting to discover your mistake after the fact. Update libexpat to version 2.2.1, which has some security fixes. Never trust an expat, that's my motto. Update inteldrm to code based on Linux 4.4.70. This brings us support for Skylake and Cherryview and better support for Broadwell and Valleyview. Also adds MST support. Fun times for people with newish laptops. ***

OPNsense 17.1.9 released (https://opnsense.org/opnsense-17-1-9-released/)
firewall: move gateway switching from system to firewall advanced settings
firewall: keep category selection when changing tabs
firewall: do not skip gateway switch parsing too early (contributed by Stephane Lesimple)
interfaces: show VLAN description during edit
firmware: opnsense-revert can now handle multiple packages at once
firmware: opnsense-patch can now handle permission changes from patches
dnsmasq: use canned --bogus-priv for noprivatereverse
dnsmasq: separate log file, ACL and menu entries
dynamic dns: fix update for IPv6 (contributed by Alexander Leisentritt)
dynamic dns: remove usage of CURLAUTH_ANY (contributed by Alexander Leisentritt)
intrusion detection: suppress "fast mode available" boot warning in PCAP mode
openvpn: plugin framework adaption
unbound: add local-zone type transparent for PTR zone (contributed by Davide Gerhard)
unbound: separate log file, ACL and menu entries
wizard: remove HTML from description strings
mvc: group relation to something other than uuid if needed
mvc: rework "item in" for our Volt templates
lang: Czech to 100% translated (contributed by Pavel Borecki)
plugins: zabbix-agent 1.1 (contributed by Frank Wall)
plugins: haproxy 1.16 (contributed by Frank Wall)
plugins: acme-client 1.8 (contributed by Frank Wall)
plugins: tinc fix for switch mode (contributed by Johan Grip)
plugins: monit 1.3 (contributed by Frank Brendel)
src: support dhclient supersede statement for option 54 (contributed by Fabian Kurtz)
src: add Intel Atom Cherryview SOC HSUART support
src: add the ID for the Huawei ME909S LTE modem
src: HardenedBSD Stack Clash mitigations[1]
ports: sqlite 3.19.3[2]
ports: openvpn 2.4.3[3]
ports: sudo 1.8.20p2[4]
ports: dnsmasq 2.77[5]
ports: openldap 2.4.45[6]
ports: php 7.0.20[7]
ports: suricata 3.2.2[8]
ports: squid 3.5.26[9]
ports: ca_root_nss 3.31
ports: bind 9.11.1-P2[10]
ports: unbound 1.6.3[11]
ports: curl 7.54.1[12] ***

Beastie Bits
Thinkpad x230 - trying to get TrackPoint / Touchpad working in X (http://lists.dragonflybsd.org/pipermail/users/2017-July/313519.html)
FreeBSD deprecates all r-cmds (rcp, rlogin, etc.) (http://marc.info/?l=freebsd-commits-all&m=149918307723723&w=2)
Bashfill - art for your terminal (https://max.io/bash.html)
Go 1.9 release notes: NetBSD support is broken, please help (https://github.com/golang/go/commit/32002079083e533e11209824bd9e3a797169d1c4)
Jest, a REST API for creating and managing FreeBSD jails, written in Go (https://github.com/altsrc-io/Jest) ***

Feedback/Questions
John - zfs send/receive (http://dpaste.com/3ANETHW#wrap)
Callum - laptops (http://dpaste.com/11TV0BJ) & An update (http://dpaste.com/3A14BQ6#wrap)
Lars - Snapshot of VM datadisk (http://dpaste.com/0MM37NA#wrap)
Daryl - Jail managers (http://dpaste.com/0CDQ9EK#wrap)
***
201: Skip grep, use awk
In which we interview a unicorn, FreeNAS 11.0 is out, we show you how to run Nextcloud in a FreeBSD jail, and talk about the connection between oil changes and software patches. This episode was brought to you by

Headlines
FreeNAS 11.0 is Now Here (http://www.freenas.org/blog/freenas-11-0/)
The FreeNAS blog informs us: After several FreeNAS Release Candidates, FreeNAS 11.0 was released today. This version brings new virtualization and object storage features to the World's Most Popular Open Source Storage Operating System. FreeNAS 11.0 adds bhyve virtual machines to its popular SAN/NAS, jails, and plugins, letting you host web-scale VMs on your FreeNAS box. It also gives users S3-compatible object storage services, which turns your FreeNAS box into an S3-compatible server, letting you avoid reliance on the cloud. FreeNAS 11.0 also introduces the beta version of a new administration GUI. The new GUI is based on the popular Angular framework and the FreeNAS team expects the GUI to be themeable and feature complete by 11.1. The new GUI follows the same flow as the existing GUI, but looks better. For now, the FreeNAS team has released it in beta form to get input from the FreeNAS community. The new GUI, as well as the classic GUI, are selectable from the login screen. Also new in FreeNAS 11 is an Alert Service page which configures the system to send critical alerts from FreeNAS to other applications and services such as Slack, PagerDuty, AWS, Hipchat, InfluxDB, Mattermost, OpsGenie, and VictorOps. FreeNAS 11.0 has an improved Services menu that adds the ability to manage which services and applications are started at boot. The FreeNAS community is large and vibrant. We invite you to join us on the FreeNAS forum (https://forums.freenas.org/index.php) and the #freenas IRC channel on Freenode. To download FreeNAS and sign up for the FreeNAS Newsletter, visit freenas.org/download (http://www.freenas.org/download/).

Building an IPsec Gateway With OpenBSD (https://www.exoscale.ch/syslog/2017/06/26/building-an-ipsec-gateway-with-openbsd/)
Pierre-Yves Ritschard wrote the following blog article: With private networks just released on Exoscale, there are now more options to implement secure access to Exoscale cloud infrastructure. While we still recommend the bastion approach, as detailed in this article (https://www.exoscale.ch/syslog/2016/01/15/secure-your-cloud-computing-architecture-with-a-bastion/), there are applications or systems which do not lend themselves well to working this way. In these cases, the next best thing is building IPsec gateways. IPsec is a protocol which works directly at layer 3. It uses its configuration to determine which network flows should be sent encrypted on the wire. Once IPsec is correctly configured, selected network flows are transparently encrypted and applications do not need to modify anything to benefit from secured traffic. In addition to encryption, IPsec also authenticates the end points, so you can be sure you are exchanging packets with a trusted host. For the purposes of this article we will work under the following assumptions: we want a host-to-network setup, providing access to cloud-hosted infrastructure from a desktop environment, and only stock tooling should be used on the desktop environment - no additional VPN client should be needed. In this case, to ensure no additional software is needed on the client, we will configure an L2TP/IPsec gateway. This article will use OpenBSD as the operating system to implement the gateway.
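The moving parts are all in the OpenBSD base system. As a rough sketch of how the services summarized below get enabled (the flags shown are the usual ones for an L2TP/IPsec setup and are not taken from the article; see the man pages and the article for the full pf.conf, npppd.conf and ipsec.conf contents):

    rcctl enable isakmpd
    rcctl set isakmpd flags -K   # skip keynote policy checks, the usual mode when ipsec.conf is used
    rcctl enable ipsec           # load /etc/ipsec.conf flows via ipsecctl at boot
    rcctl enable npppd           # the L2TP daemon from base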
While this choice may sound surprising, OpenBSD excels at building gateways of all sorts thanks to its simple configuration formats and inclusion of all necessary software and documentation to do so in the base system. The tutorial assumes you have set up a local network between the hosts in the cloud, and walks through the configuration of an OpenBSD host as an IPsec gateway. On the OpenBSD host, all necessary software is already installed. We will configure the system, as well as pf, npppd, and ipsec:
+ Configure L2TP
+ Configure IPsec
+ Configure NAT
+ Enable services: ipsec isakmpd npppd
The tutorial then walks through configuring an OS X client, but other desktops will be very similar. ***

Running Nextcloud in a jail on FreeBSD (https://ramsdenj.com/2017/06/05/nextcloud-in-a-jail-on-freebsd.html)
I recently set up Nextcloud 12 inside a FreeBSD jail in order to allow me access to files I might need while at University. I figured this would be an optimal solution for files that I might need access to unexpectedly, on computers where I am not in complete control. My Nextcloud instance is externally accessible, and yet if someone were to get inside my jail, I could rest easy knowing they still didn't have access to the rest of my host server. I chronicled the setup process, including jail setup using iocage, HTTPS with Let's Encrypt, and full setup of the web stack: MariaDB, PHP 7.0, and Apache 2.4. Nextcloud has a variety of features such as calendar synchronization, email, collaborative editing, and even video conferencing. I haven't had time to play with all these different offerings and have only utilized the file synchronization, but even if file sync is not needed, Nextcloud has many offerings that make it worth setting up.

To manage my jails I'm using iocage. In terms of jail managers it's a fairly new player in the game of jail management and is being very actively developed. It just had a full rewrite in Python, and while the code in the background might be different, the actual user interface has stayed the same. Iocage makes use of ZFS clones in order to create "base jails", which allow for sharing of one set of system packages between multiple jails, reducing the amount of resources necessary. Alternatively, jails can be completely independent from each other; however, using a base jail makes it easier to update multiple jails as well.
+ pkg install iocage
+ sysrc iocage_enable=YES
+ iocage fetch -r 11.0-RELEASE
+ iocage create tag="stratus" jail_zfs=on vnet=off boot=on ip4_addr="sge0|172.20.0.100/32" -r 11.0-RELEASE
+ iocage start stratus
+ iocage console stratus
I have chosen to provide storage to the Nextcloud jail by mounting a dataset over NFS on my host box. This means my server can focus on serving Nextcloud and my storage box can focus on housing the data. The Nextcloud jail is not even aware of this, since the NFS mount is simply mounted by the host server into the jail. The other benefit of this is the Nextcloud jail doesn't need to be able to see my storage server, nor does it need the ability to mount the NFS share itself. Using a separate server for storage isn't necessary, and if the storage for my Nextcloud server were on the same server I would have created a ZFS dataset on the host and mounted it into the jail. Next I set up a dataset for the database and delegated it into the jail. Using a separate dataset allows me to specify certain properties that are better for a database; it also makes migration easier in case I ever need to move or back up the database.
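A rough illustration of that storage layout (dataset names, share paths and the 16k recordsize are placeholders chosen to match a typical MariaDB/InnoDB setup, not values from the article):

    # on the host: mount the NFS share into the jail's tree, so the jail never speaks NFS itself
    mount_nfs storage:/tank/cloud /iocage/jails/stratus/root/mnt/data
    # a dedicated dataset with database-friendly properties, to be delegated into the jail
    zfs create -o recordsize=16k -o compression=lz4 tank/iocage/stratus-db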
With most of the requirements in place it was time to start setting up Nextcloud. The requirements for Nextcloud include your basic web stack of a web server, database, and PHP. Also covers the setup of acme.sh for LetsEncrypt. This is now available as a package, and doesn’t need to be manually fetched Install a few more packages, and do a bit of configuration, and you have a NextCloud server *** Historical: My first OpenBSD Hackathon (http://bad.network/historical-my-first-openbsd-hackathon.html) This is a blog post by our friend, and OpenBSD developer: Peter Hessler This is a story about encouragement. Every time I use the word "I", you should think "I as in me, not I as in the author". In 2003, I was invited to my first OpenBSD Hackathon. Way before I was into networking, I was porting software to my favourite OS. Specifically, I was porting games. On the first night most of the hackathon attendees end up at the bar for food and beer, and I'm sitting next to Theo de Raadt, the founder of OpenBSD. At some point during the evening, he's telling me about all of these "crazy" ideas he has about randomizing libraries, and protections that can be done in ld.so. (ld.so is the part of the OS that loads the libraries your program needs. It's, uh, kinda important.) Theo is encouraging me to help implement some of these ideas! At some point I tell Theo "I'm just a porter, I don't know C." Theo responds with "It isn't hard, I'll have Dale (Rahn) show you how ld.so works, and you can do it." I was hoping that all of this would be forgotten by the next day, but sure enough Dale comes by. "Hey, are you Peter? Theo wanted me to show you how ld.so works" Dale spends an hour or two showing me how it works, the code structure, and how to recover in case of failure. At first I had lots of failures. Then more failures. And even more failures. Once, I broke my machine so badly I had to reinstall it. I learned a lot about how an OS works during this. But, I eventually started doing changes without it breaking. And some even did what I wanted! By the end of the hackathon I had came up with a useful patch, that was committed as part of a larger change. I was a nobody. With some encouragement, enough liquid courage to override my imposter syndrome, and a few hours of mentoring, I'm now doing big projects. The next time you're sitting at a table with someone new to your field, ask yourself: how can you encourage them? You just might make the world better. Thank you Dale. And thank you Theo. Everyone has to start somewhere. One of the things that sets the BSDs apart from certain other open source operating systems, is the welcoming community, and the tradition of mentorship. Sure, someone else in the OpenBSD project could have done the bits that Peter did, likely a lot more quickly, but then OpenBSD wouldn’t have gained a new committer. So, if you are interested in working on one of the BSDs, reach out, and we’ll try to help you find a mentor. What part of the system do you want to work on? 
*** Interview - Dan McDonald - allcoms@gmail.com (mailto:allcoms@gmail.com) (danboid)

News Roundup
FreeBSD 11.1-RC1 Available (https://lists.freebsd.org/pipermail/freebsd-stable/2017-July/087340.html)
11.1-RC1 installation images are available for: amd64, i386, powerpc, powerpc64, sparc64, armv6 (BANANAPI, BEAGLEBONE, CUBIEBOARD, CUBIEBOARD2, CUBOX-HUMMINGBOARD, GUMSTIX, RPI-B, RPI2, PANDABOARD, WANDBOARD), and aarch64 (aka arm64), including the RPI3, Pine64, OverDrive 1000, and Cavium Server. A summary of changes since BETA3 includes: several build toolchain related fixes; a use-after-free in RPC client code has been corrected; the ntpd(8) leap-seconds file has been updated; various VM subsystem fixes; the '_' character is now allowed in newfs(8) labels; a potential sleep while holding a mutex has been corrected in the sa(4) driver; and a memory leak in an ioctl handler has been fixed in the ses(4) driver. Virtual machine disk images are available for the amd64 and i386 architectures, and Amazon EC2 AMIs of FreeBSD/amd64 are available. The freebsd-update(8) utility supports binary upgrades of amd64 and i386 systems running earlier FreeBSD releases. Systems running earlier FreeBSD releases can upgrade as follows:
freebsd-update upgrade -r 11.1-RC1
During this process, freebsd-update(8) may ask the user to help by merging some configuration files or by confirming that the automatically performed merging was done correctly.
freebsd-update install
The system must be rebooted with the newly installed kernel before continuing:
shutdown -r now
After rebooting, freebsd-update needs to be run again to install the new userland components:
freebsd-update install
It is recommended to rebuild and install all applications if possible, especially if upgrading from an earlier FreeBSD release, for example, FreeBSD 10.x. Alternatively, the user can install misc/compat10x and other compatibility libraries; afterwards the system must be rebooted into the new userland:
shutdown -r now
Finally, after rebooting, freebsd-update needs to be run again to remove stale files:
freebsd-update install

Oil changes, safety recalls, and software patches (http://www.daemonology.net/blog/2017-06-14-oil-changes-safety-recalls-software-patches.html)
Every few months I get an email from my local mechanic reminding me that it's time to get my car's oil changed. I generally ignore these emails; it costs time and money to get this done (I'm sure I could do it myself, but the time it would cost is worth more than the money it would save) and I drive little enough — about 2000 km/year — that I'm not too worried about the consequences of going for a bit longer than nominally advised between oil changes. I do get oil changes done... but typically once every 8-12 months, rather than the recommended 4-6 months. From what I've seen, I don't think I'm alone in taking a somewhat lackadaisical approach to routine oil changes. On the other hand, there's another type of notification which elicits more prompt attention: safety recalls. There are two good reasons for this: First, whether for vehicles, food, or other products, the risk of ignoring a safety recall is not merely that the product will break, but rather that the product will be actively unsafe; and second, when there's a safety recall you don't have to pay for the replacement or fix — the cost is covered by the manufacturer. I started thinking about this distinction — and more specifically the difference in user behaviour — in the aftermath of the "WannaCry" malware.
While WannaCry attracted widespread attention for its "ransomware" nature, the more concerning aspect of this incident is how it propagated: By exploiting a vulnerability in SMB for which Microsoft issued patches two months earlier. As someone who works in computer security, I find this horrifying — and I was particularly concerned when I heard that the NHS was postponing surgeries because they couldn't access patient records. Think about it: If the NHS couldn't access patient records due to WannaCry, it suggests WannaCry infiltrated systems used to access patient records — meaning that someone else exploiting the same vulnerabilities could have accessed those records. The SMB subsystem in Windows was not merely broken; until patches were applied, it was actively unsafe. I imagine that most people in my industry would agree that security patches should be treated in the same vein as safety recalls — unless you're certain that you're not affected, take care of them as a matter of urgency — but it seems that far more users instead treat security patches more like oil changes: something to be taken care of when convenient... or not at all, if not convenient. It's easy to say that such users are wrong; but as an industry it's time that we think about why they are wrong rather than merely blaming them for their problems. There are a few factors which I think are major contributors to this problem. First, the number of updates: When critical patches occur frequently enough to become routine, alarm fatigue sets in and people cease to give the attention updates deserve, even if on a conscious level they still recognize the importance of applying updates. Colin also talks about his time as the FreeBSD Security Officer, and the problems in ensuring the patches are correct and do not break the system when installed He also points out the problem of systems like Windows Update, the combines optional updates, and things like its license checking tool, in the same interface that delivers important updates. Or my recent machines, that gets constant popups about how some security updates will not be delivered because my processor is too new. My bank sends me special offers in the mail but phones if my credit card usage trips fraud alarms; this is the sort of distinction in intrusiveness we should see for different types of software updates Finally, I think there is a problem with the mental model most people have of computer security. Movies portray attackers as geniuses who can break into any system in minutes; journalists routinely warn people that "nobody is safe"; and insurance companies offer insurance against "cyberattacks" in much the same way as they offer insurance against tornados. Faced with this wall of misinformation, it's not surprising that people get confused between 400 pound hackers sitting on beds and actual advanced persistent threats. Yes, if the NSA wants to break into your computer, they can probably do it — but most attackers are not the NSA, just like most burglars are not Ethan Hunt. You lock your front door, not because you think it will protect you from the most determined thieves, but because it's an easy step which dramatically reduces your risk from opportunistic attack; but users don't see applying security updates as the equivalent of locking their front door when they leave home. SKIP grep, use AWK (http://blog.jpalardy.com/posts/skip-grep-use-awk/) This is a tip from Jonathan Palardy in a series of blog posts about awk. 
It is especially helpful for people who write a lot of shell scripts or are using a lot of pipes with awk and grep. Over the years, I've seen many people use this pattern (filter-map):
$ [data is generated] | grep something | awk '{print $2}'
but it can be shortened to:
$ [data is generated] | awk '/something/ {print $2}'
AWK can take a regular expression (the part between the slashes) and match it against the input. Anything that matches is passed to the print $2 action (to print the second column). Why would I do this? I can think of 4 reasons:
* it's shorter to type
* it spawns one less process
* awk uses modern (read "Perl") regular expressions by default - like grep -E
* it's ready to "augment" with more awk
How about matching the inverse (searching for patterns that do NOT match)? "grep -v" is OK, but many people have pointed out that it can be done more concisely with:
$ [data is generated] | awk '! /something/'
See if you have such combinations of grep piped to awk and fix those in your shell scripts. It saves you one process and makes your scripts much more readable.
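As a concrete (hypothetical) instance of the same transformation, listing the local addresses of listening sockets:

    $ netstat -an | grep LISTEN | awk '{print $4}'   # filter-map with a separate grep process
    $ netstat -an | awk '/LISTEN/ {print $4}'        # same output, one less process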
Also, check out the other intro links on the blog if you are new to awk. ***

vim Adventures (https://vim-adventures.com)
This website, created by Doron Linder, will playfully teach you how to use vim. Hit any key to get started and follow the instructions on the playing field by moving the cursor around. There is also a menu in the bottom left corner to save your game. Try it out, increase your vim-fu, and learn how to use a powerful text editor more efficiently. ***

Beastie Bits
Slides from pkgsrcCon (http://pkgsrc.org/pkgsrcCon/2017/talks.html)
OpenBSD's doas adds systemd compat shim (http://marc.info/?l=openbsd-tech&m=149902196520920&w=2)
Deadlock Empire -- "Each challenge below is a computer program of two or more threads. You take the role of the Scheduler - and a cunning one! Your objective is to exploit flaws in the programs to make them crash or otherwise malfunction." (https://deadlockempire.github.io/)
EuroBSDcon 2017 Travel Grant Application Now Open (https://www.freebsdfoundation.org/blog/eurobsdcon-2017-travel-grant-application-now-open/)
Registration for vBSDCon is open (http://www.vbsdcon.com/) - Registration is only $100 if you register before July 31. Discount hotel rooms arranged at the Hyatt for only $100/night while supplies last.
BSD Taiwan call for papers opens, closes July 31st (https://bsdtw.org/) ***

Feedback/Questions
Joseph - Server Monitoring (http://dpaste.com/2AM6C2H#wrap)
Paulo - Updating Jails (http://dpaste.com/1Z4FBE2#wrap)
Kevin - openvpn server (http://dpaste.com/2MNM9GJ#wrap)
Todd - several questions (http://dpaste.com/17BVBJ3#wrap)
***

200: Getting Scrubbed to Death
The NetBSD 8.0 release process is underway, we try to measure the weight of an electron, and look at stack clashing. This episode was brought to you by

Headlines
NetBSD 8.0 release process underway (https://mail-index.netbsd.org/netbsd-announce/2017/06/06/msg000267.html)
Soren Jacobsen writes on netbsd-announce: If you've been reading source-changes@, you likely noticed the recent creation of the netbsd-8 branch. If you haven't been reading source-changes@, here's some news: the netbsd-8 branch has been created, signaling the beginning of the release process for NetBSD 8.0. We don't have a strict timeline for the 8.0 release, but things are looking pretty good at the moment, and we expect this release to happen in a shorter amount of time than the last couple major releases did. At this point, we would love for folks to test out netbsd-8 and let us know how it goes. A couple of major improvements since 7.0 are the addition of USB 3 support and an overhaul of the audio subsystem, including an in-kernel mixer. Feedback about these areas is particularly desired. To download the latest binaries built from the netbsd-8 branch, head to http://daily-builds.NetBSD.org/pub/NetBSD-daily/netbsd-8/. Thanks in advance for helping make NetBSD 8.0 a stellar release!

OpenIndiana Hipster 2017.04 is here (https://www.openindiana.org/2017/05/03/openindiana-hipster-2017-04-is-here/)
Desktop software and libraries:
Xorg was updated to 1.18.4; xorg libraries and drivers were updated.
Mate was updated to 1.16.
The Intel video driver was updated; the list of supported hardware has been significantly extended (https://wiki.openindiana.org/oi/Intel+KMS+driver).
libsmb was updated to 4.4.6.
gvfs was updated to 1.26.0.
gtk3 was updated to 3.18.9.
Major text editors were updated (we ship vim 8.0.104, joe 4.4, emacs 25.2, nano 2.7.5).
pulseaudio was updated to 10.0.
firefox was updated to 45.9.0.
thunderbird was updated to 45.8.0.
A critical issue in enlightenment was fixed; it is operational again.
privoxy was updated to 3.0.26.
Mesa was updated to 13.0.6.
The Nvidia driver was updated to 340.102.
Development tools and libraries:
GCC 6 was added. Patches necessary to compile illumos-gate with GCC 6 were added (note: compiling illumos-gate with a version other than illumos-gcc-4.4.4 is not supported). GCC 7.1 added to Hipster (https://www.openindiana.org/2017/05/05/gcc-7-1-added-the-hipster-and-rolling-forward/)
Bison was updated to 3.0.4.
Groovy 2.4 was added.
Ruby 1.9 was removed; Ruby 2.3 is the default Ruby now.
Perl 5.16 was removed; 64-bit Perl 5.24 is shipped.
64-bit OpenJDK 8 is the default OpenJDK version now.
Mercurial was updated to 4.1.3.
Git was updated to 2.12.2.
ccache was updated to 3.3.3.
QT 5.8.0 was added.
Valgrind was updated to 3.12.0.
Server software:
PostgreSQL 9.6 was added; PostgreSQL 9.3-9.5 were updated to the latest minor versions.
MongoDB 3.4 was added.
MariaDB 10.1 was added.
NodeJS 7 was added.
Percona Server 5.5/5.6/5.7 and MariaDB 5.5 were updated to the latest minor versions.
OpenVPN was updated to 2.4.1.
ISC Bind was updated to 9.10.4-P8.
Squid was updated to 3.5.25.
Nginx was updated to 1.12.0.
Apache 2.4 was updated to 2.4.25. Apache 2.4 is the default Apache server now; Apache 2.2 will be removed before the next snapshot.
ISC ntpd was updated to 4.2.8p10.
OpenSSH was updated to 7.4p1.
Samba was updated to 4.4.12.
Tcpdump was updated to 4.9.0.
Snort was updated to 2.9.9.0.
Puppet was updated to 3.8.6.
A lot of other bug fixes and minor software updates are included.
*** PKGSRC at The University of Wisconsin–Milwaukee (https://uwm.edu/hpc/software-management/) This piece is from the University of Wisconsin, Milwaukee Why Use Package Managers? Why Pkgsrc? Portability Flexibility Modernity Quality and Security Collaboration Convenience Growth Binary Packages for Research Computing The University of Wisconsin — Milwaukee provides binary pkgsrc packages for selected operating systems as a service to the research computing community. Unlike most package repositories, which have a fixed prefix and frequently upgraded packages, these packages are available for multiple prefixes and remain unchanged for a given prefix. Additional packages may be added and existing packages may be patched to fix bugs or security issues, but the software versions will not be changed. This allows researchers to keep older software in-place indefinitely for long-term studies while deploying newer software in later snapshots. Contributing to Pkgsrc Building Your Own Binary Packages Check out the full article and consider using pkgsrc for your own research purposes. PKGSrc Con is this weekend! (http://www.pkgsrc.org/pkgsrcCon/2017/) *** Measuring the weight of an electron (https://deftly.net/posts/2017-06-01-measuring-the-weight-of-an-electron.html) An interesting story of the struggles of one person, aided only by their pet Canary, porting Electron to OpenBSD. This is a long rant. A rant intended to document lunacy, hopefully aid others in the future and make myself feel better about something I think is crazy. It may seem like I am making an enemy of electron, but keep in mind that isn’t my intention! The enemy here, is complexity! My friend Henry, a canary, is coming along for the ride! Getting the tools At first glance Electron seems like a pretty solid app, it has decent docs, it’s consolidated in a single repository, has a lot of visibility, porting it shouldn’t be a big deal, right? After cloning the repo, trouble starts: Reading through the doc, right off the bat there are a few interesting things: At least 25GB disk space. Huh, OK, some how this ~47M repository is going to blow up to 25G? Continuing along with the build, I know I have two versions of clang installed on OpenBSD, one from ports and one in base. Hopefully I will be able to tell the build to use one of these versions. Next, it’s time to tell the bootstrap that OpenBSD exists as a platform. After that is fixed, the build-script runs. Even though cloning another git repo fails, the build happily continues. Wait. Another repository failed to clone? At least this time the build failed after trying to clone boto.. again. I am guessing it tried twice because something might have changed between now and the last clone? Off in the distance we catch a familiar tune, it almost sounds like Gnarls Barkley’s song Crazy, can’t tell for sure. As it turns out, if you are using git-fsck, you are unable to clone boto and requests. Obviously the proper fix for his is to not care about the validity of the git objects! So we die a little inside and comment out fsckobjects in our ~/.gitconfig. Next up, chromium-58 is downloaded… Out of curiosity we look at vendor/libchromiumcontent/script/update, it seems its purpose is to download / extract chromium clang and node, good thing we already specified --clang_dir or it might try to build clang again! 544 dots and 45 minutes later, we have an error! The chromium-58.0.3029.110.tar.xz file is mysteriously not there anymore.. Interesting. Wut. “Updating Clang…”. 
Didn't I explicitly say not to build clang? At this point we have to shift projects, no longer are we working on Electron.. It's libchromiumcontent that needs our attention. Fixing sub-tools Ahh, our old friends the dots! This is the second time waiting 45+ minutes for a 500+ MB file to download. We are fairly confident it will fail, delete the file out from under itself and hinder the process even further, so we add an explicit exit to the update script. This way we can copy the file somewhere safe! Another 45 minute chrome build and saving the downloaded executable to a safe space seems in order. Fixing another 50 occurrences of error conditions lets the build continue - to another clang build. We remove the call to update_clang, because.. well.. we have two copies of it already and the Electron doc said everything would be fine if we had >= clang 3.4! More re-builds and updates of clang and chromium are being commented out, just to get somewhere close to the actual electron build. Fixing sub-sub-tools Ninja needs to be built and the script for that needs to be told to ignore this "unsupported OS" to continue. No luck. At this point we are faced with a complex web of python scripts that execute gn on GN files to produce ninja files… which then build the various components and somewhere in that cluster, something doesn't know about OpenBSD… I look at Henry, he is looking at a photo of his wife and kids. They are sitting on a telephone wire, the morning sun illuminating their beautiful faces. Henry looks back at me and says "It's not worth it." We slam the laptop shut and go outside. Interview - Dan McDonald - allcoms@gmail.com (mailto:allcoms@gmail.com) (danboid) News Roundup g4u 2.6 (ghosting for unix) released, 18th birthday (https://mail-index.netbsd.org/netbsd-users/2017/06/08/msg019625.html) Hubert Feyrer writes in his mail to netbsd-users: After a five-year period for beta-testing and updating, I have finally released g4u 2.6. With its origins in 1999, I'd like to say: Happy 18th Birthday, g4u! About g4u: g4u ("ghosting for unix") is a NetBSD-based bootfloppy/CD-ROM that allows easy cloning of PC harddisks to deploy a common setup on a number of PCs using FTP. The floppy/CD offers two functions. The first is to upload the compressed image of a local harddisk to an FTP server, the other is to restore that image via FTP, uncompress it and write it back to disk. Network configuration is fetched via DHCP. As the harddisk is processed as an image, any filesystem and operating system can be deployed using g4u. Easy cloning of local disks as well as partitions is also supported. The past: When I started g4u, I had the task of installing a number of lab machines with a dual-boot of Windows NT and NetBSD. The hype was about Microsoft's "Zero Administration Kit" (ZAK) then, but that barely worked for the Windows part - file transfers were slow, depended on the clients' hardware a lot (requiring fiddling with MS DOS network driver disks), and on the ZAK server the files for installing happened to disappear for no good reason every now and then. Not working well, and leaving out NetBSD (and everything else), I created g4u. This gave me the (relative) pain of getting things working once, but with the option to easily add network drivers as they appeared in NetBSD (and oh they did!), plus allowed me to install any operating system. The present: We've used g4u successfully in our labs then, booting from CDROM. 
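As a rough illustration of the two functions just described, the workflow looks something like the following once g4u has booted and picked up a DHCP lease; the command names are from the g4u documentation, while the FTP server, image name, and disk are placeholders.

  # upload a compressed image of the first disk to an FTP server
  uploaddisk ftp.example.org lab-image.gz wd0
  # later, on each target machine, pull the image back down and write it to disk
  slurpdisk ftp.example.org lab-image.gz wd0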
I also got many donations from public and private institutions plus companies from many sectors, indicating that g4u does make a difference. In the meantime, the world has changed, and CDROMs aren't used that much any more. Network boot and USB sticks are today's devices of choice, cloning of a full disk without knowing its structure has both advantages but also disadvantages, and g4u's user interface is still command-line based with not much space for automation. For storage, FTP servers are nice and fast, but alternatives like SSH/SFTP, NFS, iSCSI and SMB for remote storage plus local storage (back to fun with filesystems, anyone? avoiding this was why g4u was created in the first place!) should be considered these days. Further aspects include integrity (checksums), confidentiality (encryption). This leaves a number of open points to address either by future releases, or by other products. The future: At this point, my time budget for g4u is very limited. I welcome people to contribute to g4u - g4u is Open Source for a reason. Feel free to get back to me for any changes that you want to contribute! The changes: Major changes in g4u 2.6 include: Make this build with NetBSD-current sources as of 2017-04-17 (shortly before netbsd-8 release branch), binaries were cross-compiled from Mac OS X 10.10 Many new drivers, bugfixes and improvements from NetBSD-current (see beta1 and beta2 announcements) Go back to keeping the disk image inside the kernel as ramdisk, do not load it as separate module. Less error prone, and allows to boot the g4u (NetBSD) kernel from a single file e.g. via PXE (Testing and documentation updates welcome!) Actually DO provide the g4u (NetBSD) kernel with the embedded g4u disk image from now on, as separate file, g4u-kernel.gz In addition to MD5, add SHA512 checksums Congratulation, g4u. Check out the g4u website (http://fehu.org/~feyrer/g4u/) and support the project if you are using it. *** Fixing FreeBSD Networking on Digital Ocean (https://wycd.net/posts/2017-05-19-fixing-freebsd-networking-on-digital-ocean.html) Most cloud/VPS providers use some form of semi-automated address assignment, rather than just regular static address configuration, so that newly created virtual machines can configure themselves. Sometimes, especially during the upgrade process, this can break. This is the story of one such user: I decided it was time to update my FreeBSD Digital Ocean droplet from the end-of-life version 10.1 (shame on me) to the modern version 10.3 (good until April 2018), and maybe even version 11 (good until 2021). There were no sensitive files on the VM, so I had put it off. Additionally, cloud providers tend to have shoddy support for BSDs, so breakages after messing with the kernel or init system are rampant, and I had been skirting that risk. The last straw for me was a broken pkg: /usr/local/lib/libpkg.so.3: Undefined symbol "openat" So the user fires up freebsd-update and upgrades to FreeBSD 10.3 I rebooted, and of course, it happened: no ssh access after 30 seconds, 1 minute, 2 minutes…I logged into my Digital Ocean account and saw green status lights for the instance, but something was definitely wrong. Fortunately, Digital Ocean provides console access (albeit slow, buggy, and crashes my browser every time I run ping). ifconfig revealed that the interfaces vtnet0 (public) and vtnet1 (private) haven’t been configured with IP addresses. 
Combing through files in /etc/rc.*, I found a file called /etc/rc.digitalocean.d/${DROPLET_ID}.conf containing static network settings for this droplet (${DROPLET_ID} was something like 1234567). It seemed that FreeBSD wasn't picking up the Digital Ocean network settings config file. The quick and dirty way would have been to messily append the contents of this file to /etc/rc.conf, but I wanted a nicer way. Reading the script in /etc/rc.d/digitalocean told me that /etc/rc.digitalocean.d/${DROPLET_ID}.conf was supposed to have a symlink at /etc/rc.digitalocean.d/droplet.conf. It was broken and pointed to /etc/rc.digitalocean.d/.conf, which could happen when the curl command in /etc/rc.d/digitalocean fails. Maybe the curl binary was also in need of an upgrade and so failed to fetch the droplet ID. Using grep to fish for files containing droplet.conf, I discovered that it was hacked into the init system via load_rc_config() in /etc/rc.subr. I would prefer if Digital Ocean had not customized the version of FreeBSD they ship quite so much. I could fix that symlink and restart the services: set DROPLET_ID=$(curl -s http://169.254.169.254/metadata/v1/id) ln -s -f /etc/rc.digitalocean.d/${DROPLET_ID}.conf /etc/rc.digitalocean.d/droplet.conf /etc/rc.d/netif restart /etc/rc.d/routing restart Networking was working again, and I could then ssh into my server and run the following to finish the upgrade: freebsd-update install At this point, I decided that I didn't want to deal with this mess again until at least 2021, so I decided to go for 11.0-RELEASE freebsd-update -r 11.0-RELEASE update freebsd-update install reboot freebsd-update install pkg-static install -f pkg pkg update pkg upgrade uname -a FreeBSD hostname 11.0-RELEASE-p9 FreeBSD 11.0-RELEASE-p9 pkg -v 1.10.1 The problem was solved correctly, and my /etc/rc.conf remains free of generated cruft. The Digital Ocean team can make our lives easier by having their init scripts do more thorough system checking, e.g., catching broken symlinks and bad network addresses. I'm hopeful that collaboration between the FreeBSD team and cloud providers will one day result in automatic fixing of these situations, or at least a correct status indicator. The Digital Ocean team didn't really know many FreeBSD people when they made the first 10.1 images; they have improved a lot, but they of course could always use more feedback from BSD users ** Stack Clash (https://www.qualys.com/2017/06/19/stack-clash/stack-clash.txt) A 12-year-old question: "If the heap grows up, and the stack grows down, what happens when they clash? Is it exploitable? How? 
In 2005, Gael Delalleau presented "Large memory management vulnerabilities" and the first stack-clash exploit in user-space (against mod_php 4.3.0 on Apache 2.0.53) (http://cansecwest.com/core05/memory_vulns_delalleau.pdf) In 2010, Rafal Wojtczuk published "Exploiting large memory management vulnerabilities in Xorg server running on Linux", the second stack-clash exploit in user-space (CVE-2010-2240) (http://www.invisiblethingslab.com/resources/misc-2010/xorg-large-memory-attacks.pdf) Since 2010, security researchers have exploited several stack-clashes in the kernel-space, In user-space, however, this problem has been greatly underestimated; the only public exploits are Gael Delalleau's and Rafal Wojtczuk's, and they were written before Linux introduced a protection against stack-clashes (a "guard-page" mapped below the stack) (https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2010-2240) In this advisory, we show that stack-clashes are widespread in user-space, and exploitable despite the stack guard-page; we discovered multiple vulnerabilities in guard-page implementations, and devised general methods for: "Clashing" the stack with another memory region: we allocate memory until the stack reaches another memory region, or until another memory region reaches the stack; "Jumping" over the stack guard-page: we move the stack-pointer from the stack and into the other memory region, without accessing the stack guard-page; "Smashing" the stack, or the other memory region: we overwrite the stack with the other memory region, or the other memory region with the stack. So this advisory itself, is not a security vulnerability. It is novel research showing ways to work around the mitigations against generic vulnerability types that are implemented on various operating systems. While this issue with the mitigation feature has been fixed, even without the fix, successful exploitation requires another application with its own vulnerability in order to be exploited. Those vulnerabilities outside of the OS need to be fixed on their own. FreeBSD-Security post (https://lists.freebsd.org/pipermail/freebsd-security/2017-June/009335.html) The issue under discussion is a limitation in a vulnerability mitigation technique. Changes to improve the way FreeBSD manages stack growth, and mitigate the issue demonstrated by Qualys' proof-of-concept code, are in progress by FreeBSD developers knowledgeable in the VM subsystem. FreeBSD address space guards (https://svnweb.freebsd.org/base?view=revision&revision=320317) HardenedBSD Proof of Concept for FreeBSD (https://github.com/lattera/exploits/blob/master/FreeBSD/StackClash/001-stackclash.c) HardenedBSD implementation: https://github.com/HardenedBSD/hardenedBSD/compare/de8124d3bf83d774b66f62d11aee0162d0cd1031...91104ed152d57cde0292b2dc09489fd1f69ea77c & https://github.com/HardenedBSD/hardenedBSD/commit/00ad1fb6b53f63d6e9ba539b8f251b5cf4d40261 Qualys PoC: freebsd_cve-2017-fgpu.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-fgpu.c) Qualys PoC: freebsd_cve-2017-fgpe.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-fgpe.c) Qualys PoC: freebsd_cve-2017-1085.c (https://www.qualys.com/2017/06/19/stack-clash/freebsd_cve-2017-1085.c) Qualys PoC: OpenBSD (https://www.qualys.com/2017/06/19/stack-clash/openbsd_at.c) Qualys PoC: NetBSD (https://www.qualys.com/2017/06/19/stack-clash/netbsd_cve-2017-1000375.c) *** Will ZFS and non-ECC RAM kill your data? 
(http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/) TL;DR: ECC is good, but even without, having ZFS is better than not having ZFS. What’s ECC RAM? Is it a good idea? What’s ZFS? Is it a good idea? Is ZFS and non-ECC worse than not-ZFS and non-ECC? What about the Scrub of Death? The article walks through ZFS folk lore, and talks about what can really go wrong, and what is just the over-active imagination of people on the FreeNAS forums But would using any other filesystem that isn’t ZFS have protected that data? ‘Cause remember, nobody’s arguing that you can lose data to evil RAM – the argument is about whether evil RAM is more dangerous with ZFS than it would be without it. I really, really want to use the Scrub Of Death in a movie or TV show. How can I make it happen? I don’t care about your logic! I wish to appeal to authority! OK. “Authority” in this case doesn’t get much better than Matthew Ahrens, one of the cofounders of ZFS at Sun Microsystems and current ZFS developer at Delphix. In the comments to one of my filesystem articles on Ars Technica, Matthew said “There’s nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem.” Beastie Bits EuroBSDcon 2017 Travel Grant Application Now Open (https://www.freebsdfoundation.org/blog/eurobsdcon-2017-travel-grant-application-now-open/) FreeBSD 11.1-BETA3 is out, please give it a test (https://lists.freebsd.org/pipermail/freebsd-stable/2017-June/087303.html) Allan and Lacey let us know the video to the Postgresql/ZFS talk is online (http://dpaste.com/1FE80FJ) Trapsleds (https://marc.info/?l=openbsd-tech&m=149792179514439&w=2) BSD User group in North Rhine-Westphalia, Germany (https://bsd.nrw/) *** Feedback/Questions Joe - Home Server Suggestions (http://dpaste.com/2Z5BJCR#wrap) Stephen - general BSD (http://dpaste.com/1VRQYAM#wrap) Eduardo - ZFS Encryption (http://dpaste.com/2TWADQ8#wrap) Joseph - BGP Kernel Error (http://dpaste.com/0SC0GAC#wrap) ***
199: Read the source, KARL
FreeBSD 11.1-Beta1 is out, we discuss Kernel address randomized link (KARL), and explore the benefits of daily OpenBSD source code reading This episode was brought to you by Headlines FreeBSD 11.1-Beta1 now available (https://lists.freebsd.org/pipermail/freebsd-stable/2017-June/087242.html) Glen Barber, of the FreeBSD release engineering team has announced that FreeBSD 11.1-Beta1 is now available for the following architectures: 11.1-BETA1 amd64 GENERIC 11.1-BETA1 i386 GENERIC 11.1-BETA1 powerpc GENERIC 11.1-BETA1 powerpc64 GENERIC64 11.1-BETA1 sparc64 GENERIC 11.1-BETA1 armv6 BANANAPI 11.1-BETA1 armv6 BEAGLEBONE 11.1-BETA1 armv6 CUBIEBOARD 11.1-BETA1 armv6 CUBIEBOARD2 11.1-BETA1 armv6 CUBOX-HUMMINGBOARD 11.1-BETA1 armv6 GUMSTIX 11.1-BETA1 armv6 RPI-B 11.1-BETA1 armv6 RPI2 11.1-BETA1 armv6 PANDABOARD 11.1-BETA1 armv6 WANDBOARD 11.1-BETA1 aarch64 GENERIC Note regarding arm/armv6 images: For convenience for those without console access to the system, a freebsd user with a password of freebsd is available by default for ssh(1) access. Additionally, the root user password is set to root. It is strongly recommended to change the password for both users after gaining access to the system. The full schedule (https://www.freebsd.org/releases/11.1R/schedule.html) for 11.1-RELEASE is here, the final release is expected at the end of July It was also announced there will be a 10.4-RELEASE scheduled for October (https://www.freebsd.org/releases/10.4R/schedule.html) *** KARL – kernel address randomized link (https://marc.info/?l=openbsd-tech&m=149732026405941&w=2) Over the last three weeks I've been working on a new randomization feature which will protect the kernel. The situation today is that many people install a kernel binary from OpenBSD, and then run that same kernel binary for 6 months or more. We have substantial randomization for the memory allocations made by the kernel, and for userland also of course. Previously, the kernel assembly language bootstrap/runtime locore.S was compiled and linked with all the other .c files of the kernel in a deterministic fashion. locore.o was always first, then the .c files order specified by our config(8) utility and some helper files. In the new world order, locore is split into two files: One chunk is bootstrap, that is left at the beginning. The assembly language runtime and all other files are linked in random fashion. There are some other pieces to try to improve the randomness of the layout. As a result, every new kernel is unique. The relative offsets between functions and data are unique. It still loads at the same location in KVA. This is not kernel ASLR! ASLR is a concept where the base address of a module is biased to a random location, for position-independent execution. In this case, the module itself is perturbed but it lands at the same location, and does not need to use position-independent execution modes. LLDB: Sanitizing the debugger's runtime (https://blog.netbsd.org/tnf/entry/lldb_sanitizing_the_debugger_s) The good Besides the greater enhancements this month I performed a cleanup in the ATF ptrace(2) tests again. Additionally I have managed to unbreak the LLDB Debug build and to eliminate compiler warnings in the NetBSD Native Process Plugin. It is worth noting that LLVM can run tests on NetBSD again, the patch in gtest/LLVM has been installed by Joerg Sonnenberg and a more generic one has been submitted to the upstream googletest repository. There was also an improvement in ftruncate(2) on the LLVM side (authored by Joerg). 
Since LLD (the LLVM linker) is advancing rapidly, it improved support for NetBSD and it can link a functional executable on NetBSD. I submitted a patch to stop it from crashing on startup. It was nearly used for linking LLDB/NetBSD and it spotted a real linking error... however there are further issues that need to be addressed in the future. Currently LLD is not part of the mainline LLDB tasks - it's part of improving the work environment. This linker should reduce the linking time - compared to GNU linkers - of LLDB by a factor of 3x-10x and save precious developer time. As of now LLDB linking can take minutes on a modern amd64 machine designed for performance. Kernel correctness I have researched (in pkgsrc-wip) initial support for multiple threads in the NetBSD Native Process Plugin. This code revealed - when running the LLDB regression test-suite - new kernel bugs. This unfortunately affects the usability of a debugger in a multithread environment in general and explains why GDB was never doing its job properly in such circumstances. One of the first errors was an asserting kernel panic with PT*STEP, when a debuggee has more than a single thread. I have narrowed it down to lock primitives misuse in the do_ptrace() kernel code. The fix has been committed. The bad Unfortunately this is not the full story and there is further mandatory work. LLDB acceleration The EV_SET() bug broke upstream LLDB over a month ago, and during this period the debugger was significantly accelerated and parallelized. It is difficult to say definitively, but it might be the reason why the tracer's runtime broke due to threading desynchronization. LLDB behaves differently when run standalone, under ktruss(1) and under gdb(1) - the shared bug is that it always fails in one way or another, which isn't trivial to debug. The ugly There are also unpleasant issues at the core of the Operating System. Kernel troubles Another bug with single-step functions that affects another aspect of correctness - this time with reliable execution of a program - is that processes die in non-deterministic ways when single-stepped. My current impression is that there is no appropriate translation between process and thread (LWP) states under a debugger. These issues are sibling problems to unreliable PT_RESUME and PT_SUSPEND. In order to be able to appropriately address this, this month I have diligently studied the Solaris Internals book to get a better image of the design of NetBSD kernel multiprocessing, which was modeled after this commercial UNIX. Plan for the next milestone The current troubles can be summarized as data races in the kernel and at the same time in LLDB. I have decided to port the LLVM sanitizers, as I require the Thread Sanitizer (tsan). Temporarily I have removed the code for tracing processes with multiple threads to hide the known kernel bugs and focus on the LLDB races. Unfortunately LLDB is not easily bisectable (build time of the LLVM+Clang+LLDB stack, number of revisions), therefore the debugging has to be performed on the most recent code from upstream trunk. 
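For context, once a Thread Sanitizer runtime is available on a platform, hunting the kind of data races described above usually comes down to rebuilding the suspect code with the sanitizer enabled; the flags below are standard clang usage, not anything NetBSD-specific from the report, and the source file name is just an example.

  # rebuild with ThreadSanitizer instrumentation
  clang -fsanitize=thread -g -O1 -o repro repro.c -lpthread
  # run it; tsan prints a report for every data race it observes at runtime
  ./repro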
d2k17 Hackathon Reports d2k17 Hackathon Report: Ken Westerback on XSNOCCB removal and dhclient link detection (http://undeadly.org/cgi?action=article&sid=20170605225415) d2k17 Hackathon Report: Antoine Jacoutot on rc.d, syspatch, and more (http://undeadly.org/cgi?action=article&sid=20170608074033) d2k17 Hackathon Report: Florian Obser on slaacd(8) (http://undeadly.org/cgi?action=article&sid=20170609013548) d2k17 Hackathon Report: Stefan Sperling on USB audio, WiFi Progress (http://undeadly.org/cgi?action=article&sid=20170602014048) News Roundup Multi-tenant router or firewall with FreeBSD (https://bsdrp.net/documentation/examples/multi-tenant_router_and_firewall) Setting-up a virtual lab Downloading BSD Router Project images Download the BSDRP serial image (avoids the need for an X display) on Sourceforge. Download Lab scripts More information on these BSDRP lab scripts is available in How to build a BSDRP router lab (https://bsdrp.net/documentation/examples/how_to_build_a_bsdrp_router_lab). Start the lab with 5 full-meshed routers and one shared LAN, in this example using the bhyve lab script on FreeBSD: [root@FreeBSD]~# tools/BSDRP-lab-bhyve.sh -i BSDRP-1.71-full-amd64-serial.img.xz -n 5 -l 1 Configuration Router 4 (R4) hosts the 3 routers/firewalls, one for each of the 3 customers. Router 1 (R1) belongs to customer 1, router 2 (R2) to customer 2 and router 3 (R3) to customer 3. Router 5 (R5) simulates a simple Internet host Using the pf firewall in place of ipfw pf needs a little more configuration because by default /dev/pf is hidden from jails. Then, on the host we need to: In place of loading the ipfw/ipfw-nat modules, load the pf module (but still disabling pf on our host for this example) Modify the default devfs rules to allow jails to see /dev/pf (if you want to use tcpdump inside your jail, you should expose the bpf device too) Replace the nojail tag with the nojailvnet tag in /etc/rc.d/pf (already done in BSDRP (https://github.com/ocochard/BSDRP/blob/master/BSDRP/patches/freebsd.pf.rc.jail.patch)) Under the hood: jails-on-nanobsd BSDRP's tenant shell script (https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/sbin/tenant) creates a jail configuration compliant with a host running nanobsd. These jails then need to be configured for nanobsd: Be nullfs-based so they can be hosted on a read-only root filesystem Have their /etc and /var on tmpfs disks (so we need to populate these directories before each start) Configuration changes need to be saved with the nanobsd configuration tools, like “config save” on BSDRP. And on the host, the autosave daemon (https://github.com/ocochard/BSDRP/blob/master/BSDRP/Files/usr/local/sbin/autosave) needs to be enabled: each time a customer issues a “config save” inside a jail, his configuration diffs are saved into the host's /etc/jails/. Since this directory is a RAM disk too, we need to automatically save the host's configuration on changes. *** OpenBSD Daily Source Reading (https://blog.tintagel.pl/2017/06/09/openbsd-daily.html) Adam Wołk writes: I made a new year's resolution to read at least one C source file from OpenBSD daily. The goal was to both get better at C and to contribute more to the base system and userland development. I have to admit that initially I wasn’t consistent with it at all. In the first quarter of the year I read the code of a few small base utilities and nothing else. Still, every bit counts and it’s never too late to get better. Around the end of May, I really started reading code daily - no days skipped. 
It usually takes anywhere between ten minutes (for small base utils) and one and a half hours (for targeted reads). I’m pretty happy with the results so far. Exploring the system on a daily basis, looking up things in the code that I don’t understand and digging as deep as possible made me learn a lot more both about C and the system than I initially expected. There’s also one more side effect of reading code daily - diffs. It’s easy to spot inconsistencies, outdated code or an incorrect man page. This results in opportunities for contributing to the project. With time it also becomes less opportunistic and more goal oriented. You might start with a drive-by diff (https://marc.info/?l=openbsd-tech&m=149591302814638&w=2) to kill optional compilation of an old compatibility option in chown that has been compiled in by default since 1995. Soon the contributions become more targeted, for example using a new API for encrypting passwords in the htpasswd utility after reading the code of the utility and the code for htpasswd handling in httpd. Similarly it can take you from discussing a doas feature idea with a friend to implementing it after reading the code. I was having a lot of fun reading code daily and started to recommend it to people in general discussions. There was one particular twitter thread that ended up starting something new. This is still a new thing and the format is not yet solidified. Generally I make a lot of notes reading code; instead of slapping them inside a local file, I drop the notes on the IRC channel as I go. Everyone on the channel is encouraged to do the same or share their notes in any way they deem feasible. Check out the logs from the IRC discussions. Start reading code from other BSD projects and see whether you can replicate their results! *** Become FreeBSD User: Find Useful Tools (https://bsdmag.org/become-freebsd-user-find-useful-tools/) BSD Mag has the following article by David Carlier: If you’re usually programming on Linux and you consider a potential switch to FreeBSD, this article will give you an overview of the possibilities. How to Install the Dependencies FreeBSD comes with applications either as binary packages or compiled from sources (ports). They are arranged according to software types (programming languages mainly in lang (or java specifically for Java), libraries in devel, web servers in www …) and the main tool for modern FreeBSD versions is pkg, similar to Debian's apt tool suite. Hence, most of the time, if you are looking for a specific application/library, simply pkg search it without necessarily knowing the fully qualified name of the package; that is usually sufficient. For example, pkg search php7 will display php7 itself and its modules, the php70-specific versions, and so on. Web Development Basically, this is the easiest area to migrate to. Most Web languages do not use specific platform features. Thus, most of the time, your existing projects might just be “drop-in” use cases. If your language of choice is PHP, you are lucky as this scripting language is workable on various operating systems, on most Unixes and Windows. In the case of FreeBSD, you even have many different ports or binary package versions (5.6 to 7.1). In this case, you may need some specific PHP modules enabled; luckily they are available individually, or, if ports are the way you choose, via the www/php70-extensions port. 
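As a concrete sketch of that pkg workflow (the extension package names here are illustrative; check the pkg search output for the exact ones on your release):

  # find PHP 7.0 and its modules
  pkg search php70
  # install the interpreter plus a couple of common extensions
  pkg install php70 php70-mysqli php70-gd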
Of course developing with Apache (both the 2.2 and 2.4 series are available, respectively the www/apache22 and www/apache24 packages), or even better with Nginx (the last stable or the latest development versions can be used, respectively the www/nginx and www/nginx-devel packages) via php-fpm is possible. In terms of databases, we have the regular RDBMS like MySQL and PostgreSQL (client and server are distinct packages … databases/(mysql/postgresql)-client, and databases/(mysql/postgresql)-server). Additionally, there is the more modern concept of NoSQL with CouchDB, for example (databases/couchdb), MongoDB (databases/mongodb), and Cassandra (databases/cassandra), to name but a few. Low-level Development The BSDs are shipped with C and C++ compilers in the base. In the case of FreeBSD 11.0, it is clang 3.8.0 (on x86 architectures); otherwise, modern versions of gcc exist for developing with C++11, and they are of course available too (lang/gcc … up to the gcc 7.0 devel version). Numerous libraries for various topics are also present, from web services (SOAP with gsoap) through user interfaces (GTK via x11-toolkits/gtk, QT4 or QT5 via devel/qt) to malware libraries (Yara, security/yara), etc. Android / Mobile Development To be able to do Android development, to a certain degree, the Linux compatibility layer (aka the linuxulator) needs to be enabled. Also, the x11-toolkits/swt and linux-f10-gtk2 ports/packages need to be installed (note that libswt-gtk-3550.so and libswt-pi-gtk-3550.so are necessary; the current package is versioned as 3557, which can be solved using symlinks). In the worst case scenario, remember that bhyve (or VirtualBox) is available and can run any Linux distribution efficiently. Source Control Management FreeBSD comes with a version of subversion in base. As the FreeBSD source is in a subversion repository, the prefixed svnlite command prevents conflicts with the package/port. Additionally, Git is present, but via the package/port system, with various options (with or without a user interface, subversion support). Conclusion FreeBSD has made tremendous improvements over the years to close the gap with Linux. FreeBSD still maintains its interesting specificities; hence there will not be too many blockers if your projects are reasonably sized to allow a migration to FreeBSD. Notes from project Aeronix, part 10 (https://martin.kopta.eu/blog/#2017-06-11-16-07-26) Prologue It is almost two years since I finished building Aeronix and it has served me well during that time. The only thing that ever broke was the Noctua CPU fan, which I have replaced with the same model. However, for a long time, I have wanted to run Aeronix on OpenBSD instead of GNU/Linux Debian. Preparation I first experimented with an OpenBSD RAID1 setup in VirtualBox, plugging and unplugging drives, and learned that OpenBSD RAID1 is really smooth. When I finally got the courage, I copied all the data onto two drives outside of Aeronix: one external HDD I regularly use to back up Aeronix, and a second internal drive in my desktop computer. Copying the data took about two afternoons. Aeronix usually has higher temperatures (somewhere around 55°C or 65°C depending on the time of the year), and when stressed, it can go really high (around 75°C). During the full-speed copy over NFS and to the external drive it went as high as 85°C, which made me a bit nervous. After the data were copied, I temporarily un-configured the computers on the local network so they would not touch Aeronix, and plugged in a keyboard, a display and the OpenBSD 6.1 thumb drive. Installing OpenBSD 6.1 on full disk RAID1 was super easy. 
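For reference, the softraid RAID1 setup mentioned above boils down to a couple of commands from the installer's shell; the device names are placeholders for the two drives, and both need a RAID partition created with disklabel first.

  # assemble the mirror from the two RAID partitions
  bioctl -c 1 -l /dev/sd0a,/dev/sd1a softraid0
  # the installer is then pointed at the new softraid volume (it shows up as, e.g., sd2)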
Configuring NFS Aeronix serves primarily as a NAS, which means NFS and SMB. NFS is used by computers on the local network with a persistent connection (via Ethernet). SMB is used by other devices on the local network with a volatile connection (via WiFi). When configuring NFS, I expected a configuration similar to what I had in Debian, but on OpenBSD it is very different. However, after reading through exports(5), it was really easy to put it together. Putting the data back Copying from the external drive took a few days, since the transfer speed was something around 5MB/s. I didn't really mind. It was sort of a good thing, because Aeronix wasn't overheating that way. I guess I need to figure out a new backup strategy though. One interesting thing happened with one of my local desktops. It was connecting to Aeronix with the default NFS mount options (on Archlinux) and had really big trouble reading anything. Basically it behaved as if the network drive had horrible access times. After changing the default mount options, it started working perfectly. Conclusion Migrating to OpenBSD was way easier than I anticipated. There are various benefits like more security, a reliable RAID1 setup (where I know how it will behave when a drive dies), better documentation and much more. However, the true benefit for me is just the fact that I like OpenBSD, and it makes me happy to have one more OpenBSD machine. On to the next two years of service! Beastie Bits Running OpenBSD on Azure (http://undeadly.org/cgi?action=article&sid=20170609121413&mode=expanded&count=0) Mondieu - portable alternative for freebsd-update (https://github.com/skoef/mondieu) Plan9-9k: 64-bit Plan 9 (https://bitbucket.org/forsyth/plan9-9k) Installing OpenBSD 6.1 on your laptop is really hard (not) (http://sohcahtoa.org.uk/openbsd.html) UbuntuBSD is dead (http://www.ubuntubsd.org/) OPNsense 17.1.8 released (https://opnsense.org/opnsense-17-1-8-released/) *** Feedback/Questions Patrick - Operating System Textbooks (http://dpaste.com/2DKXA0T#wrap) Brian - snapshot retention (http://dpaste.com/3CJGW22#wrap) Randy - FreeNAS to FreeBSD (http://dpaste.com/2X3X6NR#wrap) Florian - Bootloader Resolution (http://dpaste.com/1AE2SPS#wrap) ***
198: BSDNorth or You can’t handle the libtruth
This episode gives you the full dose of BSDCan 2017 recap as well as a blog post on conference speaking advice. Headlines Pre-conference activities: Goat BoF, FreeBSD Foundation Board Meeting, and FreeBSD Journal Editorial Board Meeting The FreeBSD Foundation has a new President as Justin Gibbs is busy this year with building a house, so George Neville-Neil took up the task to serve as President, with Justin Gibbs as Secretary. Take a look at the updated Board of Directors (https://www.freebsdfoundation.org/about/board-of-directors/). We also have a new staff member (https://www.freebsdfoundation.org/about/staff/): Scott Lamons joined the Foundation team as senior program manager. Scott’s work for the Foundation will focus on managing and evangelizing programs for advanced technologies in FreeBSD including preparing project plans, coordinating resources, and facilitating interactions between commercial vendors, the Foundation, and the FreeBSD community. The Foundation also planned various future activities, visits of upcoming conferences, and finding new ways to support and engage the community. The Foundation now has interns in the form of co-op students from the University of Waterloo, Canada. This is described further in the May 2017 Development Projects Update (https://www.freebsdfoundation.org/blog/may-2017-development-projects-update/). Both students (Siva and Charlie) were also the conference, helping out at the Foundation table, demonstrating the tinderbox dashboard. Follow the detailed instructions (https://www.freebsdfoundation.org/news-and-events/blog/blog-post/building-a-physical-freebsd-build-status-dashboard/) to build one of your own. The Foundation put out a call for Project Proposal Solicitation for 2017 (https://www.freebsdfoundation.org/blog/freebsd-foundation-2017-project-proposal-solicitation/). If you think you have a good proposal for work relating to any of the major subsystems or infrastructure for FreeBSD, we’d be happy to review it. Don’t miss the deadlines for travel grants to some of the upcoming conferences. You can find the necessary forms and deadlines at the Travel Grant page (https://www.freebsdfoundation.org/what-we-do/travel-grants/travel-grants/) on the Foundation website. Pictures from the Goat BoF can be found on Keltia.net (https://assets.keltia.net/photos/BSDCan-2017/Royal%20Oak/index.html) Overlapping with the GoatBoF, members of the FreeBSD Journal editorial board met in a conference room in the Novotel to plan the upcoming issues. Topics were found, authors identified, and new content was discussed to appeal to even more readers. Check out the FreeBSD Journal website (https://www.freebsdfoundation.org/journal/) and subscribe if you like to support the Foundation in that way. FreeBSD Devsummit Day 1 & 2 (https://wiki.freebsd.org/DevSummit/201706) The first day of the Devsummit began with introductory slides by Gordon Tetlow, who organized the devsummit very well. Benno Rice of the FreeBSD core team presented the work done on the new Code of Conduct, which will become effective soon. A round of Q&A followed, with positive feedback from the other devsummit attendees supporting the new CoC. After that, Allan Jude joined to talk about the new FreeBSD Community Proposal (FCP) (https://github.com/freebsd/fcp) process. Modelled after IETF RFCs, Joyent RFDs, and Python PEP, it is a new way for the project to reach consensus on the design or implementation of new features or processes. 
The FCP repo contains FCP#0, which describes the process, and a template for writing a proposal. Then, the entire core team (except John Baldwin, who could not make it this year) and the core secretary held a core Q&A session, answering questions and gathering feedback and suggestions. After the coffee break, we had a presentation about Intel’s QAT integration in FreeBSD. When lunch was over, people spread out into working groups about BearSSL, Transport (TCP/IP), and OpenZFS. OpenZFS working group (https://pbs.twimg.com/media/DBu_IMsWAAId2sN.jpg:large): Matt Ahrens led the group, and spent most of the first session providing a status update about what features have been recently committed, are out for review, on the horizon, or in the design phase. Existing Features Compressed ARC Compressed Send/Recv Recently Upstreamed A recent commit improved RAID-Z write speeds by declaring writes to padding blocks to be optional, and to always write them if they can be aggregated with the next write. Mostly impacts large record sizes. ABD (ARC buffer scatter/gather) Upstreaming In Progress Native Encryption Channel Programs Device Removal (Mirrors and Stripes) Redacted Send/recv Native TRIM Support (FreeBSD has its own, but this is better and applies to all ZFS implementations) Faster (mostly sequential) scrub/resilver DRAID (A great deal of time was spent explaining how this works, with diagrams on the chalkboard) vdev metadata classes (store metadata on SSDs while data is on HDDs, or similar setups. Could also be modified to do dedup to SSD) Multi-mount protection (“safe import”, for dual-headed storage shelves) zpool checkpoint (roll back an entire pool, including zfs rename and zfs destroy) Further Out Import improvements Import with missing top-level vdevs (some blocks unreadable, but might let you get some data) Improved allocator performance -- vdev spacemap log ZIL performance Persistent L2ARC ZSTD Compression Day 2 Day two started with the Have/Want/Need session for FreeBSD 12.0. A number of features that various people have or are in the process of building were discussed with an eye towards upstreaming them. Features we want to have in time for 12.0 (early 2019) were also discussed. After the break was the Vendor summit, which continued the discussion of how FreeBSD and its vendors can work together to make a better operating system, and better products based on it. After lunch, the group broke up into various working groups: Testing/CI, Containers, Hardening UFS, and GELI Improvements. Allan led the GELI Improvements session. The main thrust of the discussions was fixing an outstanding bug in GELI when using both key slots with passphrases. To solve this, and make GELI more extensible, the metadata format will be extended to allow it to store more than 512 bytes of data (currently 511 bytes are used). The new format will allow arbitrarily large metadata, defined at creation time by selecting the number of user key slots desired. The new extended metadata format will contain mostly the same fields, except the userkey will no longer be a byte array of IV-key, Data-key, HMAC, but a struct that will contain all data about that key. This new format will store the number of pkcs5v2 iterations per key, instead of only having a single location to store this number for all keys (the source of the original bug). A new set of flags per key will control some aspects of the key (does it require a keyfile, etc), as well as possibly the role of the key. 
An auxdata field related to the flags, this would allow a specific key with a specific flag set, to boot a different partition, rather than decrypt the main partition. A URI to external key material is also stored per key, allowing GELI to uniquely identify the correct data to load to be able to use a specific decryption key And the three original parts of the key are stored in separate fields now. The HMAC also has a type field, allowing for a different HMAC algorithm to be used in the future. The main metadata is also extended to include a field to store the number of user keys, and to provide an overall HMAC of the metadata, so that it can be verified using the master key (provide any of the user keys) Other topics discussed: Ken Merry presented sedutil, a tool for managing Self Encrypting Drives, as may be required by certain governments and other specific use cases. Creating a deniable version of GELI, where the metadata is also encrypted The work to implemented GELI in the UEFI loader was discussed, and a number of developers volunteered to review and test the code Following the end of the Dev Summit, the “Newcomers orientation and mentorship” session was run by Michael W. Lucas, which attempts to pair up first time attendees with oldtimers, to make sure they always know a few people they can ask if they have questions, or if they need help getting introduced to the right people. News Roundup Conference Day 1 (http://www.bsdcan.org/2017/schedule/day_2017-06-09.en.html) The conference opened with some short remarks from Dan Langille, and then the opening keynote by Dr Michael Geist, a law professor at the University of Ottawa where he holds the Canada Research Chair in Internet and E-commerce Law. The keynote focused on what some of the currently issues are, and how the technical community needs to get involved at all levels. In Canada especially, contacting your representatives is quite effective, and when it does not happen, they only hear the other side of the story, and often end up spouting talking points from lobbyists as if they were facts. The question period for the keynote ran well overtime because of the number of good questions the discussion raised, including how do we fight back against large telcos with teams of lawyers and piles of money. Then the four tracks of talks started up for the day The day wrapped up with the Work In Progress (WIP) session. Allan Jude presented work on ZSTD compression in ZFS Drew Gallatin presented about work at Netflix on larger mbufs, to avoid the need for chaining and to allow more data to be pushed at once. Results in an 8% CPU time reduction when pushing 90 gbps of TLS encrypted traffic Dan Langille presented about letsencrypt (the acme.sh tool specifically), and bacula Samy Al Bahra presented about Concurrency Kit *** Conference Day 2 (http://www.bsdcan.org/2017/schedule/day_2017-06-10.en.html) Because Dan is a merciful soul, BSDCan starts an hour later on the second day Another great round of talks and BoF sessions over lunch The hallway track was great as always, and I spent most of the afternoon just talking with people Then the final set of talks started, and I was torn between all four of them Then there was the auction, and the closing party *** BSDCan 2017 Auction Swag (https://blather.michaelwlucas.com/archives/2962) Groff Fundraiser Pins: During the conference, You could get a unique Groff pin, by donating more than the last person to either the FreeBSD or OpenBSD foundation Michael W. 
Lucas and his wife Liz donated some interesting home made and local items to the infamous Charity Auction I donated the last remaining copy of the “Canadian Edition” of “FreeBSD Mastery: Advanced ZedFS”, and a Pentium G4400 (Skylake) CPU (Supports ECC or non-ECC) Peter Hessler donated his pen (Have you read “Git Commit Murder” yet?) Theo De Raadt donated his autographed conference badge David Maxwell donated a large print of the group photo from last years FreeBSD Developers Summit, which was purchased by Allan There was also a FreeBSD Dev Summit T-Shirt (with the Slogan: What is Core doing about it?) autographed by all of the attending members of core, with a forged jhb@ signature. Lastly, someone wrote “I <3 FreeBSD” on a left over conference t-shirt with magic marker, and the bidding began to make OpenBSD developer Henning Brauer wear it to the closing party. The top bid was $150 by Kristof Provost, the FreeBSD pf maintainer. *** Henning Brauer loves FreeBSD (https://twitter.com/henningBrauer) In addition to the $150 donation that resulted in Henning wearing the I love FreeBSD t-shirt, he also took selfies with people in exchange for an additional donation of $10. A total of over $500 was raised. Michael W. Lucas (https://twitter.com/mwlauthor/status/874656462433386497) Michael Dexter (https://twitter.com/michaeldexter/status/874344686885904384) FreeBSD Foundation Interns + Ed Maste + Eric Joyner (https://twitter.com/yzgyyang/status/873714734343880705) Pierre Ponchery (https://twitter.com/khorben/status/873673295903825925) Nick Danger (https://twitter.com/niqdanger/status/873697176513380353) Michael Shirk (https://twitter.com/shirkdog/status/873687910175866881) Calvin Hendryx-Parker (https://twitter.com/calvinhp/status/873686591692255233) Reyk Floeter (https://twitter.com/reykfloeter/status/873673717884346368) Rod Grimes (https://twitter.com/akpoff/status/873673432751370240) Jim Thompson (https://twitter.com/gonzopancho/status/873700951651233792) Sean Chittenden and Sam Gwydir, Henning wearing Theo de Raadt’s badge (https://twitter.com/SeanChittenden/status/873750297113501697) David Duncan (https://twitter.com/davdunc/status/873807305162334208) *** libtrue (https://github.com/libtrue/libtrue) At the hacker lounge, a joke email was sent to the FreeBSD developers list, making it look like a change to true(1)’s manpage had been committed to “document that true(1) supports libxo” While the change was not actually made, as you might expect this started a discussion about if this was really necessary. This spawned a new github repo While this all started as a joke, it then became an example of how rapid collaboration can happen, and an example of implementing a number of modern technologies in FreeBSD, including libxo (json output), and capsicum (security sandboxing) The project has an large number of open issues and enhancement suggestions, and a number of active pull requests including: Add Vagrantfile and Ansible playbooks for VM DTrace Support (Add the trueprov provider to allow tracing of true. A Code of Conduct libtrue.xyz website as a git submodule a false binary Python and Go bindings *** On Conference Speaking (https://hynek.me/articles/speaking/) Phase 1: Idea Until now I’ve never had to sit down and ponder what I could speak about. Over the year, I run into at least one topic I know something about that appears to be interesting to the wider public. I’m positive that’s true for almost anyone if they keep their minds open and keep looking for it. 
Phase 2: Call for Proposals In the end I have to come up with a good pitch that speaks to as many people as possible, along with a speculative outline. Since there are always many more submissions than talk slots, this is the first critical point. There are many reasons why a proposal can be refused, so put effort into not giving the program committee any additional ones that are entirely avoidable. Phase 3: Waiting for the CFP Result ...do passive research: if I see something relevant, I add it to my mind map. At this point my mind map looks atypical though: it has a lot of unordered root nodes. I just throw in everything that looks interesting and add some of my own thoughts to it. In the case of the reliability topic, I spend a lot of time staying on top of it anyway, so a lot of material emerged. Phase 4: The Road to an Outline Once the talk is accepted, research intensifies. Books and articles I’ve written down for further research are read and topics extracted. In 2017, this started on January 23. But before I start writing, I get the mind map into a shape that makes sense and that will support me. This is the second critical point: I have to come up with a compelling story using the material I’ve collected. Just enumerating facts and wisdom a good talk doesn’t make. It has to have a good flow that makes sense and that keeps people engaged. Phase 5: Slides I have a very strong opinion on slides: use few, big words. Don’t make people read your slides unless it’s code samples. Otherwise they’ll be distracted from what you say. You’ll see me rarely use fonts smaller than 100pt (code ~40–60pt), which is readable from everywhere and forces me to be as brief as possible. Phase 6: Polishing I firmly believe that this phase makes or breaks a talk. Only by practicing again and again will you notice rough spots, weak transitions, and redundancies. Each iteration makes the talk a bit better, both regarding the slides and my ability to present it. Each iteration adds impressions that my subconscious mind chews on, making things fall into place and giving me inspiration at unlikely moments. Phase 7: Sneak Preview In the past years I was blessed with the opportunity to test my talks in front of smaller audiences. Interestingly, I’ve come to take smaller events more seriously than the big ones. If a small conference pays for my travels and gives me a prominent slot, I have both more responsibility and attention than if I paid my way to an event where I’m one of many speakers. Phase 8: Refinement and More Polishing The first session is always just going through all slides, reacquainting myself with my deck. Then it always takes quite a bit of willpower until I do a full practice run again: the first time is always pretty brutal because I tend to forget pretty fast. On the other hand, it rather quickly comes back too. Which makes it even harder to motivate myself to start. Ideally I’d have access to a video recording from phase 7 to have a closer look at what could be improved. But since it’s usually smaller conferences, I don’t. Phase 9: Travel Whenever I travel to conferences I bring everything I need for my talk and then some. To make sure I don’t forget anything essential, I have a packing list for my business trips (and vacations too – the differences are so minimal that I use a unified list). My epic packing list for business trips. I print it out the day before departure and cross stuff off as I pack. 
I highly recommend to anyone to emulate this since packing is a lot less stressful if you just follow a checklist. Phase 10: Showtime! So this is it. The moment everything else led to. People who suffer from fear of public speaking think this is the worst part. But if you scroll back you’ll realize: this is the payoff! This is what you worked toward. This is the fun, easy part. Once you stand in front of the audience, the work is done and you get to enjoy the ride. Beastie Bits Kris and Ken Moore - TrueOS Q&A (https://www.trueos.org/blog/discourse-trueos-qa-61617/) New home for the repository conversions (NetBSD Git mirror is now on GitHub.com/NetBSD) (https://mail-index.netbsd.org/tech-repository/2017/06/10/msg000637.html) Tab completion in OpenBSD's ksh (https://deftly.net/posts/2017-05-01-openbsd-ksh-tab-complete.html) pkgsrcCon July 1&2 (http://pkgsrc.org/pkgsrcCon/2017/) OpenBSD 6.1 syspatch installed SP kernel on MP system (https://www.mail-archive.com/misc@openbsd.org/msg153421.html) KNOXBug meeting this Friday (http://knoxbug.org/content/join-us-freebsd-day) *** Feedback/Questions Rob - FreeNAS Corral (http://dpaste.com/2XEE9JA#wrap) Brad - ZFS snapshot strategy (http://dpaste.com/27GSJK0#wrap) Phil - ZFS Send via Snail Mail (http://dpaste.com/3D02RYZ#wrap) Phillip - Network Limits for Public NTP Server (http://dpaste.com/0ZSMVWH#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
197: Relaying the good news
We’re at BSDCan, but we have an interview with Michael W. Lucas which you don’t want to miss. This episode was brought to you by Headlines We are off to BSDCan but we have an interview and news roundup for you. Interview - Michael W. Lucas - mwlucas@michaelwlucas.com (mailto:mwlucas@michaelwlucas.com) / @mwlauthor (https://twitter.com/mwlauthor) Books, conferences & how these two combine *** News Roundup In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play (http://bsdly.blogspot.no/2012/05/in-name-of-sane-email-setting-up-spamd.html) “The Grumpy BSD Guy”, Peter Hansteen, is at it again; he has produced an updated version of a full recipe for OpenBSD’s spamd for your primary AND secondary mail servers Recipes in our field are all too often offered with little or no commentary to help the user understand the underlying principles of how a specific configuration works. To counter the trend and offer some free advice on a common configuration, here is my recipe for a sane mail setup. Mailing lists can be fun. Most of the time the discussions on lists like openbsd-misc are useful, entertaining or both. But when your battle with spam fighting technology ends up blocking your source of information and entertainment (like in the case of the recent thread titled "spamd greylisting: false positives" - starting with this message), frustration levels can run high, and in the process it emerged that some readers out there place way too much trust in a certain site offering barely commented recipes (named after a rare chemical compound Cl-Hg-Hg-Cl). 4 easy steps: Make sure your MXes (both primary and secondary) are able to receive mail for your domains Set up content filtering for all MXes, since some spambots actually speak SMTP Set up spamd in front of all MXes Set up synchronization between your spamds These are the basic steps. If you want to go even further, you can supplement your greylisting and publicly available blacklists with your own greytrapping, but greytrapping is by no means required. Once you have made sure that your mail exchangers will accept mail for your domains (checking that secondaries do receive and spool mail when you stop the SMTP service on the primary), it's time to start setting up the content filtering. The post provides links if you need help getting the basic mail server functionality going. At this point you will more likely than not discover that any differences in filtering setups between the hosts that accept and deliver mail will let spam through via the weakest link. Tune accordingly, or at least until you are satisfied that you have a fairly functional configuration. As you will have read by now in the various sources I cited earlier, you need to set up rules to redirect traffic to your spamd as appropriate. Now let's take a peek at what I have running at my primary site's gateway. The article provides a few different sets of rules The setup includes running all outgoing mail through spamd to auto-populate the whitelists, allowing replies to your emails to get through without greylisting At this point, you have seen how to set up two spamds, each running in front of a mail exchanger. You can choose to run with the default spamd.conf, or you can edit in your own customizations. There is also a link to Peter’s spamd.conf if you want to use “what works for me” The fourth and final required step for a spamd setup with backup mail exchangers is to set up synchronization between the spamds. 
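A minimal sketch of what that synchronization pairing can look like on OpenBSD follows; the sync interface, peer addresses, and the -G timing values are placeholders, and the -y and -Y options themselves come from spamd(8), as explained next.

  # /etc/rc.conf.local on the primary MX: listen for sync on em0, push updates to the secondary
  spamd_flags="-v -G 2:4:864 -y em0 -Y 192.0.2.25"
  # /etc/rc.conf.local on the secondary MX: the mirror image, pointing back at the primary
  spamd_flags="-v -G 2:4:864 -y em0 -Y 192.0.2.24"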
The synchronization keeps your greylists in sync and transfers information on any greytrapped entries to the partner spamds. As the spamd man page explains, the synchronization options -y and -Y are command line options to spamd. The article steps through the process of configuring spamd to listen for synchronization, and to send synchronization messages to its peer. With these settings in place, you have more or less completed step four of our recipe. The article also shows you how to configure spamd to log to a separate log file, to make the messages easier to find and consolidate between your mail servers. After noting the system load on your content filtering machines, restart your spamds. Then watch the system load values on the content filterers and take a note of them from time to time, say every 30 minutes or so. Step 4 is the last required step for building a multi-MX configuration. You may want to just leave the system running for a while and watch any messages that turn up in the spamd logs or the mail exchanger's logs. The final embellishment is to set up local greytrapping. The principle is simple: If you have one or more addresses in your domain that you know will never be valid, you add them to your list of trapping addresses; any host that tries to deliver mail to noreply@mydomain.nx will be added to the local blacklist spamd-greytrap, to be stuttered at for as long as it takes. Greytrapping can be fun; you can search for posts here tagged with the obvious keywords. To get you started, I offer up my published list of trap addresses, built mainly from logs of unsuccessful delivery attempts here, at The BSDly.net traplist page, while the raw list of trap email addresses is available here. If you want to use that list in a similar manner for your site, please do, only remember to replace the domain names with one or more that you will be receiving mail for. Let us know how this affects your inbox *** Beastie Bits Status of FreeBSD’s capsicum on Linux (http://www.capsicum-linux.org/) How to build a gateway, from 1979 (http://www.networksorcery.com/enp/ien/ien109.txt) Linux escapee Hamza Sheikh on “Why FreeBSD?” (https://bsdmag.org/why_freebsd/) UNIX is still as relevant as ever (https://blog.opengroup.org/2012/05/17/unix-is-still-as-relevant-as-ever/) Upcoming Summer 2017 FreeBSD Foundation Events (https://www.freebsdfoundation.org/blog/upcoming-summer-2017-freebsd-foundation-events/) ***
196: PostgreZFS
This week on BSD Now, we review the EuroBSDcon schedule, we explore the mysteries of Docker on OpenBSD, and show you how to run PostgreSQL on ZFS. This episode was brought to you by Headlines EuroBSDcon 2017 - Talks & Schedule published (https://2017.eurobsdcon.org/2017/05/26/talks-schedule-published/) The EuroBSDcon website was updated with the tutorial and talk schedule for the upcoming September conference in Paris, France. Tutorials on the 1st day: Kirk McKusick - An Introduction to the FreeBSD Open-Source Operating System, George Neville-Neil - DTrace for Developers, Taylor R Campbell - How to untangle your threads from a giant lock in a multiprocessor system Tutorials on the 2nd day: Kirk continues his Introduction lecture, Michael Lucas - Core concepts of ZFS (half day), Benedict Reuschling - Managing BSD systems with Ansible (half day), Peter Hessler - BGP for developers and sysadmins Talks include 3 keynotes (2 on the first day, beginning and end), another one at the end of the second day by Brendan Gregg Good mixture of talks of the various BSD projects Also, a good amount of new names and faces Check out the full talk schedule (https://2017.eurobsdcon.org/talks-schedule/). Registration is not open yet, but will be soon. *** OpenBSD on the Xiaomi Mi Air 12.5" (https://jcs.org/2017/05/22/xiaomiair) The Xiaomi Mi Air 12.5" (https://xiaomi-mi.com/notebooks/xiaomi-mi-notebook-air-125-silver/) is a basic fanless 12.5" Ultrabook with good build quality and decent hardware specs, especially for the money: while it can usually be had for about $600, I got mine for $489 shipped to the US during a sale about a month ago. Xiaomi offers this laptop in silver and gold. They also make a 13" version but it comes with an NVidia graphics chip. Since these laptops are only sold in China, they come with a Chinese language version of Windows 10 and only one or two distributors that carry them ship to the US. Unfortunately that also means they come with practically no warranty or support. Hardware > The Mi Air 12.5" has a fanless, 6th generation (Skylake) Intel Core m3 processor, 4Gb of soldered-on RAM, and a 128Gb SATA SSD (more on that later). It has a small footprint of 11.5" wide, 8" deep, and 0.5" thick, and weighs 2.3 pounds. > A single USB-C port on the right-hand side is used to charge the laptop and provide USB connectivity. A USB-C ethernet adapter I tried worked fine in OpenBSD. Whether intentional or not, a particular design touch I appreciated was that the USB-C port is placed directly to the right of the power button on the keyboard, so you don't have to look or feel around for the port when plugging in the power cable. > A single USB 3 type-A port is also available on the right side next to the USB-C port. A full-size HDMI port and a headphone jack are on the left-hand side. It has a soldered-on Intel 8260 wireless adapter and Bluetooth. The webcam in the screen bezel attaches internally over USB. > The chassis is all aluminum and has sufficient rigidity in the keyboard area. The 12.5" 1920x1080 glossy IPS screen has a fairly small bezel and while its hinge is properly weighted to allow opening the lid with one hand (if you care about that kind of thing), the screen does have a bit of top-end wobble when open, especially when typing on another laptop on the same desk. > The keyboard has a roomy layout and a nice clicky tactile with good travel. It is backlit, but with only one backlight level. 
When enabled via Fn+F10 (which is handled by the EC, so no OpenBSD support required), it will automatically shut off after not typing for a short while, automatically turning back once a key is pressed. Upgrades > An interesting feature of the Mi Air is that it comes with a 128Gb SATA SSD but also includes an open PCI-e slot ready to accept an NVMe SSD. > I upgraded mine with a Samsung PM961 256Gb NVMe SSD (left), and while it is possible to run with both drives in at the same time, I removed the Samsung CM871a 128Gb SATA (right) drive to save power. > The bottom case can be removed by removing the seven visible screws, in addition to the one under the foot in the middle back of the case, which just pries off. A spudger tool is needed to release all of the plastic attachment clips along the entire edge of the bottom cover. > Unfortunately this upgrade proved to be quite time consuming due to the combination of the limited UEFI firmware on the Mi Air and a bug in OpenBSD. A Detour into UEFI Firmware Variables > Unlike a traditional BIOS where one can boot into a menu and configure the boot order as well as enabling and disabling options such as "USB Hard Drive", the InsydeH2O UEFI firmware on the Xiaomi Air only provides the ability to adjust the boot order of existing devices. Any change or addition of boot devices must be done from the operating system, which is not possible under OpenBSD. > I booted to a USB key with OpenBSD on it and manually partitioned the new NVME SSD, then rsynced all of the data over from the old drive, but the laptop would not boot to the new NVME drive, instead showing an error message that there was no bootable OS. > Eventually I figured out that the GPT table that OpenBSD created on the NVMe disk was wrong due to a [one-off bug in the nvme driver](https://github.com/openbsd/src/commit/dc8298f669ea2d7e18c8a8efea509eed200cb989) which was causing the GPT table to be one sector too large, causing the backup GPT table to be written in the wrong location (and other utilities under Linux to write it over the OpenBSD area). I'm guessing the UEFI firmware would fail to read the bad GPT table on the disk that the boot variable pointed to, then declare that disk as missing, and then remove any variables that pointed to that disk. OpenBSD Support > The Mi Air's soldered-on Intel 8260 wireless adapter is supported by OpenBSD's iwm driver, including 802.11n support. The Intel sound chip is recognized by the azalia driver. > The Synaptics touchpad is connected via I2C, but is not yet supported. I am actively hacking on my dwiic driver to make this work and the touchpad will hopefully operate as a Windows Precision Touchpad via imt so I don't have to write an entirely new Synaptics driver. > Unfortunately since OpenBSD's inteldrm support that is ported from Linux is lagging quite a bit behind, there is no kernel support for Skylake and Kaby Lake video chips. Xorg works at 1920x1080 through efifb so the machine is at least usable, but X is not very fast and there is a noticeable delay when doing certain redrawing operations in xterm. Screen backlight can be adjusted through my OpenBSD port of intel_backlight. Since there is no hardware graphics support, this also means that suspend and resume do not work because nothing is available to re-POST the video after resume. Having to use efifb also makes it impossible to adjust the screen gamma, so for me, I can't use redshift for comfortable night-time hacking. 
Flaws > Especially taking into account the cheap price of the laptop, it's hard to find faults with the design. One minor gripe is that the edges of the case along the bottom are quite sharp, so when carrying the closed laptop, it can feel uncomfortable in one's hands. > While all of those things could be overlooked, unfortunately there is also a critical flaw in the rollover support in the keyboard/EC on the laptop. When typing certain combinations of keys quickly, such as holding Shift and typing "NULL", one's fingers may actually hold down the Shift, N, and U keys at the same time for a very brief moment before releasing N. Normally the keyboard/EC would recognize U being pressed after N is already down and send an interrupt for the U key. Unfortunately on this laptop, particular combinations of three keys do not interrupt for the third key at all until the second key is lifted, usually causing the third key not to register at all if typed quickly. I've been able to reproduce this problem in OpenBSD, Linux, and Windows, with the combinations of at least Shift+N+U and Shift+D+F. Holding Shift and typing the two characters in sequence quickly enough will usually fail to register the final character. Trying the combinations without Shift, using Control or Alt instead of Shift, or other character pairs does not trigger the problem. This might be a problem in the firmware on the Embedded Controller, or a defect in the keyboard circuitry itself. As I mentioned at the beginning, getting technical support for this machine is difficult because it's only sold in China. Docker on OpenBSD 6.1-current (https://medium.com/@dave_voutila/docker-on-openbsd-6-1-current-c620513b8110) Dave Voutila writes: So here’s the thing. I’m normally a macOS user…all my hardware was designed in Cupertino, built in China. But I’m restless and have been toying with trying to switch my daily machine over to a non-macOS system sort of just for fun. I find Linux messy, FreeBSD not as Apple-laptop-friendly as it should be, and Windows a non-starter. Luckily, I found a friend in Puffy. Switching some of my Apple machines over to dual-boot OpenBSD left a gaping hole in my workflow. Luckily, all the hard work the OpenBSD team has done over the last year seems to have plugged it nicely! OpenBSD’s hypervisor support officially made it into the 6.1 release, but after some experimentation it was rather time consuming and too fragile to get a Linux guest up and running (i.e. basically the per-requisite for Docker). Others had reported some success starting with QEMU and doing lots of tinkering, but after a wasted evening I figured I’d grab the latest OpenBSD snapshot and try what the openbsd-misc list suggested was improved Linux support in active development. 10 (11) Steps to docker are provided Step 0 — Install the latest OpenBSD 6.1 snapshot (-current) Step 1 — Configure VMM/VMD Step 2 — Grab an Alpine Linux ISO Step 3 — Make a new virtual disk image Step 4 — Boot Alpine’s ISO Step 5 — Inhale that fresh Alpine air Step 6 — Boot Alpine for Reals Step 7 — Install Docker Step 8 — Make a User Step 9 — Ditch the Serial Console Step 10 — Test out your Docker instance I haven’t done it yet, but I plan on installing docker-compose via Python’s pip package manager. I prefer defining containers in the compose files. 
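For a feel of what those steps look like on the OpenBSD side, here is a hedged sketch of the vmm/vmd portion. The disk size, names, ISO filename and memory figure are invented for the example, and vmctl flag order and option names have shifted between OpenBSD releases, so treat vmctl(8) and vm.conf(5) as the authority; the Alpine and Docker half of the recipe happens inside the guest.

```
# enable and start the vm daemon (Step 1)
doas rcctl enable vmd
doas rcctl start vmd

# make a virtual disk image for the guest (Step 3)
vmctl create alpine.img -s 6G

# boot the Alpine ISO with the new disk attached and a serial console (Step 4)
doas vmctl start alpine -c -m 1024M -i 1 \
    -r alpine-virt-3.6.0-x86_64.iso -d alpine.img

# once Alpine is installed to the disk, boot from the disk alone (Step 6)
doas vmctl start alpine -c -m 1024M -i 1 -d alpine.img
```

Steps 5 through 10 then happen inside the Alpine guest (installation, adding Docker, creating a user, testing a container), which the article walks through in detail.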
PostgreSQL + ZFS Best Practices and Standard Procedures (https://people.freebsd.org/~seanc/postgresql/scale15x-2017-postgresql_zfs_best_practices.pdf) Slides from Sean Chittenden’s talk about PostgreSQL and ZFS at Scale 15x this spring Slides start with a good overview of Postgres and ZFS, and how to use them together To start, it walks through the basics of how PostgreSQL interacts with the filesystem (any filesystem) Then it shows the steps to take a good backup of PostgreSQL, then how to do it even better with ZFS Then an intro to ZFS, and how Copy-on-Write changes how PostgreSQL interacts with the filesystem Overview of how ZFS works ZFS Tuning tips: Compression, Recordsize, atime, when to rely mostly on the ARC vs shared_buffers, plus pg_repack Followed by a discussion of the reliability of SSDs, and their Bit Error Rate (BER) A good SSD has a 4%/year chance of returning the wrong data. A cheap SSD, 34% If you put 20 SSDs in a database server, that means 58% (Good SSDs) to 99.975% (Lowest quality commercially viable SSD) chance of an error per year Luckily, ZFS can detect and correct these errors This applies to all storage, not just SSDs, every device fails More Advice: Use quotas and reservations to avoid running out of space Schedule Periodic Scrubs One dataset per database Backups: Live demo of rm -rf’ing the database and getting it back Using clones to test upgrades on real data Naming Conventions: Use a short prefix not on the root filesystem (e.g. /db) Encode the PostgreSQL major version into the dataset name Give each PostgreSQL cluster its own dataset (e.g. pgdb01) Optional but recommended: one database per cluster Optional but recommended: one app per database Optional but recommended: encode environment into DB name Optional but recommended: encode environment into DB username The slides also cover using ZFS replication for backups. Check out the full detailed PDF and implement a similar setup for your database needs *** News Roundup TrueOS Evolving Its "Stable" Release Cycle (https://www.trueos.org/blog/housekeeping-update-infrastructure-trueos-changes/) TrueOS is reformulating its Stable branch based on feedback from users. The goal is to have a “release” of the stable branch every 6 months, for those who do not want to live on the edge with the rapid updates of the full rolling release Most of the TrueOS developers work for iX Systems in their Tennessee office. Last month, the Tennessee office was moved to a different location across town. As part of the move, we need to move all our servers. We’re still getting some of the infrastructure sorted before moving the servers, so please bear with us as we continue this process. As we’ve continued working on TrueOS, we’ve heard a significant portion of the community asking for a more stable “STABLE” release of TrueOS, maybe something akin to an old PC-BSD version release. In order to meet that need, we’re redefining the TrueOS STABLE branch a bit. STABLE releases are now expected to follow a six month schedule, with more testing and lots of polish between releases. This gives users the option to step back a little from the “cutting edge” of development, but still enjoy many of the benefits of the “rolling release” style and the useful elements of FreeBSD Current. Critical updates like emergency patches and utility bug fixes are still expected to be pushed to STABLE on a case-by-case basis, but again with more testing and polish. This also applies to version updates of the Lumina and SysAdm projects.
New, released work from those projects will be tested and added to STABLE outside the 6 month window as well. The UNSTABLE branch continues to be our experimental “cutting edge” track, and users who want to follow along with our development and help us or FreeBSD test new features are still encouraged to follow the UNSTABLE track by checking that setting in their TrueOS Update Manager. With boot environments, it will be easy to switch back and forth, so you can have the best of both worlds. Use the latest bleeding edge features, but knowing you can fall back to the stable branch with just a reboot As TrueOS evolves, it is becoming clearer that one role of the system is to function as a “test platform” for FreeBSD. In order to better serve this role, TrueOS will support both OpenRC and the FreeBSD RC init systems, giving users the choice to use either system. While the full functionality isn’t quite ready for the next STABLE update, it is planned for addition after the last bit of work and testing is complete. Stay tuned for an upcoming blog post with all the details of this change, along with instructions how to switch between RC and OpenRC. This is the most important change for me. I used TrueOS as an easy way to run the latest version of -CURRENT on my laptop, to use it as a user, but also to do development. When TrueOS deviates from FreeBSD too much, it lessens the power of my expertise, and complicates development and debugging. Being able to switch back to RC, even if it takes another minute to boot, will bring TrueOS back to being FreeBSD + GUI and more by default, instead of a science project. We need both of those things, so having the option, while more work for the TrueOS team, I think will be better for the entire community *** Logical Domains on SunFire T2000 with OpenBSD/sparc64 (http://www.h-i-r.net/2017/05/logical-domains-on-sunfire-t2000-with.html) A couple of years ago, I picked up a Sun Fire T2000. This is a 2U rack mount server. Mine came with four 146GB SAS drives, a 32-core UltraSPARC T1 CPU and 32GB of RAM. Sun Microsystems incorporated Logical Domains (LDOMs) on this class of hardware. You don't often need 32 threads and 32GB of RAM in a single server. LDOMs are a kind of virtualization technology that's a bit closer to bare metal than vmm, Hyper-V, VirtualBox or even Xen. It works a bit like Xen, though. You can allocate processor, memory, storage and other resources to virtual servers on-board, with a blend of firmware that supports the hardware allocation, and some software in userland (on the so-called primary or control domain, similar to Xen DomU) to control it. LDOMs are similar to what IBM calls Logical Partitions (LPARs) on its Mainframe and POWER series computers. My day job from 2006-2010 involved working with both of these virtualization technologies, and I've kind of missed it. While upgrading OpenBSD to 6.1 on my T2000, I decided to delve into LDOM support under OpenBSD. 
This was pretty easy to do, but let's walk through it. Resources: The ldomctl(8) man page (http://man.openbsd.org/OpenBSD-current/man8/sparc64/ldomctl.8) tedu@'s write-up on Flak (for a different class of server) (http://www.tedunangst.com/flak/post/OpenBSD-on-a-Sun-T5120) A Google+ post by bmercer@ (https://plus.google.com/101694200911870273983/posts/jWh4rMKVq97) Once you get comfortable with the fact that there's a little-tiny computer (the ALOM) powered by VxWorks inside that's acting as the management system and console (there's no screen or keyboard/mouse input), installing OpenBSD on the base server is pretty straightforward. The serial console is an RJ-45 jack, and, yes, the ubiquitous blue-colored serial console cables you find for certain kinds of popular routers will work fine. OpenBSD installs quite easily, with the same installer you find on amd64 and i386. I chose to install to /dev/sd0, the first SAS drive only, leaving the others unused. It's possible to set them up in a hardware RAID configuration using tools available only under Solaris, or use softraid(4) on OpenBSD, but I didn't do this. I set up the primary LDOM to use the first ethernet port, em0. I decided I wanted to bridge the logical domains to the second ethernet port. You could also use a bridge and vether interface, with pf and dhcpd to create a NAT environment, similar to how I networked the vmm(4) systems. Create an LDOM configuration file. You can put this anywhere that's convenient. All of this stuff was in a "vm" subdirectory of my home. I called it ldom.conf:
```
domain primary {
    vcpu 8
    memory 8G
}
domain puffy {
    vcpu 8
    memory 4G
    vdisk "/home/axon/vm/ldom1"
    vnet
}
```
Make as many disk images as you want, and make as many additional domain clauses as you wish. Be mindful of system resources. I couldn't actually allocate a full 32GB of RAM across all the LDOMs. I eventually provisioned seven LDOMs (in addition to the primary) on the T2000, each with 3GB of RAM and 4 vcpu cores. If you get creative with use of network interfaces, virtual ethernet, bridges and pf rules, you can run a pretty complex environment on a single chassis, with services that are only exposed to other VMs, a DMZ segment, and the internal LAN. A nice tutorial, and an interesting look at an alternative platform that was ahead of its time *** documentation is thoroughly hard (http://www.tedunangst.com/flak/post/documentation-is-thoroughly-hard) Ted Unangst has a new post this week about documentation: Documentation is good, so therefore more documentation must be better, right? A few examples where things may have gotten out of control A fine example is the old OpenBSD install instructions. Once you’ve installed OpenBSD once or twice, the process is quite simple, but you’d never know this based on reading the instructions. Compare the files for 4.8 INSTALL and 5.8 INSTALL. Both begin with a brief intro to the project. Then 4.8 has an enormous list of mirrors, which seems fairly redundant if you’ve already found the install file. Followed by an enormous list of every supported variant of every supported device. Including a table of IO port configurations for ISA devices. Finally, after 1600 lines of introduction we get to the actual installation instructions. (Compared to line 231 for 5.8.) This includes a full page of text about how to install from tape, which nobody ever does. It took some time to recognize that all this documentation was actually an impediment to new users.
Attempting to answer every possible question floods the reader with information for questions they were never planning to ask. Part of the problem is how the information is organized. Theoretically it makes sense to list supported hardware before instructions. After all, you can’t install anything if it’s not supported, right? I’m sure that was considered when the device list was originally inserted above the install instructions. But as a practical matter, consulting a device list is neither the easiest nor fastest way to determine what actually works. In the FreeBSD docs tree, we have been doing a facelift project, trying to add ‘quick start’ sections to each chapter to let you get to the more important information first. It is also helpful to move data in the forms of lists and tables to appendices or similar, where they can easily be references, but are not blocking your way to the information you are actually hunting for An example of nerdview signage (http://languagelog.ldc.upenn.edu/nll/?p=29866). “They have in effect provided a sign that will tell you exactly what the question is provided you can already supply the answer.” That is, the logical minds of technical people often decide to order information in an order that makes sense to them, rather than in the order that will be most useful to the reader In the end, I think “copy diskimage to USB and follow prompts” is all the instructions one should need, but it’s hard to overcome the unease of actually making the jump. What if somebody is confused or uncertain? Why is this paragraph more redundant than that paragraph? (And if we delete both, are we cutting too much?) Sometimes we don’t need to delete the information. Just hide it. The instructions to upgrade to 4.8 and upgrade to 5.8 are very similar, with a few differences because every release is a little bit different. The pages look very different, however, because the not at all recommended kernel free procedure, which takes up half the page, has been hidden from view behind some javascript and only expanded on demand. A casual browser will find the page and figure the upgrade process will be easy, as opposed to some long ordeal. This is important as well, it was my original motivation for working on the FreeBSD Handbook’s ZFS chapter. The very first section of the chapter was the custom kernel configuration required to run ZFS on i386. That scared many users away. I moved that to the very end, and started with why you might want to use ZFS. Much more approachable. Sometimes it’s just a tiny detail that’s overspecified. The apmd manual used to explain exactly which CPU idle time thresholds were used to adjust frequency. Those parameters, and the algorithm itself, were adjusted occasionally in response to user feedback, but sometimes the man page lagged behind. The numbers are of no use to a user. They’re not adjustable without recompiling. Knowing that the frequency would be reduced at 85% idle vs 90% idle doesn’t really offer much guidance as to whether to enable auto scaling or not. Deleting this detail ensured the man page was always correct and spares the user the cognitive load of trying to solve an unnecessary math problem. For fun: For another humorous example, it was recently observed that the deja-dup package provides man page translations for Australia, Canada, and Great Britain. I checked, the pages are in fact not quite identical. Some contain typo fixes that didn’t propagate to other translations. 
Project idea: attempt to identify which country has the most users, or most fastidious users, by bug fixes to localized man pages. lldb on BeagleBone Black (https://lists.freebsd.org/pipermail/freebsd-arm/2017-May/016260.html) I reliably managed to build (lldb + clang/lld) from the svn trunk of LLVM 5.0.0 on my Beaglebone Black running the latest snapshot (May 20th) of FreeBSD 12.0-CURRENT, and the lldb is working very well, and this includes single stepping and ncurses-GUI mode, while single stepping with the latest lldb 4.0.1 from the ports does not work. In order to reliably build LLVM 5.0.0 (svn), I set up a 1 GB swap partition for the BBB on a NFSv4 share on a FreeBSD fileserver in my network - I put a howto of the procedure on my BLog: https://obsigna.net/?p=659 The prerequisites on the Beaglebone are:
```
pkg install tmux
pkg install cmake
pkg install python
pkg install libxml2
pkg install swig30
pkg install ninja
pkg install subversion
```
On the FreeBSD fileserver:
```
/path_to_the/bbb_share
svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
cd llvm/tools
svn co http://llvm.org/svn/llvm-project/cfe/trunk clang
svn co http://llvm.org/svn/llvm-project/lld/trunk lld
svn co http://llvm.org/svn/llvm-project/lldb/trunk lldb
```
On the Beaglebone Black:
```
# mount_nfs -o noatime,readahead=4,intr,soft,nfsv4 server:/path_to_the/bbb_share /mnt
# cd /mnt
# mkdir build
# cmake -DLLVM_TARGETS_TO_BUILD="ARM" -DCMAKE_BUILD_TYPE="MinSizeRel" \
    -DLLVM_PARALLEL_COMPILE_JOBS="1" -DLLVM_PARALLEL_LINK_JOBS="1" -G Ninja ..
```
I execute the actual build command from within a tmux session, so I may disconnect during the quite long (40 h) build:
```
tmux new "ninja lldb install"
```
When debugging in GUI mode using the newly built lldb 5.0.0-svn, I see only a minor issue, namely UTF8 strings are not displayed correctly. This happens in the ncurses-GUI only, and this is an ARM issue, since it does not occur on x86 machines. Perhaps this might be related to the signed/unsigned char mismatch between ARM and x86. Beastie Bits Triangle BSD Meetup on June 27th (https://www.meetup.com/Triangle-BSD-Users-Group/events/240247251/) Support for Controller Area Networks (CAN) in NetBSD (http://www.feyrer.de/NetBSD/bx/blosxom.cgi/nb_20170521_0113.html) Notes from Monday's meeting (http://mailman.uk.freebsd.org/pipermail/ukfreebsd/2017-May/014104.html) RunBSD - A site about the BSD family of operating systems (http://runbsd.info/) BSDCam(bridge) 2017 Travel Grant Application Now Open (https://www.freebsdfoundation.org/blog/bsdcam-2017-travel-grant-application-now-open/) New BSDMag has been released (https://bsdmag.org/download/nearly-online-zpool-switching-two-freebsd-machines/) *** Feedback/Questions Philipp - A show about bhyve (http://dpaste.com/390F9JN#wrap) Jake - bhyve Support on AMD (http://dpaste.com/0DYG5BD#wrap) CY - Pledge and Capsicum (http://dpaste.com/1YVBT12#wrap) CY - OpenSSL relicense Issue (http://dpaste.com/3RSYV23#wrap) Andy - Laptops (http://dpaste.com/0MM09EX#wrap) ***
195: I don’t WannaCry
A pledge of love to OpenBSD, combating ransomware like WannaCry with OpenZFS, and using pfSense to maximize your non-gigabit Internet connection This episode was brought to you by Headlines ino64 project committed to FreeBSD 12-CURRENT (https://svnweb.freebsd.org/base?view=revision&revision=318736) The ino64 project has been completed and merged into FreeBSD 12-CURRENT Extend the ino_t, dev_t, nlink_t types to 64-bit ints. Modify struct dirent layout to add d_off, increase the size of d_fileno to 64-bits, increase the size of d_namlen to 16-bits, and change the required alignment. Increase struct statfs f_mntfromname[] and f_mntonname[] array length MNAMELEN to 1024 This means the length of a mount point (MNAMELEN) has been increased from 88 bytes to 1024 bytes. This allows longer ZFS dataset names and more nesting, and generally improves the usefulness of nested jails It also allows more than 4 billion files to be stored in a single file system (both UFS and ZFS). It also deals with a number of NFS problems, such as Amazon’s EFS (cloud NFS), which uses 64 bit IDs even with small numbers of files. ABI breakage is mitigated by providing compatibility using versioned symbols, ingenious use of the existing padding in structures, and by employing other tricks. Unfortunately, not everything can be fixed, especially outside the base system. For instance, third-party APIs which pass struct stat around are broken in backward and forward incompatible ways. A bug in poudriere that may cause some packages to not rebuild is being fixed. Many packages like perl will need to be rebuilt after this change Update note: strictly follow the instructions in UPDATING. Build and install the new kernel with the COMPAT_FREEBSD11 option enabled, then reboot, and only then install the new world. So you need the new GENERIC kernel with the COMPAT_FREEBSD11 option, so that your old userland will work with the new kernel, and you need to build, install, and reboot onto the new kernel before attempting to install world. The usual process of installing both and then rebooting will NOT WORK (see the short sketch below). Credits: The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb). Kirk McKusick (mckusick) then picked up and updated the patch, and acted as a flag-waver. Feedback, suggestions, and discussions were carried by Ed Maste (emaste), John Baldwin (jhb), Jilles Tjoelker (jilles), and Rick Macklem (rmacklem). Kris Moore (kmoore) performed an initial ports investigation followed by an exp-run by Antoine Brodin (antoine). Essential and all-embracing testing was done by Peter Holm (pho). The heavy lifting of coordinating all these efforts and bringing the project to completion was done by Konstantin Belousov (kib). Sponsored by: The FreeBSD Foundation (emaste, kib) Why I love OpenBSD (https://medium.com/@h3artbl33d/why-i-love-openbsd-ca760cf53941) Jeroen Janssen writes: I do love open source software. Oh boy, I really do love open source software. It’s extendable, auditable, and customizable. What’s not to love? I’m astonished by the idea that tens, hundreds, and sometimes even thousands of enthusiastic, passionate developers collaborate on an idea. Together, they make the world a better place, bit by bit. And this leads me to one of my favorite open source projects: the 22-year-old OpenBSD operating system.
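Returning to the ino64 update note for a moment, here is a minimal sketch of the order of operations for a source-based update, assuming the stock /usr/src layout and a plain GENERIC kernel configuration; always defer to UPDATING itself:

```
# build world and a kernel that includes "options COMPAT_FREEBSD11"
cd /usr/src
make -j8 buildworld
make -j8 buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC

# reboot onto the new kernel first -- the old userland keeps working
# because of the COMPAT_FREEBSD11 syscalls
shutdown -r now

# only after the reboot, install the new world and merge /etc
cd /usr/src
make installworld
mergemaster -Ui
```

The point of the ordering is that the new kernel understands both the old and the new structure layouts, while a new world running on the old kernel does not.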
Jeroen walks through the origins of his love affair with OpenBSD, the move from Linux to *BSD, and the advantages of OpenBSD: it's extremely secure, well documented, open source, and neat and clean, before giving his own take on OpenBSD. Combating WannaCry and Other Ransomware with OpenZFS Snapshots (https://www.ixsystems.com/blog/combating-ransomware/) Ransomware attacks that hold your data hostage using unauthorized data encryption are spreading rapidly and are particularly nefarious because they do not require any special access privileges to your data. A ransomware attack may be launched via a sophisticated software exploit as was the case with the recent “WannaCry” ransomware, but there is nothing stopping you from downloading and executing a malicious program that encrypts every file you have access to. If you fail to pay the ransom, the result will be indistinguishable from your simply deleting every file on your system. To make matters worse, ransomware authors are expanding their attacks to include just about any storage you have access to. The list is long, but includes network shares, Cloud services like DropBox, and even “shadow copies” of data that allow you to open previous versions of files. To make matters even worse, there is little that your operating system can do to prevent you or a program you run from encrypting files with ransomware just as it can’t prevent you from deleting the files you own. Frequent backups are touted as one of the few effective strategies for recovering from ransomware attacks but it is critical that any backup be isolated from the attack to be immune from the same attack. Simply copying your files to a mounted disk on your computer or in the Cloud makes the backup vulnerable to infection by virtue of the fact that you are backing up using your regular permissions. If you can write to it, the ransomware can encrypt it. Like medical workers wearing hazmat suits for isolation when combating an epidemic, you need to isolate your backups from ransomware. OpenZFS snapshots to the rescue OpenZFS is the powerful file system at the heart of every storage system that iXsystems sells and of its many features, snapshots can provide fast and effective recovery from ransomware attacks at both the individual user and enterprise level as I talked about in 2015. As a copy-on-write file system, OpenZFS provides efficient and consistent snapshots of your data at any given point in time. Each snapshot only includes the precise delta of changes between any two points in time and can be cloned to provide writable copies of any previous state without losing the original copy. Snapshots also provide the basis of OpenZFS replication or backing up of your data to local and remote systems. Because an OpenZFS snapshot takes place at the block level of the file system, it is immune to any file-level encryption by ransomware that occurs over it. A carefully-planned snapshot, replication, retention, and restoration strategy can provide the low-level isolation you need to enable your storage infrastructure to quickly recover from ransomware attacks. OpenZFS snapshots in practice While OpenZFS is available on a number of desktop operating systems such as TrueOS and macOS, the most effective way to bring the benefits of OpenZFS snapshots to the largest number of users is with a network of iXsystems TrueNAS, FreeNAS Certified and FreeNAS Mini unified NAS and SAN storage systems.
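Stepping back to the snapshot mechanics for a moment, here is a minimal sketch of the snapshot-and-rollback cycle the article is describing, using ordinary ZFS commands. The pool and dataset names are invented for the example, and a real deployment would drive this from a scheduled snapshot and replication tool rather than by hand:

```
# take a recursive snapshot of everything under the file share
zfs snapshot -r tank/shares@2017-06-01-0900

# snapshots are read-only and cost only the delta, so keep many of them
zfs list -t snapshot -r tank/shares

# after a ransomware hit, roll the affected dataset back to the most
# recent clean snapshot (add -r to discard snapshots taken after it)
zfs rollback tank/shares/projects@2017-06-01-0900

# or clone the snapshot first to inspect it without touching the original
zfs clone tank/shares/projects@2017-06-01-0900 tank/recovered
```

The property that matters for ransomware recovery is that the snapshots are read-only from the client's point of view: an SMB or NFS user who can encrypt the live files still cannot touch the snapshots behind them.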
All of these systems can provide OpenZFS-backed SMB, NFS, AFP, and iSCSI file and block storage to the smallest workgroups up through the largest enterprises, and TrueNAS offers available Fibre Channel for enterprise deployments. By sharing your data with your users using these file and block protocols, you can provide them with a storage infrastructure that can quickly recover from any ransomware attack thrown at it. To mitigate ransomware attacks against individual workstations, TrueNAS and FreeNAS can provide snapshotted storage to your VDI or virtualization solution of choice. Best of all, every iXsystems TrueNAS, FreeNAS Certified, and FreeNAS Mini system includes a consistent user interface and the ability to replicate between one another. This means that any topology of individual offices and campuses can exchange backup data to quickly mitigate ransomware attacks on your organization at all levels. Join us for a free webinar (http://www.onlinemeetingnow.com/register/?id=uegudsbc75) with iXsystems Co-Founder Matt Olander and learn more about why businesses everywhere are replacing their proprietary storage platforms with TrueNAS then email us at info@ixsystems.com or call 1-855-GREP-4-IX (1-855-473-7449), or 1-408-493-4100 (outside the US) to discuss your storage needs with one of our solutions architects. Interview - Michael W. Lucas - mwlucas@michaelwlucas.com (mailto:mwlucas@michaelwlucas.com) / @mwlauthor (https://twitter.com/mwlauthor) Books, conferences, and how these two combine + BR: Welcome back. Tell us what you’ve been up to since the last time we interviewed you regarding books and such. + AJ: Tell us a little bit about relayd and what it can do. + BR: What other books do you have in the pipeline? + AJ: What are your criteria that qualify a topic for a Mastery book? + BR: Can you tell us a little bit about these writing workshops that you attend and what happens there? + AJ: Without spoiling too much: How did you come up with the idea for git commit murder? + BR: Speaking of BSDCan, can you tell the first timers about what to expect in the http://www.bsdcan.org/2017/schedule/events/890.en.html (Newcomers orientation and mentorship) session on Thursday? + AJ: Tell us about the new WIP session at BSDCan. Who had the idea and how much input did you get thus far? + BR: Have you ever thought about branching off into a new genre like children’s books or medieval fantasy novels? + AJ: Is there anything else before we let you go? News Roundup Using LLDP on FreeBSD (https://tetragir.com/freebsd/networking/using-lldp-on-freebsd.html) LLDP, or Link Layer Discovery Protocol, allows system administrators to easily map the network, eliminating the need to physically trace the cables in a rack. LLDP is a protocol used to send and receive information about a neighboring device connected directly to a networking interface. It is similar to Cisco’s CDP, Foundry’s FDP, Nortel’s SONMP, etc. It is a stateless protocol, meaning that an LLDP-enabled device sends advertisements even if the other side cannot do anything with it. In this guide the installation and configuration of the LLDP daemon on FreeBSD as well as on a Cisco switch will be introduced. If you are already familiar with Cisco’s CDP, LLDP won’t surprise you. It is built for the same purpose: to exchange device information between peers on a network. While CDP is a proprietary solution and can be used only on Cisco devices, LLDP is a standard: IEEE 802.1AB.
Therefore it is implemented on many types of devices, such as switches, routers, various desktop operating systems, etc. LLDP helps a great deal in mapping the network topology, without spending hours in cabling cabinets to figure out which device is connected with which switchport. If LLDP is running on both the networking device and the server, it can show which port is connected where. Besides physical interfaces, LLDP can be used to exchange a lot more information, such as IP Address, hostname, etc. In order to use LLDP on FreeBSD, net-mgmt/lldpd has to be installed. It can be installed from ports using portmaster: #portmaster net-mgmt/lldpd Or from packages: #pkg install net-mgmt/lldpd By default lldpd sends and receives all the information it can gather , so it is advisable to limit what we will communicate with the neighboring device. The configuration file for lldpd is basically a list of commands as it is passed to lldpcli. Create a file named lldpd.conf under /usr/local/etc/ The following configuration gives an example of how lldpd can be configured. For a full list of options, see %man lldpcli To check what is configured locally, run #lldpcli show chassis detail To see the neighbors run #lldpcli show neighbors details Check out the rest of the article about enabling LLDP on a Cisco switch experiments with prepledge (http://www.tedunangst.com/flak/post/experiments-with-prepledge) Ted Unangst takes a crack at a system similar to the one being designed for Capsicum, Oblivious Sandboxing (See the presentation at BSDCan), where the application doesn’t even know it is in the sandbox MP3 is officially dead, so I figure I should listen to my collection one last time before it vanishes entirely. The provenance of some of these files is a little suspect however, and since I know one shouldn’t open files from strangers, I’d like to take some precautions against malicious malarkey. This would be a good use for pledge, perhaps, if we can get it working. At the same time, an occasional feature request for pledge is the ability to specify restrictions before running a program. Given some untrusted program, wrap its execution in a pledge like environment. There are other system call sandbox mechanisms that can do this (systrace was one), but pledge is quite deliberately designed not to support this. But maybe we can bend it to our will. Our pledge wrapper can’t be an external program. This leaves us with the option of injecting the wrapper into the target program via LD_PRELOAD. Before main even runs, we’ll initialize what needs initializing, then lock things down with a tight pledge set. Our eventual target will be ffplay, but hopefully the design will permit some flexibility and reuse. So the new code is injected to override the open syscall, and reads a list of files from an environment variable. Those files are opened and the path and file descriptor are put into a linked list, and then pledge is used to restrict further access to the file system. The replacement open call now searches just that linked list, returning the already opened file descriptors. So as long as your application only tries to open files that you have preopened, it can function without modification within the sandbox. Or at least that is the goal... 
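As a rough picture of what that wrapper looks like, here is a minimal C sketch of the LD_PRELOAD shim described above, targeting OpenBSD. The environment variable name (PREOPEN), the single-file handling, the player command and the pledge set are simplifications invented for illustration; the real experiment keeps a list of preopened files and ends up needing a broader promise set.

```c
/*
 * preopen.c -- toy preload shim: open one approved file before main(),
 * then pledge, and serve that file back from a replacement open().
 *
 * Build: cc -shared -fPIC preopen.c -o preopen.so
 * Run:   PREOPEN=/tmp/song.mp3 LD_PRELOAD=./preopen.so player /tmp/song.mp3
 */
#include <dlfcn.h>
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static char preopened_path[PATH_MAX];
static int  preopened_fd = -1;

/* Runs before main(): open the one permitted file, then lock things down. */
__attribute__((constructor))
static void
preopen_init(void)
{
	int (*real_open)(const char *, int, ...);
	const char *p = getenv("PREOPEN");

	/* fetch libc's real open() so our own override below isn't used here */
	real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
	if (p != NULL && real_open != NULL) {
		strlcpy(preopened_path, p, sizeof(preopened_path));
		preopened_fd = real_open(preopened_path, O_RDONLY);
	}
	/* the real experiment needs a few more promises than this */
	if (pledge("stdio tty", NULL) == -1)
		err(1, "pledge");
}

/* Replacement open(): only hand back the descriptor opened earlier. */
int
open(const char *path, int flags, ...)
{
	if (preopened_fd != -1 && strcmp(path, preopened_path) == 0)
		return dup(preopened_fd);
	errno = ENOENT;
	return -1;
}
```

Even this toy version runs straight into the issues described next: anything the target program opens without going through libc's open(), such as dlopen(), never passes through the wrapper at all.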
ffplay tries to dlopen() some things, and because of the way dlopen() works, it doesn’t go via the libc open() wrapper, so it doesn’t get overridden ffplay also tries to call a few ioctl’s, not allowed After stubbing both of those out, it still doesn’t work and it is just getting worse Ted switches to a new strategy, using ffmpeg to convert the .mp3 to a .wav file and then just cat it to /dev/audio A few more stubs for ffmpeg, including access(), and adding tty access to the list of pledges, and it finally works This point has been made from the early days, but I think this exercise reinforces it, that pledge works best with programs where you understand what the program is doing. A generic pledge wrapper isn’t of much use because the program is going to do something unexpected and you’re going to have a hard time wrangling it into submission. Software is too complex. What in the world is ffplay doing? Even if I were working with the source, how long would it take to rearrange the program into something that could be pledged? One can try using another program, but I would wager that as far as multiformat media players go, ffplay is actually on the lower end of the complexity spectrum. Most of the trouble comes from using SDL as an abstraction layer, which performs a bunch of console operations. On the flip side, all of this early init code is probably the right design. Once SDL finally gets its screen handle setup, we could apply pledge and sandbox the actual media decoder. That would be the right way to things. Is pledge too limiting? Perhaps, but that’s what I want. I could have just kept adding permissions until ffplay had full access to my X socket, but what kind of sandbox is that? I don’t want naughty MP3s scraping my screen and spying on my keystrokes. The sandbox I created had all the capabilities one needs to convert an MP3 to audible sound, but the tool I wanted to use wasn’t designed to work in that environment. And in its defense, these were new post hoc requirements. Other programs, even sed, suffer from less than ideal pledge sets as well. The best summary might be to say that pledge is designed for tomorrow’s programs, not yesterday’s (and vice versa). There were a few things I could have done better. In particular, I gave up getting audio to work, even though there’s a nice description of how to work with pledge in the sio_open manual. Alas, even going back and with a bit more effort I still haven’t succeeded. The requirements to use libsndio are more permissive than I might prefer. How I Maximized the Speed of My Non-Gigabit Internet Connection (https://medium.com/speedtest-by-ookla/engineer-maximizes-internet-speed-story-c3ec0e86f37a) We have a new post from Brennen Smith, who is the Lead Systems Engineer at Ookla, the company that runs Speedtest.net, explaining how he used pfSense to maximize his internet connection I spend my time wrangling servers and internet infrastructure. My daily goals range from designing high performance applications supporting millions of users and testing the fastest internet connections in the world, to squeezing microseconds from our stack —so at home, I strive to make sure that my personal internet performance is running as fast as possible. I live in an area with a DOCSIS ISP that does not provide symmetrical gigabit internet — my download and upload speeds are not equal. 
Instead, I have an asymmetrical plan with 200 Mbps download and 10 Mbps upload — this nuance considerably impacted my network design because asymmetrical service can more easily lead to bufferbloat. We will cover bufferbloat in a later article, but in a nutshell, it’s an issue that arises when an upstream network device’s buffers are saturated during an upload. This causes immense network congestion, latency to rise above 2,000 ms., and overall poor quality of internet. The solution is to shape the outbound traffic to a speed just under the sending maximum of the upstream device, so that its buffers don’t fill up. My ISP is notorious for having bufferbloat issues due to the low upload performance, and it’s an issue prevalent even on their provided routers. They walk through a list of router devices you might consider, and what speeds they are capable of handling, but ultimately ended up using a generic low power x86 machine running pfSense 2.3 In my research and testing, I also evaluated IPCop, VyOS, OPNSense, Sophos UTM, RouterOS, OpenWRT x86, and Alpine Linux to serve as the base operating system, but none were as well supported and full featured as PFSense. The main setting to look at is the traffic shaping of uploads, to keep the pipe from getting saturated and having a large buffer build up in the modem and further upstream. This build up is what increases the latency of the connection As with any experiment, any conclusions need to be backed with data. To validate the network was performing smoothly under heavy load, I performed the following experiment: + Ran a ping6 against speedtest.net to measure latency. + Turned off QoS to simulate a “normal router”. + Started multiple simultaneous outbound TCP and UDP streams to saturate my outbound link. + Turned on QoS to the above settings and repeated steps 2 and 3. As you can see from the plot below, without QoS, my connection latency increased by ~1,235%. However with QoS enabled, the connection stayed stable during the upload and I wasn’t able to determine a statistically significant delta. That’s how I maximized the speed on my non-gigabit internet connection. What have you done with your network? FreeBSD on 11″ MacBook Air (https://www.geeklan.co.uk/?p=2214) Sevan Janiyan writes in his tech blog about his experiences running FreeBSD on an 11’’ MacBook Air This tiny machine has been with me for a few years now, It has mostly run OS X though I have tried OpenBSD on it (https://www.geeklan.co.uk/?p=1283). Besides the screen resolution I’m still really happy with it, hardware wise. Software wise, not so much. I use an external disk containing a zpool with my data on it. Among this data are several source trees. CVS on a ZFS filesystem on OS X is painfully slow. I dislike that builds running inside Terminal.app are slow at the expense of a responsive UI. The system seems fragile, at the slightest push the machine will either hang or become unresponsive. Buggy serial drivers which do not implement the break signal and cause instability are frustrating. Last week whilst working on Rump kernel (http://rumpkernel.org/) builds I introduced some new build issues in the process of fixing others, I needed to pick up new changes from CVS by updating my copy of the source tree and run builds to test if issues were still present. I was let down on both counts, it took ages to update source and in the process of cross compiling a NetBSD/evbmips64-el release, the system locked hard. That was it, time to look what was possible elsewhere. 
While I have been using OS X for many years, I’m not tied to anything exclusive on it, maybe tweetbot, perhaps, but that’s it. On the BSDnow podcast they’ve been covering changes coming in to TrueOS (formerly PC-BSD – a desktop focused distro based on FreeBSD), their experiments seemed interesting, the project now tracks FreeBSD-CURRENT, they’ve replaced rcng with OpenRC as the init system and it comes with a pre-configured desktop environment, using their own window manager (Lumina). Booting the USB flash image, it made it to X11 without any issue. The dock has a widget which states the detected features, no wifi (Broadcom), sound card detected and screen resolution set to 1366×768. I planned to give it a try on the weekend. Friday, I made backups and wiped the system. TrueOS installed without issue, after a short while I had a working desktop, resuming from sleep worked out of the box. I didn’t spend long testing TrueOS, switching out NetBSD-HEAD only to realise that I really need ZFS, so while I was testing things out, might as well give stock FreeBSD 11-STABLE a try (TrueOS was based on -CURRENT). Turns out sleep doesn’t work yet but sound does work out of the box and with a few invocations of pkg(8) I had xorg, dwm, firefox, CVS and virtualbox-ose installed from binary packages. VirtualBox seems to cause the system to panic (bug 219276) but I should be able to survive without my virtual machines over the next few days as I settle in. I’m considering ditching VirtualBox and converting the vdi files to raw images so that they can be written to a new zvol for use with bhyve. As my default keyboard layout is Dvorak, OS X set the EFI settings to this layout. The first time I installed FreeBSD 11-STABLE, I opted for full disk encryption but ran into this odd issue where on boot the keyboard layout was Dvorak and password was accepted, the system would boot and as it went to mount the various filesystems it would switch back to QWERTY. I tried entering my password with both layouts but wasn’t able to progress any further, no bug report yet as I haven’t ruled myself out as the problem. Thunderbolt gigabit adapter – bge(4) (https://www.freebsd.org/cgi/man.cgi?query=bge) and DVI adapter both worked on FreeBSD though the gigabit adapter needs to be plugged in at boot to be detected. The trackpad binds to wsp(4) (https://www.freebsd.org/cgi/man.cgi?query=wsp), left, right and middle clicks are available through single, double and triple finger tap. Sound card binds to snd_hda(4) (https://www.freebsd.org/cgi/man.cgi?query=snd_hda) and works out of the box. For wifi I’m using a urtw(4) (https://www.freebsd.org/cgi/man.cgi?query=urtw) Alfa adapter which is a bit on the large side but works very reliably. A copy of the dmesg (https://www.geeklan.co.uk/files/macbookair/freebsd-dmesg.txt) is here. Beastie Bits OPNsense - call-for-testing for SafeStack (https://forum.opnsense.org/index.php?topic=5200.0) BSD 4.4: cat (https://www.rewritinghistorycasts.com/screencasts/bsd-4.4:-cat) Continuous Unix commit history from 1970 until today (https://github.com/dspinellis/unix-history-repo) Update on Unix Architecture Evolution Diagrams (https://www.spinellis.gr/blog/20170510/) “Relayd and Httpd Mastery” is out!
(https://blather.michaelwlucas.com/archives/2951) Triangle BSD User Group Meeting -- libxo (https://www.meetup.com/Triangle-BSD-Users-Group/events/240247251/) *** Feedback/Questions Carlos - ASUS Tinkerboard (http://dpaste.com/1GJHPNY#wrap) James - Firewall question (http://dpaste.com/0QCW933#wrap) Adam - ZFS books (http://dpaste.com/0GMG5M2#wrap) David - Managing zvols (http://dpaste.com/2GP8H1E#wrap) ***
194: Daemonic plans
This week on BSD Now we cover the latest FreeBSD Status Report, a plan for Open Source software development, centrally managing bhyve with Ansible, libvirt, and pkg-ssh, and a whole lot more. This episode was brought to you by Headlines FreeBSD Project Status Report (January to March 2017) (https://www.freebsd.org/news/status/report-2017-01-2017-03.html) While a few of these projects indicate they are a "plan B" or an "attempt III", many are still hewing to their original plans, and all have produced impressive results. Please enjoy this vibrant collection of reports, covering the first quarter of 2017. The quarterly report opens with notes from Core, The FreeBSD Foundation, the Ports team, and Release Engineering On the project front, the Ceph on FreeBSD project had made considerable advances, and is now usable as the net/ceph-devel port via the ceph-fuse module. Eventually they hope to have a kernel RADOS block device driver, so fuse is not required CloudABI update, including news that the Bitcoin reference implementation is working on a port to CloudABI eMMC Flash and SD card updates, allowing higher speeds (max speed changes from ~40 to ~80 MB/sec). As well, the MMC Stack can now also be backed by the CAM framework. Improvements to the Linuxulator More detail on the pNFS Server plan B that we discussed in a previous week Snow B.V. is sponsoring a dutch translation of the FreeBSD Handbook using the new .po system *** A plan for open source software maintainers (http://www.daemonology.net/blog/2017-05-11-plan-for-foss-maintainers.html) Colin Percival describes in his blog “a plan for open source software maintainers”: I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular: Free software is expensive. Software is expensive to begin with; but good quality open source software tends to be written by people who are recognized as experts in their fields (partly thanks to that very software) and can demand commensurate salaries. While that expensive developer time is donated (either by the developers themselves or by their employers), this influences what their time is used for: Individual developers like doing things which are fun or high-status, while companies usually pay developers to work specifically on the features those companies need. Maintaining existing code is important, but it is neither fun nor high-status; and it tends to get underweighted by companies as well, since maintenance is inherently unlikely to be the most urgent issue at any given time. Open source software is largely a "throw code over the fence and walk away" exercise. Over the past 15 years I've written freebsd-update, bsdiff, portsnap, scrypt, spiped, and kivaloo, and done a lot of work on the FreeBSD/EC2 platform. Of these, I know bsdiff and scrypt are very widely used and I suspect that kivaloo is not; but beyond that I have very little knowledge of how widely or where my work is being used. Anecdotally it seems that other developers are in similar positions: At conferences I've heard variations on "you're using my code? Wow, that's awesome; I had no idea" many times. I have even less knowledge of what people are doing with my work or what problems or limitations they're running into. Occasionally I get bug reports or feature requests; but I know I only hear from a very small proportion of the users of my work. 
I have a long list of feature ideas which are sitting in limbo simply because I don't know if anyone would ever use them — I suspect the answer is yes, but I'm not going to spend time implementing these until I have some confirmation of that. A lot of mid-size companies would like to be able to pay for support for the software they're using, but can't find anyone to provide it. For larger companies, it's often easier — they can simply hire the author of the software (and many developers who do ongoing maintenance work on open source software were in fact hired for this sort of "in-house expertise" role) — but there's very little available for a company which needs a few minutes per month of expertise. In many cases, the best support they can find is sending an email to the developer of the software they're using and not paying anything at all — we've all received "can you help me figure out how to use this" emails, and most of us are happy to help when we have time — but relying on developer generosity is not a good long-term solution. Every few months, I receive email from people asking if there's any way for them to support my open source software contributions. (Usually I encourage them to donate to the FreeBSD Foundation.) Conversely, there are developers whose work I would like to support (e.g., people working on FreeBSD wifi and video drivers), but there isn't any straightforward way to do this. Patreon has demonstrated that there are a lot of people willing to pay to support what they see as worthwhile work, even if they don't get anything directly in exchange for their patronage. It seems to me that this is a case where problems are in fact solutions to other problems. To wit: Users of open source software want to be able to get help with their use cases; developers of open source software want to know how people are using their code. Users of open source software want to support the the work they use; developers of open source software want to know which projects users care about. Users of open source software want specific improvements; developers of open source software may be interested in making those specific changes, but don't want to spend the time until they know someone would use them. Users of open source software have money; developers of open source software get day jobs writing other code because nobody is paying them to maintain their open source software. I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed based on allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on issues with higher impact. 
He closes with three questions, asking whether users and software developers alike would be interested in this, and whether users would be willing to pay and developers interested in being paid. Check out the comments (and those on https://news.ycombinator.com/item?id=14313804 (Hacker News)) as well for some suggestions and discussion on the topic *** OpenBSD vmm hypervisor: Part 2 (http://www.h-i-r.net/2017/04/openbsd-vmm-hypervisor-part-2.html) We asked for people to write up their experience using OpenBSD’s VMM. This blog post is just that. This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm. A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1. Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in. Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself. To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here. The command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive. Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding a block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf. I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me. You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access. Let us know how VMM works for you
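For reference, here is a hedged sketch of the commands and configuration the walkthrough describes; the VM name, disk size, owner, paths and the lladdr are placeholders, and the exact vmctl flags may differ slightly between OpenBSD releases:

# create a 1.5GB disk image (no root needed if your user should own the VM)
vmctl create test.img -s 1.5G
# boot a fresh VM from the host's /bsd.rd: -c attaches the console, -m sets memory,
# -n picks the virtual switch defined in /etc/vm.conf, -d attaches the disk image
doas vmctl start "test.vm" -c -b /bsd.rd -m 256M -n "local" -d test.img
# once the install is done, describe the VM permanently in /etc/vm.conf, e.g.:
doas tee -a /etc/vm.conf <<'EOF'
vm "test.vm" {
    disable
    owner youruser
    memory 256M
    disk "/home/youruser/vmm/test.img"
    interface {
        switch "local"
        lladdr fe:e1:ba:d0:0d:01
    }
}
EOF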
*** News Roundup openbsd changes of note 621 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-621) More stuff, more fun. Fix script to not perform tty operations on things that aren’t ttys. Detected by pledge. Merge libdrm 2.4.79. After a forced unmount, also unmount any filesystems below that mount point. Flip previously warm pages in the buffer cache to memory above the DMA region if uvm tells us it is available. Pages are not automatically promoted to upper memory. Instead it’s used as additional memory only for what the cache considers long term buffers. I/O still requires DMA memory, so writing to a buffer will pull it back down. Makefile support for systems with both gcc and clang. Make i386 and amd64 so. Take a more radical approach to disabling colours in clang. When the data buffered for write in tmux exceeds a limit, discard it and redraw. Helps when a fast process is running inside tmux running inside a slow terminal. Add a port of the witness(4) lock validation tool from FreeBSD. Use it with mplock, rwlock, and mutex in the kernel. Properly save and restore FPU context in vmm. Remove KGDB. It neither compiles nor works. Add a constant time AES implementation, from BearSSL. Remove SSHv1 from ssh. and more... *** Digging into BSD's choice of Unix group for new directories and files (https://utcc.utoronto.ca/~cks/space/blog/unix/BSDDirectoryGroupChoice) I have to eat some humble pie here. In comments on my entry on an interesting chmod failure, Greg A. Woods pointed out that FreeBSD's behavior of creating everything inside a directory with the group of the directory is actually traditional BSD behavior (it dates all the way back to the 1980s), not some odd new invention by FreeBSD. As traditional behavior it makes sense that it's explicitly allowed by the standards, but I've also come to think that it makes sense in context and in general. To see this, we need some background about the problem facing BSD. In the beginning, two things were true in Unix: there was no mkdir() system call, and processes could only be in one group at a time. With processes being in only one group, the choice of the group for a newly created filesystem object was easy; it was your current group. This was felt to be sufficiently obvious behavior that the V7 creat(2) manpage doesn't even mention it. Now things get interesting. 4.1c BSD seems to be where mkdir(2) is introduced and where creat() stops being a system call and becomes an option to open(2). It's also where processes can be in multiple groups for the first time. The 4.1c BSD open(2) manpage is silent about the group of newly created files, while the mkdir(2) manpage specifically claims that new directories will have your effective group (ie, the V7 behavior). This is actually wrong. In both mkdir() in sys_directory.c and maknode() in ufs_syscalls.c, the group of the newly created object is set to the group of the parent directory. Then finally in the 4.2 BSD mkdir(2) manpage the group of the new directory is correctly documented (the 4.2 BSD open(2) manpage continues to say nothing about this). So BSD's traditional behavior was introduced at the same time as processes being in multiple groups, and we can guess that it was introduced as part of that change. When your process can only be in a single group, as in V7, it makes perfect sense to create new filesystem objects with that as their group. It's basically the same case as making new filesystem objects be owned by you; just as they get your UID, they also get your GID. When your process can be in multiple groups, things get less clear. A filesystem object can only be in one group, so which of your several groups should a new filesystem object be owned by, and how can you most conveniently change that choice?
One option is to have some notion of a 'primary group' and then provide ways to shuffle around which of your groups is the primary group. Another option is the BSD choice of inheriting the group from context. By far the most common case is that you want your new files and directories to be created in the 'context', ie the group, of the surrounding directory. If you fully embrace the idea of Unix processes being in multiple groups, not just having one primary group and then some number of secondary groups, then the BSD choice makes a lot of sense. And for all of its faults, BSD tended to relatively fully embrace its changes. While it leads to some odd issues, such as the one I ran into, pretty much any choice here is going to have some oddities. Centrally managed Bhyve infrastructure with Ansible, libvirt and pkg-ssh (http://www.shellguardians.com/2017/05/centrally-managed-bhyve-infrastructure.html) At work we've been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor even though we are using an earlier version available on FreeBSD 10.3. This means we lack Windows and VNC support among other things, but it is not a big deal. After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream "AUTOMATION!" and so did we. Therefore, we started looking for different ways to improve our deployments. We had a look at existing frameworks that manage Bhyve, but none of them had a feature that we find really important: having a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations. The following building blocks are used: The ZFS snapshot of an existing VM. This will be our VM template. A modified version of oneoff-pkg-create to package the ZFS snapshots. pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail. libvirt to manage our Bhyve VMs. The Ansible modules virt, virt_net and virt_pool. Once automated, the installation process needs 2 minutes at most, compared with the 30 minutes needed to install a VM manually, and it also allows us to deploy many guests in parallel. A rough sketch of that workflow is shown below.
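A rough sketch of that workflow, with hypothetical dataset, package and guest names (none of these come from the article):

# 1. the template is a ZFS snapshot of an installed, cleaned-up VM
zfs snapshot zroot/vm/fbsd103-template@golden
# 2. the modified oneoff-pkg-create wraps that snapshot into a package, which is
#    published to the local FreeBSD repo served from the pkg-ssh/pkg-repo jail
# 3. deploying a new guest on any bhyve host then boils down to roughly:
pkg install -y vm-template-fbsd103            # fetch the image from the local repo
virsh -c bhyve:///system define guest01.xml   # register the guest with libvirt's bhyve driver
virsh -c bhyve:///system start guest01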
NetBSD maintainer in the QEMU project (https://blog.netbsd.org/tnf/entry/netbsd_maintainer_in_the_qemu) QEMU - the FAST! processor emulator - is a generic, Open Source, machine emulator and virtualizer. It defines the state of the art in modern virtualization. This software has been developed for multiplatform environments with support for NetBSD since virtually forever. It's the primary tool used by the NetBSD developers and release engineering team. It is run with continuous integration tests for daily commits and executes regression tests through the Automatic Test Framework (ATF). The QEMU developers warned the Open Source community - with version 2.9 of the emulator - that they will eventually drop support for suboptimally supported hosts if nobody steps in and takes over maintainership to refresh the support. This warning was directed at the major BSDs, Solaris, AIX and Haiku. Thankfully the NetBSD position has now been filled, restoring official NetBSD maintenance. Beastie Bits OpenBSD Community Goes Gold (http://undeadly.org/cgi?action=article&sid=20170510012526&mode=flat&count=0) CharmBUG’s Tor Hack-a-thon has been pushed back to July due to scheduling difficulties (https://www.meetup.com/CharmBUG/events/238218840/) Direct Rendering Manager (DRM) Driver for i915, from the Linux kernel to Haiku with the help of DragonflyBSD’s Linux Compatibility layer (https://www.haiku-os.org/blog/vivek/2017-05-05_[gsoc_2017]_3d_hardware_acceleration_in_haiku/) TomTom lists OpenBSD in license (https://twitter.com/bsdlme/status/863488045449977864) London NetBSD Meetup on May 22nd (https://mail-index.netbsd.org/regional-london/2017/05/02/msg000571.html) KnoxBUG meeting May 30th, 2017 - Introduction to FreeNAS (http://knoxbug.org/2017-05-30) *** Feedback/Questions Felix - Home Firewall (http://dpaste.com/35EWVGZ#wrap) David - Docker Recipes for Jails (http://dpaste.com/0H51NX2#wrap) Don - GoLang & Rust (http://dpaste.com/2VZ7S8K#wrap) George - OGG feed (http://dpaste.com/2A1FZF3#wrap) Roller - BSDCan Tips (http://dpaste.com/3D2B6J3#wrap) ***
193: Fire up the 802.11 AC
This week on BSD Now, Adrian Chadd on bringing up 802.11ac in FreeBSD, a pfSense and OpenVPN tutorial, and we talk about an interesting ZFS storage pool checkpoint project. This episode was brought to you by Headlines Bringing up 802.11ac on FreeBSD (http://adrianchadd.blogspot.com/2017/04/bringing-up-80211ac-on-freebsd.html) Adrian Chadd has a new blog post about his work to bring 802.11ac support to FreeBSD. 802.11ac allows for speeds up to 500 Mbps and total bandwidth into multiple gigabits. The FreeBSD net80211 stack has reasonably good 802.11n support, but no 802.11ac support. I decided a while ago to start adding basic 802.11ac support. It was a good exercise in figuring out what the minimum set of required features are and another excuse to go find some of the broken corner cases in net80211 that needed addressing. 802.11ac introduces a few new concepts that the stack needs to understand. I decided to use the QCA 802.11ac parts because (a) I know the firmware and general chip stuff from the first generation 11ac parts well, and (b) I know that it does a bunch of stuff (like rate control, packet scheduling, etc) so I don't have to do it. If I chose, say, the Intel 11ac parts then I'd have to implement a lot more of the fiddly stuff to get good behaviour. Step one - adding VHT channels. I decided in the shorter term to cheat and just add VHT channels to the already very large ieee80211_channel map. The Linux way of there being a channel context rather than hundreds of static channels to choose from is better in the long run, but I wanted to get things up and running. So, that's what I did first - I added VHT flags for 20, 40, 80, 80+80 and 160MHz operating modes and I did the bare work required to populate the channel lists with VHT channels as well. Then I needed to glue it into an 11ac driver. My ath10k port was far enough along to attempt this, so I added enough glue to say "I support VHT" to the ic_caps field and propagated it to the driver for monitor mode configuration. And yes, after a bit of dancing, I managed to get a VHT channel to show up in ath10k in monitor mode and could capture 80MHz wide packets. Success! By far the most fiddly part was getting channel promotion to work. net80211 supports the concept of dumb NICs (like atheros 11abgn parts) very well, where you can have multiple virtual interfaces but the "driver" view of the right configuration is what's programmed into the hardware. For firmware NICs which do this themselves (like basically everything sold today) this isn't exactly all that helpful. So, for now, it's limited to a single VAP, and the VAP configuration is partially derived from the global state and partially derived from the negotiated state. It's annoying, but it is adding to the list of things I will have to fix later. The QCA chips/firmware do 802.11 crypto offload. They actually pretend that there's no key - you don't include the IV, you don't include padding, or anything. You send commands to set the crypto keys and then you send unencrypted 802.11 frames (or 802.3 frames if you want to do ethernet only.) This means that I had to teach net80211 a few things: + frames decrypted by the hardware needed to have an "I'm decrypted" bit set, because the 802.11 header field saying "I'm decrypted!" is cleared + frames encrypted don't have the "i'm encrypted" bit set + frames encrypted/decrypted have no padding, so I needed to teach the input path and crypto paths to not validate those if the hardware said "we offload it all."
Now comes the hard bit of fixing the shortcomings before I can commit the driver. There are .. lots. The first one is the global state. The ath10k firmware allows what they call 'vdevs' (virtual devices) - for example, multiple SSID/BSSID support is implemented with multiple vdevs. STA+WDS is implemented with vdevs. STA+P2P is implemented with vdevs. So, technically speaking I should go and find all of the global state that should really be per-vdev and make it per-vdev. This is tricky though, because a lot of the state isn't kept per-VAP even though it should be. Anyway, so far so good. I need to do some of the above and land it in FreeBSD-HEAD so I can finish off the ath10k port and commit what I have to FreeBSD. There's a lot of stuff coming - including all of the wave-2 stuff (like multiuser MIMO / MU-MIMO) which I just plainly haven't talked about yet. Viva la FreeBSD wireless! pfSense and OpenVPN Routing (http://www.terrafoundry.net/blog/2017/04/12/pfsense-openvpn/) This article tries to be a simple guide on how to enable your home (or small office) https://www.pfsense.org/ (pfSense) setup to route some traffic via the vanilla Internet, and some via a VPN site that you’ve set up in a remote location. Reasons to set up a VPN: control, security, privacy, fun. VPNs do not instantly guarantee privacy; they’re a layer, as with any other measure you might invoke. In this example I used a server that’s directly under my name. Sure, it was a country with strict privacy laws, but that doesn’t mean that the outgoing IP address wouldn’t be logged somewhere down the line. There’s also no reason you have to use your own OpenVPN install; there are many, many personal providers out there, who can offer the same functionality, and a degree of anonymity. (If you and a hundred other people are all coming from one IP, it becomes extremely difficult to differentiate; some VPN providers even claim a ‘logless’ setup.) VPNs can be slow. The reason I have a split setup in this article is because there are devices that I want to connect to the internet quickly, and that I’m never doing sensitive things on, like banking. I don’t mind if my Reddit-browsing and IRC messages are a bit slower, but my Nintendo Switch and PS4 should have a nippy connection. Services like Netflix can and do block VPN traffic in some cases. This is more of an issue for wider VPN providers (I suspect, but have no proof, that they just blanket block known VPN IP addresses.) If your VPN is in another country, search results and tracking can be skewed. This is arguably a good thing (who wants to be tracked?). But it can also lead to frustration if your DuckDuckGo results are tailored to the middle of Paris, rather than your flat in Birmingham. The tutorial walks through the basic setup: labeling the interfaces, configuring DHCP, creating a VPN. Now that we have our OpenVPN connection set up, we’ll double check that we’ve got our interfaces assigned. With any luck (after we’ve assigned our OPENVPN connection correctly), you should now see your new Virtual Interface on the pfSense Dashboard. We’re charging full steam towards the sections that start to lose people. Don’t be disheartened if you’ve had a few issues up to now; there is no “right” way to set up a VPN installation, and it may be that you have to tweak a few things and dive into a few man-pages before you’re set up. NAT is tricky, and frankly it only exists because we stretched out IPv4 for much longer than we should have.
That being said it’s a necessary evil in this day and age, so let’s set up our connection to work with it. We need NAT here because we’re going to masquerade our machines on the LAN interface so they show as coming from the OpenVPN client IP address to the OpenVPN server. Head over to Firewall -> NAT -> Outbound. The first thing we need to do in this section is to change the Outbound NAT Mode to something we can work with, in this case “Hybrid.” Configure the LAN interface to be NAT’d to the OpenVPN address, and the INSECURE interface to use your regular ISP connection. Configure the firewall to allow traffic from the LAN network to reach the INSECURE network. Then add a second rule allowing traffic from the LAN network to any address, and set the gateway to the OPENVPN connection. And there you have it: traffic from the LAN is routed via the VPN, and traffic from the INSECURE network uses the naked internet connection *** Switching to OpenBSD (https://mndrix.blogspot.co.uk/2017/05/switching-to-openbsd.html) After 12 years, I switched from macOS to OpenBSD. It's clean, focused, stable, consistent and lets me get my work done without any hassle. When I first became interested in computers, I thought operating systems were fascinating. For years I would reinstall an operating system every other weekend just to try a different configuration: MS-DOS 3.3, Windows 3.0, Linux 1.0 (countless hours recompiling kernels). In high school, I settled down and ran OS/2 for 5 years until I graduated college. I switched to Linux after college and used it exclusively for 5 years. I got tired of configuring Linux, so I switched to OS X for the next 12 years, where things just worked. But Snow Leopard was 7 years ago. These days, OS X is like running a denial of service attack against myself. macOS has a dozen apps I don't use but can't remove. Updating them requires a restart. Frequent updates to the browser require a restart. A minor XCode update requires me to download a 4.3 GB file. My monitors frequently turn off and require a restart to fix. A system's availability is a function (http://techthoughts.typepad.com/managing_computers/2007/11/availability-mt.html) of mean time between failure and mean time to repair. For macOS, both numbers are heading in the wrong direction for me. I don't hold any hard feelings about it, but it's time for me to get off this OS and back to productive work. I found OpenBSD very refreshing, so I created a bootable thumb drive and within an hour had it up and running on a two-year old laptop. I've been using it for my daily work for the past two weeks and it's been great. Simple, boring and productive. Just the way I like it. The documentation is fantastic. I've been using Unix for years and have learned quite a bit just by reading their man pages. OS releases come like clockwork every 6 months and are supported for 12. Security and other updates seem relatively rare between releases (roughly one small patch per week during 6.0). With syspatch in 6.1, installing them should be really easy too. ZFS Storage Pool Checkpoint Project (https://sdimitro.github.io/post/zpool-checkpoint) During the OpenZFS summit last year (2016), Dan Kimmel and I quickly hacked together the zpool checkpoint command in ZFS, which allows reverting an entire pool to a previous state. Since it was just for a hackathon, our design was bare bones and our implementation far from complete. Around a month later, we had a new and almost complete design within Delphix and I was able to start the implementation on my own.
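As a preview of the tool described below, here is a hedged sketch of the command-line workflow; the syntax shown is how the feature ended up being exposed, and the pool name is a placeholder:

zpool checkpoint tank            # take a pool-wide checkpoint before the risky procedure
# ... perform the destructive actions: dataset renames, feature upgrades, etc. ...
zpool checkpoint -d tank         # everything worked, so discard the checkpoint
# or, if something went wrong, rewind the whole pool back to the checkpointed state:
zpool export tank
zpool import --rewind-to-checkpoint tank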
I completed the implementation last month, and we’re now running regression tests, so I decided to write this blog post explaining what a storage pool checkpoint is, why we need it within Delphix, and how to use it. The Delphix product is basically a VM running DelphixOS (a derivative of illumos) with our application stack on top of it. During an upgrade, the VM reboots into the new OS bits and then runs some scripts that update the environment (directories, snapshots, open connections, etc.) for the new version of our app stack. Software being software, failures can happen at different points during the upgrade process. When an upgrade script that makes changes to ZFS fails, we have a corresponding rollback script that attempts to bring ZFS and our app stack back to their previous state. This is very tricky as we need to undo every single modification applied to ZFS (including dataset creation and renaming, or enabling new zpool features). The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a “pool-wide snapshot” (or a variation of extreme rewind that doesn’t corrupt your data). It remembers the entire state of the pool at the point that it was taken and the user can revert back to it later or discard it. Its generic use case is an administrator that is about to perform a set of destructive actions to ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. Otherwise, she discards it. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a “high-level transaction”. I definitely see value in this for the appliance use case Some usage examples follow, along with some caveats. One of the restrictions is that you cannot attach, detach, or remove a device while a checkpoint exists. The zpool add operation is still possible; however, if you roll back to the checkpoint, the device will no longer be part of the pool. Rather than a shortcoming, this seems like a nice feature, a way to help users avoid the most common foot-shooting (which I witnessed in person at Linux Fest): adding a new log or cache device but missing a keyword, and adding it as a storage vdev rather than an aux vdev. This operation could simply be undone if a checkpoint were taken before the device was added. *** News Roundup Review of TrueOS (https://distrowatch.com/weekly.php?issue=20170501#trueos) TrueOS, which was formerly named PC-BSD, is a FreeBSD-based operating system. TrueOS is a rolling release platform which is based on FreeBSD's "CURRENT" branch, providing TrueOS with the latest drivers and features from FreeBSD. Apart from the name change, TrueOS has deviated from the old PC-BSD project in a number of ways. The system installer is now more streamlined (and I will touch on that later) and TrueOS is a rolling release platform while PC-BSD defaulted to point releases. Another change is PC-BSD used to allow the user to customize which software was installed at install time, including the desktop environment. The TrueOS project now selects a minimal amount of software for the user and defaults to using the Lumina desktop environment. From the conclusions: What I took away from my time with TrueOS is that the project is different in a lot of ways from PC-BSD. Much more than just the name has changed.
The system is now more focused on cutting edge software and features in FreeBSD's development branch. The install process has been streamlined and the user begins with a set of default software rather than selecting desired packages during the initial setup. The configuration tools, particularly the Control Panel and AppCafe, have changed a lot in the past year. The designs have a more flat, minimal look. It used to be that PC-BSD did not have a default desktop exactly, but there tended to be a focus on KDE. With TrueOS the project's in-house desktop, Lumina, serves as the default environment and I think it holds up fairly well. In all, I think TrueOS offers a convenient way to experiment with new FreeBSD technologies and ZFS. I also think people who want to run FreeBSD on a desktop computer may want to look at TrueOS as it sets up a graphical environment automatically. However, people who want a stable desktop platform with lots of applications available out of the box may not find what they want with this project. A simple guide to install Ubuntu on FreeBSD with bhyve (https://www.davd.eu/install-ubuntu-on-freebsd-with-bhyve/) David Prandzioch writes in his blog: For some reasons I needed a Linux installation on my NAS. bhyve is a lightweight virtualization solution for FreeBSD that makes that easy and efficient. However, the CLI of bhyve is somewhat bulky and bare, making it hard to use, especially for the first time. This is what vm-bhyve solves - it provides a simple CLI for working with virtual machines. More details follow about what steps are needed to set up vm-bhyve on FreeBSD. Also check out his other tutorials on his blog: https://www.davd.eu/freebsd/ (https://www.davd.eu/freebsd/) *** Graphical Overview of the Architecture of FreeBSD (https://dspinellis.github.io/unix-architecture/arch.pdf) This diagram tries to show the different components that make up the FreeBSD Operating System. It breaks down the various utilities, libraries, and components into some categories and sub-categories:
User Commands: Development (cc, ld, nm, as, etc), File Management (ls, cp, cmp, mkdir), Multiuser Commands (login, chown, su, who), Number Processing (bc, dc, units, expr), Text Processing (cut, grep, sort, uniq, wc), User Messaging (mail, mesg, write, talk), Little Languages (sed, awk, m4), Network Clients (ftp, scp, fetch), Document Preparation (*roff, eqn, tbl, refer)
Administrator and System Commands: Filesystem Management (fsck, newfs, gpart, mount, umount), Networking (ifconfig, route, arp), User Management (adduser, pw, vipw, sa, quota*), Statistics (iostat, vmstat, pstat, gstat, top), Network Servers (sshd, ftpd, ntpd, routed, rpc.*), Scheduling (cron, periodic, rc.*, atrun)
Libraries (C Standard, Operating System, Peripheral Access, System File Access, Data Handling, Security, Internationalization, Threads)
System Call Interface (File I/O, Mountable Filesystems, File ACLs, File Permissions, Processes, Process Tracing, IPC, Memory Mapping, Shared Memory, Kernel Events, Memory Locking, Capsicum, Auditing, Jails)
Bootstrapping (Loaders, Configuration, Kernel Modules)
Kernel Utility Functions: Privilege Management (acl, mac, priv), Multitasking (kproc, kthread, taskqueue, swi, ithread), Memory Management (vmem, uma, pbuf, sbuf, mbuf, mbchain, malloc/free), Generic (nvlist, osd, socket, mbuf_tags, bitset), Virtualization (cpuset, crypto, device, devclass, driver), Synchronization (lock, sx, sema, mutex, condvar, atomic_*, signal), Operations (sysctl, dtrace, watchdog, stack, alq, ktr, panic)
I/O Subsystem:
Special Devices (line discipline, tty, raw character, raw disk), Filesystems (UFS, FFS, NFS, CD9660, Ext2, UDF, ZFS, devfs, procfs), Sockets, Network Protocols (TCP, UDP, ICMP, IPSec, IP4, IP6), Netgraph (50+ modules)
Drivers and Abstractions: Character Devices, CAM (ATA, SATA, SAS, SPI), Network Interface Drivers (802.11, if_ae, 100+, if_xl, NDIS)
GEOM: Storage (stripe, mirror, raid3, raid5, concat), Encryption / Compression (eli, bde, shsec, uzip), Filesystem (label, journal, cache, mbr, bsd), Virtualization (md, nop, gate, virstor)
Process Control Subsystems: Scheduler, Memory Management, Inter-process Communication, Debugging Support *** Official OpenBSD 6.1 CD - There's only One! (http://undeadly.org/cgi?action=article&sid=20170503203426&mode=expanded) eBay auction link (http://www.ebay.com/itm/The-only-Official-OpenBSD-6-1-CD-set-to-be-made-For-auction-for-the-project-/252910718452) Now it turns out that in fact, exactly one CD set was made, and it can be yours if you are the successful bidder in the auction that ends on May 13, 2017 (About 3 days from when this episode was recorded). The CD set is hand made and signed by Theo de Raadt. Fun Fact: The winning bidder will have an OpenBSD CD set that even Theo doesn't have. *** Beastie Bits Hardware Wanted by OpenBSD developers (https://www.openbsd.org/want.html) Donate hardware to FreeBSD developers (https://www.freebsd.org/donations/index.html#components) Announcing NetBSD and the Google Summer of Code Projects 2017 (https://blog.netbsd.org/tnf/entry/announcing_netbsd_and_the_google) Announcing FreeBSD GSoC 2017 Projects (https://wiki.freebsd.org/SummerOfCode2017Projects) LibreSSL 2.5.4 Released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.4-relnotes.txt) CharmBUG Meeting - Tor Browser Bundle Hack-a-thon (https://www.meetup.com/CharmBUG/events/238218840/) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) Experimental Price Cuts (https://blather.michaelwlucas.com/archives/2931) Linux Fest North West 2017: Three Generations of FreeNAS: The World’s most popular storage OS turns 12 (https://www.youtube.com/watch?v=x6VznQz3VEY) *** Feedback/Questions Don - Reproducible builds & gcc/clang (http://dpaste.com/2AXX75X#wrap) architect - C development on BSD (http://dpaste.com/0FJ854X#wrap) David - Linux ABI (http://dpaste.com/2CCK2WF#wrap) Tom - ZFS (http://dpaste.com/2Z25FKJ#wrap) RAIDZ Stripe Width Myth, Busted (https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz) Ivan - Jails (http://dpaste.com/1Z173WA#wrap) ***
192: SSHv1 Be Gone
This week we have a FreeBSD Foundation development update, tell you about spring cleaning in the TrueOS project, Dynamic WDS & a whole lot more! This episode was brought to you by Headlines OpenSSH Removes SSHv1 Support (http://undeadly.org/cgi?action=article&sid=20170501005206) In a series of commits starting here (http://marc.info/?l=openbsd-cvs&m=149359384905651&w=2) and ending with this one (http://marc.info/?l=openbsd-cvs&m=149359530105864&w=2), Damien Miller completed the removal of all support for the now-historic SSHv1 protocol from OpenSSH (https://www.openssh.com/). The final commit message, for the commit that removes the SSHv1 related regression tests, reads: Eliminate explicit specification of protocol in tests and loops over protocol. We only support SSHv2 now. Dropping support for SSHv1 and associated ciphers that were either suspected to or known to be broken has been planned for several releases, and has been eagerly anticipated by many in the OpenBSD camp. In practical terms this means that starting with OpenBSD-current and snapshots as they will be very soon (and further down the road OpenBSD 6.2 with OpenSSH 7.6), the arcane options you used with ssh (http://man.openbsd.org/ssh) to connect to some end-of-life gear in a derelict data centre you don't want to visit anymore will no longer work and you will be forced to do the reasonable thing. Upgrade. FreeBSD Foundation April 2017 Development Projects Update (https://www.freebsdfoundation.org/blog/april-2017-development-projects-update/) FreeBSD runs on many embedded boards that provide a USB target or USB On-the-Go (OTG) interface. This allows the embedded target to act as a USB device, and present one or more interfaces (USB device classes) to a USB host. That host could be running FreeBSD, Linux, Mac OS, Windows, Android, or another operating system. USB device classes include audio input or output (e.g. headphones), mass storage (USB flash drives), human interface device (keyboards, mice), communications (Ethernet adapters), and many others. The Foundation awarded a project grant to Edward Tomasz Napierała to develop a USB mass storage target driver, using the FreeBSD CAM Target Layer (CTL) as a backend. This project allows FreeBSD running on an embedded platform, such as a BeagleBone Black or Raspberry Pi Zero, to emulate a USB mass storage target, commonly known as a USB flash stick. The backing storage for the emulated mass storage target is on the embedded board’s own storage media. It can be configured at runtime using the standard CTL configuration mechanism – the ctladm(8) utility, or the ctl.conf(5) file. The FreeBSD target can now present a mass storage interface, a serial interface (for a console on the embedded system), and an Ethernet interface for network access. A typical usage scenario for the mass storage interface is to provide users with documentation and drivers that can be accessed from their host system. This makes it easier for new users to interact with the embedded FreeBSD board, especially in cases where the host operating system may require drivers to access all of the functionality, as with Windows and OS X. They provide instructions on how to configure a BeagleBone Black to act as a flash memory stick attached to a host computer. Check out the article, test, and report back your experiences with the new USB OTG interface.
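A rough sketch of what the CTL side of such a setup can look like; the file path and size are assumptions, and the article itself covers the exact steps for wiring the LUN to the USB device-mode driver:

truncate -s 64M /root/usbdisk.img                 # backing file on the board's own storage
ctladm create -b block -o file=/root/usbdisk.img  # expose the file as a CTL LUN
ctladm devlist                                    # confirm the LUN exists; the cfumass(4)
                                                  # driver then presents it to the USB host
                                                  # as an ordinary flash stick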
*** Spring cleaning: Hardware Update and Preview of upcoming TrueOS changes (https://www.trueos.org/blog/spring-cleaning-hardware-update-preview-upcoming-trueos-changes/) The much-abused TrueOS build server is experiencing some technical difficulties, slowing down building new packages and releasing updates. After some investigation, one problem seemed to be a bug with the Poudriere port building software. After updating builders to the new version, some of the instability is resolved. Thankfully, we won’t have to rely on this server so much, because… We’re getting new hardware! A TrueOS/Lumina contributor is donating a new(ish) server to the project. Special thanks to TrueOS contributor/developer q5sys for the awesome new hardware! Preview: UNSTABLE and Upcoming TrueOS STABLE update A fresh UNSTABLE release is dropping today, with a few key changes: Nvidia/graphics driver detection fixes. Boot environment listing fix (FreeBSD boot-loader only) VirtualBox issues fixed on most systems. There appears to be a regression in VirtualBox 5.1 with some hardware. New icon themes for Lumina (Preferences -> Appearance -> Theme). Removal of legacy pc-diskmanager. It was broken and unmaintained, so it is time to remove it. Installer/.iso Changes (Available with new STABLE Update): The text installer has been removed. It was broken and unmaintained, so it is time to remove it. There is now a single TrueOS install image. You can still choose to install as either a server or desktop, but both options live in a single install image now. This image is still available as either an .iso or .img file. The size of the .iso and .img files is reduced by about 500 MB, to around 2 GB total. We’ve removed Firefox and Thunderbird from the default desktop installation. These have been replaced with Qupzilla and Trojita. Note you can replace Qupzilla and Trojita with Firefox and Thunderbird via the SysAdm Appcafe after completing the TrueOS install. Grub is no longer an installation option. Instead, the FreeBSD boot-loader is always used for the TrueOS partition. rEFInd is used as the master boot-loader for multi-booting; EFI partitioning is required. Qpdfview is now preinstalled for pdf viewing. Included a slideshow during the installation with tips and screenshots. Interview - Patrick M. Hausen - hausen@punkt.de (mailto:hausen@punkt.de) Founder of Punkt.de HAST - Highly Available Storage (https://wiki.freebsd.org/HAST) News Roundup (finally) investigating how to get dynamic WDS (DWDS) working in FreeBSD! (http://adrianchadd.blogspot.com/2017/04/finally-investigating-how-to-get.html) Adrian Chadd writes in his blog: I sat down recently to figure out how to get dynamic WDS working on FreeBSD-HEAD. It's been in FreeBSD since forever, and it in theory should actually have just worked, but it's extremely not documented in any useful way. It's basically the same technology in earlier Apple Airports (before it grew into what the wireless tech world calls "Proxy-STA") and is what the "extender" technology on Qualcomm Atheros chipsets implements. A common question I get from people is "why can't I bridge multiple virtual machines on my laptop and have them show up over wifi? It works on ethernet!" And my response is "when I make dynamic WDS work, you can just make this work on FreeBSD devices - but for now, use NAT." That always makes people sad. He goes on to explain that normal station/access point setups have up to three addresses, and depending on the packet type, these can vary.
There are a couple of variations in the addresses, which is more than the number of address fields in a normal 802.11 frame. The big note here is that there's not enough MAC addresses to say "please send this frame to a station MAC address, but then have them forward it to another MAC address attached behind it in a bridge." That would require 4 mac addresses in the 802.11 header, which we don't get. .. except we do. There's a separate address format where from-DS and to-DS bits in the header set to 1, which means "this frame is coming from distribution system to a distribution system", and it has four mac addresses. The RA is then the AP you're sending to, and then a fourth field indicates the eventual destination address that may be an ethernet device connected behind said STA. If you don't configure up WDS, then when you send frames from a station from a MAC address that isn't actually your 802.11 interface MAC address, the system would be confused. The STA wouldn't be able to transmit it easily, and the AP wouldn't know how to get back to your bridged ethernet addresses. The original WDS was a statically configured thing. [...] So for static configurations, this works great. You'd associate your extender AP as a station of the central AP, it'd use wpa_supplicant to setup encryption, then anything between that central AP and that extender AP (as a station) would be encrypted as normal station traffic (but, 4-address frame format.) But that's not very convenient. You have to statically configure everything, including telling your central AP about all of your satellite extender APs. If you want to replace your central AP, you have to reprogram all of your extenders to use the new MAC addresses. So, Sam Leffler came up with "dynamic WDS" - where you don't have to explicitly state the list of central/satellite APs. Instead, you configure a central AP as "dynamic WDS", and when a 4-address frame shows up from an associated station, it "promotes" it to a WDS peer for you. On the satellite AP, it will just find an AP to communicate to, and then assume it'll do WDS and start using 4-address frames. It's still a bit clunky (there's no beacon, probe request, etc IEs that say "I do dynamic WDS!" so you'd better make ALL your central APs a different SSID!) but it certainly is better than what we had. Firstly, there are scripts in src/tools/tools/net80211/ - setup.wdsmain and setup.wdsrelay. These scripts are .. well, the almost complete documentation on a dynamic WDS setup. The manpage doesn't go into anywhere near enough information. So I dug into it. It turns out that dynamic WDS uses a helper daemon - 'wlanwds' - which listens for dynamic WDS configuration changes and will do things for you. This is what runs on the central AP side. Then it started making sense! So far, so good. I followed that script, modified it a bit to use encryption, and .. well, it half worked. Association worked fine, but no traffic was passing. A little more digging showed the actual problem - the dynamic WDS example scripts are for an open/unencrypted network. If you are using an encrypted network, the central AP side needs to enable privacy on the virtual interfaces so traffic gets encrypted with the parent interface encryption keys. Now, I've only done enough testing to show that indeed it is working. I haven't done anything like pass lots of traffic via iperf, or have a mix of DWDS and normal STA peers, nor actually run it for longer than 5 minutes. I'm sure there will be issues to fix. 
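As a very rough sketch of the two sides of a DWDS setup (interface names and the SSID are assumptions; the authoritative reference remains the setup.wdsmain/setup.wdsrelay scripts and the wlanwds helper mentioned above):

# central AP: a hostap vap with dynamic WDS enabled, plus the wlanwds daemon,
# which creates a WDS vap whenever a 4-address station shows up
ifconfig wlan0 create wlandev ath0 wlanmode hostap ssid dwdstest dwds
wlanwds &
# satellite/extender: an ordinary station vap, also marked dwds, which can then
# be bridged to the extender's wired ports (add WPA via wpa_supplicant as usual)
ifconfig wlan0 create wlandev ath0 wlanmode sta ssid dwdstest dwds up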
However - I do need it at home, as I can't see the home AP from the upstairs room (and now you see why I care about DWDS!) and so when I start using it daily I'll fix whatever hilarity ensues. Why don't schools teach debugging? (https://danluu.com/teach-debugging/) A friend of mine and I couldn’t understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed. Write down the problem. Think real hard. Write down the solution. The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we’d been asked to tackle before. I understand now why half the class struggled with the earlier assignments. Without an explanation of how to systematically approach problems, anyone who didn’t intuitively grasp the correct solution was in for a semester of frustration. People who were, like me, above average but not great, skated through most of the class and either got lucky or wasted a huge chunk of time on the final project. I’ve even seen people talented enough to breeze through the entire degree without ever running into a problem too big to intuitively understand; those people have a very bad time when they run into a 10 million line codebase in the real world. The more talented the engineer, the more likely they are to hit a debugging wall outside of school. It’s one of the most fundamental skills in engineering: start at the symptom of a problem and trace backwards to find the source. It takes, at most, half an hour to teach the absolute basics – and even that little bit would be enough to save a significant fraction of those who wash out and switch to non-STEM majors. Why do we leave material out of classes and then fail students who can’t figure out that material for themselves? Why do we make the first couple years of an engineering major some kind of hazing ritual, instead of simply teaching people what they need to know to be good engineers? For all the high-level talk about how we need to plug the leaks in our STEM education pipeline, not only are we not plugging the holes, we’re proud of how fast the pipeline is leaking. FreeBSD: pNFS server for testing (https://lists.freebsd.org/pipermail/freebsd-fs/2017-April/024702.html) Rick Macklem has issued a call for testing his new pNFS server: I now have a pNFS server that I think is ready for testing/evaluation. It is basically a patched FreeBSD-current kernel plus nfsd daemon. If you are interested, some very basic notes on how it works and how to set it up are at: http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt (http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt) A Plan B pNFS service consists of a single MetaData Server (MDS) and K Data Servers (DS), all of which would be recent FreeBSD systems. Clients will mount the MDS as they would a single NFS server. When files are created, the MDS creates a file tree identical to what a single NFS server creates, except that all the regular (VREG) files will be empty. As such, if you look at the exported tree on the MDS directly on the MDS server (not via an NFS mount), the files will all be of size == 0. Each of these files will also have two extended attributes in the system attribute name space: pnfsd.dsfile - This extended attribute stores the information that the MDS needs to find the data storage file on a DS for this file.
pnfsd.dsattr - This extended attribute stores the Size, ModifyTime and Change attributes for the file. For each regular (VREG) file, the MDS creates a data storage file on one of the K DSs, in one of the dsNN directories. The name of this file is the file handle of the file on the MDS in hexadecimal. The DSs use 20 subdirectories named "ds0" to "ds19" so that no one directory gets too large. At this time, the MDS generates File Layout layouts to NFSv4.1 clients that know how to do pNFS. For NFS clients that do not support NFSv4.1 pNFS, there will be a performance hit, since the IO RPCs will be proxied by the MDS for the DS server the data storage file resides on. The current setup does not allow for redundant servers. If the MDS or any of the K DS servers fail, the entire pNFS service will be non-functional. Looking at creating mirrored DS servers is planned, but it may be a year or more before that is implemented. I am planning on using the Flex File Layout for this, since it supports client side mirroring, where the client will write to all mirrors concurrently. Beastie Bits Openbsd changes of note 620 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-620) Why Unix commands are short (http://www.catonmat.net/blog/why-unix-commands-are-short/) OPNsense 17.1.5 released (https://opnsense.org/opnsense-17-1-5-released/) Something for Apple dual-GPU users (http://lists.dragonflybsd.org/pipermail/commits/2017-April/625847.html) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) TrueOS/Lumina Dev Q&A: May 5th 2017 (https://discourse.trueos.org/t/trueos-lumina-dev-q-a-5-4-17/1347) Feedback/Questions Peter - Jails (http://dpaste.com/0J14HGJ#wrap) Andrew - Languages and University Courses (http://dpaste.com/31AVFSF#wrap) JuniorJobs (https://wiki.freebsd.org/JuniorJobs) Steve - TrueOS and Bootloader (http://dpaste.com/1BXVZSY#wrap) Ben - ZFS questions (http://dpaste.com/0R7AW2T#wrap) Steve - Linux Emulation (http://dpaste.com/3ZR7NCC#wrap)
191: I Know 64 & A Bunch More
We cover TrueOS/Lumina working to be less dependent on Linux, How the IllumOS network stack works, Throttling the password gropers & the 64 bit inode call for testing. This episode was brought to you by Headlines vBSDCon CFP closed April 29th (https://easychair.org/conferences/?conf=vbsdcon2017) EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/) Developer Commentary: Philosophy, Evolution of TrueOS/Lumina, and Other Thoughts. (https://www.trueos.org/blog/developer-commentary-philosophy-evolution-trueoslumina-thoughts/) Philosophy of Development No project is an island. Every single project needs or uses some other external utility, library, communications format, standards compliance, and more in order to be useful. A static project is typically a dead project. A project needs regular upkeep and maintenance to ensure it continues to build and run with the current ecosystem of libraries and utilities, even if the project has no considerable changes to the code base or feature set. “Upstream” decisions can have drastic consequences on your project. Through no fault of yours, your project can be rendered obsolete or broken by changing standards in the global ecosystem that affect your project’s dependencies. Operating system focus is key. What OS is the project originally designed for? This determines how the “upstream” dependencies list appears and which “heartbeat” to monitor. Evolution of PC-BSD, Lumina, and TrueOS. With these principles in mind – let's look at PC-BSD, Lumina, and TrueOS. PC-BSD : PC-BSD was largely designed around KDE on FreeBSD. KDE/Plasma5 has been available for Linux OS’s for well over a year, but is still not generally available on FreeBSD. It is still tucked away in the experimental “area51” repository where people are trying to get it working first. Lumina : As a developer with PC-BSD for a long time, and a tester from nearly the beginning of the project, I was keenly aware the “winds of change” were blowing in the open-source ecosystem. TrueOS : All of these ecosystem changes finally came to a head for us near the beginning of 2016. KDE4 was starting to deteriorate underneath us, and the FreeBSD “Release” branch would never allow us to compete with the rate of graphics driver or standards changes coming out of the Linux camp. The Rename and Next Steps With all of these changes and the lack of a clear “upgrade” path from PC-BSD to the new systems, we decided it was necessary to change the project itself (name and all). To us, this was the only way to ensure people were aware of the differences, and that TrueOS really is a different kind of project from PC-BSD. Note this was not a “hostile takeover” of the PC-BSD project by rabid FreeBSD fanatics. This was more a refocusing of the PC-BSD project into something that could ensure longevity and reliability for the foreseeable future. Does TrueOS have bugs and issues? Of course! That is the nature of “rolling” with upstream changes all the time. Not only do you always get the latest version of something (a good thing), you also find yourself on the “front line” for finding and reporting bugs in those same applications (a bad thing if you like consistency or stability). What you are also seeing is just how much “churn” happens in the open-source ecosystem at any given time. We are devoted to providing our users (and ourselves – don’t forget we use TrueOS every day too!) a stable, reliable, and secure experience. 
Please be patient as we continue striving toward this goal in the best way possible, not just doing what works for the moment, but the project’s future too. Robert Mustacchi: Excerpts from The Soft Ring Cycle #1 (https://www.youtube.com/watch?v=vnD10WQ2930) The author of the “Turtles on the Wire” post we featured the other week, is back with a video. Joyent has started a new series of lunchtime technical discussions to share information as they grow their engineering team This video focuses on the network stack, how it works, and how it relates to virtualization and multi-tenancy Basically, how the network stack on IllumOS works when you have virtual tenants, be they virtual machines or zones The video describes the many layers of the network stack, how they work together, and how they can be made to work quickly It also talks about the trade-offs between high throughput and low latency How security is enforced, so virtual tenants cannot send packets into VLANs they are not members of, or receive traffic that they are not allowed to by the administrator How incoming packets are classified, and eventually delivered to the intended destination How the system decides if it has enough available resources to process the packet, or if it needs to be dropped How interface polling works on IllumOS (a lot different than on FreeBSD) Then the last 20 minutes are about how the qemu interface of the KVM hypervisor interfaces with the network stack We look forward to seeing more of these videos as they come out *** Forcing the password gropers through a smaller hole with OpenBSD's PF queues (http://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html) While preparing material for the upcoming BSDCan PF and networking tutorial (http://www.bsdcan.org/2017/schedule/events/805.en.html), I realized that the pop3 gropers were actually not much fun to watch anymore. So I used the traffic shaping features of my OpenBSD firewall to let the miscreants inflict some pain on themselves. Watching logs became fun again. The actual useful parts of this article follow - take this as a walkthrough of how to mitigate a wide range of threats and annoyances. First, analyze the behavior that you want to defend against. In our case that's fairly obvious: We have a service that's getting a volume of unwanted traffic, and looking at our logs the attempts come fairly quickly with a number of repeated attempts from each source address. I've written about the rapid-fire ssh bruteforce attacks and their mitigation before (and of course it's in The Book of PF) as well as the slower kind where those techniques actually come up short. The traditional approach to ssh bruteforcers has been to simply block their traffic, and the state-tracking features of PF let you set up overload criteria that add the source addresses to the table that holds the addresses you want to block. For the system that runs our pop3 service, we also have a PF ruleset in place with queues for traffic shaping. For some odd reason that ruleset is fairly close to the HFSC traffic shaper example in The Book of PF, and it contains a queue that I set up mainly as an experiment to annoy spammers (as in, the ones that are already for one reason or the other blacklisted by our spamd). The queue is defined like this: queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300 yes, that's right. A queue with a maximum throughput of 1 kilobit per second. 
I have been warned that this is small enough that the code may be unable to strictly enforce that limit due to the timer resolution in the HFSC code. But that didn't keep me from trying. Now a few small additions to the ruleset are needed for the good to put the evil to the task. We start with a table to hold the addresses we want to mess with. Actually, I'll add two, for reasons that will become clear later:
table <popflooders> persist counters
table <bruteforce> persist counters
The rules that use those tables are:
block drop log (all) quick from <bruteforce>
pass in quick log (all) on egress proto tcp from <popflooders> to port pop3 flags S/SA keep state \
(max-src-conn 2, max-src-conn-rate 3/3, overload <popflooders> flush global, pflow) set queue spamd
pass in log (all) on egress proto tcp to port pop3 flags S/SA keep state \
(max-src-conn 5, max-src-conn-rate 6/3, overload <popflooders> flush global, pflow)
The last one lets anybody connect to the pop3 service, but any one source address can have only five simultaneous connections open, and at a rate of six over three seconds. The results were immediately visible. Monitoring the queues using pfctl -vvsq shows the tiny queue works as expected:
queue spamd parent rootq bandwidth 1K, max 1K qlimit 300
[ pkts: 196136 bytes: 12157940 dropped pkts: 398350 bytes: 24692564 ]
[ qlength: 300/300 ]
[ measured: 2.0 packets/s, 999.13 b/s ]
and looking at the pop3 daemon's log entries, a typical encounter looks like this:
Apr 19 22:39:33 skapet spop3d[44875]: connect from 111.181.52.216
Apr 19 22:39:33 skapet spop3d[75112]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[57116]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[65982]: connect from 111.181.52.216
Apr 19 22:39:34 skapet spop3d[58964]: connect from 111.181.52.216
Apr 19 22:40:34 skapet spop3d[12410]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[63573]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[76113]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[23524]: autologout time elapsed - 111.181.52.216
Apr 19 22:40:34 skapet spop3d[16916]: autologout time elapsed - 111.181.52.216
Here the miscreant comes in way too fast and only manages to get five connections going before they're shunted to the tiny queue to fight it out with known spammers for a share of bandwidth. One important takeaway from this, and possibly the most important point of this article, is that it does not take a lot of imagination to retool this setup to watch for and protect against undesirable activity directed at essentially any network service. You pick the service and the ports it uses, then figure out which parameters determine what is acceptable behavior. Once you have those parameters defined, you can choose to assign offenders to a minimal queue like in this example, block them outright, redirect them to something unpleasant or even pass their traffic with a low probability. 64-bit inodes (ino64) Status Update and Call for Testing (https://lists.freebsd.org/pipermail/freebsd-fs/2017-April/024684.html) Inodes are data structures corresponding to objects in a file system, such as files and directories. FreeBSD has historically used 32-bit values to identify inodes, which limits file systems to somewhat under 2^32 objects. Many modern file systems internally use 64-bit identifiers and FreeBSD needs to follow suit to properly and fully support these file systems. The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb@).
Since then, several people have had a hand in updating it and addressing regressions; mckusick@ picked up and updated the patch, and acted as a flag-waver. Overview : The ino64 branch extends the basic system types ino_t and dev_t from 32-bit to 64-bit, and nlink_t from 16-bit to 64-bit. Motivation : The main risk of the ino64 change is uncontrolled ABI breakage. Quirks : We handled kinfo sysctl MIBs, but other MIBs which report structures that depend on the changed types are not handled in general. It was considered that the breakage is either in the management interfaces, where we usually allow ABI slip, or is not important. Testing procedure : The ino64 project can be tested by cloning the project branch from GitHub or by applying the patch to a working tree. New kernel, old world. New kernel, new world, old third-party applications. 32bit compat. Targeted tests. NFS server and client test Other filesystems Test accounting Ports Status with ino64 : A ports exp-run for ino64 is open in PR 218320. 5.1. LLVM : LLVM includes a component called Address Sanitizer or ASAN, which tries to intercept syscalls, and contains knowledge of the layout of many system structures. Since stat and lstat syscalls were removed and several types and structures changed, this has to be reflected in the ASAN hacks. 5.2. lang/ghc : The ghc compiler and parts of the runtime are written in Haskell, which means that to compile ghc, you need a working Haskell compiler for bootstrap. 5.3. lang/rust : Rustc has a similar structure to GHC, and the same issue. The same solution of patching the bootstrap was done. Next Steps : The tentative schedule for the ino64 project: 2017-04-20 Post wide call for testing : Investigate and address port failures with maintainer support 2017-05-05 Request second exp-run with initial patches applied : Investigate and address port failures with maintainer support 2017-05-19 Commit to HEAD : Address post-commit failures where feasible *** News Roundup Sing, beastie, sing! (http://meka.rs/blog/2017/01/25/sing-beastie-sing/) A FreeBSD digital audio workstation, or DAW for short, is now possible. At this very moment it's not that user friendly, but you'll manage. What I want to say is that I worked on porting some of the audio apps to FreeBSD, met some other people interested in porting audio stuff and became heavily involved with DrumGizmo, a drum sampling engine. Let me start with the basic setup. FreeBSD doesn't have hard real-time support, but it's pretty close. For the needs of audio, FreeBSD's implementation of real-time is sufficient and, in my opinion, superior to the one you can get on Linux with the RT patch (which is ugly, not supported by distributions and breaks apps like VirtualBox). As the default install of FreeBSD is not that concerned with real-time, we have to tweak sysctl a bit, so append this to your /etc/sysctl.conf: kern.timecounter.alloweddeviation=0 hw.usb.uaudio.buffer_ms=2 # only on -STABLE for now hw.snd.latency=0 kern.coredump=0 So let me go through the list. The first item tells FreeBSD how many events it can aggregate (or wait for) before emitting them. The reason this is the default is that aggregating events saves a bit of power, and currently more laptops are running FreeBSD than DAWs. The second one is the lowest possible buffer for the USB audio driver. If you're not using USB audio, this won't change a thing. The third one has nothing to do with real-time, but when dealing with programs that consume ~3GB of RAM, dumping cores around caused problems on my machine.
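Before committing those entries to /etc/sysctl.conf it can be worth checking what the system is currently running with; reading the OIDs is harmless. A small sketch, with the names taken from the list above (the values you see will differ per release):

# inspect the current values before changing anything
sysctl kern.timecounter.alloweddeviation hw.snd.latency kern.coredump
# lines appended to /etc/sysctl.conf take effect at the next boot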
Besides, core dumps are only useful if you know how to debug the problem, or someone is willing to do that for you. I like to not generate those files by default, but if some app is constantly crashing, I enable dumps, run the app, crash it, and disable dumps again. I lost 30GB in under a minute by examining 10 different drumkits of DrumGizmo and all of them gave me a 3GB core file, each. More setup instructions follow, including jackd setup and PulseAudio using virtual_oss. With this setup I can play OSS, JACK and PulseAudio sound all at the same time, which I was not able to do on Linux. FreeBSD 11 Unbound DNS server (https://itso.dk/?p=499) In FreeBSD, there is a built-in DNS server called Unbound. So why would you run a local DNS server? I am in a region where internet traffic is still a bit expensive, which also implies slow connections and high response times. To speed that up a little, you can use your own DNS server. It will speed things up because for every homepage you visit, there will be several hooks to other domains: commercials, site components, and links to other sites. These will now all be cached locally on your new DNS server. In my case I use an old PC Engines Alix board for my home DNS server, but you can use almost anything: a Raspberry Pi, an old laptop/desktop, and so on. As long as it runs FreeBSD. Goes into more detail about what commands to run and which services to start. Try it out if you are in a similar situation *** Why it is important that documentation and tutorials be correct and carefully reviewed (https://arxiv.org/pdf/1704.02786.pdf) A group of researchers found that a lot of online web programming tutorials contain serious security flaws. They decided to do a research project to see how this impacts software that is written possibly based on those tutorials. They used a number of simple Google search terms to make a list of tutorials, and manually audited them for common vulnerabilities. They then crawled GitHub to find projects with very similar code snippets that might have been taken from those tutorials. The Web is replete with tutorial-style content on how to accomplish programming tasks. Unfortunately, even top-ranked tutorials suffer from severe security vulnerabilities, such as cross-site scripting (XSS), and SQL injection (SQLi). Assuming that these tutorials influence real-world software development, we hypothesize that code snippets from popular tutorials can be used to bootstrap vulnerability discovery at scale. To validate our hypothesis, we propose a semi-automated approach to find recurring vulnerabilities starting from a handful of top-ranked tutorials that contain vulnerable code snippets. We evaluate our approach by performing an analysis of tens of thousands of open-source web applications to check if vulnerabilities originating in the selected tutorials recur. Our analysis framework has been running on a standard PC, analyzed 64,415 PHP codebases hosted on GitHub thus far, and found a total of 117 vulnerabilities that have a strong syntactic similarity to vulnerable code snippets present in popular tutorials. In addition to shedding light on the anecdotal belief that programmers reuse web tutorial code in an ad hoc manner, our study finds disconcerting evidence of insufficiently reviewed tutorials compromising the security of open-source projects.
Moreover, our findings testify to the feasibility of large-scale vulnerability discovery using poorly written tutorials as a starting point. The researchers found 117 vulnerabilities; of these, at least 8 appear to be nearly exact copy/pastes of the tutorials that were found to be vulnerable. *** 1.3.0 Development Preview: New icon themes (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/) As version 1.3.0 of the Lumina desktop starts getting closer to release, I want to take a couple weeks and give you all some sneak peeks at some of the changes/updates that we have been working on (and are in the process of finishing up). New icon theme (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/) Material Design Light/Dark There are a lot more icons available in the reference icon packs which we still have not gotten around to renaming yet, but this initial version satisfies all the XDG standards for an icon theme + all the extra icons needed for Lumina and its utilities + a large number of additional icons for application use. This highlights one of the big things that I love about Lumina: it gives you an interface that is custom-tailored to YOUR needs/wants – rather than expecting YOU to change your routines to accommodate how some random developer/designer across the world thinks everybody should use a computer. Lumina Media Player (https://lumina-desktop.org/1-3-0-development-preview-lumina-mediaplayer/) This is a small utility designed to provide the ability for the user to play audio and video files on the local system, as well as stream audio from online sources. For now, only the Pandora internet radio service is supported via the “pianobar” CLI utility, which is an optional runtime dependency. However, we hope to gradually add new streaming sources over time. For a long time I had been using another Pandora streaming client on my TrueOS desktop, but it was very fragile with respect to underlying changes: LibreSSL versions for example. The player would regularly stop functioning for a few update cycles until a version of LibreSSL which was “compatible” with the player was used. After enduring this for some time, I was finally frustrated enough to start looking for alternatives. A co-worker pointed me to a command-line utility called “pianobar“, which was also a small client for Pandora radio. After using pianobar for a couple weeks, I was impressed with how stable it was and how little “overhead” it required with regards to extra runtime dependencies. Of course, I started thinking “I could write a Qt5 GUI for that!”. Once I had a few free hours, I started writing what became lumina-mediaplayer. I started with the interface to pianobar itself to see how complicated it would be to interact with, but after a couple days of tinkering in my spare time, I realized I had a full client to Pandora radio basically finished.
Beastie Bits vBSDCon CFP closes April 29th (https://easychair.org/conferences/?conf=vbsdcon2017) EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/) clang(1) added to base on amd64 and i386 (http://undeadly.org/cgi?action=article&sid=20170421001933) Theo: “Most things come to an end, sorry.” (https://marc.info/?l=openbsd-misc&m=149232307018311&w=2) ASLR, PIE, NX, and other capital letters (https://www.dragonflydigest.com/2017/04/24/19609.html) How SSH got port number 22 (https://www.ssh.com/ssh/port) Netflix Serving 90Gb/s+ From Single Machines Using Tuned FreeBSD (https://news.ycombinator.com/item?id=14128637) Compressed zfs send / receive lands in FreeBSD HEAD (https://svnweb.freebsd.org/base?view=revision&revision=317414) *** Feedback/Questions Steve - FreeBSD Jobs (http://dpaste.com/3QSMYEH#wrap) Mike - CuBox i4Pro (http://dpaste.com/0NNYH22#wrap) Steve - Year of the BSD Desktop? (http://dpaste.com/1QRZBPD#wrap) Brad - Configuration Management (http://dpaste.com/2TFV8AJ#wrap) ***
190: The Moore You Know
This week, we look forward with the latest OpenBSD release, look back with Dennis Ritchie’s paper on the evolution of Unix Time Sharing, and have an interview with Kris. This episode was brought to you by Headlines OpenBSD 6.1 RELEASED (http://undeadly.org/cgi?action=article&sid=20170411132956) Mailing list post (https://marc.info/?l=openbsd-announce&m=149191716921690&w=2) We are pleased to announce the official release of OpenBSD 6.1. This is our 42nd release. New/extended platforms: New arm64 platform, using clang(1) as the base system compiler. The loongson platform now supports systems with Loongson 3A CPU and RS780E chipset. The following platforms were retired: armish, sparc, zaurus New vmm(4)/ vmd(8) IEEE 802.11 wireless stack improvements Generic network stack improvements Installer improvements Routing daemons and other userland network improvements Security improvements dhclient(8)/ dhcpd(8)/ dhcrelay(8) improvements Assorted improvements OpenSMTPD 6.0.0 OpenSSH 7.4 LibreSSL 2.5.3 mandoc 1.14.1 *** Fuzz Testing OpenSSH (http://vegardno.blogspot.ca/2017/03/fuzzing-openssh-daemon-using-afl.html) Vegard Nossum writes a blog post explaining how to fuzz OpenSSH using AFL It starts by compiling AFL and SSH with LLVM to get extra instrumentation to make the fuzzing process better, and faster Sandboxing, PIE, and other features are disabled to increase debuggability, and to try to make breaking SSH easier Privsep is also disabled, because when AFL does make SSH crash, the child process crashing causes the parent process to exit normally, and AFL then doesn’t realize that a crash has happened. A one-line patch disables the privsep feature for the purposes of testing A few other features are disabled to make testing easier (disabling replay attack protection allows the same inputs to be reused many times), and faster: the local arc4random_buf() is patched to return a buffer of zeros disabling CRC checks disabling MAC checks disabling encryption (allow the NULL cipher for everything) add a call to __AFL_INIT(), to enable “deferred forkserver mode” disabling closefrom() “Skipping expensive DH/curve and key derivation operations” Then, you can finally get around to writing some test cases The steps are all described in detail In one day of testing, the author found a few NULL dereferences that have since been fixed. Maybe you can think of some other code paths through SSH that should be tested, or want to test another daemon *** Getting OpenBSD running on Raspberry Pi 3 (http://undeadly.org/cgi?action=article&sid=20170409123528) Ian Darwin writes in about his work deploying the arm64 platform and the Raspberry Pi 3 So I have this empty white birdhouse-like thing in the yard, open at the front. It was intended to house the wireless remote temperature sensor from a low-cost weather station, which had previously been mounted on a dark-colored wall of the house [...]. But when I put the sensor into the birdhouse, the signal is too weak for the weather station to receive it (the mounting post was put in place by a previous owner of our property, and is set deeply in concrete). So the next plan was to pop in a tiny OpenBSD computer with a uthum(4) temperature sensor and stream the temperature over WiFi. The Raspberry Pi computers are interesting in their own way: intending to bring low-cost computing to everybody, they take shortcuts and omit things that you'd expect on a laptop or desktop.
They aren't too bright on their own: there's very little smarts in the board compared to the "BIOS" and later firmwares on conventional systems. Some of the "smarts" are only available as binary files. This was part of the reason that our favorite OS never came to the Pi Party for the original rpi, and didn't quite arrive for the rpi2. With the rpi3, though, there is enough availability that our devs were able to make it boot. Some limitations remain, though: if you want to build your own full release, you have to install the dedicated raspberrypi-firmware package from the ports tree. And, the boot disks have to have several extra files on them - this is set up on the install sets, but you should be careful not to mess with these extra files until you know what you're doing! But wait! Before you read on, please note that, as of April 1, 2017, this platform boots up but is not yet ready for prime time: there's no driver for SD/MMC but that's the only thing the hardware can level-0 boot from, so you need both the uSD card and a USB disk, at least while getting started; there is no support for the built-in WiFi (a Broadcom BCM43438 SDIO 802.11), so you have to use wired Ethernet or a USB WiFi dongle (for my project an old MSI that shows up as ural(4) seems to work fine); the HDMI driver isn't used by the kernel (if a monitor is plugged in uBoot will display its messages there), so you need to set up cu with a 3V serial cable, at least for initial setup. the ports tree isn't ready to cope with the base compiler being clang yet, so packages are "a thing of the future" But wait - there's more! The "USB disk" can be a USB thumb drive, though they're generally slower than a "real" disk. My first forays used a Kingston DTSE9, the hardy little steel-cased version of the popular DataTraveler line. I was able to do the install, and boot it, once (when I captured the dmesg output shown below). After that, it failed - the boot process hung with the ever-unpopular "scanning usb for storage devices..." message. I tried the whole thing again with a second DTSE9, and with a 32GB plastic-cased DataTraveler. Same results. After considerable wasted time, I found a post on RPI's own site which dates back to the early days of the PI 3, in which they admit that they took shortcuts in developing the firmware, and it just can't be made to work with the Kingston DataTraveler! Not having any of the "approved" devices, and not living around the corner from a computer store, I switched to a Sabrent USB dock with a 320GB Western Digital disk, and it's been rock solid. Too big and energy-hungry for the final project, but enough to show that the rpi3 can be solid with the right (solid-state) disk. And fast enough to build a few simple ports - though a lot will not build yet. I then found and installed OpenBSD onto a “PNY” brand thumb drive and found it solid - in fact I populated it by dd’ing from one of the DataTraveller drives, so they’re not at fault. Check out the full article for detailed setup instructions *** Dennis M. Ritchie’s Paper: The Evolution of the Unix Time Sharing System (http://www.read.seas.harvard.edu/~kohler/class/aosref/ritchie84evolution.pdf) From the abstract: This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system. 
During the past few years, the Unix operating system has come into wide use, so wide that its very name has become a trademark of Bell Laboratories. Its important characteristics have become known to many people. It has suffered much rewriting and tinkering since the first publication describing it in 1974 [1], but few fundamental changes. However, Unix was born in 1969 not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical and social history of the evolution of the system. High level document structure: Origins The PDP-7 Unix file system Process control IO Redirection The advent of the PDP-11 The first PDP-11 system Pipes High-level languages Conclusion One of the comforting things about old memories is their tendency to take on a rosy glow. The programming environment provided by the early versions of Unix seems, when described here, to be extremely harsh and primitive. I am sure that if forced back to the PDP-7 I would find it intolerably limiting and lacking in conveniences. Nevertheless, it did not seem so at the time; the memory fixes on what was good and what lasted, and on the joy of helping to create the improvements that made life better. In ten years, I hope we can look back with the same mixed impression of progress combined with continuity. Interview - Kris Moore - kris@trueos.org (mailto:kris@trueos.org) | @pcbsdkris (https://twitter.com/pcbsdkris) Director of Engineering at iXSystems FreeNAS News Roundup Compressed zfs send / receive now in FreeBSD’s vendor area (https://svnweb.freebsd.org/base?view=revision&revision=316894) Andriy Gapon committed a whole lot of ZFS updates to FreeBSD’s vendor area This feature takes advantage of the new compressed ARC feature (blocks that are compressed on disk remain compressed in ZFS’ RAM cache) to use the already compressed blocks when doing ZFS replication. Previously, blocks were uncompressed, sent (usually over the network), then recompressed on the other side. This is rather wasteful, and can make the process slower, not just because of the CPU time wasted decompressing/recompressing the data, but because it means more data has to be sent over the network. This caused many users to end up doing: zfs send | xz -T0 | ssh host 'unxz | zfs recv', or similar, to compress the data before sending it over the network. With this new feature, zfs send with the new -c flag will transmit the already compressed blocks instead. This change also adds longopts versions of all of the zfs send flags, making them easier to understand when written in shell scripts. A lot of fixes, man page updates, etc. from upstream OpenZFS Thanks to everyone who worked on these fixes and features! We’ll announce when these have been committed to head for testing *** Granting privileges using the FreeBSD MAC framework (https://mysteriouscode.io/blog/granting-privileges-using-mac-framework/) The MAC (Mandatory Access Control) framework allows finer grained permissions than the standard UNIX permissions that exist in the base system FreeBSD’s kernel provides a quite sophisticated privilege model that extends the traditional UNIX user-and-group one. Here I’ll show how to leverage it to grant access to specific privileges to a group of non-root users. mac(9) allows creating pluggable modules with policies that can extend existing base system security definitions. struct mac_policy_ops consists of many entry points that we can use to amend the behaviour.
This time, I wanted to grant a privilege to change realtime priority to a selected group. While the Linux kernel lets you specify a named group, FreeBSD doesn’t have such an ability, hence I created this very simple policy. The privilege check can be extended using two user supplied functions: priv_check and priv_grant. The first one can be used to further restrict existing privileges, i.e. you can disallow some specific priv to be used in jails, etc. The second one is used to explicitly grant extra privileges not available for the target in the base configuration. The core of the mac_rtprio module is dead simple. I defined a sysctl tree for two OIDs: enable (on/off switch for the policy) and gid (the GID the target has to be a member of), then I specified our custom version of mpo_priv_grant called rtprio_priv_grant. The body of my granting function is even simpler. If the policy is disabled or the privilege that is being checked is not PRIV_SCHED_RTPRIO, we simply skip and return EPERM. If the user is a member of the designated group we return 0, which will allow the action – the target can change realtime priority. Another useful thing the MAC framework can be used to grant to non-root users: PortACL: The ability to bind to TCP/UDP ports less than 1024, which is usually restricted to root. Some other uses for the MAC framework are discussed in The FreeBSD Handbook (https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/mac.html) However, there are lots more, and we would really like to see more tutorials and documentation on using MAC to make more secure servers, but allowing the few specific things that normally require root access. *** The Story of the PING Program (http://ftp.arl.army.mil/~mike/ping.html) This is from the homepage of Mike Muuss: Yes, it's true! I'm the author of ping for UNIX. Ping is a little thousand-line hack that I wrote in an evening which practically everyone seems to know about. :-) I named it after the sound that a sonar makes, inspired by the whole principle of echo-location. In college I'd done a lot of modeling of sonar and radar systems, so the "Cyberspace" analogy seemed very apt. It's exactly the same paradigm applied to a new problem domain: ping uses timed IP/ICMP ECHO_REQUEST and ECHO_REPLY packets to probe the "distance" to the target machine. My original impetus for writing PING for 4.2a BSD UNIX came from an offhand remark in July 1983 by Dr. Dave Mills while we were attending a DARPA meeting in Norway, in which he described some work that he had done on his "Fuzzball" LSI-11 systems to measure path latency using timed ICMP Echo packets. In December of 1983 I encountered some odd behavior of the IP network at BRL. Recalling Dr. Mills' comments, I quickly coded up the PING program, which revolved around opening an ICMP style SOCK_RAW AF_INET Berkeley-style socket(). The code compiled just fine, but it didn't work -- there was no kernel support for raw ICMP sockets! Incensed, I coded up the kernel support and had everything working well before sunrise. Not surprisingly, Chuck Kennedy (aka "Kermit") had found and fixed the network hardware before I was able to launch my very first "ping" packet. But I've used it a few times since then. grin If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options. The folks at Berkeley eagerly took back my kernel modifications and the PING source code, and it's been a standard part of Berkeley UNIX ever since.
Since it's free, it has been ported to many systems since then, including Microsoft Windows95 and WindowsNT. In 1993, ten years after I wrote PING, the USENIX association presented me with a handsome scroll, pronouncing me a Joint recipient of The USENIX Association 1993 Lifetime Achievement Award presented to the Computer Systems Research Group, University of California at Berkeley 1979-1993. ``Presented to honor profound intellectual achievement and unparalleled service to our Community. At the behest of CSRG principals we hereby recognize the following individuals and organizations as CSRG participants, contributors and supporters.'' Wow! The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing: ping goodhost | sed -e 's/.*/ping/' | vocoder He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second, and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure. FreeBSD: /usr/local/lib/libpkg.so.3: Undefined symbol "utimensat" (http://glasz.org/sheeplog/2017/02/freebsd-usrlocalliblibpkgso3-undefined-symbol-utimensat.html) The internet will tell you that, of course, 10.2 is EOL, that packages are being built for 10.3 by now and to better upgrade to the latest version of FreeBSD. While all of this is true and running the latest versions is generally good advise, in most cases it is unfeasible to do an entire OS upgrade just to be able to install a package. Points out the ABI variable being used in /usr/local/etc/pkg/repos/FreeBSD.conf Now, if you have 10.2 installed and 10.3 is the current latest FreeBSD version, this url will point to packages built for 10.3 resulting in the problem that, when running pkg upgrade pkg it’ll go ahead and install the latest version of pkg build for 10.3 onto your 10.2 system. Yikes! FreeBSD 10.3 and pkgng broke the ABI by introducing new symbols, like utimensat. The solution: Have a look at the actual repo url http://pkg.FreeBSD.org/FreeBSD:10:amd64… there’s repo’s for each release! Instead of going through the tedious process of upgrading FreeBSD you just need to Use a repo url that fits your FreeBSD release: Update the package cache: pkg update Downgrade pkgng (in case you accidentally upgraded it already): pkg delete -f pkg pkg install -y pkg Install your package There you go. Don’t fret. But upgrade your OS soon ;) Beastie Bits CPU temperature collectd report on NetBSD (https://imil.net/blog/2017/01/22/collectd_NetBSD_temperature/) Booting FreeBSD 11 with NVMe and ZFS on AMD Ryzen (https://www.servethehome.com/booting-freebsd-11-nvme-zfs-amd-ryzen/) BeagleBone Black Tor relay (https://torbsd.github.io/blog.html#busy-bbb) FreeBSD - Disable in-tree GDB by default on x86, mips, and powerpc (https://reviews.freebsd.org/rS317094) CharmBUG April Meetup (https://www.meetup.com/CharmBUG/events/238218742/) The origins of XXX as FIXME (https://www.snellman.net/blog/archive/2017-04-17-xxx-fixme/) *** Feedback/Questions Felis - L2ARC (http://dpaste.com/2APJE4E#wrap) Gabe - FreeBSD Server Install (http://dpaste.com/0BRJJ73#wrap) FEMP Script (http://dpaste.com/05EYNJ4#wrap) Scott - FreeNAS & LAGG (http://dpaste.com/1CV323G#wrap) Marko - Backups (http://dpaste.com/3486VQZ#wrap) ***
189: Codified Summer
This week on the show we interview Wendell from Level1Techs, cover Google Summer of Code on the different BSD projects, cover YubiKey usage, dive into how NICs work & This episode was brought to you by Headlines Google summer of code for BSDs FreeBSD (https://www.freebsd.org/projects/summerofcode.html) FreeBSD's existing list of GSoC Ideas for potential students (https://wiki.freebsd.org/SummerOfCodeIdeas) FreeBSD/Xen: import the grant-table bus_dma(9) handlers from OpenBSD Add support for usbdump file-format to wireshark and vusb-analyzer Write a new boot environment manager Basic smoke test of all base utilities Port OpenBSD's pf testing framework and tests Userspace Address Space Annotation zstandard integration in libstand Replace mergesort implementation Test Kload (kexec for FreeBSD) Kernel fuzzing suite Integrate MFSBSD into the release building tools NVMe controller emulation for bhyve Verification of bhyve's instruction emulation VGA emulation improvements for bhyve audit framework test suite Add more FreeBSD testing to Xen osstest Lua in bootloader POSIX compliance testing framework coreclr: add Microsoft's coreclr and corefx to the Ports tree. NetBSD (https://wiki.netbsd.org/projects/gsoc/) Kernel-level projects Medium ISDN NT support and Asterisk integration LED/LCD Generic API NetBSD/azure -- Bringing NetBSD to Microsoft Azure OpenCrypto swcrypto(4) enhancements Scalable entropy gathering Userland PCI drivers Hard Real asynchronous I/O Parallelize page queues Tickless NetBSD with high-resolution timers Userland projects Easy Inetd enhancements -- Add new features to inetd Curses library automated testing Medium Make Anita support additional virtual machine systems Create an SQL backend and statistics/query page for ATF test results Light weight precision user level time reading Query optimizer for find(1) Port launchd Secure-PLT - supporting RELRO binaries Sysinst alternative interface Hard Verification tool for NetBSD32 pkgsrc projects Easy Version control config files Spawn support in pkgsrc tools Authentication server meta-package Medium pkgin improvements Unify standard installation tasks Hard Add dependency information to binary packages Tool to find dependencies precisely LLVM (http://llvm.org/OpenProjects.html#gsoc17) Fuzzing the Bitcode reader Description of the project: The optimizer is 25-30% slower when debug info are enabled, it'd be nice to track all the places where we don't do a good job about ignoring them! Extend clang AST to provide information for the type as written in template instantiations. Description of the project: When instantiating a template, the template arguments are canonicalized before being substituted into the template pattern. Clang does not preserve type sugar when subsequently accessing members of the instantiation. Clang should "re-sugar" the type when performing member access on a class template specialization, based on the type sugar of the accessed specialization. Shell auto-completion support for clang. Bash and other shells support typing a partial command and then automatically completing it for the user (or at least providing suggestions how to complete) when pressing the tab key. This is usually only supported for popular programs such as package managers (e.g. pressing tab after typing "apt-get install late" queries the APT package database and lists all packages that start with "late"). As of now clang's frontend isn't supported by any common shell. Clang-based C/C++ diff tool. 
Description of the project: Every developer has to interact with diff tools daily. The algorithms are usually based on detecting "longest common subsequences", which is agnostic to the file type content. A tool that would understand the structure of the code may provide a better diff experience by being robust against, for example, clang-format changes. Find dereference of pointers. Description of the project: Find dereference of pointer before checking for nullptr. Warn if virtual calls are made from constructors or destructors. Description of the project: Implement a path-sensitive checker that warns if virtual calls are made from constructors and destructors, which is not valid in case of pure virtual calls and could be a sign of user error in non-pure calls. Improve Code Layout Description of the project: The goal for the project is trying to improve the layout/performances of the generated executable. The primary object format considered for the project is ELF but this can be extended to other object formats. The project will touch both LLVM and lld. Why Isn’t OpenBSD in Google Summer of Code 2017? (http://marc.info/?l=openbsd-misc&m=149119308705465&w=2) Hacker News Discussion Thread (https://news.ycombinator.com/item?id=14020814) Turtles on the Wire: Understanding How the OS Uses the Modern NIC (http://dtrace.org/blogs/rm/2016/09/15/turtles-on-the-wire-understanding-how-the-os-uses-the-modern-nic/) The Simple NIC MAC Address Filters and Promiscuous Mode Problem: The Single Busy CPU A Swing and a Miss Nine Rings for Packets Doomed to be Hashed Problem: Density, Density, Density A Brief Aside: The Virtual NIC Always Promiscuous? The Classification Challenge Problem: CPUs are too ‘slow’ Problem: The Interrupts are Coming in too Hot Solution One: Do Less Work Solution Two: Turn Off Interrupts Recapping Future Directions and More Reading Make Dragonfly BSD great again! (http://akat1.pl/?id=3) Recently I spent some time reading Dragonfly BSD code. While doing so I spotted a vulnerability in the sysvsem subsystem that let user to point to any piece of memory and write data through it (including the kernel space). This can be turned into execution of arbitrary code in the kernel context and by exploiting this, we're gonna make Dragonfly BSD great again! Dragonfly BSD is a BSD system which originally comes from the FreeBSD project. In 2003 Matthew Dillon forked code from the 4.x branch of the FreeBSD and started a new flavour. I thought of Dragonfly BSD as just another fork, but during EuroBSDCon 2015 I accidentally saw the talk about graphical stack in the Dragonfly BSD. I confused rooms, but it was too late to escape as I was sitting in the middle of a row, and the exit seemed light years away from me. :-) Anyway, this talk was a sign to me that it's not just a niche of a niche of a niche of a niche operating system. I recommend spending a few minutes of your precious time to check out the HAMMER file system, Dragonfly's approach to MP, process snapshots and other cool features that it offers. Wikipedia article is a good starter With the exploit, they are able to change the name of the operating system back to FreeBSD, and escalate from an unprivileged user to root. The Bug itself is located in the semctl(2) system call implementation. bcopy(3) in line 385 copies semid_ds structure to memory pointed by arg->buf, this pointer is fully controlled by the user, as it's one of the syscall's arguments. 
So the bad thing here is that we can copy things to arbitrary address, but we have not idea what we copy yet. This code was introduced by wrongly merging code from the FreeBSD project, bah, bug happens. Using this access, the example code shows how to overwrite the function pointers in the kernel used for the open() syscall, and how to overwrite the ostype global, changing the name of the operating system. In the second example, the reference to the credentials of the user trying to open a file are used to overwrite that data, making the user root. The bug was fixed in uber fast manner (within few hours!) by Matthew Dillon, version 4.6.1 released shortly after that seems to be safe. In case you care, you know what to do! Thanks to Mateusz Kocielski for the detailed post, and finding the bug *** Interview - Wendell - wendell@level1techs.com (mailto:wendell@level1techs.com) / @tekwendell (https://twitter.com/tekwendell) Host of Level1Techs website, podcast and YouTube channel News Roundup Using yubikeys everywhere (http://www.tedunangst.com/flak/post/using-yubikeys-everywhere) Ted Unangst is back, with an interesting post about YUBI Keys Everybody is getting real excited about yubikeys recently, so I figured I should get excited, too. I have so far resisted two factor authorizing everything, but this seemed like another fun experiment. There’s a lot written about yubikeys and how you should use one, but nothing I’ve read answered a few of the specific questions I had To begin with, I ordered two yubikeys. One regular sized 4 and one nano. I wanted to play with different form factors to see which is better for various uses, and I wanted to test having a key and a backup key. Everybody always talks about having one yubikey. And then if you lose it, terrible things happen. Can this problem be alleviated with two keys? I’m also very curious what happens when I try to login to a service with my phone after enabling U2F. We’ve got three computers (and operating systems) in the mix, along with a number of (mostly web) services. Wherever possible, I want to use a yubikey both to login to the computer and to authorize myself to remote services. I started my adventure on my chromebook. Ultimate goal would be to use the yubikey for local logins. Either as a second factor, or as an alternative factor. First things first and we need to get the yubikey into the account I use to sign into the chromebook. Alas, there is apparently no way to enroll only a security key for a Google account. Every time I tried, it would ask me for my phone number. That is not what I want. Zero stars. Giving up on protecting the chromebook itself, at least maybe I can use it to enable U2F with some other sites. U2F is currently limited to Chrome, but it sounds like everything I want. Facebook signup using U2F was pretty easy. Go to account settings, security subheading, add the device. Tap the button when it glows. Key added. Note that it’s possible to add a key without actually enabling two factor auth, in which case you can still login with only a password, but no way to login with no password and only a USB key. Logged out to confirm it would check the key, and everything looked good, so I killed all my other active sessions. Now for the phone test. Not quite as smooth. Tried to login, the Facebook app then tells me it has sent me an SMS and to enter the code in the box. But I don’t have a phone number attached. I’m not getting an SMS code. Meanwhile, on my laptop, I have a new notification about a login attempt. 
Follow the prompts to confirm it’s me and permit the login. This doesn’t have any effect on the phone, however. I have to tap back, return to the login screen, and enter my password again. This time the login succeeds. So everything works, but there are still some rough patches in the flow. Ideally, the phone would more accurately tell me to visit the desktop site, and then automatically proceed after I approve. (The messenger app crashed after telling me my session had expired, but upon restarting it was able to borrow the Facebook app credentials and I was immediately logged back in.) Let’s configure Dropbox next. Dropbox won’t let you add a security key to an account until after you’ve already set up some other mobile authenticator. I already had the Duo app on my phone, so I picked that, and after a short QR scan, I’m ready to add the yubikey. So the key works to access Dropbox via Chrome. Accessing Dropbox via my phone or Firefox requires entering a six digit code. No way to use a yubikey in a three legged configuration I don’t use Github, but I know they support two factors, so let’s try them next. Very similar to Dropbox. In order to set up a key, I must first set up an authenticator app. This time I went with Yubico’s own desktop authenticator. Instead of scanning the QR code, type in some giant number (on my Windows laptop), and it spits out an endless series of six digit numbers, but only while the yubikey is inserted. I guess this is kind of what I want, although a three pound yubikey is kind of unwieldy. As part of my experiment, I noticed that Dropbox verifies passwords before even looking at the second auth. I have a feeling that they should be checked at the same time. No sense allowing my password guessing attack to proceed while I plot how to steal someone’s yubikey. In a sense, the yubikey should serve as a salt, preventing me from mounting such an attack until I have it, thus creating a race where the victim notices the key is gone and revokes access before I learn the password. If I know the password, the instant I grab the key I get access. Along similar lines, I was able to complete a password reset without entering any kind of secondary code. Having my phone turn into a second factor is a big part of what I’m looking to avoid with the yubikey. I’d like to be able to take my phone with me, logged into some sites but not all, and unable to login to the rest. All these sites that require using my phone as mobile authenticator are making that difficult. I bought the yubikey because it was cheaper than buying another phone! Using the Yubico desktop authenticator seems the best way around that. The article also provides instructions for configuring the Yubikey on OpenBSD A few notes about OTP. As mentioned, the secret key is the real password. It’s stored on whatever laptop or server you login to. Meaning any of those machines can take the key and use it to login to any other machine. If you use the same yubikey to login to both your laptop and a remote server, your stolen laptop can trivially be used to login to the server without the key. Be mindful of that when setting up multiple machines. Also, the OTP counter isn’t synced between machines in this setup, which allows limited replay attacks. Ted didn’t switch his SSH keys to the Yubikey, because it doesn’t support ED25519, and he just finished rotating all of his keys and doesn’t want to do it again. I did most of my experimenting with the larger yubikey, since it was easier to move between machines. 
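For the OpenBSD login side of this, the heavy lifting is done by login_yubikey(8) together with a login class. A minimal sketch, assuming the per-user key files live under /var/db/yubikey as that man page describes; the class name, user name, and the variables holding the key material are all made up:

# store the key material from the personalization tool for user "ted"
echo "$YUBI_UID" > /var/db/yubikey/ted.uid   # public UID (hex), hypothetical variable
echo "$YUBI_KEY" > /var/db/yubikey/ted.key   # AES key (hex), hypothetical variable
# add a login class to /etc/login.conf that tries the yubikey auth style, e.g.:
#   yubiusers:\
#       :auth=yubikey,passwd:\
#       :tc=default:
cap_mkdb /etc/login.conf          # only needed if you use the login.conf database
usermod -L yubiusers ted          # put the user into the new class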
For operations involving logging into a web site, however, I’d prefer the nano. It’s very small, even smaller than the tiniest wireless mouse transceivers I’ve seen. So small, in fact, I had trouble removing it because I couldn’t find anything small enough to fit through the tiny loop. But probably a good thing. Most other micro USB gadgets stick out just enough to snag when pushing a laptop into a bag. Not the nano. You lose a port, but there’s really no reason to ever take it out. Just leave it in, and then tap it whenever you login to the tubes. It would not be a good choice for authenticating to the local machine, however. The larger device, sized to fit on a keychain, is much better for that. It is possible to use two keys as backups. Facebook and Dropbox allow adding two U2F keys. This is perhaps a little tiresome if there’s lots of sites, as I see no way to clone a key. You have to login to every service. For challenge response and OTP, however, the personalization tool makes it easy to generate lots of yubikeys with the same secrets. On the other hand, a single device supports an infinite number of U2F sites. The programmable interfaces like OTP are limited to only two slots, and the first is already used by the factory OTP setup. What happened to my vlan (http://www.grenadille.net/post/2017/02/13/What-happened-to-my-vlan) A long term goal of the effort I'm driving to unlock OpenBSD's Network Stack is obviously to increase performance. So I'd understand if you find it confusing when some of our changes introduce performance regressions. It is just really hard to do incremental changes without introducing temporary regressions. But as much as security is a process, improving performance is also a process. Recently markus@ told me that vlan(4) performance dropped in recent releases. He had some ideas why but he couldn't provide evidence. So what really happened? Hrvoje Popovski was kind enough to help me with some tests. He first confirmed that on his Xeon box (E5-2643 v2 @ 3.50GHz), forwarding performance without pf(4) dropped from 1.42Mpps to 880Kpps when using vlan(4) on both interfaces. Together vlan_input() and vlan_start() represent 25% of the time CPU1 spends processing packets. This is not exactly between 33% and 50% but it is close enough. The assumption we made earlier is certainly too simple. If we compare the amount of work done in process context, represented by if_input_process(), we clearly see that half of the CPU time is not spent in ether_input(). I'm not sure how this is related to the measured performance drop. It is actually hard to tell since packets are currently being processed in 3 different contexts. One of the arguments mikeb@ raised when we discussed moving everything into a single context is that it is simpler to analyse and hopefully make it scale. With some measurements, a couple of nice pictures, a bit of analysis and some educated guesses we are now in a position to say that the performance impact observed with vlan(4) is certainly due to the pseudo-driver itself. A decrease of 30% to 50% is not what I would expect from such a pseudo-driver. I originally heard that the reason for this regression was the use of SRP but by looking at the profiling data it seems to me that the queuing API is the problem. In the graph above the CPU time spent in if_input() and if_enqueue() from vlan(4) is impressive. Remember, in the case of vlan(4) these operations are done per packet!
When if_input() was introduced the queuing API did not exist and putting/taking a single packet on/from an interface queue was cheap. Now it requires a mutex per operation, which in the case of packets received and sent on vlan(4) means grabbing three mutexes per packet. I still can't say if my analysis is correct or not, but at least it could explain the decrease observed by Hrvoje when testing multiple vlan(4) configurations. vlan_input() takes one mutex per packet, so it decreases the number of forwarded packets by ~100Kpps on this machine, while vlan_start(), taking two mutexes, decreases it by ~200Kpps. An interesting analysis of the routing performance regression on OpenBSD I have asked Olivier Cochard-Labbe about doing a similar comparison of routing performance on FreeBSD when a vlan pseudo interface is added to the forwarding path *** NetBSD: the first BSD introducing a modern process plugin framework in LLDB (https://blog.netbsd.org/tnf/entry/netbsd_the_first_bsd_introducing) Clean up in ptrace(2) ATF tests We have created some maintenance burden for the current ptrace(2) regression tests. The main issues with them are code duplication and the splitting between generic (Machine Independent) and port-specific (Machine Dependent) test files. I've eliminated some of the ballast and merged tests into the appropriate directory tests/lib/libc/sys/. The old location (tests/kernel) was a violation of the tests/README recommendation PTRACE_FORK on !x86 ports Along with the motivation from Martin Husemann we have investigated the issue with the PTRACE_FORK ATF regression tests. It was discovered that these tests aren't functional on evbarm, alpha, shark, sparc and sparc64 and likely on other non-x86 ports. We have discovered that there is a missing SIGTRAP emitted from the child, during the fork(2) handshake. The proper order of operations is as follows: parent emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forkee child emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forker Only the x86 ports were emitting the second SIGTRAP signal. PT_SYSCALL and PT_SYSCALLEMU With the addition of PT_SYSCALLEMU we can implement a virtual kernel syscall monitor. It means that we can fake syscalls within a debugger. In order to achieve this feature, we need to use the PT_SYSCALL operation, catch SIGTRAP with si_code=TRAP_SCE (syscall entry), call PT_SYSCALLEMU and perform an emulated userspace syscall that would have been done by the kernel, followed by calling another PT_SYSCALL with si_code=TRAP_SCX. What has been done in LLDB A lot of work has been done with the goal of getting breakpoints functional. This target penetrated bugs in the existing local patches and unveiled missing features required to be added. My initial test was tracing a dummy hello-world application in C. I have sniffed the GDB Remote Protocol packets and compared them between Linux and NetBSD. This helped to streamline both versions and bring the NetBSD support to the required Linux level. Plan for the next milestone I've listed the following goals for the next milestone. watchpoints support floating point registers support enhance core(5) and make it work for multiple threads introduce PT_SETSTEP and PT_CLEARSTEP in ptrace(2) support threads in the NetBSD Process Plugin research F_GETPATH in fcntl(2) Beyond the next milestone is x86 32-bit support.
LibreSSL 2.5.2 released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.2-relnotes.txt) Added the recallocarray(3) memory allocation function, and converted various places in the library to use it, such as CBB and BUF_MEM_grow. recallocarray(3) is similar to reallocarray(3). Newly allocated memory is cleared similar to calloc(3). Memory that becomes unallocated while shrinking or moving existing allocations is explicitly discarded by unmapping or clearing to 0. Added new root CAs from SECOM Trust Systems / Security Communication of Japan. Added EVP interface for MD5+SHA1 hashes. Fixed DTLS client failures when the server sends a certificate request. Correct handling of padding when upgrading an SSLv2 challenge into an SSLv3/TLS connection. Allow protocols and ciphers to be set on a TLS config object in libtls. Improved nc(1) TLS handshake CPU usage and server-side error reporting. Beastie Bits HardenedBSD Stable v46.16 released (http://hardenedbsd.org/article/op/2017-03-30/stable-release-hardenedbsd-stable-11-stable-v4616) KnoxBUG looking for OpenBSD people in Knoxville TN area (https://www.reddit.com/r/openbsd/comments/5vggn7/knoxbug_looking_for_openbsd_people_in_knoxville/) KnoxBUG Tuesday, April 18, 2017 - 6:00pm : Caleb Cooper: Advanced BASH Scripting (http://knoxbug.org/2017-04-18) e2k17 Nano hackathon report from Bob Beck (http://undeadly.org/cgi?action=article&sid=20170405110059) Noah Chelliah, Host of the Linux Action Show, calls Linux a ‘Bad Science Project’ and ditches Linux for TrueOS (https://youtu.be/yXB85_olYhQ?t=3238) *** Feedback/Questions James - ZFS Mounting (http://dpaste.com/1H43JGV#wrap) Kevin - Virtualization (http://dpaste.com/18VNAJK#wrap) Ben - Jails (http://dpaste.com/0R7CRZ7#wrap) Florian - ZFS and Migrating Linux userlands (http://dpaste.com/2Z1P23T#wrap) q5sys - question for the community (http://dpaste.com/26M453F#wrap)
188: And then the murders began
Today on BSD Now, the latest Dragonfly BSD release, RaidZ performance, another OpenSSL Vulnerability, and more; all this week on BSD Now. This episode was brought to you by Headlines DragonFly BSD 4.8 is released (https://www.dragonflybsd.org/release48/) Improved kernel performance This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-cores or multi-socket systems, we have around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post with details. Support for eMMC booting, and mobile and high-performance PCIe SSDs This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available. EFI support The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice. DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system. Improved graphics support The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements. Other user-affecting changes Kernel is now built using -O2. VKernels now use COW, so multiple vkernels can share one disk image. powerd() is now sensitive to time and temperature changes. Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf. *** #8005 poor performance of 1MB writes on certain RAID-Z configurations (https://github.com/openzfs/openzfs/pull/321) Matt Ahrens posts a new patch for OpenZFS Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2 multiple of 3; and on RAIDZ3 multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB. To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 "pad sectors" at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write these sectors. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009. Writing the optional zio is done by the I/O aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is by default 128KB. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation.
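On FreeBSD that limit shows up as a tunable, so you can at least see the 128KB ceiling being discussed; a small sketch, with the caveat that the OID name is my assumption for how the OpenZFS zfs_vdev_aggregation_limit variable is exposed and may vary between releases:

# show the current aggregate-I/O ceiling in bytes; 131072 is the 128KB default mentioned above
sysctl vfs.zfs.vdev.aggregation_limit
# it can be raised via /boot/loader.conf, but the linked patch addresses the real problem
# by exempting the optional pad-sector zio's from this limit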
The solution is to aggregate optional zio's regardless of the aggregation size limit. As you can see from the graphs, this can make a large difference in performance. I encourage you to read the entire commit message; it is well written and very detailed. *** Can you spot the OpenSSL vulnerability (https://guidovranken.wordpress.com/2017/01/28/can-you-spot-the-vulnerability/) This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability? So there is a loop, and within that loop we have an ‘if’ statement that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn’t the address that was allocated; raw is incremented every loop. Hence, there is a remote invalid free vulnerability. But not quite. None of those checks in the ‘if’ statement can actually fail; earlier on in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail. So, does the code do what the original author thought, or expected it to do? But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail, and execute the bad code? Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312 (https://github.com/openssl/openssl/pull/2312) PS I’m not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody’s best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes. Thanks to Guido Vranken for the sharp eye and the blog post *** Research Debt (http://distill.pub/2017/research-debt/) I found this article interesting as it relates to not just research, but a lot of technical areas in general Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that’s gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It’s entirely possible to build paths and staircases into these mountains. The climb isn’t something to be proud of. The climb isn’t progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run. Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don’t appreciate how much better things could be. Undigested Ideas – Most ideas start off rough and hard to understand.
***

Research Debt (http://distill.pub/2017/research-debt/)

I found this article interesting as it relates not just to research, but to a lot of technical areas in general. Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that's gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It's entirely possible to build paths and staircases into these mountains. The climb isn't something to be proud of. The climb isn't progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run.

Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don't appreciate how much better things could be.

Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking.

Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop, even when they're bad. For example, an object with extra electrons is negative, and pi is wrong.

Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there's no easy way to filter or summarize them. We think noise is the main way experts experience research debt.

There's a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor. Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It's tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.

+ The distillation can often require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy-to-understand and easy-to-use platforms or tools.

Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it. Anyway, if that bit piqued your interest, go read the full article and the suggested further reading.

***

News Roundup

And then the murders began. (https://blather.michaelwlucas.com/archives/2902)

A whole bunch of people have pointed me at articles like this one (http://thehookmag.com/2017/03/adding-murders-began-second-sentence-book-makes-instantly-better-125462/), which claim that you can improve almost any book by making the second sentence "And then the murders began." It's entirely possible they're correct. But let's check, with a sampling of books. As different books come in different tenses and have different voices, I've made some minor changes.

"Welcome to Cisco Routers for the Desperate! And then the murders begin." — Cisco Routers for the Desperate, 2nd ed

"Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began." — SSH Mastery

"The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin's Batman-esque utility belt. And then the murders begin." — FreeBSD Mastery: Advanced ZFS

"Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin." — Absolute FreeBSD, 3rd Ed

Netdata now supports FreeBSD (https://github.com/firehol/netdata)

netdata is a system for distributed real-time performance and health monitoring.
It provides unparalleled insights, in real time, into everything happening on the system it runs on (including applications such as web and database servers), using modern interactive web dashboards. From the release notes: apps.plugin ported for FreeBSD. Check out their demo sites (https://github.com/firehol/netdata/wiki)

***

Distrowatch Weekly reviews RaspBSD (https://distrowatch.com/weekly.php?issue=20170220#raspbsd)

RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single-board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds additional components, such as the LXDE desktop and a few graphical applications. The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64, and BeagleBone Black & Green computers. The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user's shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory. I made note of a few practical differences between running RaspBSD on the Pi versus my usual Raspbian operating system. One minor difference is that RaspBSD turns off the Pi's external power light after booting, while Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity. Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian, and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious, and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather for its low cost and low energy usage. For people who are looking for a small home server or a very minimal desktop box, RaspBSD running on the Pi should be suitable.

Research UNIX V8, V9 and V10 made public by Alcatel-Lucent (https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%20regarding%20Unix%203-7-17.pdf)

Alcatel-Lucent USA Inc. ("ALU-USA"), on behalf of itself and Nokia Bell Laboratories agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix® Editions 8, 9, and 10. Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page (https://en.wikipedia.org/wiki/Research_Unix). It only took 30+ years, but now they're public. You can grab them from here (http://www.tuhs.org/Archive/Distributions/Research/). If you're wondering what happened with Research Unix: after Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9 (http://plan9.bell-labs.com/plan9/), which itself was succeeded by Inferno (http://www.vitanuova.com/inferno/).
***

Beastie Bits

The BSD Family Tree (https://github.com/freebsd/freebsd/blob/master/share/misc/bsd-family-tree)
Unix Permissions Calculator (http://permissions-calculator.org/)
NAS4Free release 11.0.0.4 now available (https://sourceforge.net/projects/nas4free/files/NAS4Free-11.0.0.4/11.0.0.4.4141/)
Another BSD Mag released for free downloads (https://bsdmag.org/download/simple-quorum-drive-freebsd-ctl-ha-beast-storage-system/)
OPNsense 17.1.4 released (https://forum.opnsense.org/index.php?topic=4898.msg19359)

***

Feedback/Questions

gozes asks via twitter about how to get involved in FreeBSD (https://twitter.com/gozes/status/846779901738991620)

***