Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. The show also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while still being entertaining for the people who are already pros. The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day.

Similar Podcasts

Elixir Outlaws

Elixir Outlaws is an informal discussion about interesting things happening in Elixir. Our goal is to capture the spirit of a conference hallway discussion in a podcast.

The Cynical Developer

A UK based Technology and Software Developer Podcast that helps you to improve your development knowledge and career, through explaining the latest and greatest in development technology and providing you with what you need to succeed as a developer.

Programming Throwdown

Programming Throwdown educates Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.

194: Daemonic plans

May 17, 2017 1:33:35 67.38 MB Downloads: 0

This week on BSD Now we cover the latest FreeBSD Status Report, a plan for Open Source software development, centrally managing bhyve with Ansible, libvirt, and pkg-ssh, and a whole lot more.

Headlines

FreeBSD Project Status Report (January to March 2017) (https://www.freebsd.org/news/status/report-2017-01-2017-03.html)

While a few of these projects indicate they are a "plan B" or an "attempt III", many are still hewing to their original plans, and all have produced impressive results. Please enjoy this vibrant collection of reports, covering the first quarter of 2017. The quarterly report opens with notes from Core, The FreeBSD Foundation, the Ports team, and Release Engineering. On the project front, the Ceph on FreeBSD project has made considerable advances, and is now usable as the net/ceph-devel port via the ceph-fuse module. Eventually they hope to have a kernel RADOS block device driver, so fuse is not required. The CloudABI update includes news that the Bitcoin reference implementation is working on a port to CloudABI. The eMMC Flash and SD card updates allow higher speeds (the maximum transfer rate changes from ~40 to ~80 MB/sec), and the MMC stack can now also be backed by the CAM framework. There are improvements to the Linuxulator, and more detail on the pNFS Server "plan B" that we discussed in a previous episode. Snow B.V. is sponsoring a Dutch translation of the FreeBSD Handbook using the new .po system.

***

A plan for open source software maintainers (http://www.daemonology.net/blog/2017-05-11-plan-for-foss-maintainers.html)

Colin Percival describes in his blog "a plan for open source software maintainers": I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular: Free software is expensive.
Software is expensive to begin with; but good quality open source software tends to be written by people who are recognized as experts in their fields (partly thanks to that very software) and can demand commensurate salaries. While that expensive developer time is donated (either by the developers themselves or by their employers), this influences what their time is used for: Individual developers like doing things which are fun or high-status, while companies usually pay developers to work specifically on the features those companies need. Maintaining existing code is important, but it is neither fun nor high-status; and it tends to get underweighted by companies as well, since maintenance is inherently unlikely to be the most urgent issue at any given time. Open source software is largely a "throw code over the fence and walk away" exercise. Over the past 15 years I've written freebsd-update, bsdiff, portsnap, scrypt, spiped, and kivaloo, and done a lot of work on the FreeBSD/EC2 platform. Of these, I know bsdiff and scrypt are very widely used and I suspect that kivaloo is not; but beyond that I have very little knowledge of how widely or where my work is being used. Anecdotally it seems that other developers are in similar positions: At conferences I've heard variations on "you're using my code? Wow, that's awesome; I had no idea" many times. I have even less knowledge of what people are doing with my work or what problems or limitations they're running into. Occasionally I get bug reports or feature requests; but I know I only hear from a very small proportion of the users of my work. I have a long list of feature ideas which are sitting in limbo simply because I don't know if anyone would ever use them — I suspect the answer is yes, but I'm not going to spend time implementing these until I have some confirmation of that. A lot of mid-size companies would like to be able to pay for support for the software they're using, but can't find anyone to provide it. 
For larger companies, it's often easier — they can simply hire the author of the software (and many developers who do ongoing maintenance work on open source software were in fact hired for this sort of "in-house expertise" role) — but there's very little available for a company which needs a few minutes per month of expertise. In many cases, the best support they can find is sending an email to the developer of the software they're using and not paying anything at all — we've all received "can you help me figure out how to use this" emails, and most of us are happy to help when we have time — but relying on developer generosity is not a good long-term solution. Every few months, I receive email from people asking if there's any way for them to support my open source software contributions. (Usually I encourage them to donate to the FreeBSD Foundation.) Conversely, there are developers whose work I would like to support (e.g., people working on FreeBSD wifi and video drivers), but there isn't any straightforward way to do this. Patreon has demonstrated that there are a lot of people willing to pay to support what they see as worthwhile work, even if they don't get anything directly in exchange for their patronage. It seems to me that this is a case where problems are in fact solutions to other problems. To wit: Users of open source software want to be able to get help with their use cases; developers of open source software want to know how people are using their code. Users of open source software want to support the work they use; developers of open source software want to know which projects users care about. Users of open source software want specific improvements; developers of open source software may be interested in making those specific changes, but don't want to spend the time until they know someone would use them.
Users of open source software have money; developers of open source software get day jobs writing other code because nobody is paying them to maintain their open source software. I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able to sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed based on allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on issues with higher impact. He poses three questions to users about whether or not people (users and software developers alike) would be interested in this, and whether payment (giving and receiving, respectively) is interesting. Check out the comments (and those on Hacker News (https://news.ycombinator.com/item?id=14313804)) as well for some suggestions and discussion on the topic.

***

OpenBSD vmm hypervisor: Part 2 (http://www.h-i-r.net/2017/04/openbsd-vmm-hypervisor-part-2.html)

We asked for people to write up their experience using OpenBSD's VMM. This blog post is just that. This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm. A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1. Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in.
Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself.

To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today.

Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in.

Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here. The command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive.

Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf. I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me.
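The steps described above can be sketched as a short command and config sequence. This is a sketch only (OpenBSD 6.1-era syntax): the user name, paths and lladdr are placeholders, and flags may differ between releases.

```shell
# Sketch of the walkthrough above; names, paths and the lladdr are placeholders.
mkdir -p ~/vmm
vmctl create ~/vmm/test.img -s 1500M     # empty disk image; no root needed

# Boot the installer: console attached, /bsd.rd as the boot image, 256MB RAM,
# NIC on the "local" switch from /etc/vm.conf, our image as the first disk.
doas vmctl start "test.vm" -c -b /bsd.rd -m 256M -n "local" -d ~/vmm/test.img

# After the install, describe the VM in /etc/vm.conf:
vm "test.vm" {
    disable                        # shows up stopped; the owner may start it
    owner user
    memory 256M
    disk "/home/user/vmm/test.img"
    interface {
        switch "local"
        lladdr fe:e1:ba:d0:12:34   # hard-code the address generated earlier
    }
}
```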
You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access. Let us know how VMM works for you.

***

News Roundup

openbsd changes of note 621 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-621)

More stuff, more fun. Fix script to not perform tty operations on things that aren't ttys; detected by pledge. Merge libdrm 2.4.79. After a forced unmount, also unmount any filesystems below that mount point. Flip previously warm pages in the buffer cache to memory above the DMA region if uvm tells us it is available. Pages are not automatically promoted to upper memory; instead it's used as additional memory only for what the cache considers long-term buffers. I/O still requires DMA memory, so writing to a buffer will pull it back down. Makefile support for systems with both gcc and clang. Make i386 and amd64 so. Take a more radical approach to disabling colours in clang. When the data buffered for write in tmux exceeds a limit, discard it and redraw; this helps when a fast process is running inside tmux running inside a slow terminal. Add a port of the witness(4) lock validation tool from FreeBSD, and use it with mplock, rwlock, and mutex in the kernel. Properly save and restore FPU context in vmm. Remove KGDB; it neither compiles nor works. Add a constant-time AES implementation, from BearSSL. Remove SSHv1 from ssh. And more...

***

Digging into BSD's choice of Unix group for new directories and files (https://utcc.utoronto.ca/~cks/space/blog/unix/BSDDirectoryGroupChoice)

I have to eat some humble pie here. In comments on my entry on an interesting chmod failure, Greg A. Woods pointed out that FreeBSD's behavior of creating everything inside a directory with the group of the directory is actually traditional BSD behavior (it dates all the way back to the 1980s), not some odd new invention by FreeBSD. As traditional behavior it makes sense that it's explicitly allowed by the standards, but I've also come to think that it makes sense in context and in general. To see this, we need some background about the problem facing BSD. In the beginning, two things were true in Unix: there was no mkdir() system call, and processes could only be in one group at a time. With processes being in only one group, the choice of the group for a newly created filesystem object was easy: it was your current group. This was felt to be sufficiently obvious behavior that the V7 creat(2) manpage doesn't even mention it. Now things get interesting. 4.1c BSD seems to be where mkdir(2) is introduced and where creat() stops being a system call and becomes an option to open(2). It's also where processes can be in multiple groups for the first time. The 4.1c BSD open(2) manpage is silent about the group of newly created files, while the mkdir(2) manpage specifically claims that new directories will have your effective group (ie, the V7 behavior). This is actually wrong. In both mkdir() in sys_directory.c and maknode() in ufs_syscalls.c, the group of the newly created object is set to the group of the parent directory. Then finally in the 4.2 BSD mkdir(2) manpage the group of the new directory is correctly documented (the 4.2 BSD open(2) manpage continues to say nothing about this). So BSD's traditional behavior was introduced at the same time as processes being in multiple groups, and we can guess that it was introduced as part of that change. When your process can only be in a single group, as in V7, it makes perfect sense to create new filesystem objects with that as their group.
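The group-inheritance behavior being discussed can be observed directly. A small portable sketch: on BSD, inheriting the directory's group is the default; on Linux the same semantics apply to directories with the setgid bit set, which is used here so the demo works on either system.

```shell
# Demonstrate that a file created inside a directory picks up the
# directory's group (BSD default; on Linux, opted into via setgid).
dir=$(mktemp -d)
chmod g+s "$dir"                 # request BSD-style group inheritance on Linux
touch "$dir/newfile"
# stat -c is GNU, stat -f is BSD; try both so the sketch is portable.
dir_gid=$(stat -c %g "$dir" 2>/dev/null || stat -f %g "$dir")
file_gid=$(stat -c %g "$dir/newfile" 2>/dev/null || stat -f %g "$dir/newfile")
[ "$dir_gid" = "$file_gid" ] && echo "group inherited from directory"
rm -r "$dir"
```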
It's basically the same case as making new filesystem objects be owned by you; just as they get your UID, they also get your GID. When your process can be in multiple groups, things get less clear. A filesystem object can only be in one group, so which of your several groups should a new filesystem object be owned by, and how can you most conveniently change that choice? One option is to have some notion of a 'primary group' and then provide ways to shuffle around which of your groups is the primary group. Another option is the BSD choice of inheriting the group from context. By far the most common case is that you want your new files and directories to be created in the 'context', ie the group, of the surrounding directory. If you fully embrace the idea of Unix processes being in multiple groups, not just having one primary group and then some number of secondary groups, then the BSD choice makes a lot of sense. And for all of its faults, BSD tended to embrace its changes relatively fully. While it leads to some odd issues, such as the one I ran into, pretty much any choice here is going to have some oddities.

***

Centrally managed Bhyve infrastructure with Ansible, libvirt and pkg-ssh (http://www.shellguardians.com/2017/05/centrally-managed-bhyve-infrastructure.html)

At work we've been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor even though we are using an earlier version available on FreeBSD 10.3. This means we lack Windows and VNC support among other things, but it is not a big deal. After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream "AUTOMATION!" and so did we. Therefore, we started looking for different ways to improve our deployments.
We had a look at existing frameworks that manage Bhyve, but none of them had a feature that we find really important: having a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations. The following building blocks are used: the ZFS snapshot of an existing VM (this will be our VM template); a modified version of oneoff-pkg-create to package the ZFS snapshots; pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail; libvirt to manage our Bhyve VMs; and the Ansible modules virt, virt_net and virt_pool. Once automated, the installation process needs 2 minutes at most, compared with the 30 minutes needed to install a VM manually, and it allows us to deploy many guests in parallel.

***

NetBSD maintainer in the QEMU project (https://blog.netbsd.org/tnf/entry/netbsd_maintainer_in_the_qemu)

QEMU - the FAST! processor emulator - is a generic, Open Source machine emulator and virtualizer. It defines the state of the art in modern virtualization. This software has been developed for multiplatform environments with support for NetBSD since virtually forever. It's the primary tool used by the NetBSD developers and release engineering team. It is run with continuous integration tests for daily commits and executes regression tests through the Automatic Test Framework (ATF). With version 2.9 of the emulator, the QEMU developers warned the Open Source community that they will eventually drop support for suboptimally supported hosts if nobody steps in and takes over maintainership to refresh the support. This warning was directed at the major BSDs, Solaris, AIX and Haiku. Thankfully the NetBSD position has now been filled, restoring official NetBSD maintenance.
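The building blocks of the centrally managed bhyve setup described earlier could plausibly fit together in a playbook along these lines. This is a hedged sketch, not the article's actual code: the host group, package, dataset names and guest.xml template are all invented for illustration.

```yaml
# Sketch only: hosts, package/dataset names and guest.xml.j2 are placeholders.
- hosts: bhyve_hosts
  tasks:
    - name: Pull the VM template (a packaged ZFS snapshot) from the local repo
      command: pkg install -y vm-template-freebsd10

    - name: Clone the template snapshot for the new guest
      command: zfs clone zroot/vm/template@gold zroot/vm/{{ guest_name }}

    - name: Define the guest through libvirt's bhyve driver
      virt:
        command: define
        xml: "{{ lookup('template', 'guest.xml.j2') }}"

    - name: Start the guest
      virt:
        name: "{{ guest_name }}"
        state: running
```

The point of the design is that the expensive part (installing the OS) happens once, in the template; each deployment is just a package fetch plus a ZFS clone, which is why the authors get from 30 minutes down to about 2.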
Beastie Bits

OpenBSD Community Goes Gold (http://undeadly.org/cgi?action=article&sid=20170510012526&mode=flat&count=0)

CharmBUG's Tor Hack-a-thon has been pushed back to July due to scheduling difficulties (https://www.meetup.com/CharmBUG/events/238218840/)

Direct Rendering Manager (DRM) Driver for i915, from the Linux kernel to Haiku with the help of DragonFlyBSD's Linux Compatibility layer (https://www.haiku-os.org/blog/vivek/2017-05-05_[gsoc_2017]_3d_hardware_acceleration_in_haiku/)

TomTom lists OpenBSD in license (https://twitter.com/bsdlme/status/863488045449977864)

London NetBSD Meetup on May 22nd (https://mail-index.netbsd.org/regional-london/2017/05/02/msg000571.html)

KnoxBUG meeting May 30th, 2017 - Introduction to FreeNAS (http://knoxbug.org/2017-05-30)

***

Feedback/Questions

Felix - Home Firewall (http://dpaste.com/35EWVGZ#wrap)

David - Docker Recipes for Jails (http://dpaste.com/0H51NX2#wrap)

Don - GoLang & Rust (http://dpaste.com/2VZ7S8K#wrap)

George - OGG feed (http://dpaste.com/2A1FZF3#wrap)

Roller - BSDCan Tips (http://dpaste.com/3D2B6J3#wrap)

***

193: Fire up the 802.11 AC

May 10, 2017 2:06:06 90.79 MB Downloads: 0

This week on BSD Now, Adrian Chadd on bringing up 802.11ac in FreeBSD, a pfSense and OpenVPN tutorial, and we talk about an interesting ZFS storage pool checkpoint project.

Headlines

Bringing up 802.11ac on FreeBSD (http://adrianchadd.blogspot.com/2017/04/bringing-up-80211ac-on-freebsd.html)

Adrian Chadd has a new blog post about his work to bring 802.11ac support to FreeBSD. 802.11ac allows for speeds up to 500 Mbps and total bandwidth into multiple gigabits. The FreeBSD net80211 stack has reasonably good 802.11n support, but no 802.11ac support. I decided a while ago to start adding basic 802.11ac support. It was a good exercise in figuring out what the minimum set of required features is and another excuse to go find some of the broken corner cases in net80211 that needed addressing. 802.11ac introduces a few new concepts that the stack needs to understand. I decided to use the QCA 802.11ac parts because (a) I know the firmware and general chip stuff from the first generation 11ac parts well, and (b) I know that it does a bunch of stuff (like rate control, packet scheduling, etc) so I don't have to do it. If I chose, say, the Intel 11ac parts then I'd have to implement a lot more of the fiddly stuff to get good behaviour.

Step one - adding VHT channels. I decided in the shorter term to cheat and just add VHT channels to the already very large ieee80211_channel map. The Linux way of there being a channel context rather than hundreds of static channels to choose from is better in the long run, but I wanted to get things up and running. So, that's what I did first - I added VHT flags for 20, 40, 80, 80+80 and 160MHz operating modes and I did the bare work required to populate the channel lists with VHT channels as well. Then I needed to glue it into an 11ac driver.
My ath10k port was far enough along to attempt this, so I added enough glue to say "I support VHT" to the ic_caps field and propagated it to the driver for monitor mode configuration. And yes, after a bit of dancing, I managed to get a VHT channel to show up in ath10k in monitor mode and could capture 80MHz wide packets. Success!

By far the most fiddly part was getting channel promotion to work. net80211 supports the concept of dumb NICs (like Atheros 11abgn parts) very well, where you can have multiple virtual interfaces but the "driver" view of the right configuration is what's programmed into the hardware. For firmware NICs which do this themselves (like basically everything sold today) this isn't exactly all that helpful. So, for now, it's limited to a single VAP, and the VAP configuration is partially derived from the global state and partially derived from the negotiated state. It's annoying, but it is adding to the list of things I will have to fix later.

The QCA chips/firmware do 802.11 crypto offload. They actually pretend that there's no key - you don't include the IV, you don't include padding, or anything. You send commands to set the crypto keys and then you send unencrypted 802.11 frames (or 802.3 frames if you want to do ethernet only.) This means that I had to teach net80211 a few things:

+ frames decrypted by the hardware need to have an "I'm decrypted" bit set, because the 802.11 header field saying "I'm decrypted!" is cleared
+ frames encrypted by the hardware don't have the "I'm encrypted" bit set
+ frames encrypted/decrypted have no padding, so I needed to teach the input path and crypto paths to not validate those if the hardware said "we offload it all."

Now comes the hard bit of fixing the shortcomings before I can commit the driver. There are .. lots. The first one is the global state. The ath10k firmware allows what they call 'vdevs' (virtual devices) - for example, multiple SSID/BSSID support is implemented with multiple vdevs.
STA+WDS is implemented with vdevs. STA+P2P is implemented with vdevs. So, technically speaking I should go and find all of the global state that should really be per-vdev and make it per-vdev. This is tricky though, because a lot of the state isn't kept per-VAP even though it should be. Anyway, so far so good. I need to do some of the above and land it in FreeBSD-HEAD so I can finish off the ath10k port and commit what I have to FreeBSD. There's a lot of stuff coming - including all of the wave-2 stuff (like multi-user MIMO / MU-MIMO) which I just plainly haven't talked about yet. Viva la FreeBSD wireless!

***

pfSense and OpenVPN Routing (http://www.terrafoundry.net/blog/2017/04/12/pfsense-openvpn/)

This article tries to be a simple guide on how to enable your home (or small office) pfSense (https://www.pfsense.org/) setup to route some traffic via the vanilla Internet, and some via a VPN site that you've set up in a remote location. Reasons to set up a VPN: control, security, privacy, and fun.

VPNs do not instantly guarantee privacy; they're a layer, as with any other measure you might invoke. In this example I used a server that's directly under my name. Sure, it was in a country with strict privacy laws, but that doesn't mean that the outgoing IP address wouldn't be logged somewhere down the line. There's also no reason you have to use your own OpenVPN install; there are many, many personal providers out there who can offer the same functionality, and a degree of anonymity. (If you and a hundred other people are all coming from one IP, it becomes extremely difficult to differentiate; some VPN providers even claim a 'logless' setup.) VPNs can be slow. The reason I have a split setup in this article is because there are devices that I want to connect to the internet quickly, and that I'm never doing sensitive things on, like banking. I don't mind if my Reddit-browsing and IRC messages are a bit slower, but my Nintendo Switch and PS4 should have a nippy connection.
Services like Netflix can and do block VPN traffic in some cases. This is more of an issue for wider VPN providers (I suspect, but have no proof, that they just blanket-block known VPN IP addresses.) If your VPN is in another country, search results and tracking can be skewed. This is arguably a good thing - who wants to be tracked? But it can also lead to frustration if your DuckDuckGo results are tailored to the middle of Paris, rather than your flat in Birmingham.

The tutorial walks through the basic setup: labeling the interfaces, configuring DHCP, and creating a VPN. Now that we have our OpenVPN connection set up, we'll double check that we've got our interfaces assigned. With any luck (after we've assigned our OPENVPN connection correctly), you should now see your new Virtual Interface on the pfSense Dashboard. We're charging full steam towards the sections that start to lose people. Don't be disheartened if you've had a few issues up to now; there is no "right" way to set up a VPN installation, and it may be that you have to tweak a few things and dive into a few man pages before you're set up. NAT is tricky, and frankly it only exists because we stretched out IPv4 for much longer than we should have. That being said, it's a necessary evil in this day and age, so let's set up our connection to work with it. We need NAT here because we're going to masquerade our machines on the LAN interface to show as coming from the OpenVPN client IP address, to the OpenVPN server. Head over to Firewall -> NAT -> Outbound.
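For readers who think in pf terms rather than in GUI clicks: the outbound NAT and policy routing described above correspond roughly to pf.conf rules like the following sketch. The interface names, subnets and gateway address are assumptions for illustration, and pfSense generates its own (more elaborate) equivalents.

```
# Sketch only: "em0" (WAN), "em1" (LAN) and all addresses are placeholders.
# NAT LAN traffic out of the OpenVPN tunnel:
match out on tun0 from 192.168.1.0/24 to any nat-to (tun0)
# NAT the INSECURE network out of the regular ISP uplink:
match out on em0 from 192.168.2.0/24 to any nat-to (em0)
# Policy-route LAN traffic to the OpenVPN gateway:
pass in on em1 route-to (tun0 10.8.0.1) from 192.168.1.0/24 to any
```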
The first thing we need to do in this section is to change the Outbound NAT Mode to something we can work with, in this case "Hybrid." Configure the LAN interface to be NAT'd to the OpenVPN address, and the INSECURE interface to use your regular ISP connection. Configure the firewall to allow traffic from the LAN network to reach the INSECURE network. Then add a second rule allowing traffic from the LAN network to any address, and set the gateway to the OPENVPN connection. And there you have it: traffic from the LAN is routed via the VPN, and traffic from the INSECURE network uses the naked internet connection.

***

Switching to OpenBSD (https://mndrix.blogspot.co.uk/2017/05/switching-to-openbsd.html)

After 12 years, I switched from macOS to OpenBSD. It's clean, focused, stable, consistent and lets me get my work done without any hassle. When I first became interested in computers, I thought operating systems were fascinating. For years I would reinstall an operating system every other weekend just to try a different configuration: MS-DOS 3.3, Windows 3.0, Linux 1.0 (countless hours recompiling kernels). In high school, I settled down and ran OS/2 for 5 years until I graduated college. I switched to Linux after college and used it exclusively for 5 years. I got tired of configuring Linux, so I switched to OS X for the next 12 years, where things just worked. But Snow Leopard was 7 years ago. These days, OS X is like running a denial of service attack against myself. macOS has a dozen apps I don't use but can't remove. Updating them requires a restart. Frequent updates to the browser require a restart. A minor XCode update requires me to download a 4.3 GB file. My monitors frequently turn off and require a restart to fix. A system's availability is a function (http://techthoughts.typepad.com/managing_computers/2007/11/availability-mt.html) of mean time between failure and mean time to repair. For macOS, both numbers are heading in the wrong direction for me.
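For reference, the availability function mentioned above is the standard one:

Availability = MTBF / (MTBF + MTTR)

so availability falls both when failures get more frequent (MTBF goes down) and when repairs take longer (MTTR goes up), which is exactly the complaint being made about macOS here.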
I don't hold any hard feelings about it, but it's time for me to get off this OS and back to productive work. I found OpenBSD very refreshing, so I created a bootable thumb drive and within an hour had it up and running on a two-year-old laptop. I've been using it for my daily work for the past two weeks and it's been great. Simple, boring and productive. Just the way I like it. The documentation is fantastic. I've been using Unix for years and have learned quite a bit just by reading their man pages. OS releases come like clockwork every 6 months and are supported for 12. Security and other updates seem relatively rare between releases (roughly one small patch per week during 6.0). With syspatch in 6.1, installing them should be really easy too.

***

ZFS Storage Pool Checkpoint Project (https://sdimitro.github.io/post/zpool-checkpoint)

During the OpenZFS summit last year (2016), Dan Kimmel and I quickly hacked together the zpool checkpoint command in ZFS, which allows reverting an entire pool to a previous state. Since it was just for a hackathon, our design was bare bones and our implementation far from complete. Around a month later, we had a new and almost complete design within Delphix and I was able to start the implementation on my own. I completed the implementation last month, and we're now running regression tests, so I decided to write this blog post explaining what a storage pool checkpoint is, why we need it within Delphix, and how to use it. The Delphix product is basically a VM running DelphixOS (a derivative of illumos) with our application stack on top of it. During an upgrade, the VM reboots into the new OS bits and then runs some scripts that update the environment (directories, snapshots, open connections, etc.) for the new version of our app stack. Software being software, failures can happen at different points during the upgrade process.
When an upgrade script that makes changes to ZFS fails, we have a corresponding rollback script that attempts to bring ZFS and our app stack back to their previous state. This is very tricky, as we need to undo every single modification applied to ZFS (including dataset creation and renaming, or enabling new zpool features). The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a "pool-wide snapshot" (or a variation of extreme rewind that doesn't corrupt your data). It remembers the entire state of the pool at the point that it was taken, and the user can revert back to it later or discard it. Its generic use case is an administrator that is about to perform a set of destructive actions to ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. Otherwise, she discards it. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a "high-level transaction". I definitely see value in this for the appliance use case. Some usage examples follow, along with some caveats. One of the restrictions is that you cannot attach, detach, or remove a device while a checkpoint exists. The zpool add operation is still possible; however, if you roll back to the checkpoint, the device will no longer be part of the pool. Rather than a shortcoming, this seems like a nice feature: a way to help users avoid the most common foot-shooting (which I witnessed in person at LinuxFest), adding a new log or cache device but missing a keyword and adding it as a storage vdev rather than an aux vdev. This operation could simply be undone if a checkpoint were taken before the device was added.
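The administrator's workflow described above maps to a short command sequence. This is a sketch of the syntax as it later landed in OpenZFS; the pool name is a placeholder, and at the time of this episode the feature was still under review, so details were subject to change.

```shell
# Sketch; "mypool" is a placeholder pool name.
zpool checkpoint mypool                    # take the pool-wide checkpoint

# ...perform the risky administrative actions...

# If something went wrong, rewind the pool to the checkpointed state:
zpool export mypool
zpool import --rewind-to-checkpoint mypool

# Otherwise, discard the checkpoint and reclaim the space it holds:
zpool checkpoint -d mypool
```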
*** News Roundup Review of TrueOS (https://distrowatch.com/weekly.php?issue=20170501#trueos) TrueOS, which was formerly named PC-BSD, is a FreeBSD-based operating system. TrueOS is a rolling release platform which is based on FreeBSD's "CURRENT" branch, providing TrueOS with the latest drivers and features from FreeBSD. Apart from the name change, TrueOS has deviated from the old PC-BSD project in a number of ways. The system installer is now more streamlined (and I will touch on that later) and TrueOS is a rolling release platform while PC-BSD defaulted to point releases. Another change is PC-BSD used to allow the user to customize which software was installed at boot time, including the desktop environment. The TrueOS project now selects a minimal amount of software for the user and defaults to using the Lumina desktop environment. From the conclusions: What I took away from my time with TrueOS is that the project is different in a lot of ways from PC-BSD. Much more than just the name has changed. The system is now more focused on cutting edge software and features in FreeBSD's development branch. The install process has been streamlined and the user begins with a set of default software rather than selecting desired packages during the initial setup. The configuration tools, particularly the Control Panel and AppCafe, have changed a lot in the past year. The designs have a more flat, minimal look. It used to be that PC-BSD did not have a default desktop exactly, but there tended to be a focus on KDE. With TrueOS the project's in-house desktop, Lumina, serves as the default environment and I think it holds up fairly well. In all, I think TrueOS offers a convenient way to experiment with new FreeBSD technologies and ZFS. I also think people who want to run FreeBSD on a desktop computer may want to look at TrueOS as it sets up a graphical environment automatically. 
However, people who want a stable desktop platform with lots of applications available out of the box may not find what they want with this project. A simple guide to install Ubuntu on FreeBSD with bhyve (https://www.davd.eu/install-ubuntu-on-freebsd-with-bhyve/) David Prandzioch writes in his blog: For some reason I needed a Linux installation on my NAS. bhyve is a lightweight virtualization solution for FreeBSD that makes that easy and efficient. However, the CLI of bhyve is somewhat bulky and bare, making it hard to use, especially for the first time. This is what vm-bhyve solves - it provides a simple CLI for working with virtual machines. More details follow about what steps are needed to set up vm-bhyve on FreeBSD. Also check out his other tutorials on his blog: https://www.davd.eu/freebsd/ (https://www.davd.eu/freebsd/) *** Graphical Overview of the Architecture of FreeBSD (https://dspinellis.github.io/unix-architecture/arch.pdf) This diagram tries to show the different components that make up the FreeBSD operating system. It breaks down the various utilities, libraries, and components into some categories and sub-categories: User Commands: Development (cc, ld, nm, as, etc) File Management (ls, cp, cmp, mkdir) Multiuser Commands (login, chown, su, who) Number Processing (bc, dc, units, expr) Text Processing (cut, grep, sort, uniq, wc) User Messaging (mail, mesg, write, talk) Little Languages (sed, awk, m4) Network Clients (ftp, scp, fetch) Document Preparation (*roff, eqn, tbl, refer) Administrator and System Commands Filesystem Management (fsck, newfs, gpart, mount, umount) Networking (ifconfig, route, arp) User Management (adduser, pw, vipw, sa, quota*) Statistics (iostat, vmstat, pstat, gstat, top) Network Servers (sshd, ftpd, ntpd, routed, rpc.*) Scheduling (cron, periodic, rc.*, atrun) Libraries (C Standard, Operating System, Peripheral Access, System File Access, Data Handling, Security, Internationalization, Threads) System Call Interface (File I/O, 
Mountable Filesystems, File ACLs, File Permissions, Processes, Process Tracing, IPC, Memory Mapping, Shared Memory, Kernel Events, Memory Locking, Capsicum, Auditing, Jails) Bootstrapping (Loaders, Configuration, Kernel Modules) Kernel Utility Functions Privilege Management (acl, mac, priv) Multitasking (kproc, kthread, taskqueue, swi, ithread) Memory Management (vmem, uma, pbuf, sbuf, mbuf, mbchain, malloc/free) Generic (nvlist, osd, socket, mbuf_tags, bitset) Virtualization (cpuset, crypto, device, devclass, driver) Synchronization (lock, sx, sema, mutex, condvar_, atomic_*, signal) Operations (sysctl, dtrace, watchdog, stack, alq, ktr, panic) I/O Subsystem Special Devices (line discipline, tty, raw character, raw disk) Filesystems (UFS, FFS, NFS, CD9660, Ext2, UDF, ZFS, devfs, procfs) Sockets Network Protocols (TCP, UDP, ICMP, IPSec, IP4, IP6) Netgraph (50+ modules) Drivers and Abstractions Character Devices CAM (ATA, SATA, SAS, SPI) Network Interface Drivers (802.11, ifae, 100+, ifxl, NDIS) GEOM Storage (stripe, mirror, raid3, raid5, concat) Encryption / Compression (eli, bde, shsec, uzip) Filesystem (label, journal, cache, mbr, bsd) Virtualization (md, nop, gate, virstor) Process Control Subsystems Scheduler Memory Management Inter-process Communication Debugging Support *** Official OpenBSD 6.1 CD - There's only One! (http://undeadly.org/cgi?action=article&sid=20170503203426&mode=expanded) Ebay auction Link (http://www.ebay.com/itm/The-only-Official-OpenBSD-6-1-CD-set-to-be-made-For-auction-for-the-project-/252910718452) Now it turns out that in fact, exactly one CD set was made, and it can be yours if you are the successful bidder in the auction that ends on May 13, 2017 (about 3 days from when this episode was recorded). The CD set is handmade and signed by Theo de Raadt. Fun Fact: The winning bidder will have an OpenBSD CD set that even Theo doesn't have. 
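The vm-bhyve workflow from the bhyve guide above can be sketched roughly as follows. The ZFS pool (zroot), NIC (em0), ISO URL, and template name are all placeholders here, not taken from the guide itself; see the linked article and the vm(8) manual for the exact steps:

```shell
# Install vm-bhyve plus the grub loader needed for Linux guests
pkg install vm-bhyve grub2-bhyve

# Give vm-bhyve a home and enable it (assumes a ZFS pool named zroot)
zfs create zroot/vm
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"
vm init

# Create a virtual switch bridged to the physical NIC
vm switch create public
vm switch add public em0

# Fetch an installer ISO (URL is illustrative), create and install the guest
vm iso https://releases.ubuntu.com/16.04/ubuntu-16.04-server-amd64.iso
vm create -t ubuntu -s 20G ubuntu
vm install ubuntu ubuntu-16.04-server-amd64.iso
vm console ubuntu
```

Compared to driving bhyve(8) directly, vm-bhyve handles device maps, loader invocation, and tap/bridge plumbing behind these few commands, which is exactly the "simple CLI" point the article makes.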
*** Beastie Bits Hardware Wanted by OpenBSD developers (https://www.openbsd.org/want.html) Donate hardware to FreeBSD developers (https://www.freebsd.org/donations/index.html#components) Announcing NetBSD and the Google Summer of Code Projects 2017 (https://blog.netbsd.org/tnf/entry/announcing_netbsd_and_the_google) Announcing FreeBSD GSoC 2017 Projects (https://wiki.freebsd.org/SummerOfCode2017Projects) LibreSSL 2.5.4 Released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.4-relnotes.txt) CharmBUG Meeting - Tor Browser Bundle Hack-a-thon (https://www.meetup.com/CharmBUG/events/238218840/) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) Experimental Price Cuts (https://blather.michaelwlucas.com/archives/2931) Linux Fest North West 2017: Three Generations of FreeNAS: The World’s most popular storage OS turns 12 (https://www.youtube.com/watch?v=x6VznQz3VEY) *** Feedback/Questions Don - Reproducible builds & gcc/clang (http://dpaste.com/2AXX75X#wrap) architect - C development on BSD (http://dpaste.com/0FJ854X#wrap) David - Linux ABI (http://dpaste.com/2CCK2WF#wrap) Tom - ZFS (http://dpaste.com/2Z25FKJ#wrap) RAIDZ Stripe Width Myth, Busted (https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz) Ivan - Jails (http://dpaste.com/1Z173WA#wrap) ***

192: SSHv1 Be Gone

May 03, 2017 2:04:13 89.44 MB Downloads: 0

This week we have a FreeBSD Foundation development update, tell you about spring cleaning in the TrueOS project, Dynamic WDS & a whole lot more! This episode was brought to you by Headlines OpenSSH Removes SSHv1 Support (http://undeadly.org/cgi?action=article&sid=20170501005206) In a series of commits starting here (http://marc.info/?l=openbsd-cvs&m=149359384905651&w=2) and ending with this one (http://marc.info/?l=openbsd-cvs&m=149359530105864&w=2), Damien Miller completed the removal of all support for the now-historic SSHv1 protocol from OpenSSH (https://www.openssh.com/). The final commit message, for the commit that removes the SSHv1 related regression tests, reads: Eliminate explicit specification of protocol in tests and loops over protocol. We only support SSHv2 now. Dropping support for SSHv1 and associated ciphers that were either suspected to or known to be broken has been planned for several releases, and has been eagerly anticipated by many in the OpenBSD camp. In practical terms this means that starting with OpenBSD-current and snapshots as they will be very soon (and further down the road OpenBSD 6.2 with OpenSSH 7.6), the arcane options you used with ssh (http://man.openbsd.org/ssh) to connect to some end-of-life gear in a derelict data centre you don't want to visit anymore will no longer work and you will be forced to do the reasonable thing. Upgrade. FreeBSD Foundation April 2017 Development Projects Update (https://www.freebsdfoundation.org/blog/april-2017-development-projects-update/) FreeBSD runs on many embedded boards that provide a USB target or USB On-the-Go (OTG) interface. This allows the embedded target to act as a USB device, and present one or more interfaces (USB device classes) to a USB host. That host could be running FreeBSD, Linux, Mac OS, Windows, Android, or another operating system. USB device classes include audio input or output (e.g. 
headphones), mass storage (USB flash drives), human interface device (keyboards, mice), communications (Ethernet adapters), and many others. The Foundation awarded a project grant to Edward Tomasz Napierała to develop a USB mass storage target driver, using the FreeBSD CAM Target Layer (CTL) as a backend. This project allows FreeBSD running on an embedded platform, such as a BeagleBone Black or Raspberry Pi Zero, to emulate a USB mass storage target, commonly known as a USB flash stick. The backing storage for the emulated mass storage target is on the embedded board’s own storage media. It can be configured at runtime using the standard CTL configuration mechanism – the ctladm(8) utility, or the ctl.conf(5) file. The FreeBSD target can now present a mass storage interface, a serial interface (for a console on the embedded system), and an Ethernet interface for network access. A typical usage scenario for the mass storage interface is to provide users with documentation and drivers that can be accessed from their host system. This makes it easier for new users to interact with the embedded FreeBSD board, especially in cases where the host operating system may require drivers to access all of the functionality, as with Windows and OS X. They provide instructions on how to configure a BeagleBone Black to act as a flash memory stick attached to a host computer. Check out the article, test, and report back your experiences with the new USB OTG interface. *** Spring cleaning: Hardware Update and Preview of upcoming TrueOS changes (https://www.trueos.org/blog/spring-cleaning-hardware-update-preview-upcoming-trueos-changes/) The much-abused TrueOS build server is experiencing some technical difficulties, slowing down building new packages and releasing updates. After some investigation, one problem seemed to be a bug with the Poudriere port building software. After updating builders to the new version, some of the instability is resolved. 
Thankfully, we won’t have to rely on this server so much, because… We’re getting new hardware! A TrueOS/Lumina contributor is donating a new(ish) server to the project. Special thanks to TrueOS contributor/developer q5sys for the awesome new hardware! Preview: UNSTABLE and Upcoming TrueOS STABLE update A fresh UNSTABLE release is dropping today, with a few key changes: Nvidia/graphics driver detection fixes. Boot environment listing fix (FreeBSD boot-loader only) VirtualBox issues fixed on most systems. There appears to be a regression in VirtualBox 5.1 with some hardware. New icon themes for Lumina (Preferences -> Appearance -> Theme). Removal of legacy pc-diskmanager. It was broken and unmaintained, so it is time to remove it. Installer/.iso Changes (Available with new STABLE Update): The text installer has been removed. It was broken and unmaintained, so it is time to remove it. There is now a single TrueOS install image. You can still choose to install as either a server or desktop, but both options live in a single install image now. This image is still available as either an .iso or .img file. The size of the .iso and .img files is reduced by about 500 MB, to around 2 GB total. We’ve removed Firefox and Thunderbird from the default desktop installation. These have been replaced with Qupzilla and Trojita. Note you can replace Qupzilla and Trojita with Firefox and Thunderbird via the SysAdm Appcafe after completing the TrueOS install. Grub is no longer an installation option. Instead, the FreeBSD boot-loader is always used for the TrueOS partition. rEFInd is used as the master boot-loader for multi-booting; EFI partitioning is required. Qpdfview is now preinstalled for pdf viewing. Included a slideshow during the installation with tips and screenshots. Interview - Patrick M. 
Hausen - hausen@punkt.de (mailto:hausen@punkt.de) Founder of Punkt.de HAST - Highly Available Storage (https://wiki.freebsd.org/HAST) News Roundup (finally) investigating how to get dynamic WDS (DWDS) working in FreeBSD! (http://adrianchadd.blogspot.com/2017/04/finally-investigating-how-to-get.html) Adrian Chadd writes in his blog: I sat down recently to figure out how to get dynamic WDS working on FreeBSD-HEAD. It's been in FreeBSD since forever, and it in theory should actually have just worked, but it's extremely not documented in any useful way. It's basically the same technology in earlier Apple Airports (before it grew into what the wireless tech world calls "Proxy-STA") and is what the "extender" technology on Qualcomm Atheros chipsets implement. A common question I get from people is "why can't I bridge multiple virtual machines on my laptop and have them show up over wifi? It works on ethernet!" And my response is "when I make dynamic WDS work, you can just make this work on FreeBSD devices - but for now, use NAT." That always makes people sad. He goes on to explain that normal station/access point setups carry up to three addresses in each frame, and which addresses appear varies with the packet type - but bridging requires more addresses than the three address fields a normal 802.11 frame provides. The big note here is that there's not enough MAC addresses to say "please send this frame to a station MAC address, but then have them forward it to another MAC address attached behind it in a bridge." That would require 4 mac addresses in the 802.11 header, which we don't get. .. except we do. There's a separate address format where the from-DS and to-DS bits in the header are set to 1, which means "this frame is coming from distribution system to a distribution system", and it has four mac addresses. The RA is then the AP you're sending to, and then a fourth field indicates the eventual destination address that may be an ethernet device connected behind said STA. 
If you don't configure WDS, then when you send frames from a station from a MAC address that isn't actually your 802.11 interface MAC address, the system would be confused. The STA wouldn't be able to transmit it easily, and the AP wouldn't know how to get back to your bridged ethernet addresses. The original WDS was a statically configured thing. [...] So for static configurations, this works great. You'd associate your extender AP as a station of the central AP, it'd use wpa_supplicant to set up encryption, then anything between that central AP and that extender AP (as a station) would be encrypted as normal station traffic (but, 4-address frame format.) But that's not very convenient. You have to statically configure everything, including telling your central AP about all of your satellite extender APs. If you want to replace your central AP, you have to reprogram all of your extenders to use the new MAC addresses. So, Sam Leffler came up with "dynamic WDS" - where you don't have to explicitly state the list of central/satellite APs. Instead, you configure a central AP as "dynamic WDS", and when a 4-address frame shows up from an associated station, it "promotes" it to a WDS peer for you. On the satellite AP, it will just find an AP to communicate to, and then assume it'll do WDS and start using 4-address frames. It's still a bit clunky (there's no beacon, probe request, etc. IEs that say "I do dynamic WDS!" so you'd better make ALL your central APs a different SSID!) but it certainly is better than what we had. Firstly, there are scripts in src/tools/tools/net80211/ - setup.wdsmain and setup.wdsrelay. These scripts are .. well, the almost complete documentation on a dynamic WDS setup. The manpage doesn't go into anywhere near enough information. So I dug into it. It turns out that dynamic WDS uses a helper daemon - 'wlanwds' - which listens for dynamic WDS configuration changes and will do things for you. This is what runs on the central AP side. 
Then it started making sense! So far, so good. I followed that script, modified it a bit to use encryption, and .. well, it half worked. Association worked fine, but no traffic was passing. A little more digging showed the actual problem - the dynamic WDS example scripts are for an open/unencrypted network. If you are using an encrypted network, the central AP side needs to enable privacy on the virtual interfaces so traffic gets encrypted with the parent interface encryption keys. Now, I've only done enough testing to show that indeed it is working. I haven't done anything like pass lots of traffic via iperf, or have a mix of DWDS and normal STA peers, nor actually run it for longer than 5 minutes. I'm sure there will be issues to fix. However - I do need it at home, as I can't see the home AP from the upstairs room (and now you see why I care about DWDS!) and so when I start using it daily I'll fix whatever hilarity ensues. Why don't schools teach debugging? (https://danluu.com/teach-debugging/) A friend of mine and I couldn’t understand why some people were having so much trouble; the material seemed like common sense. The Feynman Method was the only tool we needed. Write down the problem Think real hard Write down the solution The Feynman Method failed us on the last project: the design of a divider, a real-world-scale project an order of magnitude more complex than anything we’d been asked to tackle before. I understand now why half the class struggled with the earlier assignments. Without an explanation of how to systematically approach problems, anyone who didn’t intuitively grasp the correct solution was in for a semester of frustration. People who were, like me, above average but not great, skated through most of the class and either got lucky or wasted a huge chunk of time on the final project. 
I’ve even seen people talented enough to breeze through the entire degree without ever running into a problem too big to intuitively understand; those people have a very bad time when they run into a 10 million line codebase in the real world. The more talented the engineer, the more likely they are to hit a debugging wall outside of school. It’s one of the most fundamental skills in engineering: start at the symptom of a problem and trace backwards to find the source. It takes, at most, half an hour to teach the absolute basics – and even that little bit would be enough to save a significant fraction of those who wash out and switch to non-STEM majors. Why do we leave material out of classes and then fail students who can’t figure out that material for themselves? Why do we make the first couple years of an engineering major some kind of hazing ritual, instead of simply teaching people what they need to know to be good engineers? For all the high-level talk about how we need to plug the leaks in our STEM education pipeline, not only are we not plugging the holes, we’re proud of how fast the pipeline is leaking. FreeBSD: pNFS server for testing (https://lists.freebsd.org/pipermail/freebsd-fs/2017-April/024702.html) Rick Macklem has issued a call for testing his new pNFS server: I now have a pNFS server that I think is ready for testing/evaluation. It is basically a patched FreeBSD-current kernel plus nfsd daemon. If you are interested, some very basic notes on how it works and how to set it up are at: http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt (http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt) A Plan B pNFS service consists of a single MetaData Server (MDS) and K Data Servers (DS), all of which would be recent FreeBSD systems. Clients will mount the MDS as they would a single NFS server. When files are created, the MDS creates a file tree identical to what a single NFS server creates, except that all the regular (VREG) files will be empty. 
As such, if you look at the exported tree on the MDS directly on the MDS server (not via an NFS mount), the files will all be of size == 0. Each of these files will also have two extended attributes in the system attribute name space: pnfsd.dsfile - This extended attribute stores the information that the MDS needs to find the data storage file on a DS for this file. pnfsd.dsattr - This extended attribute stores the Size, ModifyTime and Change attributes for the file. For each regular (VREG) file, the MDS creates a data storage file on one of the K DSs, in one of the dsNN directories. The name of this file is the file handle of the file on the MDS in hexadecimal. The DSs use 20 subdirectories named "ds0" to "ds19" so that no one directory gets too large. At this time, the MDS generates File Layout layouts to NFSv4.1 clients that know how to do pNFS. For NFS clients that do not support NFSv4.1 pNFS, there will be a performance hit, since the IO RPCs will be proxied by the MDS for the DS server the data storage file resides on. The current setup does not allow for redundant servers. If the MDS or any of the K DS servers fail, the entire pNFS service will be non-functional. Looking at creating mirrored DS servers is planned, but it may be a year or more before that is implemented. I am planning on using the Flex File Layout for this, since it supports client side mirroring, where the client will write to all mirrors concurrently. 
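On the MDS, the empty metadata files and the two pNFS attributes described above can be inspected with FreeBSD's stock extended-attribute tools. This is a sketch assuming a Plan B pNFS service set up per Rick's notes; the path /export/somefile is a placeholder:

```shell
# The MDS copy of a regular file is empty (size == 0)
stat -f "%z bytes" /export/somefile

# ...but it carries the pNFS metadata as system-namespace
# extended attributes (pnfsd.dsfile and pnfsd.dsattr)
lsextattr system /export/somefile

# Dump the attribute that locates the data storage file on a DS, in hex
getextattr -x system pnfsd.dsfile /export/somefile
```

Running the same commands on the corresponding dsNN directory of a DS should show the actual data in a file named after the MDS file handle in hexadecimal.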
Beastie Bits Openbsd changes of note 620 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-620) Why Unix commands are short (http://www.catonmat.net/blog/why-unix-commands-are-short/) OPNsense 17.1.5 released (https://opnsense.org/opnsense-17-1-5-released/) Something for Apple dual-GPU users (http://lists.dragonflybsd.org/pipermail/commits/2017-April/625847.html) pkgsrcCon 2017 CFT (https://mail-index.netbsd.org/netbsd-advocacy/2017/05/01/msg000735.html) TrueOS/Lumina Dev Q&A: May 5th 2017 (https://discourse.trueos.org/t/trueos-lumina-dev-q-a-5-4-17/1347) Feedback/Questions Peter - Jails (http://dpaste.com/0J14HGJ#wrap) Andrew - Languages and University Courses (http://dpaste.com/31AVFSF#wrap) JuniorJobs (https://wiki.freebsd.org/JuniorJobs) Steve - TrueOS and Bootloader (http://dpaste.com/1BXVZSY#wrap) Ben - ZFS questions (http://dpaste.com/0R7AW2T#wrap) Steve - Linux Emulation (http://dpaste.com/3ZR7NCC#wrap)

191: I Know 64 & A Bunch More

April 26, 2017 2:06:58 91.42 MB Downloads: 0

We cover TrueOS/Lumina working to be less dependent on Linux, How the IllumOS network stack works, Throttling the password gropers & the 64 bit inode call for testing. This episode was brought to you by Headlines vBSDCon CFP closed April 29th (https://easychair.org/conferences/?conf=vbsdcon2017) EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/) Developer Commentary: Philosophy, Evolution of TrueOS/Lumina, and Other Thoughts. (https://www.trueos.org/blog/developer-commentary-philosophy-evolution-trueoslumina-thoughts/) Philosophy of Development No project is an island. Every single project needs or uses some other external utility, library, communications format, standards compliance, and more in order to be useful. A static project is typically a dead project. A project needs regular upkeep and maintenance to ensure it continues to build and run with the current ecosystem of libraries and utilities, even if the project has no considerable changes to the code base or feature set. “Upstream” decisions can have drastic consequences on your project. Through no fault of yours, your project can be rendered obsolete or broken by changing standards in the global ecosystem that affect your project’s dependencies. Operating system focus is key. What OS is the project originally designed for? This determines how the “upstream” dependencies list appears and which “heartbeat” to monitor. Evolution of PC-BSD, Lumina, and TrueOS. With these principles in mind – let's look at PC-BSD, Lumina, and TrueOS. PC-BSD : PC-BSD was largely designed around KDE on FreeBSD. KDE/Plasma5 has been available for Linux OS’s for well over a year, but is still not generally available on FreeBSD. It is still tucked away in the experimental “area51” repository where people are trying to get it working first. 
Lumina : As a developer with PC-BSD for a long time, and a tester from nearly the beginning of the project, I was keenly aware the “winds of change” were blowing in the open-source ecosystem. TrueOS : All of these ecosystem changes finally came to a head for us near the beginning of 2016. KDE4 was starting to deteriorate underneath us, and the FreeBSD “Release” branch would never allow us to compete with the rate of graphics driver or standards changes coming out of the Linux camp. The Rename and Next Steps With all of these changes and the lack of a clear “upgrade” path from PC-BSD to the new systems, we decided it was necessary to change the project itself (name and all). To us, this was the only way to ensure people were aware of the differences, and that TrueOS really is a different kind of project from PC-BSD. Note this was not a “hostile takeover” of the PC-BSD project by rabid FreeBSD fanatics. This was more a refocusing of the PC-BSD project into something that could ensure longevity and reliability for the foreseeable future. Does TrueOS have bugs and issues? Of course! That is the nature of “rolling” with upstream changes all the time. Not only do you always get the latest version of something (a good thing), you also find yourself on the “front line” for finding and reporting bugs in those same applications (a bad thing if you like consistency or stability). What you are also seeing is just how much “churn” happens in the open-source ecosystem at any given time. We are devoted to providing our users (and ourselves – don’t forget we use TrueOS every day too!) a stable, reliable, and secure experience. Please be patient as we continue striving toward this goal in the best way possible, not just doing what works for the moment, but the project’s future too. Robert Mustacchi: Excerpts from The Soft Ring Cycle #1 (https://www.youtube.com/watch?v=vnD10WQ2930) The author of the “Turtles on the Wire” post we featured the other week, is back with a video. 
Joyent has started a new series of lunchtime technical discussions to share information as they grow their engineering team This video focuses on the network stack, how it works, and how it relates to virtualization and multi-tenancy Basically, how the network stack on IllumOS works when you have virtual tenants, be they virtual machines or zones The video describes the many layers of the network stack, how they work together, and how they can be made to work quickly It also talks about the trade-offs between high throughput and low latency How security is enforced, so virtual tenants cannot send packets into VLANs they are not members of, or receive traffic that they are not allowed to by the administrator How incoming packets are classified, and eventually delivered to the intended destination How the system decides if it has enough available resources to process the packet, or if it needs to be dropped How interface polling works on IllumOS (a lot different than on FreeBSD) Then the last 20 minutes are about how the qemu interface of the KVM hypervisor interfaces with the network stack We look forward to seeing more of these videos as they come out *** Forcing the password gropers through a smaller hole with OpenBSD's PF queues (http://bsdly.blogspot.com/2017/04/forcing-password-gropers-through.html) While preparing material for the upcoming BSDCan PF and networking tutorial (http://www.bsdcan.org/2017/schedule/events/805.en.html), I realized that the pop3 gropers were actually not much fun to watch anymore. So I used the traffic shaping features of my OpenBSD firewall to let the miscreants inflict some pain on themselves. Watching logs became fun again. The actual useful parts of this article follow - take this as a walkthrough of how to mitigate a wide range of threats and annoyances. First, analyze the behavior that you want to defend against. 
In our case that's fairly obvious: We have a service that's getting a volume of unwanted traffic, and looking at our logs the attempts come fairly quickly with a number of repeated attempts from each source address. I've written about the rapid-fire ssh bruteforce attacks and their mitigation before (and of course it's in The Book of PF) as well as the slower kind where those techniques actually come up short. The traditional approach to ssh bruteforcers has been to simply block their traffic, and the state-tracking features of PF let you set up overload criteria that add the source addresses to the table that holds the addresses you want to block. For the system that runs our pop3 service, we also have a PF ruleset in place with queues for traffic shaping. For some odd reason that ruleset is fairly close to the HFSC traffic shaper example in The Book of PF, and it contains a queue that I set up mainly as an experiment to annoy spammers (as in, the ones that are already for one reason or the other blacklisted by our spamd). The queue is defined like this: queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300. Yes, that's right. A queue with a maximum throughput of 1 kilobit per second. I have been warned that this is small enough that the code may be unable to strictly enforce that limit due to the timer resolution in the HFSC code. But that didn't keep me from trying. Now a few small additions to the ruleset are needed for the good to put the evil to the task. We start with a table to hold the addresses we want to mess with. 
Actually, I'll add two, for reasons that will become clear later: table persist counters table persist counters The rules that use those tables are: block drop log (all) quick from pass in quick log (all) on egress proto tcp from to port pop3 flags S/SA keep state \ (max-src-conn 2, max-src-conn-rate 3/3, overload flush global, pflow) set queue spamd pass in log (all) on egress proto tcp to port pop3 flags S/SA keep state \ (max-src-conn 5, max-src-conn-rate 6/3, overload flush global, pflow) The last one lets anybody connect to the pop3 service, but any one source address can have only open five simultaneous connections and at a rate of six over three seconds. The results were immediately visible. Monitoring the queues using pfctl -vvsq shows the tiny queue works as expected: queue spamd parent rootq bandwidth 1K, max 1K qlimit 300 [ pkts: 196136 bytes: 12157940 dropped pkts: 398350 bytes: 24692564 ] [ qlength: 300/300 ] [ measured: 2.0 packets/s, 999.13 b/s ] and looking at the pop3 daemon's log entries, a typical encounter looks like this: Apr 19 22:39:33 skapet spop3d[44875]: connect from 111.181.52.216 Apr 19 22:39:33 skapet spop3d[75112]: connect from 111.181.52.216 Apr 19 22:39:34 skapet spop3d[57116]: connect from 111.181.52.216 Apr 19 22:39:34 skapet spop3d[65982]: connect from 111.181.52.216 Apr 19 22:39:34 skapet spop3d[58964]: connect from 111.181.52.216 Apr 19 22:40:34 skapet spop3d[12410]: autologout time elapsed - 111.181.52.216 Apr 19 22:40:34 skapet spop3d[63573]: autologout time elapsed - 111.181.52.216 Apr 19 22:40:34 skapet spop3d[76113]: autologout time elapsed - 111.181.52.216 Apr 19 22:40:34 skapet spop3d[23524]: autologout time elapsed - 111.181.52.216 Apr 19 22:40:34 skapet spop3d[16916]: autologout time elapsed - 111.181.52.216 here the miscreant comes in way too fast and only manages to get five connections going before they're shunted to the tiny queue to fight it out with known spammers for a share of bandwidth. 
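The tiny-queue-plus-overload pattern above generalizes to any service. Here is a minimal pf.conf sketch of the same idea aimed at ssh instead of pop3, with hypothetical macro, queue, and table names standing in for the article's own:

```pf
# Sketch only - ext_if, slowq, <blocked> and <throttled> are made-up names;
# tune the limits to whatever "acceptable behavior" means for your service.
ext_if = "em0"

queue rootq on $ext_if bandwidth 20M max 20M
queue std parent rootq bandwidth 19M default
queue slowq parent rootq bandwidth 1K max 1K qlimit 300

table <blocked> persist counters
table <throttled> persist counters

# Addresses that kept misbehaving are dropped outright
block drop log quick from <blocked>

# Throttled hosts may still connect, but through the 1 kbit queue;
# further abuse overloads them into <blocked>
pass in quick log on $ext_if proto tcp from <throttled> to port ssh \
    flags S/SA keep state \
    (max-src-conn 2, max-src-conn-rate 3/3, overload <blocked> flush global) \
    set queue slowq

# Everyone else gets modest limits; exceeding them lands in <throttled>
pass in log on $ext_if proto tcp to port ssh flags S/SA keep state \
    (max-src-conn 5, max-src-conn-rate 6/3, overload <throttled> flush global)
```

The two-table design mirrors the article's escalation: first offense moves a host into the slow queue, repeat offenses get it blocked outright. Watch the effect with pfctl -vvsq and pfctl -t throttled -T show.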
One important takeaway from this, and possibly the most important point of this article, is that it does not take a lot of imagination to retool this setup to watch for, and protect against, undesirable activity directed at essentially any network service. You pick the service and the ports it uses, then figure out which parameters determine acceptable behavior. Once you have those parameters defined, you can choose to assign offenders to a minimal queue as in this example, block them outright, redirect them to something unpleasant, or even pass them with a low probability.

64-bit inodes (ino64) Status Update and Call for Testing (https://lists.freebsd.org/pipermail/freebsd-fs/2017-April/024684.html)

Inodes are data structures corresponding to objects in a file system, such as files and directories. FreeBSD has historically used 32-bit values to identify inodes, which limits file systems to somewhat under 2^32 objects. Many modern file systems internally use 64-bit identifiers and FreeBSD needs to follow suit to properly and fully support these file systems. The 64-bit inode project, also known as ino64, started life many years ago as a project by Gleb Kurtsou (gleb@). Since then, several people have had a hand in updating it and addressing regressions, until mckusick@ picked up and updated the patch, and acted as a flag-waver.

Overview: The ino64 branch extends the basic system types ino_t and dev_t from 32-bit to 64-bit, and nlink_t from 16-bit to 64-bit.

Motivation: The main risk of the ino64 change is the uncontrolled ABI breakage.

Quirks: We handled kinfo sysctl MIBs, but other MIBs which report structures that depend on the changed types are not handled in general. It was considered that the breakage is either in the management interfaces, where we usually allow ABI slip, or is not important.

Testing procedure: The ino64 project can be tested by cloning the project branch from GitHub or by applying the patch to a working tree.

New kernel, old world.
New kernel, new world, old third-party applications.
32bit compat.
Targeted tests.
NFS server and client test
Other filesystems
Test accounting

Ports Status with ino64: A ports exp-run for ino64 is open in PR 218320.

5.1. LLVM: LLVM includes a component called Address Sanitizer, or ASAN, which tries to intercept syscalls and contains knowledge of the layout of many system structures. Since the stat and lstat syscalls were removed and several types and structures changed, this has to be reflected in the ASAN hacks.

5.2. lang/ghc: The ghc compiler and parts of the runtime are written in Haskell, which means that to compile ghc, you need a working Haskell compiler for bootstrap.

5.3. lang/rust: Rustc has a similar structure to GHC, and the same issue. The same solution of patching the bootstrap was applied.

Next Steps: The tentative schedule for the ino64 project:

2017-04-20 Post wide call for testing: Investigate and address port failures with maintainer support
2017-05-05 Request second exp-run with initial patches applied: Investigate and address port failures with maintainer support
2017-05-19 Commit to HEAD: Address post-commit failures where feasible

***

News Roundup

Sing, beastie, sing! (http://meka.rs/blog/2017/01/25/sing-beastie-sing/)

A FreeBSD digital audio workstation, or DAW for short, is now possible. At this very moment it's not all that user friendly, but you'll manage. What I want to say is that I worked on porting some of the audio apps to FreeBSD, met some other people interested in porting audio stuff, and became heavily involved with DrumGizmo, a drum sampling engine. Let me start with the basic setup. FreeBSD doesn't have hard real-time support, but it's pretty close. For the needs of audio, FreeBSD's implementation of real-time is sufficient and, in my opinion, superior to the one you can get on Linux with the RT patch (which is ugly, not supported by distributions, and breaks apps like VirtualBox).
Since the default install of FreeBSD is not concerned with real-time all that much, we have to tweak sysctl a bit, so append this to your /etc/sysctl.conf:

kern.timecounter.alloweddeviation=0
hw.usb.uaudio.buffer_ms=2  # only on -STABLE for now
hw.snd.latency=0
kern.coredump=0

So let me go through the list. The first item tells FreeBSD how many events it can aggregate (or wait for) before emitting them. The default is what it is because aggregating events saves a bit of power, and currently more laptops are running FreeBSD than DAWs. The second one is the lowest possible buffer for the USB audio driver. If you're not using USB audio, this won't change a thing. The last one has nothing to do with real-time, but when dealing with programs that consume ~3GB of RAM, dumping cores all over the place was a problem on my machine. Besides, core dumps are only useful if you know how to debug the problem, or someone is willing to do that for you. I prefer not to generate those files by default; if some app is constantly crashing, I enable dumps, run the app, crash it, and disable dumps again. I once lost 30GB in under a minute by examining 10 different DrumGizmo drumkits: each of them gave me a 3GB core file. More setup instructions follow, including jackd setup and PulseAudio via virtual_oss. With this setup I can play OSS, JACK and PulseAudio sound all at the same time, which I was not able to do on Linux.

FreeBSD 11 Unbound DNS server (https://itso.dk/?p=499)

In FreeBSD, there is a built-in DNS server called Unbound. So why would you run a local DNS server? I am in a region where internet traffic is still a bit expensive, which also implies slow links and high response times. To speed things up a little, you can run your own DNS server. It helps because every homepage you visit pulls in resources from several other domains: advertisements, site components, and links to other sites. All of these will now be cached locally on your new DNS server.
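On FreeBSD 11 the base-system resolver ships as the local_unbound service, so getting a caching resolver running takes only a couple of commands. A minimal sketch (run as root; as I understand the rc script, the first start generates the config and rewrites /etc/resolv.conf to point at 127.0.0.1, but details may vary by version):

```shell
# Enable and start the base-system Unbound resolver (sketch).
sysrc local_unbound_enable=YES    # persist the setting in /etc/rc.conf
service local_unbound start       # first start generates the config and
                                  # points /etc/resolv.conf at 127.0.0.1
drill www.freebsd.org @127.0.0.1  # verify; run it twice -- the second
                                  # answer should come from the local cache
```

drill(1) is in the FreeBSD base system, so no extra packages are needed for the check.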
In my case I use an old PC Engines Alix board for my home DNS server, but you can use almost anything: a Raspberry Pi, an old laptop or desktop, and so on, as long as it runs FreeBSD. The article goes into more detail about what commands to run and which services to start. Try it out if you are in a similar situation.

***

Why it is important that documentation and tutorials be correct and carefully reviewed (https://arxiv.org/pdf/1704.02786.pdf)

A group of researchers found that a lot of online web programming tutorials contain serious security flaws. They decided to do a research project to see how this impacts software that may have been written based on those tutorials. They used a number of simple google search terms to make a list of tutorials, and manually audited them for common vulnerabilities. They then crawled GitHub to find projects with very similar code snippets that might have been taken from those tutorials. The Web is replete with tutorial-style content on how to accomplish programming tasks. Unfortunately, even top-ranked tutorials suffer from severe security vulnerabilities, such as cross-site scripting (XSS) and SQL injection (SQLi). Assuming that these tutorials influence real-world software development, we hypothesize that code snippets from popular tutorials can be used to bootstrap vulnerability discovery at scale. To validate our hypothesis, we propose a semi-automated approach to find recurring vulnerabilities starting from a handful of top-ranked tutorials that contain vulnerable code snippets. We evaluate our approach by performing an analysis of tens of thousands of open-source web applications to check if vulnerabilities originating in the selected tutorials recur. Our analysis framework has been running on a standard PC, analyzed 64,415 PHP codebases hosted on GitHub thus far, and found a total of 117 vulnerabilities that have a strong syntactic similarity to vulnerable code snippets present in popular tutorials.
In addition to shedding light on the anecdotal belief that programmers reuse web tutorial code in an ad hoc manner, our study finds disconcerting evidence of insufficiently reviewed tutorials compromising the security of open-source projects. Moreover, our findings testify to the feasibility of large-scale vulnerability discovery using poorly written tutorials as a starting point. The researchers found 117 vulnerabilities; of these, at least 8 appear to be nearly exact copy/pastes of the tutorials that were found to be vulnerable.

***

1.3.0 Development Preview: New icon themes (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/)

As version 1.3.0 of the Lumina desktop starts getting closer to release, I want to take a couple weeks and give you all some sneak peeks at some of the changes/updates that we have been working on (and are in the process of finishing up).

New icon theme (https://lumina-desktop.org/1-3-0-development-preview-new-icon-themes/)
Material Design Light/Dark

There are a lot more icons available in the reference icon packs which we still have not gotten around to renaming yet, but this initial version satisfies all the XDG standards for an icon theme + all the extra icons needed for Lumina and its utilities + a large number of additional icons for application use. This highlights one of the big things that I love about Lumina: it gives you an interface that is custom-tailored to YOUR needs/wants - rather than expecting YOU to change your routines to accommodate how some random developer/designer across the world thinks everybody should use a computer.

Lumina Media Player (https://lumina-desktop.org/1-3-0-development-preview-lumina-mediaplayer/)

This is a small utility designed to provide the ability for the user to play audio and video files on the local system, as well as stream audio from online sources.
For now, only the Pandora internet radio service is supported via the “pianobar” CLI utility, which is an optional runtime dependency. However, we hope to gradually add new streaming sources over time. For a long time I had been using another Pandora streaming client on my TrueOS desktop, but it was very fragile with respect to underlying changes: LibreSSL versions for example. The player would regularly stop functioning for a few update cycles until a version of LibreSSL which was “compatible” with the player was used. After enduring this for some time, I was finally frustrated enough to start looking for alternatives. A co-worker pointed me to a command-line utility called “pianobar“, which was also a small client for Pandora radio. After using pianobar for a couple weeks, I was impressed with how stable it was and how little “overhead” it required with regards to extra runtime dependencies. Of course, I started thinking “I could write a Qt5 GUI for that!”. Once I had a few free hours, I started writing what became lumina-mediaplayer. I started with the interface to pianobar itself to see how complicated it would be to interact with, but after a couple days of tinkering in my spare time, I realized I had a full client to Pandora radio basically finished. 
Beastie Bits

vBSDCon CFP closes April 29th (https://easychair.org/conferences/?conf=vbsdcon2017)
EuroBSDCon CFP closes April 30th (https://2017.eurobsdcon.org/2017/03/13/call-for-proposals/)
clang(1) added to base on amd64 and i386 (http://undeadly.org/cgi?action=article&sid=20170421001933)
Theo: “Most things come to an end, sorry.” (https://marc.info/?l=openbsd-misc&m=149232307018311&w=2)
ASLR, PIE, NX, and other capital letters (https://www.dragonflydigest.com/2017/04/24/19609.html)
How SSH got port number 22 (https://www.ssh.com/ssh/port)
Netflix Serving 90Gb/s+ From Single Machines Using Tuned FreeBSD (https://news.ycombinator.com/item?id=14128637)
Compressed zfs send / receive lands in FreeBSD HEAD (https://svnweb.freebsd.org/base?view=revision&revision=317414)

***

Feedback/Questions

Steve - FreeBSD Jobs (http://dpaste.com/3QSMYEH#wrap)
Mike - CuBox i4Pro (http://dpaste.com/0NNYH22#wrap)
Steve - Year of the BSD Desktop? (http://dpaste.com/1QRZBPD#wrap)
Brad - Configuration Management (http://dpaste.com/2TFV8AJ#wrap)

***

190: The Moore You Know

April 19, 2017 2:10:59 94.31 MB Downloads: 0

This week, we look forward with the latest OpenBSD release, look back with Dennis Ritchie’s paper on the evolution of Unix Time Sharing, and have an interview with Kris. This episode was brought to you by

OpenBSD 6.1 RELEASED (http://undeadly.org/cgi?action=article&sid=20170411132956)

Mailing list post (https://marc.info/?l=openbsd-announce&m=149191716921690&w=2)

We are pleased to announce the official release of OpenBSD 6.1. This is our 42nd release.

New/extended platforms:
New arm64 platform, using clang(1) as the base system compiler.
The loongson platform now supports systems with Loongson 3A CPU and RS780E chipset.
The following platforms were retired: armish, sparc, zaurus

New vmm(4)/vmd(8)
IEEE 802.11 wireless stack improvements
Generic network stack improvements
Installer improvements
Routing daemons and other userland network improvements
Security improvements
dhclient(8)/dhcpd(8)/dhcrelay(8) improvements
Assorted improvements
OpenSMTPD 6.0.0
OpenSSH 7.4
LibreSSL 2.5.3
mandoc 1.14.1

***

Fuzz Testing OpenSSH (http://vegardno.blogspot.ca/2017/03/fuzzing-openssh-daemon-using-afl.html)

Vegard Nossum writes a blog post explaining how to fuzz OpenSSH using AFL. It starts by compiling AFL and SSH with LLVM to get extra instrumentation that makes the fuzzing process better and faster. Sandboxing, PIE, and other features are disabled to increase debuggability, and to try to make breaking SSH easier. Privsep is also disabled, because when AFL does make SSH crash, the child process crashing causes the parent process to exit normally, and AFL then doesn’t realize that a crash has happened.
A one-line patch disables the privsep feature for the purposes of testing. A few other features are disabled to make testing easier (disabling replay attack protection allows the same inputs to be reused many times), and faster:

the local arc4random_buf() is patched to return a buffer of zeros
disabling CRC checks
disabling MAC checks
disabling encryption (allow the NULL cipher for everything)
adding a call to __AFL_INIT(), to enable “deferred forkserver mode”
disabling closefrom()
“Skipping expensive DH/curve and key derivation operations”

Then, you can finally get around to writing some test cases. The steps are all described in detail. In one day of testing, the author found a few NULL dereferences that have since been fixed. Maybe you can think of some other code paths through SSH that should be tested, or want to test another daemon.

***

Getting OpenBSD running on Raspberry Pi 3 (http://undeadly.org/cgi?action=article&sid=20170409123528)

Ian Darwin writes in about his work deploying the arm64 platform on the Raspberry Pi 3: So I have this empty white birdhouse-like thing in the yard, open at the front. It was intended to house the wireless remote temperature sensor from a low-cost weather station, which had previously been mounted on a dark-colored wall of the house [...]. But when I put the sensor into the birdhouse, the signal is too weak for the weather station to receive it (the mounting post was put in place by a previous owner of our property, and is set deeply in concrete). So the next plan was to pop in a tiny OpenBSD computer with a uthum(4) temperature sensor and stream the temperature over WiFi. The Raspberry Pi computers are interesting in their own way: intending to bring low-cost computing to everybody, they take shortcuts and omit things that you'd expect on a laptop or desktop. They aren't too bright on their own: there's very little smarts in the board compared to the "BIOS" and later firmware on conventional systems.
Some of the "smarts" are only available as binary files. This was part of the reason that our favorite OS never came to the Pi Party for the original rpi, and didn't quite arrive for the rpi2. With the rpi3, though, there is enough availability that our devs were able to make it boot. Some limitations remain, though: if you want to build your own full release, you have to install the dedicated raspberrypi-firmware package from the ports tree. And, the boot disks have to have several extra files on them - this is set up on the install sets, but you should be careful not to mess with these extra files until you know what you're doing! But wait! Before you read on, please note that, as of April 1, 2017, this platform boots up but is not yet ready for prime time:

there's no driver for SD/MMC, but that's the only thing the hardware can level-0 boot from, so you need both the uSD card and a USB disk, at least while getting started;
there is no support for the built-in WiFi (a Broadcom BCM43438 SDIO 802.11), so you have to use wired Ethernet or a USB WiFi dongle (for my project an old MSI that shows up as ural(4) seems to work fine);
the HDMI driver isn't used by the kernel (if a monitor is plugged in uBoot will display its messages there), so you need to set up cu with a 3V serial cable, at least for initial setup;
the ports tree isn't ready to cope with the base compiler being clang yet, so packages are "a thing of the future"

But wait - there's more! The "USB disk" can be a USB thumb drive, though they're generally slower than a "real" disk. My first forays used a Kingston DTSE9, the hardy little steel-cased version of the popular DataTraveler line. I was able to do the install, and boot it, once (when I captured the dmesg output shown below). After that, it failed - the boot process hung with the ever-unpopular "scanning usb for storage devices..." message. I tried the whole thing again with a second DTSE9, and with a 32GB plastic-cased DataTraveler. Same results.
After considerable wasted time, I found a post on RPI's own site, dating back to the early days of the Pi 3, in which they admit that they took shortcuts in developing the firmware, and it just can't be made to work with the Kingston DataTraveler! Not having any of the "approved" devices, and not living around the corner from a computer store, I switched to a Sabrent USB dock with a 320GB Western Digital disk, and it's been rock solid. Too big and energy-hungry for the final project, but enough to show that the rpi3 can be solid with the right (solid-state) disk. And fast enough to build a few simple ports - though a lot will not build yet. I then installed OpenBSD onto a “PNY” brand thumb drive and found it solid - in fact I populated it by dd’ing from one of the DataTraveler drives, so they’re not at fault. Check out the full article for detailed setup instructions.

***

Dennis M. Ritchie’s Paper: The Evolution of the Unix Time Sharing System (http://www.read.seas.harvard.edu/~kohler/class/aosref/ritchie84evolution.pdf)

From the abstract: This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system. During the past few years, the Unix operating system has come into wide use, so wide that its very name has become a trademark of Bell Laboratories. Its important characteristics have become known to many people. It has suffered much rewriting and tinkering since the first publication describing it in 1974 [1], but few fundamental changes. However, Unix was born in 1969 not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical and social history of the evolution of the system.
High level document structure:

Origins
The PDP-7 Unix file system
Process control
IO Redirection
The advent of the PDP-11
The first PDP-11 system
Pipes
High-level languages
Conclusion

One of the comforting things about old memories is their tendency to take on a rosy glow. The programming environment provided by the early versions of Unix seems, when described here, to be extremely harsh and primitive. I am sure that if forced back to the PDP-7 I would find it intolerably limiting and lacking in conveniences. Nevertheless, it did not seem so at the time; the memory fixes on what was good and what lasted, and on the joy of helping to create the improvements that made life better. In ten years, I hope we can look back with the same mixed impression of progress combined with continuity.

Interview - Kris Moore - kris@trueos.org (mailto:kris@trueos.org) | @pcbsdkris (https://twitter.com/pcbsdkris)
Director of Engineering at iXSystems
FreeNAS

News Roundup

Compressed zfs send / receive now in FreeBSD’s vendor area (https://svnweb.freebsd.org/base?view=revision&revision=316894)

Andriy Gapon committed a whole lot of ZFS updates to FreeBSD’s vendor area. This feature takes advantage of the new compressed ARC feature (blocks that are compressed on disk remain compressed in ZFS’s RAM cache) to use the compressed blocks when doing ZFS replication. Previously, blocks were uncompressed, sent (usually over the network), then recompressed on the other side. This is rather wasteful, and can make the process slower, not just because of the CPU time wasted decompressing/recompressing the data, but because more data has to be sent over the network. This caused many users to end up doing: zfs send | xz -T0 | ssh unxz | zfs recv, or similar, to compress the data before sending it over the network. With this new feature, zfs send with the new -c flag will transmit the already-compressed blocks instead.
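In shell terms, the -c flag replaces the manual compression pipeline. A sketch of both forms, with made-up pool, dataset, and host names:

```shell
# Old workaround: blocks are decompressed on read, recompressed with xz
# in transit, then undone and recompressed again on the receiving side.
zfs send tank/data@snap | xz -T0 | ssh backuphost "unxz | zfs recv backup/data"

# With compressed send: blocks leave the pool exactly as stored on disk,
# so no CPU is wasted and less data crosses the wire.
zfs send -c tank/data@snap | ssh backuphost "zfs recv backup/data"
```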
This change also adds longopts versions of all of the zfs send flags, making them easier to understand when written in shell scripts. A lot of fixes, man page updates, etc. from upstream OpenZFS. Thanks to everyone who worked on these fixes and features! We’ll announce when these have been committed to head for testing.

***

Granting privileges using the FreeBSD MAC framework (https://mysteriouscode.io/blog/granting-privileges-using-mac-framework/)

The MAC (Mandatory Access Control) framework allows finer-grained permissions than the standard UNIX permissions that exist in the base system. FreeBSD’s kernel provides quite a sophisticated privilege model that extends the traditional UNIX user-and-group one. Here I’ll show how to leverage it to grant access to specific privileges to a group of non-root users. mac(9) allows creating pluggable modules with policies that can extend existing base system security definitions. struct mac_policy_ops consists of many entry points that we can use to amend the behaviour. This time, I wanted to grant the privilege to change realtime priority to a selected group. While the Linux kernel lets you specify a named group, FreeBSD doesn’t have such an ability, hence I created this very simple policy. The privilege check can be extended using two user-supplied functions: priv_check and priv_grant. The first one can be used to further restrict existing privileges, i.e. you can disallow some specific priv to be used in jails, etc. The second one is used to explicitly grant extra privileges not available for the target in the base configuration. The core of the mac_rtprio module is dead simple. I defined a sysctl tree with two OIDs: enable (an on/off switch for the policy) and gid (the GID the target has to be a member of), then I specified our custom version of mpo_priv_grant called rtprio_priv_grant. The body of my granting function is even simpler: if the policy is disabled or the privilege being checked is not PRIV_SCHED_RTPRIO, we simply skip and return EPERM.
If the user is a member of the designated group, we return 0, which allows the action: the target can change realtime privileges.

Another useful thing the MAC framework can be used to grant to non-root users: PortACL: the ability to bind to TCP/UDP ports less than 1024, which is usually restricted to root. Some other uses for the MAC framework are discussed in The FreeBSD Handbook (https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/mac.html). However, there are lots more, and we would really like to see more tutorials and documentation on using MAC to make more secure servers, while allowing the few specific things that normally require root access.

***

The Story of the PING Program (http://ftp.arl.army.mil/~mike/ping.html)

This is from the homepage of Mike Muuss: Yes, it's true! I'm the author of ping for UNIX. Ping is a little thousand-line hack that I wrote in an evening which practically everyone seems to know about. :-) I named it after the sound that a sonar makes, inspired by the whole principle of echo-location. In college I'd done a lot of modeling of sonar and radar systems, so the "Cyberspace" analogy seemed very apt. It's exactly the same paradigm applied to a new problem domain: ping uses timed IP/ICMP ECHO_REQUEST and ECHO_REPLY packets to probe the "distance" to the target machine. My original impetus for writing PING for 4.2a BSD UNIX came from an offhand remark in July 1983 by Dr. Dave Mills while we were attending a DARPA meeting in Norway, in which he described some work that he had done on his "Fuzzball" LSI-11 systems to measure path latency using timed ICMP Echo packets. In December of 1983 I encountered some odd behavior of the IP network at BRL. Recalling Dr. Mills' comments, I quickly coded up the PING program, which revolved around opening an ICMP-style SOCK_RAW AF_INET Berkeley-style socket(). The code compiled just fine, but it didn't work -- there was no kernel support for raw ICMP sockets!
Incensed, I coded up the kernel support and had everything working well before sunrise. Not surprisingly, Chuck Kennedy (aka "Kermit") had found and fixed the network hardware before I was able to launch my very first "ping" packet. But I've used it a few times since then. grin If I'd known then that it would be my most famous accomplishment in life, I might have worked on it another day or two and added some more options. The folks at Berkeley eagerly took back my kernel modifications and the PING source code, and it's been a standard part of Berkeley UNIX ever since. Since it's free, it has been ported to many systems since then, including Microsoft Windows95 and WindowsNT. In 1993, ten years after I wrote PING, the USENIX association presented me with a handsome scroll, pronouncing me a Joint recipient of The USENIX Association 1993 Lifetime Achievement Award presented to the Computer Systems Research Group, University of California at Berkeley 1979-1993. ``Presented to honor profound intellectual achievement and unparalleled service to our Community. At the behest of CSRG principals we hereby recognize the following individuals and organizations as CSRG participants, contributors and supporters.'' Wow! The best ping story I've ever heard was told to me at a USENIX conference, where a network administrator with an intermittent Ethernet had linked the ping program to his vocoder program, in essence writing:

ping goodhost | sed -e 's/.*/ping/' | vocoder

He wired the vocoder's output into his office stereo and turned up the volume as loud as he could stand. The computer sat there shouting "Ping, ping, ping..." once a second, and he wandered through the building wiggling Ethernet connectors until the sound stopped. And that's how he found the intermittent failure.
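The rewriting step in that pipeline is plain sed: every line that ping emits is replaced by the word "ping" before it reaches the vocoder. The same transformation, fed a canned line instead of live ping output:

```shell
# Feed one fake ping line through the rewrite; with a live ping this is
# exactly: ping goodhost | sed -e 's/.*/ping/' | vocoder
printf '64 bytes from 192.0.2.1: icmp_seq=0 ttl=64 time=0.5 ms\n' \
  | sed -e 's/.*/ping/'
```

Any text-to-speech tool in place of the (long-gone) vocoder program reproduces the trick today.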
FreeBSD: /usr/local/lib/libpkg.so.3: Undefined symbol "utimensat" (http://glasz.org/sheeplog/2017/02/freebsd-usrlocalliblibpkgso3-undefined-symbol-utimensat.html)

The internet will tell you that, of course, 10.2 is EOL, that packages are being built for 10.3 by now, and that you'd better upgrade to the latest version of FreeBSD. While all of this is true, and running the latest versions is generally good advice, in most cases it is unfeasible to do an entire OS upgrade just to be able to install a package. Points out the ABI variable being used in /usr/local/etc/pkg/repos/FreeBSD.conf. Now, if you have 10.2 installed and 10.3 is the current latest FreeBSD version, this url will point to packages built for 10.3, resulting in the problem that, when running pkg upgrade pkg, it’ll go ahead and install the latest version of pkg built for 10.3 onto your 10.2 system. Yikes! FreeBSD 10.3 and pkgng broke the ABI by introducing new symbols, like utimensat. The solution: have a look at the actual repo url http://pkg.FreeBSD.org/FreeBSD:10:amd64… there are repos for each release! Instead of going through the tedious process of upgrading FreeBSD you just need to:

Use a repo url that fits your FreeBSD release
Update the package cache: pkg update
Downgrade pkgng (in case you accidentally upgraded it already): pkg delete -f pkg; pkg install -y pkg
Install your package

There you go. Don’t fret.
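Concretely, the fix is a drop-in repository config that pins the release-specific URL. A sketch, assuming a 10.2/amd64 system and that the per-release directory is named release_2 (browse http://pkg.FreeBSD.org/ to confirm the exact directory name); it writes to /tmp for illustration, while the real location is /usr/local/etc/pkg/repos/FreeBSD.conf:

```shell
# Pin pkg(8) to the repo built for this release rather than "latest".
# The release_2 directory name and the /tmp path are assumptions for
# illustration -- verify both before using this on a real system.
cat > /tmp/FreeBSD.conf <<'EOF'
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/FreeBSD:10:amd64/release_2",
  mirror_type: "srv",
  enabled: yes
}
EOF
cat /tmp/FreeBSD.conf
```

With the file in place under /usr/local/etc/pkg/repos/, pkg update refreshes the catalogue against the pinned repository.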
But upgrade your OS soon ;)

Beastie Bits

CPU temperature collectd report on NetBSD (https://imil.net/blog/2017/01/22/collectd_NetBSD_temperature/)
Booting FreeBSD 11 with NVMe and ZFS on AMD Ryzen (https://www.servethehome.com/booting-freebsd-11-nvme-zfs-amd-ryzen/)
BeagleBone Black Tor relay (https://torbsd.github.io/blog.html#busy-bbb)
FreeBSD - Disable in-tree GDB by default on x86, mips, and powerpc (https://reviews.freebsd.org/rS317094)
CharmBUG April Meetup (https://www.meetup.com/CharmBUG/events/238218742/)
The origins of XXX as FIXME (https://www.snellman.net/blog/archive/2017-04-17-xxx-fixme/)

***

Feedback/Questions

Felis - L2ARC (http://dpaste.com/2APJE4E#wrap)
Gabe - FreeBSD Server Install (http://dpaste.com/0BRJJ73#wrap)
FEMP Script (http://dpaste.com/05EYNJ4#wrap)
Scott - FreeNAS & LAGG (http://dpaste.com/1CV323G#wrap)
Marko - Backups (http://dpaste.com/3486VQZ#wrap)

***

189: Codified Summer

April 12, 2017 2:33:24 92.04 MB Downloads: 0

This week on the show we interview Wendell from Level1Techs, cover Google Summer of Code on the different BSD projects, cover YubiKey usage, dive into how NICs work & This episode was brought to you by

Headlines

Google summer of code for BSDs

FreeBSD (https://www.freebsd.org/projects/summerofcode.html)
FreeBSD's existing list of GSoC Ideas for potential students (https://wiki.freebsd.org/SummerOfCodeIdeas)

FreeBSD/Xen: import the grant-table bus_dma(9) handlers from OpenBSD
Add support for usbdump file-format to wireshark and vusb-analyzer
Write a new boot environment manager
Basic smoke test of all base utilities
Port OpenBSD's pf testing framework and tests
Userspace Address Space Annotation
zstandard integration in libstand
Replace mergesort implementation
Test Kload (kexec for FreeBSD)
Kernel fuzzing suite
Integrate MFSBSD into the release building tools
NVMe controller emulation for bhyve
Verification of bhyve's instruction emulation
VGA emulation improvements for bhyve
audit framework test suite
Add more FreeBSD testing to Xen osstest
Lua in bootloader
POSIX compliance testing framework
coreclr: add Microsoft's coreclr and corefx to the Ports tree.
NetBSD (https://wiki.netbsd.org/projects/gsoc/)

Kernel-level projects
Medium:
ISDN NT support and Asterisk integration
LED/LCD Generic API
NetBSD/azure -- Bringing NetBSD to Microsoft Azure
OpenCrypto swcrypto(4) enhancements
Scalable entropy gathering
Userland PCI drivers
Hard:
Real asynchronous I/O
Parallelize page queues
Tickless NetBSD with high-resolution timers

Userland projects
Easy:
Inetd enhancements -- Add new features to inetd
Curses library automated testing
Medium:
Make Anita support additional virtual machine systems
Create an SQL backend and statistics/query page for ATF test results
Light weight precision user level time reading
Query optimizer for find(1)
Port launchd
Secure-PLT - supporting RELRO binaries
Sysinst alternative interface
Hard:
Verification tool for NetBSD32

pkgsrc projects
Easy:
Version control config files
Spawn support in pkgsrc tools
Authentication server meta-package
Medium:
pkgin improvements
Unify standard installation tasks
Hard:
Add dependency information to binary packages
Tool to find dependencies precisely

LLVM (http://llvm.org/OpenProjects.html#gsoc17)

Fuzzing the Bitcode reader
Description of the project: The optimizer is 25-30% slower when debug info are enabled, it'd be nice to track all the places where we don't do a good job about ignoring them!

Extend clang AST to provide information for the type as written in template instantiations.
Description of the project: When instantiating a template, the template arguments are canonicalized before being substituted into the template pattern. Clang does not preserve type sugar when subsequently accessing members of the instantiation. Clang should "re-sugar" the type when performing member access on a class template specialization, based on the type sugar of the accessed specialization.

Shell auto-completion support for clang.
Bash and other shells support typing a partial command and then automatically completing it for the user (or at least providing suggestions how to complete) when pressing the tab key. This is usually only supported for popular programs such as package managers (e.g. pressing tab after typing "apt-get install late" queries the APT package database and lists all packages that start with "late"). As of now clang's frontend isn't supported by any common shell. Clang-based C/C++ diff tool. Description of the project: Every developer has to interact with diff tools daily. The algorithms are usually based on detecting "longest common subsequences", which is agnostic to the file type content. A tool that would understand the structure of the code may provide a better diff experience by being robust against, for example, clang-format changes. Find dereference of pointers. Description of the project: Find dereference of pointer before checking for nullptr. Warn if virtual calls are made from constructors or destructors. Description of the project: Implement a path-sensitive checker that warns if virtual calls are made from constructors and destructors, which is not valid in case of pure virtual calls and could be a sign of user error in non-pure calls. Improve Code Layout Description of the project: The goal for the project is trying to improve the layout/performances of the generated executable. The primary object format considered for the project is ELF but this can be extended to other object formats. The project will touch both LLVM and lld. Why Isn’t OpenBSD in Google Summer of Code 2017? 
(http://marc.info/?l=openbsd-misc&m=149119308705465&w=2) Hacker News Discussion Thread (https://news.ycombinator.com/item?id=14020814) Turtles on the Wire: Understanding How the OS Uses the Modern NIC (http://dtrace.org/blogs/rm/2016/09/15/turtles-on-the-wire-understanding-how-the-os-uses-the-modern-nic/) The Simple NIC MAC Address Filters and Promiscuous Mode Problem: The Single Busy CPU A Swing and a Miss Nine Rings for Packets Doomed to be Hashed Problem: Density, Density, Density A Brief Aside: The Virtual NIC Always Promiscuous? The Classification Challenge Problem: CPUs are too ‘slow’ Problem: The Interrupts are Coming in too Hot Solution One: Do Less Work Solution Two: Turn Off Interrupts Recapping Future Directions and More Reading Make Dragonfly BSD great again! (http://akat1.pl/?id=3) Recently I spent some time reading Dragonfly BSD code. While doing so I spotted a vulnerability in the sysvsem subsystem that lets a user point to any piece of memory and write data through it (including the kernel space). This can be turned into execution of arbitrary code in the kernel context, and by exploiting this, we're gonna make Dragonfly BSD great again! Dragonfly BSD is a BSD system which originally comes from the FreeBSD project. In 2003 Matthew Dillon forked code from the 4.x branch of FreeBSD and started a new flavour. I thought of Dragonfly BSD as just another fork, but during EuroBSDCon 2015 I accidentally saw the talk about the graphical stack in Dragonfly BSD. I confused rooms, but it was too late to escape as I was sitting in the middle of a row, and the exit seemed light years away from me. :-) Anyway, this talk was a sign to me that it's not just a niche of a niche of a niche of a niche operating system. I recommend spending a few minutes of your precious time to check out the HAMMER file system, Dragonfly's approach to MP, process snapshots and other cool features that it offers. 
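The bug class described here can be modeled in a few lines of userland C. This is a hypothetical sketch, not DragonFly's actual kernel code: `buggy_ipc_stat()` and `fake_semid_ds` are illustrative names, and a global array stands in for sensitive kernel memory. The point is that copying a structure to a caller-supplied pointer with a plain memory copy, instead of a validated copyout(), gives the caller an arbitrary write:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the bug class in the DragonFly semctl(2) path:
 * the kernel bcopy()'d a semid_ds structure straight to arg->buf, a
 * pointer fully controlled by the caller, instead of using copyout(),
 * which would have verified the destination lies in user space. */

struct fake_semid_ds { long seq; long perm; };

/* Stand-in for any sensitive kernel memory, e.g. the ostype string
 * or a table of function pointers used by the open() syscall. */
static long sensitive_kernel_data[2] = { 0x1111, 0x2222 };

/* Buggy pattern: no check that dst is a user-space address, so the
 * caller can aim the copy anywhere, including at kernel globals. */
static void buggy_ipc_stat(void *dst)
{
    struct fake_semid_ds ds = { 42, 0600 };
    memcpy(dst, &ds, sizeof ds);   /* arbitrary write on the attacker's behalf */
}
```

Passing the address of the "kernel" global as the user buffer overwrites it with attacker-chosen structure contents, which is exactly how the exploit rewrites ostype and credential data.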
The Wikipedia article is a good starter. With the exploit, they are able to change the name of the operating system back to FreeBSD, and escalate from an unprivileged user to root. The bug itself is located in the semctl(2) system call implementation. bcopy(3) in line 385 copies a semid_ds structure to memory pointed to by arg->buf; this pointer is fully controlled by the user, as it's one of the syscall's arguments. So the bad thing here is that we can copy things to an arbitrary address, but we have no idea what we copy yet. This code was introduced by wrongly merging code from the FreeBSD project; bah, bugs happen. Using this access, the example code shows how to overwrite the function pointers in the kernel used for the open() syscall, and how to overwrite the ostype global, changing the name of the operating system. In the second example, the reference to the credentials of the user trying to open a file is used to overwrite that data, making the user root. The bug was fixed in an uber-fast manner (within a few hours!) by Matthew Dillon; version 4.6.1, released shortly after that, seems to be safe. In case you care, you know what to do! Thanks to Mateusz Kocielski for the detailed post, and finding the bug *** Interview - Wendell - wendell@level1techs.com (mailto:wendell@level1techs.com) / @tekwendell (https://twitter.com/tekwendell) Host of Level1Techs website, podcast and YouTube channel News Roundup Using yubikeys everywhere (http://www.tedunangst.com/flak/post/using-yubikeys-everywhere) Ted Unangst is back, with an interesting post about Yubikeys Everybody is getting real excited about yubikeys recently, so I figured I should get excited, too. I have so far resisted two factor authorizing everything, but this seemed like another fun experiment. There’s a lot written about yubikeys and how you should use one, but nothing I’ve read answered a few of the specific questions I had. To begin with, I ordered two yubikeys. One regular sized 4 and one nano. 
I wanted to play with different form factors to see which is better for various uses, and I wanted to test having a key and a backup key. Everybody always talks about having one yubikey. And then if you lose it, terrible things happen. Can this problem be alleviated with two keys? I’m also very curious what happens when I try to login to a service with my phone after enabling U2F. We’ve got three computers (and operating systems) in the mix, along with a number of (mostly web) services. Wherever possible, I want to use a yubikey both to login to the computer and to authorize myself to remote services. I started my adventure on my chromebook. Ultimate goal would be to use the yubikey for local logins. Either as a second factor, or as an alternative factor. First things first and we need to get the yubikey into the account I use to sign into the chromebook. Alas, there is apparently no way to enroll only a security key for a Google account. Every time I tried, it would ask me for my phone number. That is not what I want. Zero stars. Giving up on protecting the chromebook itself, at least maybe I can use it to enable U2F with some other sites. U2F is currently limited to Chrome, but it sounds like everything I want. Facebook signup using U2F was pretty easy. Go to account settings, security subheading, add the device. Tap the button when it glows. Key added. Note that it’s possible to add a key without actually enabling two factor auth, in which case you can still login with only a password, but no way to login with no password and only a USB key. Logged out to confirm it would check the key, and everything looked good, so I killed all my other active sessions. Now for the phone test. Not quite as smooth. Tried to login, the Facebook app then tells me it has sent me an SMS and to enter the code in the box. But I don’t have a phone number attached. I’m not getting an SMS code. Meanwhile, on my laptop, I have a new notification about a login attempt. 
Follow the prompts to confirm it’s me and permit the login. This doesn’t have any effect on the phone, however. I have to tap back, return to the login screen, and enter my password again. This time the login succeeds. So everything works, but there are still some rough patches in the flow. Ideally, the phone would more accurately tell me to visit the desktop site, and then automatically proceed after I approve. (The messenger app crashed after telling me my session had expired, but upon restarting it was able to borrow the Facebook app credentials and I was immediately logged back in.) Let’s configure Dropbox next. Dropbox won’t let you add a security key to an account until after you’ve already set up some other mobile authenticator. I already had the Duo app on my phone, so I picked that, and after a short QR scan, I’m ready to add the yubikey. So the key works to access Dropbox via Chrome. Accessing Dropbox via my phone or Firefox requires entering a six digit code. No way to use a yubikey in a three-legged configuration. I don’t use Github, but I know they support two factors, so let’s try them next. Very similar to Dropbox. In order to set up a key, I must first set up an authenticator app. This time I went with Yubico’s own desktop authenticator. Instead of scanning the QR code, type in some giant number (on my Windows laptop), and it spits out an endless series of six digit numbers, but only while the yubikey is inserted. I guess this is kind of what I want, although a three pound yubikey is kind of unwieldy. As part of my experiment, I noticed that Dropbox verifies passwords before even looking at the second auth. I have a feeling that they should be checked at the same time. No sense allowing my password guessing attack to proceed while I plot how to steal someone’s yubikey. 
In a sense, the yubikey should serve as a salt, preventing me from mounting such an attack until I have it, thus creating a race where the victim notices the key is gone and revokes access before I learn the password. If I know the password, the instant I grab the key I get access. Along similar lines, I was able to complete a password reset without entering any kind of secondary code. Having my phone turn into a second factor is a big part of what I’m looking to avoid with the yubikey. I’d like to be able to take my phone with me, logged into some sites but not all, and unable to login to the rest. All these sites that require using my phone as a mobile authenticator are making that difficult. I bought the yubikey because it was cheaper than buying another phone! Using the Yubico desktop authenticator seems the best way around that. The article also provides instructions for configuring the Yubikey on OpenBSD. A few notes about OTP. As mentioned, the secret key is the real password. It’s stored on whatever laptop or server you login to. Meaning any of those machines can take the key and use it to login to any other machine. If you use the same yubikey to login to both your laptop and a remote server, your stolen laptop can trivially be used to login to the server without the key. Be mindful of that when setting up multiple machines. Also, the OTP counter isn’t synced between machines in this setup, which allows limited replay attacks. Ted didn’t switch his SSH keys to the Yubikey, because it doesn’t support ED25519, and he just finished rotating all of his keys and doesn’t want to do it again. I did most of my experimenting with the larger yubikey, since it was easier to move between machines. For operations involving logging into a web site, however, I’d prefer the nano. It’s very small, even smaller than the tiniest wireless mouse transceivers I’ve seen. 
So small, in fact, I had trouble removing it because I couldn’t find anything small enough to fit through the tiny loop. But probably a good thing. Most other micro USB gadgets stick out just enough to snag when pushing a laptop into a bag. Not the nano. You lose a port, but there’s really no reason to ever take it out. Just leave it in, and then tap it whenever you login to the tubes. It would not be a good choice for authenticating to the local machine, however. The larger device, sized to fit on a keychain, is much better for that. It is possible to use two keys as backups. Facebook and Dropbox allow adding two U2F keys. This is perhaps a little tiresome if there’s lots of sites, as I see no way to clone a key. You have to login to every service. For challenge response and OTP, however, the personalization tool makes it easy to generate lots of yubikeys with the same secrets. On the other hand, a single device supports an infinite number of U2F sites. The programmable interfaces like OTP are limited to only two slots, and the first is already used by the factory OTP setup. What happened to my vlan (http://www.grenadille.net/post/2017/02/13/What-happened-to-my-vlan) A long-term goal of the effort I'm driving to unlock OpenBSD's Network Stack is obviously to increase performance. So I'd understand if you find it confusing when some of our changes introduce performance regressions. It is just really hard to do incremental changes without introducing temporary regressions. But as much as security is a process, improving performance is also a process. Recently markus@ told me that vlan(4) performance dropped in recent releases. He had some ideas why but he couldn't provide evidence. So what really happened? Hrvoje Popovski was kind enough to help me with some tests. He first confirmed that on his Xeon box (E5-2643 v2 @ 3.50GHz), forwarding performance without pf(4) dropped from 1.42Mpps to 880Kpps when using vlan(4) on both interfaces. 
Together vlan_input() and vlan_start() represent 25% of the time CPU1 spends processing packets. This is not exactly between 33% and 50% but it is close enough. The assumption we made earlier is certainly too simple. If we compare the amount of work done in process context, represented by if_input_process(), we clearly see that half of the CPU time is not spent in ether_input(). I'm not sure how this is related to the measured performance drop. It is actually hard to tell since packets are currently being processed in 3 different contexts. One of the arguments mikeb@ raised when we discussed moving everything into a single context is that it is simpler to analyse and hopefully make it scale. With some measurements, a couple of nice pictures, a bit of analysis and some educated guesses we are now in a position to say that the performance impact observed with vlan(4) is certainly due to the pseudo-driver itself. A decrease of 30% to 50% is not what I would expect from such a pseudo-driver. I originally heard that the reason for this regression was the use of SRP, but by looking at the profiling data it seems to me that the queuing API is the problem. In the graph above the CPU time spent in if_input() and if_enqueue() from vlan(4) is impressive. Remember, in the case of vlan(4) these operations are done per packet! When if_input() was introduced, the queuing API did not exist and putting/taking a single packet on/from an interface queue was cheap. Now it requires a mutex per operation, which in the case of packets received and sent on vlan(4) means grabbing three mutexes per packet. I still can't say if my analysis is correct or not, but at least it could explain the decrease observed by Hrvoje when testing multiple vlan(4) configurations. vlan_input() takes one mutex per packet, so it decreases the number of forwarded packets by ~100Kpps on this machine, while vlan_start(), taking two mutexes, decreases it by ~200Kpps. 
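The cost model in that analysis can be sketched as a toy in userland C. This is not OpenBSD kernel code: the mutex, `ifq_op()` and `forward_one_packet()` are illustrative stand-ins. It only demonstrates the arithmetic the article describes: one mutex acquisition on the input side plus two on the output side means three lock/unlock pairs for every packet crossing vlan(4):

```c
#include <assert.h>

/* Toy model (not OpenBSD code) of the per-packet locking described in
 * the article: with the mutex-protected queuing API, a packet crossing
 * vlan(4) pays one mutex in vlan_input() and two in vlan_start(). */

static int q_mtx;                   /* stand-in for a kernel mutex     */
static unsigned long lock_ops;      /* acquisitions, our cost metric   */

static void mtx_enter(int *m) { assert(*m == 0); *m = 1; lock_ops++; }
static void mtx_leave(int *m) { assert(*m == 1); *m = 0; }

static void ifq_op(void)            /* one enqueue or dequeue          */
{
    mtx_enter(&q_mtx);
    /* ... put or take a single packet on/from the interface queue ... */
    mtx_leave(&q_mtx);
}

static void forward_one_packet(void)
{
    ifq_op();                       /* vlan_input(): one mutex         */
    ifq_op();                       /* vlan_start(): two mutexes       */
    ifq_op();
}
```

At 880Kpps that is roughly 2.6 million uncontended lock round-trips per second spent on queue bookkeeping alone, which is why batching packets per lock acquisition is the obvious follow-up.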
An interesting analysis of the routing performance regression on OpenBSD I have asked Olivier Cochard-Labbe about doing a similar comparison of routing performance on FreeBSD when a vlan pseudo interface is added to the forwarding path *** NetBSD: the first BSD introducing a modern process plugin framework in LLDB (https://blog.netbsd.org/tnf/entry/netbsd_the_first_bsd_introducing) Clean up in ptrace(2) ATF tests We have created some maintenance burden for the current ptrace(2) regression tests. The main issues with them are code duplication and the splitting between generic (Machine Independent) and port-specific (Machine Dependent) test files. I've eliminated some of the ballast and merged tests into the appropriate directory tests/lib/libc/sys/. The old location (tests/kernel) was a violation of the tests/README recommendation PTRACE_FORK on !x86 ports Along with the motivation from Martin Husemann we have investigated the issue with PTRACE_FORK ATF regression tests. It was discovered that these tests aren't functional on evbarm, alpha, shark, sparc and sparc64 and likely on other non-x86 ports. We have discovered that there is a missing SIGTRAP emitted from the child, during the fork(2) handshake. The proper order of operations is as follows: parent emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forkee child emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forker Only the x86 ports were emitting the second SIGTRAP signal. PT_SYSCALL and PT_SYSCALL_EMU With the addition of PT_SYSCALL_EMU we can implement a virtual kernel syscall monitor. It means that we can fake syscalls within a debugger. In order to achieve this feature, we need to use the PT_SYSCALL operation, catch SIGTRAP with si_code=TRAP_SCE (syscall entry), call PT_SYSCALL_EMU and perform an emulated userspace syscall that would have been done by the kernel, followed by calling another PT_SYSCALL with si_code=TRAP_SCX. 
What has been done in LLDB A lot of work has been done with the goal to get breakpoints functional. This target penetrated bugs in the existing local patches and unveiled missing features required to be added. My initial test was tracing a dummy hello-world application in C. I have sniffed the GDB Remote Protocol packets and compared them between Linux and NetBSD. This helped to streamline both versions and bring the NetBSD support to the required Linux level. Plan for the next milestone I've listed the following goals for the next milestone. watchpoints support floating point registers support enhance core(5) and make it work for multiple threads introduce PT_SET_STEP and PT_CLEAR_STEP in ptrace(2) support threads in the NetBSD Process Plugin research F_GETPATH in fcntl(2) Beyond the next milestone is x86 32-bit support. LibreSSL 2.5.2 released (https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-2.5.2-relnotes.txt) Added the recallocarray(3) memory allocation function, and converted various places in the library to use it, such as CBB and BUF_MEM_grow. recallocarray(3) is similar to reallocarray(3). Newly allocated memory is cleared similar to calloc(3). Memory that becomes unallocated while shrinking or moving existing allocations is explicitly discarded by unmapping or clearing to 0. Added new root CAs from SECOM Trust Systems / Security Communication of Japan. Added EVP interface for MD5+SHA1 hashes. Fixed DTLS client failures when the server sends a certificate request. Correct handling of padding when upgrading an SSLv2 challenge into an SSLv3/TLS connection. Allow protocols and ciphers to be set on a TLS config object in libtls. Improved nc(1) TLS handshake CPU usage and server-side error reporting. 
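The recallocarray(3) semantics described in the release notes can be sketched portably. This is a hypothetical stand-in (`my_recallocarray`), not LibreSSL's actual implementation, which additionally unmaps large regions and has more thorough overflow handling; it just shows the contract: newly allocated cells come back zeroed, and memory given up is cleared before being freed so secrets don't linger on the heap:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Portable sketch of recallocarray(3) semantics, NOT the libc code:
 * resize an array of nmemb elements, zero-filling any newly allocated
 * cells and explicitly clearing released memory before freeing it. */
static void *my_recallocarray(void *ptr, size_t oldnmemb, size_t newnmemb,
                              size_t size)
{
    size_t oldsz, newsz;
    void *np;

    if (size != 0 && newnmemb > (size_t)-1 / size)
        return NULL;                      /* multiplication would overflow */
    oldsz = oldnmemb * size;
    newsz = newnmemb * size;

    np = calloc(newnmemb ? newnmemb : 1, size ? size : 1); /* zeroed */
    if (np == NULL)
        return NULL;
    if (ptr != NULL) {
        memcpy(np, ptr, oldsz < newsz ? oldsz : newsz);
        memset(ptr, 0, oldsz);            /* discard old contents */
        free(ptr);
    }
    return np;
}
```

Compared with plain realloc(3), the old allocation never survives with stale data in it, which is why LibreSSL converted buffer-growing paths such as CBB to use it.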
Beastie Bits HardenedBSD Stable v46.16 released (http://hardenedbsd.org/article/op/2017-03-30/stable-release-hardenedbsd-stable-11-stable-v4616) KnoxBUG looking for OpenBSD people in Knoxville TN area (https://www.reddit.com/r/openbsd/comments/5vggn7/knoxbug_looking_for_openbsd_people_in_knoxville/) KnoxBUG Tuesday, April 18, 2017 - 6:00pm: Caleb Cooper: Advanced BASH Scripting (http://knoxbug.org/2017-04-18) e2k17 Nano hackathon report from Bob Beck (http://undeadly.org/cgi?action=article&sid=20170405110059) Noah Chelliah, Host of the Linux Action Show, calls Linux a ‘Bad Science Project’ and ditches Linux for TrueOS (https://youtu.be/yXB85_olYhQ?t=3238) *** Feedback/Questions James - ZFS Mounting (http://dpaste.com/1H43JGV#wrap) Kevin - Virtualization (http://dpaste.com/18VNAJK#wrap) Ben - Jails (http://dpaste.com/0R7CRZ7#wrap) Florian - ZFS and Migrating Linux userlands (http://dpaste.com/2Z1P23T#wrap) q5sys - question for the community (http://dpaste.com/26M453F#wrap)

188: And then the murders began

April 05, 2017 1:23:39 60.23 MB Downloads: 0

Today on BSD Now, the latest Dragonfly BSD release, RaidZ performance, another OpenSSL Vulnerability, and more; all this week on BSD Now. This episode was brought to you by Headlines DragonFly BSD 4.8 is released (https://www.dragonflybsd.org/release48/) Improved kernel performance This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-cores or multi-socket systems, we have around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post with details. Support for eMMC booting, and mobile and high-performance PCIe SSDs This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available. EFI support The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice. DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system. Improved graphics support The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements. Other user-affecting changes Kernel is now built using -O2. VKernels now use COW, so multiple vkernels can share one disk image. powerd() is now sensitive to time and temperature changes. Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf. 
*** #8005 poor performance of 1MB writes on certain RAID-Z configurations (https://github.com/openzfs/openzfs/pull/321) Matt Ahrens posts a new patch for OpenZFS Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2 a multiple of 3; and on RAIDZ3 a multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB. To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 "pad sectors" at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write these sectors. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates "optional" zio's when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009. Writing the optional zio is done by the I/O aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is by default 128KB. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation. The solution is to aggregate optional zio's regardless of the aggregation size limit. As you can see from the graphs, this can make a large difference in performance. I encourage you to read the entire commit message, it is well written and very detailed. 
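The P+1 allocation rule above is just round-up-to-a-multiple arithmetic. Here is a small sketch of that rule; the function name is illustrative, not an actual OpenZFS routine, and it works purely in sectors (real OpenZFS does this on byte sizes shifted by ashift):

```c
#include <assert.h>

/* Sketch of the RAID-Z allocation constraint described above: space is
 * allocated in multiples of (parity + 1) sectors, so up to `parity`
 * pad sectors may be appended to a block.  The name is illustrative,
 * not an actual OpenZFS function. */
static unsigned raidz_asize_sectors(unsigned total_sectors, unsigned parity)
{
    unsigned mult = parity + 1;
    /* round up to the next multiple of (parity + 1) */
    return ((total_sectors + mult - 1) / mult) * mult;
}
```

The difference between the rounded and unrounded size is the pad-sector gap that the "optional" zio's exist to fill.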
*** Can you spot the OpenSSL vulnerability (https://guidovranken.wordpress.com/2017/01/28/can-you-spot-the-vulnerability/) This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability? So there is a loop, and within that loop we have an ‘if’ statement, that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn’t the address that was allocated; raw is incremented on every loop iteration. Hence, there is a remote invalid free vulnerability. But not quite. None of those checks in the ‘if’ statement can actually fail; earlier on in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail. So, does the code do what the original author thought, or expected it to do? But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail, and execute the bad code? Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312 (https://github.com/openssl/openssl/pull/2312) PS I’m not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody’s best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes. 
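The bug class is worth seeing in miniature. This sketch is not the OpenSSL code: `sum_ciphers` and its packet format are invented for illustration. It shows the defensive pattern that avoids the trap: keep the original allocation in a pointer that is never advanced, and only ever pass that one to free():

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch (not the actual OpenSSL code) of the bug class:
 * a cursor pointer is advanced while parsing, so calling free() on the
 * cursor from an error path would be an invalid free.  Keeping the
 * original malloc() return value in `base` and freeing only `base`
 * sidesteps the problem entirely. */
static int sum_ciphers(const unsigned char *pkt, size_t n)
{
    unsigned char *base, *raw;      /* base: allocation; raw: cursor */
    size_t i;
    int sum = 0;

    base = raw = malloc(n ? n : 1);
    if (base == NULL)
        return -1;
    memcpy(base, pkt, n);

    for (i = 0; i < n; i++)
        sum += *raw++;              /* raw no longer points at base  */

    free(base);                     /* never free(raw) on any path   */
    return sum;
}
```

In the real function the freeing branch happened to be unreachable because of earlier length checks, which is exactly why such latent bugs survive review until someone relaxes a check elsewhere.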
Thanks to Guido Vranken for the sharp eye and the blog post *** Research Debt (http://distill.pub/2017/research-debt/) I found this article interesting as it relates to not just research, but a lot of technical areas in general. Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next. People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that’s gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It’s entirely possible to build paths and staircases into these mountains. The climb isn’t something to be proud of. The climb isn’t progress: the climb is a mountain of debt. Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run. Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don’t appreciate how much better things could be. Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking. Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they’re bad. For example, an object with extra electrons is negative, and pi is wrong. Noise – Being a researcher is like standing in the middle of a construction site. 
Countless papers scream for your attention and there’s no easy way to filter or summarize them. We think noise is the main way experts experience research debt. There’s a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor. Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery. The distillation can often require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy to understand and use platforms or tools. Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it. Anyway, if that bit piqued your interest, go read the full article and the suggested further reading. *** News Roundup And then the murders began. 
(https://blather.michaelwlucas.com/archives/2902) A whole bunch of people have pointed me at articles like this one (http://thehookmag.com/2017/03/adding-murders-began-second-sentence-book-makes-instantly-better-125462/), which claim that you can improve almost any book by making the second sentence “And then the murders began.” It’s entirely possible they’re correct. But let’s check, with a sampling of books. As different books come in different tenses and have different voices, I’ve made some minor changes. “Welcome to Cisco Routers for the Desperate! And then the murders begin.” — Cisco Routers for the Desperate, 2nd ed “Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began.” — SSH Mastery “The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin’s Batman-esque utility belt. And then the murders begin.” — FreeBSD Mastery: Advanced ZFS “Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin.” — Absolute FreeBSD, 3rd Ed Netdata now supports FreeBSD (https://github.com/firehol/netdata) netdata is a system for distributed real-time performance and health monitoring. It provides unparalleled insights, in real-time, of everything happening on the system it runs (including applications such as web and database servers), using modern interactive web dashboards. From the release notes: apps.plugin ported for FreeBSD Check out their demo sites (https://github.com/firehol/netdata/wiki) *** Distrowatch Weekly reviews RaspBSD (https://distrowatch.com/weekly.php?issue=20170220#raspbsd) RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds on additional components, such as the LXDE desktop and a few graphical applications. 
The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64 and BeagleBone Black & Green computers. The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user's shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory. I made note of a few practical differences between running RaspBSD on the Pi versus my usual Raspbian operating system. One minor difference is RaspBSD turns off the Pi's external power light after booting. Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity. Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather its low cost and low-energy usage. For people who are looking for a small home server or very minimal desktop box, RaspBSD running on the Pi should be suitable. Research UNIX V8, V9 and V10 made public by Alcatel-Lucent (https://media-bell-labs-com.s3.amazonaws.com/pages/20170327_1602/statement%20regarding%20Unix%203-7-17.pdf) Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia Bell Laboratories agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix®1 Editions 8, 9, and 10. 
Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page (https://en.wikipedia.org/wiki/Research_Unix) It only took 30+ years, but now they’re public You can grab them from here (http://www.tuhs.org/Archive/Distributions/Research/) If you’re wondering what happened with Research Unix, After Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9 (http://plan9.bell-labs.com/plan9/); which itself was succeeded by Inferno (http://www.vitanuova.com/inferno/). *** Beastie Bits The BSD Family Tree (https://github.com/freebsd/freebsd/blob/master/share/misc/bsd-family-tree) Unix Permissions Calculator (http://permissions-calculator.org/) NAS4Free release 11.0.0.4 now available (https://sourceforge.net/projects/nas4free/files/NAS4Free-11.0.0.4/11.0.0.4.4141/) Another BSD Mag released for free downloads (https://bsdmag.org/download/simple-quorum-drive-freebsd-ctl-ha-beast-storage-system/) OPNsense 17.1.4 released (https://forum.opnsense.org/index.php?topic=4898.msg19359) *** Feedback/Questions gozes asks via twitter about how get involved in FreeBSD (https://twitter.com/gozes/status/846779901738991620) ***

187: Catching up to BSD

March 29, 2017 1:15:12 54.14 MB Downloads: 0

Catching up to BSD, news about the NetBSD project, a BSD Phone, and a bunch of OpenBSD and TrueOS News. This episode was brought to you by Headlines NetBSD 7.1 released (http://www.netbsd.org/releases/formal-7/NetBSD-7.1.html) This update represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and enhancements. Kernel compat_linux(8) (http://netbsd.gw.com/cgi-bin/man-cgi?compat_linux+8.i386+NetBSD-7.1): Fully support sched_setaffinity and sched_getaffinity, fixing, e.g., the Intel Math Kernel Library. DTrace: Avoid redefined symbol errors when loading the module. Fix module autoload. IPFilter: Fix matching of ICMP queries when NAT'd through IPF. Fix lookup of original destination address when using a redirect rule. This is required for transparent proxying by squid, for example. ipsec(4) (http://netbsd.gw.com/cgi-bin/man-cgi?ipsec+4.i386+NetBSD-7.1): Fix NAT-T issue with NetBSD being the host behind NAT. Drivers Add vioscsi driver for the Google Compute Engine disk. ichsmb(4) (http://netbsd.gw.com/cgi-bin/man-cgi?ichsmb+4.i386+NetBSD-7.1): Add support for Braswell CPU and Intel 100 Series. wm(4) (http://netbsd.gw.com/cgi-bin/man-cgi?wm+4.i386+NetBSD-7.1): Add C2000 KX and 2.5G support. Add Wake On Lan support. Fixed a lot of bugs. Security Fixes NetBSD-SA2017-001 (http://ftp.netbsd.org/pub/NetBSD/security/advisories/NetBSD-SA2017-001.txt.asc) Memory leak in the connect system call. NetBSD-SA2017-002 (http://ftp.netbsd.org/pub/NetBSD/security/advisories/NetBSD-SA2017-002.txt.asc) Several vulnerabilities in ARP. ARM related Support for Raspberry Pi Zero. ODROID-C1 Ethernet now works. 
Summary of the preliminary LLDB support project (http://blog.netbsd.org/tnf/entry/summary_of_the_preliminary_lldb) What has been done in NetBSD Verified the full matrix of combinations of wait(2) and ptrace(2) in the following GNU libstdc++ std::call_once bug investigation test-cases Improving documentation and other minor system parts Documentation of ptrace(2) and explanation how debuggers work Introduction of new siginfo(2) codes for SIGTRAP New ptrace(2) interfaces What has been done in LLDB Native Process NetBSD Plugin The MonitorCallback function Other LLDB code, out of the NativeProcessNetBSD Plugin Automated LLDB Test Results Summary Plan for the next milestone fix conflict with system-wide py-six add support for auxv read operation switch resolution of pid -> path to executable from /proc to sysctl(7) recognize Real-Time Signals (SIGRTMIN-SIGRTMAX) upstream the NativeProcessNetBSD Plugin code switch std::call_once to llvm::call_once add new ptrace(2) interface to lock and unlock threads from execution switch the current PT_WATCHPOINT interface to PT_GETDBREGS and PT_SETDBREGS Actually building a FreeBSD Phone (https://hackaday.io/project/13145-bsd-based-secure-smartphone) There have been a number of different projects that have proposed building a FreeBSD based smart phone This project is a bit different, and I think that gives it a better chance to make progress It uses off-the-shelf parts, so while not as neatly integrated as a regular smartphone device, it makes a much better prototype, and is more readily available. Hardware overview: X86-based, long-lasting (user-replaceable) battery, WWAN Modem (w/LTE), 4-5" LCD Touchscreen (Preferably w/720p resolution, IPS), upgradable storage. Currently targeting the UDOO Ultra platform. It features Intel Pentium N3710 (2.56GHz Quad-core, HD Graphics 405 [16 EUs @ 700MHz], VT-x, AES-NI), 2x4GB DDR3L RAM, 32GB eMMC storage built-in, further expansion w/M.2 SSD & MicroSD slot, lots of connectivity onboard. 
Software: FreeBSD Hypervisor (bhyve or Xen) to run atop the hardware, hosting two separate hosts. One will run an instance of pfSense, the "World's Most Popular Open Source Firewall" to handle the WWAN connection, routing, and Firewall (as well as Secure VPN if desired). The other instance will run a slimmed down installation of FreeBSD. The UI will be tweaked to work best in this form factor & resources tuned for this platform. There will be a strong reliance on Google Chromium & Google's services (like Google Voice). The project has a detailed log, and it looks like the hardware it is based on will ship in the next few weeks, so we expect to see more activity. *** News Roundup NVME M.2 card road tests (Matt Dillon) (http://lists.dragonflybsd.org/pipermail/users/2017-March/313261.html) DragonFlyBSD’s Matt Dillon has posted a rundown of the various M.2 NVMe devices he has tested SAMSUNG 951 SAMSUNG 960 EVO TOSHIBA OCZ RD400 INTEL 600P WD BLACK 256G MYDIGITALSSD PLEXTOR M8Pe It is interesting to see the relative performance of each device, but also how they handle the workload and manage their temperature (or don’t in a few cases) The link provides a lot of detail about different block sizes and overall performance *** ZREP ZFS replication and failover (http://www.bolthole.com/solaris/zrep/) "zrep", a robust yet easy to use ZFS based replication and failover solution. It can also serve as the conduit to create a simple backup hub. The tool was originally written for Solaris, and is written in ksh However, it seems people have used it on FreeBSD and even FreeNAS by installing the ksh93 port Has anyone used this? How does it compare to tools like zxfer? 
There is a FreeBSD port, but it is a few versions behind, someone should update it We would be interested in hearing some feedback *** Catching up on some TrueOS News TrueOS Security and Wikileaks revelations (https://www.trueos.org/blog/trueos-security-wikileaks-revelations/) New Jail management utilities (https://www.trueos.org/blog/new-jail-management-utilities/) Ken Moore's talk about Sysadm from Linuxfest 2016 (https://www.youtube.com/watch?v=PyraePQyCGY) The Basics of using ZFS with TrueOS (https://www.trueos.org/blog/community-spotlight-basics-using-zfs-trueos/) *** Catching up on some OpenBSD News OpenBSD 6.1 coming May 1 (https://www.openbsd.org/61.html) OpenBSD Foundation 2016 Fundraising (goal: $250K actual: $573K) (http://undeadly.org/cgi?action=article&sid=20170223044255) The OpenBSD Foundation 2017 Fundraising Campaign (http://www.openbsdfoundation.org/campaign2017.html) OpenBSD MitM attack against WPA1/WPA2 (https://marc.info/?l=openbsd-announce&m=148839684520133&w=2) OpenBSD vmm/vmd Update (https://www.openbsd.org/papers/asiabsdcon2017-vmm-slides.pdf) *** Beastie Bits HardenedBSD News: Introducing CFI (https://hardenedbsd.org/article/shawn-webb/2017-03-02/introducing-cfi) New version of Iocage (Python 3) on FreshPorts (https://www.freshports.org/sysutils/py3-iocage/) DragonFly BSD Network performance comparison as of today (https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf) KnoxBUG recap (http://knoxbug.org/content/knoxbug-wants-you) *** Feedback/Questions Noel asks about moving to bhyve/jails (https://pastebin.com/7B47nuC0) ***

186: The Fast And the Firewall: Tokyo Drift

March 22, 2017 2:54:07 83.58 MB Downloads: 0

This week on BSDNow, reports from AsiaBSDcon, TrueOS and FreeBSD news, Optimizing IllumOS Kernel, your questions and more. This episode was brought to you by Headlines AsiaBSDcon Reports and Reviews AsiaBSDcon schedule (https://2017.asiabsdcon.org/program.html.en) Schedule and slides from the 4th bhyvecon (http://bhyvecon.org/) Michael Dexter’s trip report on the iXsystems blog (https://www.ixsystems.com/blog/ixsystems-attends-asiabsdcon-2017) NetBSD AsiaBSDcon booth report (http://mail-index.netbsd.org/netbsd-advocacy/2017/03/13/msg000729.html) *** TrueOS Community Guidelines are here! (https://www.trueos.org/blog/trueos-community-guidelines/) TrueOS has published its new Community Guidelines The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories. These describe what is expected of community members and committers They also describe the process of getting commit access to the TrueOS repo: Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit: Create five or more pull requests in a TrueOS Project repository within a single six month period. Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.). Request commit access from the core developers via core@trueos.org OR Core developers contact you concerning commit access. Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities. 
At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”. The page also describes the perks of being a TrueOS committer: Contributors to the TrueOS Project enjoy a number of benefits, including: A personal TrueOS email alias: @trueos.org Full access for managing TrueOS issues on GitHub. Regular meetings with the core developers and other contributors. Access to private chat channels with the core developers. Recognition as part of an online Who’s Who of TrueOS developers. The eternal gratitude of the core developers of TrueOS. A warm, fuzzy feeling. Intel Donates 250.000 $ to the FreeBSD Foundation (https://www.freebsdfoundation.org/news-and-events/latest-news/new-uranium-level-donation-and-collaborative-partnership-with-intel/) More details about the deal: Systems Thinking: Intel and the FreeBSD Project (https://www.freebsdfoundation.org/blog/systems-thinking-intel-and-the-freebsd-project/) Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD. Intel has contributed code to FreeBSD for individual device drivers (i.e. NICs) in the past, but is now seeking a more holistic “systems thinking” approach. 
Intel Blog Post (https://01.org/blogs/imad/2017/intel-increases-support-freebsd-project) We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products. Thank you very much, Intel! *** Applied FreeBSD: Basic iSCSI (https://globalengineer.wordpress.com/2017/03/05/applied-freebsd-basic-iscsi/) iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to setup a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well. This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered. The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it. 
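For reference, a minimal /etc/ctl.conf along the lines the article describes might look something like this. The IQN matches the one used later in the article; the portal group name, backing path, and size are illustrative placeholders, so check the ctl.conf(5) man page for the authoritative syntax:

```
portal-group pg0 {
	discovery-auth-group no-authentication
	listen 0.0.0.0
}

target iqn.2017-02.lab.testing:basictarget {
	auth-group no-authentication
	portal-group pg0
	lun 0 {
		path /data/iscsi_lun0
		size 5G
	}
}
```

With something like this in place, ctld exports a single file-backed LUN to any initiator that connects, with no authentication, which is fine for a lab setup like this one but not for production.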
Then, enable ctld and start it: sysrc ctld_enable="YES" service ctld start You can use the ctladm command to see what is going on: root@bsdtarget:/dev # ctladm lunlist (7:0:0/0): Fixed Direct Access SPC-4 SCSI device (7:0:1/1): Fixed Direct Access SPC-4 SCSI device root@bsdtarget:/dev # ctladm devlist LUN Backend Size (Blocks) BS Serial Number Device ID 0 block 10485760 512 MYSERIAL 0 MYDEVID 0 1 block 10485760 512 MYSERIAL 1 MYDEVID 1 Now, let’s configure the client side: In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started. sysrc iscsid_enable="YES" service iscsid start Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session: iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget You should now have a new device (check dmesg), in this case, da1 The guide then walks through partitioning the disk, laying down a UFS file system, and mounting it Then it walks through how to disconnect iSCSI, in case you don’t want it anymore This all looked nice and easy, and it works very well. Now let’s see what happens when you try to mount the iSCSI from Windows Ok, that wasn’t so bad. Now, instead of sharing an entire spare disk on the host via iSCSI, share a zvol. Now your Windows machine can be backed by ZFS. All of your problems are solved. 
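As a sketch of the persistent client-side setup mentioned above, an /etc/iscsi.conf entry for this session might look like the following. The nickname t0 is arbitrary; the address and IQN are the ones from the article:

```
t0 {
	TargetAddress = 192.168.22.128
	TargetName    = iqn.2017-02.lab.testing:basictarget
}
```

A command along the lines of iscsictl -An t0 should then bring the session up by nickname; see iscsi.conf(5) and iscsictl(8) for the details.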
Interview - Philipp Buehler - pbuehler@sysfive.com (mailto:pbuehler@sysfive.com) Technical Lead at SysFive, and Former OpenBSD Committer News Roundup Half a dozen new features in mandoc -T html (http://undeadly.org/cgi?action=article&sid=20170316080827) mandoc (http://man.openbsd.org/mandoc.1)’s HTML output mode got some new features Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. [...] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them. In terminal output modes, we have the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark ('#') after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter. Check out the full report by Ingo Schwarze (schwarze@) and try out these new features *** Optimizing IllumOS Kernel Crypto (http://zfs-create.blogspot.com/2014/05/optimizing-illumos-kernel-crypto.html) Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework and found it lacking. The article also spends a few minutes on the different modes and how they work. Recently I've had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson.This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo). The initial tests of “what the hardware can do” were done in OpenSSL So now comes the test for the KCF. 
I wrote a quick'n'dirty crypto test module that just performed a bunch of encryption operations and timed the results. KCF got around 100 MB/s for each algorithm, except half that for AES-GCM OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did. When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction. This is a problem: 3.1.2 Instructions That Cause VM Exits Conditionally: CLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow. The CLTS instruction signals to the CPU that we're about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we've been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn't save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreempt_disable() and kpreempt_enable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I'll explain below. 
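The chunking fix described here, bounding how long the kernel holds off preemption by processing data in fixed-size pieces, can be sketched in plain C as below. The kpreempt_*_stub functions and aes_encrypt_chunk are placeholders standing in for the real Illumos primitives and AES-NI fast path, not the actual KCF code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CRYPTO_CHUNK 32768  /* 32k: small enough to bound preemption-off time */

/* Hypothetical stand-ins for the kernel primitives described in the post. */
static void kpreempt_disable_stub(void) { /* pin the thread: FPU regs now safe */ }
static void kpreempt_enable_stub(void)  { /* allow rescheduling again */ }
static void aes_encrypt_chunk(const unsigned char *in, unsigned char *out,
                              size_t len)
{
    memcpy(out, in, len);  /* placeholder for the real AES-NI fast path */
}

/* Encrypt a large buffer in 32k chunks so the thread never holds off
 * preemption (and exclusive FPU state) for more than a short burst. */
void encrypt_buffer(const unsigned char *in, unsigned char *out, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t n = len - off;
        if (n > CRYPTO_CHUNK)
            n = CRYPTO_CHUNK;
        kpreempt_disable_stub();
        aes_encrypt_chunk(in + off, out + off, n);
        kpreempt_enable_stub();
        off += n;
    }
}
```

The point of the structure is the loop: CLTS and the FPU save/restore cost is paid once per 32k chunk instead of once per 16-byte AES block, while the enable/disable pair between chunks keeps the scheduler latency bounded.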
Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn't really conducive to high performance (we'll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get: AES-128/CTR: 439 MB/s AES-128/CBC: 483 MB/s AES-128/GCM: 252 MB/s Not disastrous anymore, but still, very, very bad. Of course, you've got to keep in mind, the thing we're comparing it to, OpenSSL, is no slouch. It's got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That's a ton of code to maintain and optimize, but I'll be damned if I let this kind of performance gap persist. Fixing this, however, is not so trivial anymore. It pertains to how the KCF's block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it's inherently inefficient. ECB, CBC and CTR gained the ability to pass an algorithm-specific "fastpath" implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place. ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x. CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x. 
After all of this work, this is how the results now look on Illumos, even inside of a VM: Algorithm/Mode 128k ops AES-128/CTR: 3121 MB/s AES-128/CBC: 691 MB/s AES-128/GCM: 1053 MB/s So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL. On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers. Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we'll be hogging the CPU for at most ~0.1ms per run. Overall, we're even a little bit faster than OpenSSL in some tests, though that's probably down to us encrypting 128k blocks vs 8k in the "openssl speed" utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep. To make these tests repeatable, and to ensure that the changes didn't break the crypto algorithms, Saso created a crypto_test kernel module. I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Saso did, because the CPUs are vastly different. Performance results (https://wiki.freebsd.org/OpenCryptoPerformance) I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds. I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Saso did. It currently seems like there isn't a way to perform additional crypto operations in the same session without regenerating the key table. 
Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful. *** Brendan Gregg’s special freeware tools for sysadmins (http://www.brendangregg.com/specials.html) These tools need to be in every (not so) serious sysadmins toolbox. Triple ROT13 encryption algorithm (beware: export restrictions may apply) /usr/bin/maybe, in case true and false don’t provide too little choice... The bottom command lists you all the processes using the least CPU cycles. Check out the rest of the tools. You wrote similar tools and want us to cover them in the show? Send us an email to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) *** A look at 2038 (http://www.lieberbiber.de/2017/03/14/a-look-at-the-year-20362038-problems-and-time-proofness-in-various-systems/) I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a neverending stream of predictions from the media on how it’s all going to fail. Most didn’t even understand what the problem was, and I remember one magazine writing something like the following: Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year’s Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you’re in an elevator at this time, it will stop working and you may fall to your death. I still don’t know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents’ house, and we expected it to go up in flames as soon as the clock passed midnight, while at least two airplanes crashed in our garden at the same time. Then nothing happened. 
I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”. That was 17 years ago. One of the reasons why Y2K wasn’t as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but use some form of “timestamp” relative to a fixed date (the “epoch”). The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now - before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong. A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircraft, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shut down to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boeing 787 models for such a long time between maintenance intervals. The “obvious” solution is to simply switch to 64-Bit values and call it a day, which would push overflow dates far into the future (as long as you don’t do it like the IBM S/370 mentioned before). 
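The diff = now - before failure mode described above is easy to demonstrate with a small standalone C simulation (illustrative code, not from the article):

```c
#include <assert.h>
#include <stdint.h>

/* Simulate "diff = now - before" with a signed 32-bit timestamp that
 * wraps at INT32_MAX, which for a 32-bit Unix time_t happens at
 * 03:14:07 UTC on 2038-01-19. */
static int64_t elapsed32(int32_t before, uint32_t ticks)
{
    /* The clock keeps counting, but the stored 32-bit value wraps
     * negative. (Converting an out-of-range value to int32_t is
     * implementation-defined; it wraps on all common ABIs.) */
    int32_t now = (int32_t)((uint32_t)before + ticks);
    return (int64_t)now - (int64_t)before;
}

/* Ten ticks spanning the rollover: code that assumes now >= before
 * suddenly sees a huge negative "elapsed time", the same class of bug
 * as the 787 GCU centisecond counter. */
static void demo(void)
{
    int32_t before = INT32_MAX - 5;     /* five ticks before rollover   */
    assert(elapsed32(before, 10) < 0);  /* not 10, but a large negative */
    assert(elapsed32(0, 10) == 10);     /* fine away from the rollover  */
}
```

The same arithmetic done in a 64-bit type never goes negative here, which is why widening time_t fixes this particular pattern even though it cannot fix 32-bit fields baked into file formats and protocols.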
But as we’ve learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did. sys_gettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in form of an UNIX timestamp (time_t data type) plus fraction (suseconds_t data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, performing an action after a specific moment has passed, etc. In a 32-Bit UNIX system, time_t is usually defined as a signed 32-Bit Integer. When kernel, libraries and applications are compiled, the compiler will turn this assumption into machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038. Systems which used an unsigned 32-Bit Integer for time_t push the problem back to 2106, but I don’t know about many of those. The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library. 
Besides the time_t data type itself, a number of other data structures have fields based on time_t or the combined struct timespec and struct timeval types. Many methods beside those intended for setting and querying the current time use timestamps. 32-Bit Windows applications, or Windows applications defining _USE_32BIT_TIME_T, can be hit by the year 2038 problem too if they use the time_t data type. The __time64_t data type had been available since Visual C 7.1, but only Visual C 8 (default with Visual Studio 2005) expanded time_t to 64 bits by default. The change will only be effective after a recompilation, legacy applications will continue to be affected. If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough. Then the article goes on to describe how all of this will break your file systems. Not to mention your databases and other file formats. Also see Theo De Raadt’s EuroBSDCon 2013 Presentation (https://www.openbsd.org/papers/eurobsdcon_2013_time_t/mgp00001.html) *** Beastie Bits Michael Lucas: Get your name in “Absolute FreeBSD 3rd Edition” (https://blather.michaelwlucas.com/archives/2895) ZFS compressed ARC stats to top (https://svnweb.freebsd.org/base?view=revision&revision=r315435) Matthew Dillon discovered HAMMER was repeating itself when writing to disk. 
Fixing that issue doubled write speeds (https://www.dragonflydigest.com/2017/03/14/19452.html) TedU on Meaningful Short Names (http://www.tedunangst.com/flak/post/shrt-nms-fr-clrty) vBSDcon and EuroBSDcon Call for Papers are open (https://www.freebsdfoundation.org/blog/submit-your-work-vbsdcon-and-eurobsdcon-cfps-now-open/) Feedback/Questions Craig asks about BSD server management (http://pastebin.com/NMshpZ7n) Michael asks about jails as a router between networks (http://pastebin.com/UqRwMcRk) Todd asks about connecting jails (http://pastebin.com/i1ZD6eXN) Dave writes in with an interesting link (http://pastebin.com/QzW5c9wV) > applications crash more often due to errors than corruptions. In the case of corruption, a few applications (e.g., Log-Cabin, ZooKeeper) can use checksums and redundancy to recover, leading to a correct behavior; however, when the corruption is transformed into an error, these applications crash, resulting in reduced availability. ***

185: Exit Interview

March 16, 2017 55:08 39.69 MB Downloads: 0

This is a very special BSD Now! New exciting changes are coming to the show and we’re gonna cover them, so stick around or you’ll miss it! Interview – Kris Moore – kris@trueos.org / @pcbsdKris – TrueOS founder, FreeNAS developer, BSD Now co-host; Benedict Reuschling – bcr@freebsd.org / @bsdbcr – FreeBSD committer & FreeBSD Foundation Vice President, BSD Now co-host Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***

184: Tokyo Dreaming

March 08, 2017 1:34:57 68.36 MB Downloads: 0

This week on BSDNow, Allan and I are in Tokyo for AsiaBSDCon, but not to worry, we have a full episode lined up and ready to go, including hackathon reports. This episode was brought to you by Headlines OpenBSD A2k17 hackathon reports a2k17 hackathon report: Patrick Wildt on the arm64 port (http://undeadly.org/cgi?action=article&sid=20170131101827) a2k17 hackathon report: Antoine Jacoutot on syspatch, rc.d improvements and more (http://undeadly.org/cgi?action=article&sid=20170203232049) a2k17 hackathon report: Martin Pieuchot on NET_LOCK and much more (http://undeadly.org/cgi?action=article&sid=20170127154356) a2k17 hackathon report: Kenneth Westerback on the hidden wonders of the build system, the network stack and more (http://undeadly.org/cgi?action=article&sid=20170127031836) a2k17 hackathon report: Bob Beck on LibreSSL progress and more (http://undeadly.org/cgi?action=article&sid=20170125225403) *** NetBSD is now reproducible (https://blog.netbsd.org/tnf/entry/netbsd_fully_reproducible_builds) Christos Zoulas posts to the NetBSD blog that he has completed his project to make fully reproducible NetBSD builds for amd64 and sparc64 I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this. Other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh) which is a fully portable cross-build system. This build system has given us a head-start in the reproducible builds work. I would also like to acknowledge the work done by the Debian folks who have provided a platform to run, test and analyze reproducible builds. 
Special mention to the diffoscope tool that gives an excellent overview of what's different between binary files, by finding out what they are (and if they are containers what they contain) and then running the appropriate formatter and diff program to show what's different for each file. Finally, thanks to the other developers who started, motivated, and did a lot of the work getting us here, like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh. Some of the stumbling blocks that were overcome: Timestamps Date/time/author embedded in source files Timezone-sensitive code Directory order / build order Non-sanitized data stored in files Symbolic links / paths General tool inconsistencies: including gcc profiling, the fact that GPT partition tables are, by definition, globally unique each time they are created, and that the iso9660 standard calls for a timestamp with a timezone. Toolchain Build information / tunables / environment. NetBSD now has a knob, ‘MKREPRO’; if set to YES it sets a long list of variables to a consistent set of values. The post walks through how these problems were solved. Future Work: Vary more parameters and find more inconsistencies Verify that cross-building is reproducible Verify that unprivileged builds are reproducible Test on other platforms *** Features are faults redux (http://www.tedunangst.com/flak/post/features-are-faults-redux) From Ted Unangst Last week I gave a talk for the security class at Notre Dame based on features are faults but with some various commentary added. It was an exciting trip, with the opportunity to meet and talk with the computer vision group as well. Some other highlights include the Indiana skillet I had for breakfast, which came with pickles and was amazing, and explaining the many wonders of cvs to the Linux users group over lunch. After that came the talk, which went a little something like this. 
I got started with OpenBSD back about the same time I started college, although I had a slightly different perspective then. I was using OpenBSD because it included so many security features, therefore it must be the most secure system, right? For example, at some point I acquired a second computer. What’s the first thing anybody does when they get a second computer? That’s right, set up a kerberos domain. The idea that more is better was everywhere. This was also around the time that ipsec was getting its final touches, and everybody knew ipsec was going to be the most secure protocol ever because it had more options than any other secure transport. We’ll revisit this in a bit. There’s been a partial attitude adjustment since then, with more people recognizing that layering complexity doesn’t result in more security. It’s not an additive process. There’s a whole talk there, about the perfect security that people can’t or won’t use. OpenBSD has definitely switched directions, including less code, not more. All the kerberos code was deleted a few years ago. Let’s assume about one bug per 100 lines of code. That’s probably on the low end. Now say your operating system has 100 million lines of code. If I’ve done the math correctly, that’s literally a million bugs. So that’s one reason to avoid adding features. But that’s a solvable problem. If we pick the right language and the right compiler and the right tooling and with enough eyeballs and effort, we can fix all the bugs. We know how to build mostly correct software, we just don’t care. As we add features to software, increasing its complexity, new unexpected behaviors start to emerge. What are the bounds? How many features can you add before craziness is inevitable? We can make some guesses. Less than a thousand for sure. Probably less than a hundred? Ten maybe? I’ll argue the answer is quite possibly two. An interesting corollary is that it’s impossible to have a program with exactly two features. 
Any program with two features has at least a third, but you don’t know what it is My first example is a bug in the NetBSD ftp client. We had one feature, we added a second feature, and just like that we got a third misfeature (http://marc.info/?l=oss-security&m=141451507810253&w=2) Our story begins long ago. The origins of this bug are probably older than I am. In the dark times before the web, FTP sites used to be a pretty popular way of publishing files. You run an ftp client, connect to a remote site, and then you can browse the remote server somewhat like a local filesystem. List files, change directories, get files. Typically there would be a README file telling you what’s what, but you don’t need to download a copy to keep. Instead we can pipe the output to a program like more. Right there in the ftp client. No need to disconnect. Fast forward a few decades, and http is the new protocol of choice. http is a much less interactive protocol, but the ftp client has some handy features for batch downloads like progress bars, etc. So let’s add http support to ftp. This works pretty well. Lots of code reused. http has one quirk however that ftp doesn’t have. Redirects. The server can redirect the client to a different file. So now you’re thinking, what happens if I download http://somefile and the server sends back 302 http://|reboot. ftp reconnects to the server, gets the 200, starts downloading and saves it to a file called |reboot. Except it doesn’t. The function that saves files looks at the first character of the name and if it’s a pipe, runs that command instead. And now you just rebooted your computer. Or worse. It’s pretty obvious this is not the desired behavior, but where exactly did things go wrong? Arguably, all the pieces were working according to spec. In order to see this bug coming, you needed to know how the save function worked, you needed to know about redirects, and you needed to put all the implications together. 
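The interaction Ted describes (a save routine that treats a leading pipe character as "run this command", combined with redirects letting the server pick the filename) can be sketched roughly like this. This is a hypothetical Python rendering of the logic, not the actual NetBSD C code:

```python
import subprocess

def save_download(filename: str, data: bytes) -> None:
    # Feature 1: a name starting with '|' means "pipe the data to this
    # command" (handy for viewing a README with something like "|more").
    if filename.startswith("|"):
        subprocess.run(filename[1:], shell=True, input=data, check=True)
    else:
        # Otherwise just save the download to a regular file.
        with open(filename, "wb") as f:
            f.write(data)

# Feature 2, http redirects, lets the *server* choose the filename that
# ends up here. A redirect to a name like "|reboot" reaches the pipe
# branch above: the unintended third feature.
```

Each branch is working according to spec; the fix requires the save path to distinguish names the user typed from names a server supplied, which is exactly the kind of cross-feature implication the talk is about.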
The post then goes into a lot more detail about other issues. We just don’t have time to cover it all today, but you should go read it, it is very enlightening What do we do about this? That’s a tough question. It’s much easier to poke fun at all the people who got things wrong. But we can try. My attitudes are shaped by experiences with the OpenBSD project, and I think we are doing a decent job of containing the complexity. Keep paring away at dependencies and reducing interactions. As a developer, saying “no” to all feature requests is actually very productive. It’s so much faster than implementing the feature. Sometimes users complain, but I’ve often received later feedback from users that they’d come to appreciate the simplicity. There was a question about which of these vulnerabilities were found by researchers, as opposed to troublemakers. The answer was most, if not all of them, but it made me realize one additional point I hadn’t mentioned. Unlike the prototypical buffer overflow vulnerability, exploiting features is very reliable. Exploiting something like shellshock or imagetragick requires no customized assembly and is independent of CPU, OS, version, stack alignment, malloc implementation, etc. Within about 24 hours of the initial release of shellshock, I had logs of people trying to exploit it. So unless you’re on about a 12 hour patch cycle, you’re going to have a bad time. reimplement zfsctl (.zfs) support (https://svnweb.freebsd.org/changeset/base/314048) avg@ (Andriy Gapon) has rewritten the .zfs support in FreeBSD The current code is written on top of GFS, a library with the generic support for writing filesystems, which was ported from Illumos. Because of significant differences between illumos VFS and FreeBSD VFS models, both the GFS and zfsctl code were heavily modified to work on FreeBSD. Nonetheless, they still contain quite a few ugly hacks and bugs. 
This is a reimplementation of the zfsctl code where the VFS-specific bits are written from scratch and only the code that interacts with the rest of ZFS is reused. Some ideas are picked from an independent work by Will (wca@) This work improves the overall quality of the ZFS port to FreeBSD The code that provides support for ZFS .zfs/ directory functionality has been reimplemented. It is no longer possible to create a snapshot by mkdir under .zfs/snapshot/. That should be the only user visible change. TIL: On IllumOS, you can create, rename, and destroy snapshots, by manipulating the virtual directories in the .zfs/snapshots directory. If enough people would find this feature useful, maybe it could be implemented (rm and rename have never existed on FreeBSD). At the same time, it seems like rather a lot of work, when the ZFS command line tools work so well. Although wca@ pointed out on IRC, it can be useful to be able to create a snapshot over NFS, or SMB. Interview - Konrad Witaszczyk - def@freebsd.org (mailto:def@freebsd.org) Encrypted Kernel Crash Dumps *** News Roundup PBKDF2 Performance improvements on FreeBSD (https://svnweb.freebsd.org/changeset/base/313962) Joe Pixton did some research (https://jbp.io/2015/08/11/pbkdf2-performance-matters/) and found that, because of the way the spec is written, most PBKDF2 implementations are 2x slower than they need to be. Since the PBKDF is used to derive a key, used for encryption, this poses a problem. The attacker can derive a key twice as fast as you can. On FreeBSD the PBKDF2 was configured to derive a SHA512-HMAC key that would take approximately 2 seconds to calculate. That is 2 seconds on one core. So an attacker can calculate the same key in 1 second, and use many cores. Luckily, 1 second is still a long time for each brute force guess. On modern CPUs with the fast algorithm, you can do about 500,000 iterations of PBKDF per second (per core). Until a recent change, OpenBSD used only 8192 iterations. 
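The "fast" structure Joe describes, where the HMAC keyed with the password is computed once and then cloned for every iteration instead of being rebuilt from scratch, can be sketched in Python. This is a sketch of the idea only (the FreeBSD GELI code is C), but it produces standard PBKDF2-HMAC-SHA256 output:

```python
import hashlib
import hmac
import struct

def pbkdf2_sha256(password: bytes, salt: bytes, rounds: int, dklen: int = 32) -> bytes:
    # Key the HMAC with the password ONCE. A naive reading of the spec
    # re-derives this inner/outer pad state on every iteration, which is
    # the roughly 2x slowdown Joe identified.
    keyed = hmac.new(password, digestmod=hashlib.sha256)

    def prf(data: bytes) -> bytes:
        h = keyed.copy()  # reuse the precomputed keyed state
        h.update(data)
        return h.digest()

    out = b""
    block = 1
    while len(out) < dklen:
        u = prf(salt + struct.pack(">I", block))  # U_1 = PRF(P, S || INT(i))
        acc = int.from_bytes(u, "big")
        for _ in range(rounds - 1):
            u = prf(u)                            # U_j = PRF(P, U_{j-1})
            acc ^= int.from_bytes(u, "big")       # T_i = U_1 xor ... xor U_c
        out += acc.to_bytes(32, "big")
        block += 1
    return out[:dklen]
```

The derived key is identical either way; only the per-iteration cost changes, which is why an attacker using the fast form gets the speedup for free while a defender running a slow implementation does not.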
OpenBSD now uses a similar benchmark of ~2 seconds, and uses bcrypt instead of a SHA1-HMAC. Joe’s research showed that the majority of implementations were done the ‘slow’ way: calculating the initial part of the outer round each iteration, instead of reusing the initial calculation over and over for each round. Joe submitted a patch to FreeBSD to solve this problem. That patch was improved, and a set of tests was added by jmg@, but then work stalled. I picked up the work and fixed some merge conflicts in the patch that had cropped up based on work I had done that moved the HMAC code to a separate file. This work is now committed. With this change, all newly generated GELI keys will be approximately 2x as strong. Previously generated keys will take half as long to calculate, resulting in faster mounting of encrypted volumes. Users may choose to rekey, to generate a new key with the larger default number of iterations using the geli(8) setkey command. Security of existing data is not compromised, as ~1 second per brute force attempt is still a very high threshold. If you are interested in the topic, I recommend the video of Joe’s presentation from the Passwords15 conference in Las Vegas *** Quick How-To: Updating a screenshot in the TrueOS Handbook (https://www.trueos.org/blog/quick-updating-screenshot-trueos-handbook/) Docs writers, might be time to pay attention. This week we have a good walk-through of adding / updating new screenshots to the TrueOS Sphinx Documentation. For those who have not looked in the past, TrueOS and FreeNAS both have fantastic docs by the team over at iXsystems using Sphinx as their doc engine. Often we get questions from users asking what they can do to help, but who don’t necessarily have programming skills to apply. The good news is that using Sphinx is relatively easy, and after learning some minimal rst syntax you can easily help fix, or even contribute to new sections of the TrueOS (Or FreeNAS) documentation. 
In this example, Tim takes us through the process of replacing an old, out-of-date screenshot in the handbook with the latest hotness. Starting with a .png file, he then locates the old screenshot name and bumps the version, from “lumina-e.png” to “lumina-f.png”. With the file added to the tree, the relevant section of .rst code can be adjusted and the sphinx build run to verify the output HTML looks correct. Using this method you can easily start to get involved with other aspects of documentation and next thing you know you’ll be writing boot-loaders like Allan! *** Learn C Programming With 9 Excellent Open Source Books (https://www.ossblog.org/learn-c-programming-with-9-excellent-open-source-books/) Now that you’ve easily mastered all your documentation skills, you may be ready to take on a new challenge. (Come on, that boot-loader isn’t going to write itself!) We wanted to point out some excellent resources to get you started on your journey into writing C. Before you think, “oh, more books to purchase”, wait, there’s good news. These are the top-9 open-source books that you can download in digital form free of charge. Now I bet we got your attention. We start the rundown with “The C Book”, by Mike Banahan, Declan Brady and Mark Doran, which will lay the groundwork with your introduction into the C language and concepts. Next up, if you are going to do anything, do it with style, so take a read through the “C Elements of Style” which will make you popular at all the parties. (We can’t vouch for that statement) From here we have a book on using C to build your own minimal “lisp” interpreter, reference guides on GNU C and some other excellent introduction / mastery books to help round out your programming skill set. Your C adventure awaits; hopefully these books can not only teach you good C, but also make you feel confident when looking at bits of the FreeBSD world or kernel with a proper foundation to back it up. 
*** Running a Linux VM on OpenBSD (http://eradman.com/posts/linuxvm-on-openbsd.html) Over the past few years we’ve talked a lot about Virtualization, Bhyve or OpenBSD’s ‘vmm’, but qemu hasn’t gotten much attention. Today we have a blog post with details on how to deploy qemu to run Linux on top of an OpenBSD host system. The post starts by showing us how to first provision the storage for qemu, using the handy ‘qemu-img’ command; in this example it only creates a 4GB disk, though you’ll probably want more for real-world usage. Next up the qemu command will be run; pay attention to the particular flags for network and memory setup. You’ll probably want to bump it up past the recommended 256M of memory. Networking is always the fun part, as the author describes his intended setup: I want OpenBSD and Debian to be able to obtain an IP via DHCP on their wired interfaces and I don't want external networking required for an NFS share to the VM. To accomplish this I need two interfaces since dhclient will erase any other IPv4 addresses already assigned. We can't assign an address directly to the bridge, but we can configure a virtual Ethernet device and add it. The setup for this portion involves touching a few more files, but isn’t all that painful. Some “pf” rules to enable NAT, and a dhcpd setup to assign a “fixed” IP to the VM, will get us going, along with some additional details on how to configure the networking for inside the debian VM. Once those steps are completed you should be able to mount NFS and share data from the host to the VM painlessly. 
Beastie Bits MacObserver: Interview with Open Source Developer & Former Apple Manager Jordan Hubbard (https://www.macobserver.com/podcasts/background-mode-jordan-hubbard/) 2016 Google Summer of Code Mentor Summit and MeetBSD Trip Report: Gavin Atkinson (https://www.freebsdfoundation.org/blog/2016-google-summer-of-code-mentor-summit-and-meetbsd-trip-report-gavin-atkinson/) Feedback/Questions Joe - BGP / Vultr Followup (http://pastebin.com/TNyHBYwT) Ryan Moreno asks about Laptops (http://pastebin.com/s4Ypezsz) ***

183: Getting Steamy Here

March 01, 2017 1:10:56 51.07 MB Downloads: 0

This week on BSDNow, we have “Weird Unix Things”, “Is it getting Steamy in here?” and an Interview about BSD Sockets API. This episode was brought to you by Headlines playonbsd with TrueOS: It’s Getting Steamy in Here and I’ve Had Too Much Wine (https://www.trueos.org/blog/playonbsd-trueos-getting-steamy-ive-much-wine/) We’ve done a couple of tutorials in the past on using Steam and Wine with PC-BSD, but now with the addition of playonbsd to the AppCafe library, you have more options than ever before to game on your TrueOS system. We’re going to have a look today at playonbsd, how it works with TrueOS, and what you can expect if you want to give it a try on your own system. Let’s dive right in! Once playonbsd is installed, go back to your blank desktop, right-click on the wallpaper, and select terminal. Playonbsd does almost all the configuring for you, but there are still a couple of simple options you’ll want to configure to give yourself the best experience. In your open terminal, type: playonbsd. You can also find playonbsd by doing a fast search using Lumina’s built-in search function in the start menu after it’s been installed. Once opened, a graphical interface greets us with easy-to-navigate menus and even does most of the work for you. A nice graphical UI that hides the complexity of setting up WINE and Steam, and lets you select the game you want and get it set up. Start gaming quicker, without the headache. If you’re a PC gamer, you should definitely give playonbsd a try! You may be surprised at how well it works. If you want to know ahead of time if your games are well supported or not, head on over to WineHQ and do a search. Many people have tested and provided feedback and even solutions for potential problems with a large variety of video games. This is a great resource if you run into a glitch or other problem. 
Weird Unix thing: 'cd //' (https://jvns.ca/blog/2017/02/08/weird-unix-things-cd/) So why can you do ‘cd //tmp’, and it isn’t the same as ‘cd /tmp’? The spec says: An implementation may further simplify curpath by removing any trailing '/' characters that are not also leading '/' characters, replacing multiple non-leading consecutive '/' characters with a single '/', and replacing three or more leading '/' characters with a single '/'. If, as a result of this canonicalization, the curpath variable is null, no further steps shall be taken. “So! We can replace “three or more leading / characters with a single slash”. That does not say anything about what to do when there are 2 / characters though, which presumably is why cd //tmp leaves you at //tmp.” A pathname that begins with two successive slashes may be interpreted in an implementation-defined manner. So what is it for? Well, the blog did a bit of digging and came up with this stackoverflow answer (http://unix.stackexchange.com/questions/256497/on-what-systems-is-foo-bar-different-from-foo-bar/256569#256569) In cygwin and some other systems // is treated as a unix-ified version of \\, to access UNC windows file sharing paths like \\server\share Perforce, the vcs, uses // to denote a path relative to the depot It seems to have been used in the path for a bunch of different network file systems, but also for myriad other things Testing out snapshots in Apple’s next-generation APFS file system (https://arstechnica.com/apple/2017/02/testing-out-snapshots-in-apples-next-generation-apfs-file-system/) Adam Leventhal takes his DTrace hammer to Apple’s new file system to see what is going on Back in June, Apple announced its new upcoming file system: APFS, or Apple File System. There was no mention of it in the WWDC keynote, but devotees needed no encouragement. They picked over every scintilla of data from the documentation on Apple’s developer site, extrapolating, interpolating, eager for whatever was about to come. 
In the WWDC session hall, the crowd buzzed with a nervous energy, eager for the grand unveiling of APFS. I myself badge-swapped my way into the conference just to get that first glimpse of Apple’s first original filesystem in the 30+ years since HFS. Apple’s presentation didn’t disappoint the hungry crowd. We hoped for a modern filesystem, optimized for next generation hardware, rich with features that have become the norm for data centers and professionals. With APFS, Apple showed a path to meeting those expectations. Dominic Giampaolo and Eric Tamura, leaders of the APFS team, shared performance optimizations, data integrity design, volume management, efficient storage of copied data, and snapshots—arguably the feature of APFS most directly in the user’s control. It’s 2017, and Apple already appears to be making good on its promise with the revelation that the forthcoming iOS 10.3 will use APFS. The number of APFS tinkerers using it for their personal data has instantly gone from a few hundred to a few million. Beta users of iOS 10.3 have already made the switch apparently without incident. They have even ascribed unscientifically-significant performance improvements to APFS. Previously Adam had used DTrace to find a new syscall introduced in OS X, fs_snapshot, but he had not dug into how to use it. Now, it seems, the time has come. Learning from XNU and making some educated guesses, I wrote my first C program to create an APFS snapshot. 
This section has a bit of code, which you can find in this Github repo (https://github.com/ahl/apfs) That just returned “fs_snapshot: Operation not permitted”. So, being Adam, he used DTrace to figure out what the problem was. Running this DTrace script in one terminal while running the snapshot program in another shows the code flow through the kernel as the program executes. In the code flow, the priv_check_cred() function jumps out as a good place to continue because of its name, the fact that fs_snapshot calls it directly, and the fact that it returns 1 which corresponds with EPERM, the error we were getting. Turns out, it just requires some sudo. With a little more testing I wrote my own version of Apple's unreleased snapUtil command from the WWDC demo We figured out the proper use of the fs_snapshot system call and reconstructed the WWDC snapUtil. But all this time an equivalent utility has been lurking on macOS Sierra. If you look in /System/Library/Filesystems/apfs.fs/Contents/Resources/, Apple has included a number of APFS-related utilities, including apfs_snapshot (and, tantalizingly, a tool called hfs_convert). Snapshots let you preserve state to later peruse; we can also revert an APFS volume to a previous state to restore its contents. The current APFS semantics around rollback are a little odd. The revert operation succeeds, but it doesn't take effect until the APFS volume is next mounted. Another reason Apple may not have wanted people messing around with snapshots is that the feature appears to be incomplete. 
Winding yourself into a state where only a reboot can clear a mounted snapshot is easy, and using snapshots seems to break some of the diskutil APFS output. It is interesting to see what you can do with DTrace, as well as to see what a DTrace and ZFS developer thinks of APFS *** Interview - Tom Jones - tj@enoti.me (mailto:tj@enoti.me) Replacing the BSD Sockets API *** News Roundup FreeBSD rc.d script to map ethernet device names by MAC address (https://github.com/eborisch/ethname) Self-contained FreeBSD rc.d script for re-naming devices based on their MAC address. I needed it due to USB Ethernet devices coming up in different orders across OS upgrades. Copy ethname into /usr/local/etc/rc.d/ Add the following to rc.conf: ethname_enable="YES" ethname_devices="em0 ue0 ue1" # Replace with desired devices to rename Create /usr/local/etc/ifmap in the following format: 01:23:45:67:89:ab eth0 01:23:45:67:89:ac eth1 That's it. Use ifconfig_<newname>="" settings in rc.conf with the new names. I know MFSBSD has something like this, but a polished up hybrid of the two should likely be part of the base system if something is not already available. This would be a great “Junior Job” if, say, a viewer wanted to get started with their first FreeBSD patch *** Mog: A different take on the Unix tool cat (https://github.com/witchard/mog) Do you abuse cat to view files? Did you know cat is meant for concatenating files, meaning: cat part1 part2 part3 > wholething.txt. mog is a tool for actually viewing files, and it adds quite a few nice features: syntax highlighting for scripts, hex dumps of binary files, details of image files, objdump output for executables, and directory listings. mog reads the $HOME/.mogrc config file which describes a series of operations it can do in an ordered manner. Each operation has a match command and an action command. For each file you give to mog it will test each match command in turn; when one matches it will perform the action. 
A reasonably useful config file is generated when you first run it. How Unix erases things when you type a backspace while entering text (https://utcc.utoronto.ca/~cks/space/blog/unix/HowUnixBackspaces) Yesterday I mentioned in passing that printing a DEL character doesn't actually erase anything. This raises an interesting question, because when you're typing something into a Unix system and hit your backspace key, Unix sure erases the last character that you entered. So how is it doing that? The answer turns out to be basically what you'd expect, although the actual implementation rapidly gets complex. When you hit backspace, the kernel tty line discipline rubs out your previous character by printing (in the simple case) Ctrl-H, a space, and then another Ctrl-H. Of course just backing up one character is not always the correct way of erasing input, and that's when it gets complicated for the kernel. To start with we have tabs, because when you (the user) backspace over a tab you want the cursor to jump all the way back, not just move back one space. The kernel has a certain amount of code to work out what column it thinks you're on and then back up an appropriate number of spaces with Ctrl-Hs. Then we have the case when you quoted a control character while entering it, e.g. by typing Ctrl-V Ctrl-H; this causes the kernel to print the Ctrl-H instead of acting on it, and it prints it as the two-character sequence ^H. When you hit backspace to erase that, of course you want both (printed) characters to be rubbed out, not just the 'H'. So the kernel needs to keep track of that and rub out two characters instead of just one. Chris then provides an example, from IllumOS, of the kernel trying to deal with multibyte characters. FreeBSD also handles backspacing a space specially, because you don't need to actually rub that out with a '\b \b' sequence; you can just print a plain \b. Other kernels don't seem to bother with this optimization. 
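The rubout sequences Chris describes can be condensed into a few lines. This is a simplified Python sketch of the simple cases only; a real line discipline also tracks columns so it can erase tabs, which is omitted here:

```python
def rubout(ch: str) -> str:
    """Terminal bytes to erase the echo of the last typed character ch."""
    if ch == " ":
        return "\b"         # FreeBSD shortcut: no need to overwrite a space
    if ord(ch) < 32 and ch not in ("\t", "\n"):
        return "\b \b" * 2  # quoted control chars echo as two glyphs, e.g. ^H
    return "\b \b"          # the simple case: back up, blank it, back up

# Tabs are the hard part: the kernel must remember which column the tab
# started in and emit enough plain "\b"s to get the cursor back there.
```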
The FreeBSD code for this space optimization is in sys/kern/tty_ttydisc.c in the ttydisc_rubchar function. PS: If you want to see the kernel's handling of backspace in action, you usually can't test it at your shell prompt, because you're almost certainly using a shell that supports command line editing and readline and so on. Command line editing requires taking over input processing from the kernel, and so such shells are handling everything themselves. My usual way to see what the kernel is doing is to run 'cat >/dev/null' and then type away. And you thought the backspace key would be simple... *** FreeBSD ports now have Wayland (http://www.freshports.org/graphics/wayland/) We’ve discussed the pending Wayland work, but we wanted to point you today to the ports, which are in the mainline FreeBSD ports tree now. First of all, (And I was wondering how they would deal with this) it has landed in the “graphics” category, since Wayland is the Anti-X11, putting it in x11/ didn’t make a lot of sense. A couple of notes before you start installing new packages and expecting wayland to “just work”. First, this does require that you have working DRM from the kernel side. You’ll want to grab TrueOS or build from Matt Macy’s FreeBSD branches on GitHub before testing on any kind of modern Intel GPU. Nvidia with modesetting should be supported. Next, not all desktops will “just work”. You may need to grab the experimental Weston compositor. KDE / Gnome (And Lumina) and friends will grow Wayland support in the future, so don’t expect to just fire up $whatever and have it all work out of the box. Feedback is needed! This is brand new functionality for FreeBSD, and the maintainers will want to hear your results. For us on the TrueOS side we are interested as well, since we want to port Lumina over to Wayland soon(ish) Happy Experimenting! 
*** Beastie Bits Faces of FreeBSD 2017: Joseph Kong (https://www.freebsdfoundation.org/blog/faces-of-freebsd-2017-joseph-kong/) OPNsense 17.1 “Eclectic Eagle”, based on FreeBSD 11 Released (https://opnsense.org/opnsense-17-1-released/) Why you should start programming on UNIX (http://www.koszek.com/blog/2017/01/28/why-you-should-start-programming-on-unix/) OpenSMTPD Mail Filtering (http://eradman.com/posts/opensmtpd-filtering.html) Feedback/Questions Zane - Databases and Jails (http://pastebin.com/89AyGe5F) Mohammad - USB Install (http://pastebin.com/Te8sz9id) Chuck - Updating Jails (http://pastebin.com/G2SzahWL) David - Lumina / LXQt (http://pastebin.com/71ExJLpL) ***

182: Bloaty McBloatface

February 22, 2017 1:06:58 48.22 MB Downloads: 0

This week on the show, we’ve got FreeBSD quarterly Status reports to discuss, OpenBSD changes to the installer, EC2 and IPv6 and more. Stay tuned! This episode was brought to you by Headlines OpenBSD changes of note 6 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-6) OpenBSD can now be cross built with clang. Work on this continues. Build ld.so with -fno-builtin because otherwise clang would optimize the local versions of functions like _dl_memset into a call to memset, which doesn’t exist. Add connection timeout for ftp (http). Mostly for the installer so it can error out and try something else. Complete https support for the installer. I wonder how they handle certificate verification. I need to look into this as I’d like to switch the FreeBSD installer to this as well. New ocspcheck utility to validate a certificate against its ocsp responder. net lock here, net lock there, net lock not quite everywhere but more than before. More per-CPU counters in networking code as well. Disable and lock Silicon Debug feature on modern Intel CPUs. Prevent wireless frame injection attack described at 33C3 in the talk titled “Predicting and Abusing WPA2/802.11 Group Keys” by Mathy Vanhoef. Add support for multiple transmit ifqueues per network interface. Supported drivers include bge, bnx, em, myx, ix, hvn, xnf. pledge now tracks when a file was opened and uses this to permit or deny ioctl. Reimplement httpd’s support for byte ranges. Fixes a memory DOS. FreeBSD 2016Q4 Status Report (https://www.freebsd.org/news/status/report-2016-10-2016-12.html) An overview of some of the work that happened in October - December 2016 The ports tree saw many updates and surpassed 27,000 ports The core team was busy as usual, and the foundation attended and/or sponsored a record 24 events in 2016. CEPH on FreeBSD seems to be coming along nicely. For those that do not know, CEPH is a distributed filesystem that can sit on top of another filesystem. 
That is, you can use it to create a clustered filesystem out of a bunch of ZFS servers. Would love to have some viewers give it a try and report back
+ OpenBSM, the FreeBSD audit framework, got some updates
+ Ed Schouten committed a front end to export sysctl data in a format usable by Prometheus, the open source monitoring system. This is useful for other monitoring software too
+ Lots of updates for various ARM boards
+ There is an update on Reproducible Builds in FreeBSD: “It is now possible to build the FreeBSD base system (kernel and userland) completely reproducibly, although it currently requires a few non-default settings”, and the ports tree is at 80% reproducible
+ Lots of toolchain updates (gcc, lld, gdb)
+ Various updates from major ports teams

***

Amazon rolls out IPv6 support on EC2 (http://www.daemonology.net/blog/2017-01-26-IPv6-on-FreeBSD-EC2.html)

+ A few hours ago Amazon announced that they had rolled out IPv6 support in EC2 to 15 regions — everywhere except the Beijing region, apparently. This seems as good a time as any to write about using IPv6 in EC2 on FreeBSD instances.
+ First, the good news: Future FreeBSD releases will support IPv6 "out of the box" on EC2. I committed changes to HEAD last week, and merged them to the stable/11 branch moments ago, to have FreeBSD automatically use whatever IPv6 addresses EC2 makes available to it.
+ Next, the annoying news: To get IPv6 support in EC2 from existing FreeBSD releases (10.3, 11.0) you'll need to run a few simple commands. I consider this unfortunate but inevitable: While Amazon has been unusually helpful recently, there's nothing they could have done to get support for their IPv6 networking configuration into FreeBSD a year before they launched it.
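As a concrete sketch, the setup Colin describes for existing releases boils down to a short shell session (assuming a FreeBSD 10.3/11.0 EC2 instance; using sysrc to edit /etc/rc.conf is my shorthand here, not something the post prescribes):

```shell
# Install the dual-stack dhclient wrapper from packages
pkg install -y dual-dhclient

# Enable DHCP plus router-advertisement handling on all interfaces,
# and point rc at dual-dhclient; sysrc updates /etc/rc.conf in place
sysrc ifconfig_DEFAULT="SYNCDHCP accept_rtadv"
sysrc ipv6_activate_all_interfaces="YES"
sysrc dhclient_program="/usr/local/sbin/dual-dhclient"

# Restart networking (or reboot) to pick up the IPv6 address
service netif restart
```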
You need the dual-dhclient port: pkg install dual-dhclient

And the following lines in your /etc/rc.conf:

ifconfig_DEFAULT="SYNCDHCP accept_rtadv"
ipv6_activate_all_interfaces="YES"
dhclient_program="/usr/local/sbin/dual-dhclient"

+ It is good to see FreeBSD ready to use this feature on day 0, not something we would have had in the past
+ Finally, one important caveat: While EC2 is clearly the most important place to have IPv6 support, and one which many of us have been waiting a long time to get, this is not the only service where IPv6 support is important. Of particular concern to me, Application Load Balancer support for IPv6 is still missing in many regions, and Elastic Load Balancers in VPC don't support IPv6 at all — which matters to those of us who run non-HTTP services. Make sure that IPv6 support has been rolled out for all the services you need before you start migrating.
+ Colin’s blog also has the details on how to actually activate IPv6 from the Amazon side; if only it were as easy as configuring it on the FreeBSD side

***

FreeBSD’s George Neville-Neil tries valiantly for over an hour to convince a Linux fan of the error of their ways (https://www.youtube.com/watch?v=cofKxtIO3Is)

+ In today's episode of the Lunduke Hour I talk to George Neville-Neil -- author and FreeBSD advocate. He tries to convince me, a Linux user, that FreeBSD is better.
+ They cover quite a few topics, including:
+ licensing, and the motivations behind it
+ vendor relations
+ community
+ development model
+ drivers and hardware support
+ George also talks about his work with the FreeBSD Foundation, and the book he co-authored, “The Design and Implementation of the FreeBSD Operating System, 2nd Edition”

News Roundup

An interactive script that makes it easy to install 50+ desktop environments following a base install of FreeBSD 11 (https://github.com/rosedovell/unixdesktops)

+ And I thought I was doing good when I wrote a patch for the installer that enables your choice of 3 desktop environments...
+ This is a collection of scripts meant to install desktop environments on unix-like operating systems following a base install. I call one of these 'complete' when it meets the following requirements:
+ A graphical logon manager is presented without user intervention after powering on the machine
+ Logging into that graphical logon manager takes the user into the specified desktop environment
+ The user can open a terminal emulator
+ I need to revive my patch, and add Lumina to it

***

Firefox 51 on sparc64 - we did not hit the wall yet (https://blog.netbsd.org/tnf/entry/firefox_51_on_sparc64_we)

+ A NetBSD developer tells the story of getting Firefox 51 running on their sparc64 machine
+ It turns out the bug impacted amd64 as well, so it was quickly fixed
+ They are a bit less hopeful about the future, since Firefox will soon require Rust to compile, and Rust is not working on sparc64 yet
+ Although there has been some activity on the Rust-on-sparc64 front, so maybe there is hope
+ The post also looks at a few alternative browsers, but is not hopeful

***

Introducing Bloaty McBloatface: a size profiler for binaries (http://blog.reverberate.org/2016/11/07/introducing-bloaty-mcbloatface.html)

+ I’m very excited to announce that today I’m open-sourcing a tool I’ve been working on for several months at Google.
It’s called Bloaty McBloatface, and it lets you explore what’s taking up space in your .o, .a, .so, and executable binary files.
+ Bloaty is available under the Apache 2 license. All of the code is available on GitHub: github.com/google/bloaty. It is quick and easy to build, though it does require a somewhat recent compiler, since it uses C++11 extensively.
+ Bloaty primarily supports ELF files (Linux, BSD, etc.) but there is some support for Mach-O files on OS X too. I’m interested in expanding Bloaty’s capabilities to more platforms if there is interest!
+ I need to try this on some of the boot code files, to see if there are places we can trim some fat
+ We’ve been using Bloaty a lot on the Protocol Buffers team at Google to evaluate the binary size impacts of our changes. If a change causes a size increase, where did it come from? What sections/symbols grew, and why?
+ Bloaty has a diff mode for understanding changes in binary size
+ The diff mode looks especially interesting. It might be worth setting up some kind of CI testing that alerts if a change results in a significant size increase in a binary or library

***

A BSD licensed mdns responder (https://github.com/kristapsdz/mdnsd)

+ One of the things we just have to deal with in the modern world is service and system discovery. Many of us have fiddled with avahi or mdnsd and related “mdns” services. For various reasons, those often haven’t been the best fit on BSD systems. Today we have a github project to point you at, which, while a bit older, has recently been updated with pledge() support for OpenBSD.
+ First of all, why do we need an alternative? They list their reasons:
+ This is an attempt to bring native mdns/dns-sd to OpenBSD. Mainly cause all the other options suck and proper network browsing is a nice feature these days.
+ Why not Apple's mdnsd? 1 - It sucks big time. 2 - No BSD License (Apache-2). 3 - Overcomplex API. 4 - Not OpenBSD-like.
+ Why not Avahi? 1 - No BSD License (LGPL). 2 - Overcomplex API.
+ 3 - Not OpenBSD-like. 4 - DBUS and lots of dependencies.
+ Those already sound like pretty compelling reasons. What makes this “new” information again is the pledge support, and perhaps it’s time for more BSDs to start considering importing something like mdnsd into their base system to make system discovery more “automatic”

***

Beastie Bits

+ Benno Rice at Linux.Conf.Au: The Trouble with FreeBSD (https://www.youtube.com/watch?v=Ib7tFvw34DM)
+ State of the Port of VMS to x86 (http://vmssoftware.com/pdfs/State_of_Port_20170105.pdf)
+ Microsoft Azure now offers Patent Troll Protection (https://thestack.com/cloud/2017/02/08/microsoft-azure-now-offers-patent-troll-ip-protection/)
+ FreeBSD Storage Summit 2017 (https://www.freebsdfoundation.org/news-and-events/event-calendar/freebsd-storage-summit-2017/)
+ If you are going to be in Tokyo, make sure you come to bhyvecon (http://bhyvecon.org/)

Feedback/Questions

+ Farhan - Laptops (http://pastebin.com/bVqsvM3r)
+ Hjalti - rclone (http://pastebin.com/7KWYX2Mg)
+ Ivan - Jails (http://pastebin.com/U5XyzMDR)
+ Jungle - Traffic Control (http://pastebin.com/sK7uEDpn)

***

181: The Cantrillogy (Not special edition)

February 15, 2017 4:26:28 127.9 MB Downloads: 0

This week on BSDNow we have a Cantrill special to bring you! All three interviews back to back in their original glory, you won’t want to miss it. This episode was brought to you by

Show Notes:

+ FOSDEM 2017 BSD Dev Room Videos
+ Ubuntu Slaughters Kittens | BSD Now 103
+ The Cantrill Strikes Back | BSD Now 117
+ Return of the Cantrill | BSD Now 163

180: Illuminating the desktop

February 08, 2017 51:28 37.06 MB Downloads: 0

This week on BSDNow, I’m out of town, but we have a great interview with Ken Moore (my brother) about the latest in BSD desktop computing and more. This episode was brought to you by

Interview - Ken Moore - ken@trueos.org (mailto:ken@trueos.org)
TrueOS, Lumina, SysAdm, The BSD Desktop Ecosystem

+ KM: Thank you for joining us again, can you believe it has been an entire year?
+ AJ: Let’s start by getting an update on Lumina, what has happened in the last year?
+ KM: What is the change you are most proud of in that time?
+ AJ: What do you think of the recent introduction of Wayland to the ports tree? Do you think this will impact Lumina? Do you have any plans?
+ KM:
+ AJ: What has changed with SysAdm after a year of development?
+ KM: What plans do you have for the future of SysAdm?
+ AJ: How has it been working with the drm-next branch? Does it feel like that is progressing?
+ KM: Can you tell us about some of the other TrueOS work you have been doing?
+ AJ: What are your thoughts on how the BSD Desktop Ecosystem has changed over the last year? Do you think the future looks better or worse now?
+ KM: Do you think systemd is going to continue to make things work? Or does it seem like there is enough resistance to it that fewer projects are going to throw out support for anything not-systemd
+ AJ: Anything else you want to add?