
Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. It also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD, and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while still being entertaining for the people who are already pros. The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day.
Episode 262: OpenBSD Surfacing | BSD Now 262
OpenBSD on Microsoft Surface Go, FreeBSD Foundation August Update, What’s taking so long with Project Trident, pkgsrc config file versioning, and MacOS remnants in ZFS code.

##Headlines

###OpenBSD on the Microsoft Surface Go

For some reason I like small laptops and the constraints they place on me (as long as they’re still usable). I used a Dell Mini 9 for a long time back in the netbook days and was recently using an 11" MacBook Air as my primary development machine for many years. Recently Microsoft announced a smaller, cheaper version of its Surface tablets called Surface Go, which piqued my interest.

Hardware
The Surface Go is available in two hardware configurations: one with 4GB of RAM and a 64GB eMMC, and another with 8GB of RAM and a 128GB NVMe SSD. (I went with the latter.) Both ship with an Intel Pentium Gold 4415Y processor which is not very fast, but it’s certainly usable. The tablet measures 9.65" across, 6.9" tall, and 0.3" thick. Its 10" diagonal 3:2 touchscreen is covered with Gorilla Glass and has a resolution of 1800x1200. The bezel is quite large, especially for such a small screen, but it makes sense on a device that is meant to be held, to avoid accidental screen touches.
The keyboard and touchpad are located on a separate, removable slab called the Surface Go Signature Type Cover, which is sold separately. I opted for the “cobalt blue” cover, which has a soft, cloth-like alcantara material. The cover attaches magnetically along the bottom edge of the device and presents USB-attached keyboard and touchpad devices. When the cover is folded up against the screen, it sends an ACPI sleep signal and is held to the screen magnetically. During normal use, the cover can be positioned flat on a surface or slightly raised up about 3/4" near the screen for better ergonomics. When using the device as a tablet, the cover can be rotated behind the screen, which causes it to automatically stop sending keyboard and touchpad events until it is rotated back around.
The keyboard has a decent amount of key travel and a good layout, with Home/End/Page Up/Page Down being accessible via Fn+Left/Right/Up/Down, but also dedicated Home/End/Page Up/Page Down keys on the F9-F12 keys, which I find quite useful since the keyboard layout is somewhat small. By default, the F1-F12 keys do not send F1-F12 key codes and Fn must be used, either held down temporarily or pressed by itself to enable Fn-lock, which annoyingly keeps the bright Fn LED illuminated. The keys are backlit with three levels of adjustment, handled by the keyboard itself with the F7 key. The touchpad on the Type Cover is a Windows Precision Touchpad connected via USB HID. It has a decent click feel, but when the cover is angled up instead of flat on a surface, it sounds a bit hollow and cheap.

Surface Go Pen
The touchscreen is powered by an Elantech chip connected via HID-over-i2c, which also supports pen input. A Surface Pen digitizer is available separately from Microsoft and comes in the same colors as the Type Covers. The pen works without any pairing necessary, though the top button on it works over Bluetooth and so requires pairing to use. Either way, the pen requires an AAAA battery inside it to operate. The Surface Pen can attach magnetically to the left side of the screen when not in use. A kickstand can swing out behind the display to use the tablet in a laptop form factor, and it can adjust to any angle up to about 170 degrees.
The kickstand stays firmly in place wherever it is positioned, which also means it requires a bit of force to pull it out when initially placing the Surface Go on a desk. Along the top of the display are a power button and physical volume rocker buttons. Along the right side are the 3.5mm headphone jack, USB-C port, power port, and microSD card slot located behind the kickstand. Charging can be done via USB-C or the dedicated charge port, which accommodates a magnetically-attached, thin barrel similar to Apple’s first generation MagSafe adapter. The charging cable has a white LED that glows when connected, which is kind of annoying since it’s near the mid-line of the screen rather than down by the keyboard. Unlike Apple’s MagSafe, the indicator light does not indicate whether the battery is charged or not. The barrel charger plug can be placed up or down, but in either direction I find it puts an awkward strain on the power cable coming out of it due to the vertical position of the port. Wireless connectivity is provided by a Qualcomm Atheros QCA6174 802.11ac chip, which also provides Bluetooth connectivity. Most of the sensors on the device, such as the gyroscope and ambient light sensor, are connected behind an Intel Sensor Hub PCI device, which provides some power savings as the host CPU doesn’t have to poll the sensors all the time.

Firmware
The Surface Go’s BIOS/firmware menu can be entered by holding down the Volume Up button, then pressing and releasing the Power button, and releasing Volume Up when the menu appears. Secure Boot as well as various hardware components can be disabled in this menu. Boot order can also be adjusted. A temporary boot menu can be brought up the same way but using Volume Down instead.

###FreeBSD Foundation Update, August 2018

MESSAGE FROM THE EXECUTIVE DIRECTOR
Dear FreeBSD Community Member,
It’s been a busy summer for the Foundation. From traveling around the globe spreading the word about FreeBSD, to bringing on new team members to improve the Project’s Continuous Integration work, we’re very excited about what we’ve accomplished. Take a minute to check out the latest updates within our Foundation sponsored projects; read more about our advocacy efforts in Bangladesh and community building in Cambridge; don’t miss upcoming Travel Grant deadlines and new Developer Summits; and be sure to find out how your support will ensure our progress continues into 2019. We can’t do this without you! Happy reading!!
Deb

In this update:
- August 2018 Development Projects Update
- Fundraising Update: Supporting the Project
- August 2018 Release Engineering Update
- BSDCam 2018 Recap
- October 2018 FreeBSD Developer Summit Call for Participation
- SANOG32 and COSCUP 2018 Recap
- MeetBSD 2018 Travel Grant Application Deadline: September 7

##News Roundup

###Project Trident: What’s taking so long?

What is taking so long? The short answer is that it’s complicated. Project Trident is quite literally a test of the new TrueOS build system. As expected, there have been quite a few bugs, undocumented features, and other optional bits that we discovered we needed that were not initially present. All of these things have to be addressed and retested in a constant back and forth process. While Ken and JT are both experienced developers, neither has done this kind of release engineering before. JT has done some release engineering back in his Linux days, but the TrueOS and FreeBSD build system is very different. Both Ken and JT are learning a completely new way of building a FreeBSD/TrueOS distribution.
Please keep in mind that no one has used this new TrueOS build system before, so Ken and JT want to not only provide a good Trident release, but also provide a model or template for other potential TrueOS distributions too!

Where are we now? Through perseverance, trial and error, and a lot of head-scratching we have reached the point of having successful builds. It took a while to get there, but now we are simply working out a few bugs with the new installer that Ken wrote, as well as finding and fixing all the new Xorg configuration options which recently landed in FreeBSD. We also found that a number of services have been removed or replaced between TrueOS 18.03 and 18.06, so we need to adjust what we consider the “base” services for the desktop. All of these issues are being resolved and we are continually rebuilding and pulling in new patches from TrueOS as soon as they are committed. In the meantime we have made an early BETA release of Trident available to the users in our Telegram Channel for those who want to help out in testing these early versions.

Do you foresee any other delays? At the moment we are doing many iterations of testing and tweaking the install ISO and package configurations in order to ensure that all the critical functionality works out-of-box (networking, sound, video, basic apps, etc). While we do not foresee any other major delays, sometimes things happen that are outside of our control. For example, one of the delays that hit recently was completely unexpected: we had a hard drive failure on our build server. Until recently, the aptly named “Poseidon” build server was running a Micron m500dc drive, but that drive is now constantly reporting errors. Despite ordering a replacement Western Digital Blue SSD several weeks ago, we just received it this past week. The drive is now installed and the builder is back to full functionality, but we did lose many precious days to the delay. The build server for Project Trident is very similar to the one that JT donated to the TrueOS project. JT had another DL580 G7, so he donated one to the Trident Project for their build server. Poseidon also has 256GB RAM (64 x 4GB sticks), which is a smidge more than what the TrueOS builder has. Since we are talking about hardware, we should probably address another question we get often: “What hardware are the devs testing on?” So let’s go ahead and answer that one now.

Developer Hardware
JT: His main test box is a custom-built Intel i7 7700K system with 32GB RAM, dual Intel Optane 900P drives, and an Nvidia GTX 1070 with four 4K Acer monitors. He also uses a Lenovo x250 ThinkPad alongside a desk full of x230t and x220 ThinkPads, one of which he gave away at SouthEast LinuxFest this year, which you can read about here. It doesn’t end there: being a complete hardware hoarder, JT also tests on several Intel NUCs and his second laptop, a Fujitsu T904, not to mention a plethora of HP DL580 servers, a DL980 server, and a stack of BL485c, BL460c, and BL490c blades in his HP c7000 and c3000 BladeCenter chassis. (Maybe it’s time for an intervention for his hardware collecting habits.)
Ken: For a laptop, he primarily uses a 3rd generation X1 Carbon, but also has an old Eee PC T101MT netbook (dual core 1GHz, 2GB of memory) which he uses for verifying how well Trident works on low-end hardware.
As far as workstations go, his office computer is an Intel i7 with an NVIDIA GeForce GTX 960 running three 4K monitors, and he has a couple of other custom-built workstations (1 AMD, 1 Intel+NVIDIA) at his home. Generally he assembles random workstations based on hardware that was given to him or that he could acquire cheap.
Tim: uses a third gen X1 Carbon and a custom-built desktop with an Intel Core i5-4440 CPU, 16 GiB RAM, an Nvidia GeForce GTX 750 Ti, and a RealTek 8168 / 8111 network card.
Rod: No one knows what Rod uses. It’s kinda like how many licks it takes to get to the center of a Tootsie-Roll Tootsie-Pop… the world may just never know.

###NetBSD GSoC: pkgsrc config file versioning

A series of reports from the course of the summer on this Google Summer of Code project. The goal of the project is to integrate with a VCS (Version Control System) to make managing local changes to config files for packages easier.

GSoC 2018 Reports: Configuration files versioning in pkgsrc, Part 1
Packages may install code (both machine executable code and interpreted programs), documentation and manual pages, source headers, shared libraries, and other resources such as graphic elements, sounds, fonts, document templates, translations, and configuration files, or a combination of them. Configuration files are usually the means through which the behaviour of software without a user interface is specified. This covers parts of the operating system, network daemons, and programs in general that don’t come with an interactive graphical or textual interface as the principal means for setting options. System-wide configuration for operating system software tends to be kept under /etc, while configuration for software installed via pkgsrc ends up under LOCALBASE/etc (e.g., /usr/pkg/etc). Software packaged as part of pkgsrc provides example configuration files, if any, which usually get extracted to LOCALBASE/share/examples/PKGBASE/. Don’t worry: automatic merging is disabled by default; set $VCSAUTOMERGE to enable it. In order to avoid breakage, installed configuration is backed up first in the VCS, separating user-modified files from files that have already been automatically merged in the past, in order to allow the administrator to easily restore the last manually edited file in case of breakage. VCS functionality only applies to configuration files, not to rc.d scripts, and only if the environment variable $NOVCS is unset. The version control system to be used as a backend can be set through $VCS. It defaults to RCS, the Revision Control System, which works only locally and doesn’t support atomic transactions. Other backends such as CVS are supported and more will come; these, being used at the explicit request of the administrator, need to be already installed and placed in a directory that is part of $PATH.

GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 2: remote repositories (git and CVS)
pkgsrc is now able to deploy configuration from packages being installed from a remote, site-specific VCS repository. User-modified files are always tracked even if the automerge functionality is not enabled, and a new tool, pkgconftrack(1), exists to manually store user changes made outside of package upgrade time. Version control software is executed as the same user running pkg_add or make install, unless the user is “root”. In this case, a separate, unprivileged user, pkgvcsconf, gets created with its own home directory and a working login shell (but no password).
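Putting the knobs from these reports together, a pkginstall.conf for a git-backed setup might look roughly like this. This is a sketch only: the variable spellings are taken verbatim from the reports above, which may have lost underscores in transcription, and the role name is invented for illustration.

```
# pkginstall.conf -- hypothetical example of the GSoC config-versioning knobs
VCS=git            # backend to use instead of the default RCS
VCSAUTOMERGE=yes   # opt in to automatic merging (disabled by default)
VCSTRACKCONF=yes   # track installed configuration files in the VCS
VCSCONFPULL=yes    # then pull site-specific configuration at install time
ROLE=webserver     # illustrative role name for role-based configuration
```

With tracking enabled, changes made outside of package upgrade time can then be stored manually with the pkgconftrack(1) tool mentioned above.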
The home directory is not strictly necessary; it exists to facilitate migrations between repositories and VCS changes, and it also serves to store keys used to access remote repositories. Using git instead of RCS is simply done by setting VCS=git in pkginstall.conf.

GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 3: remote repositories (SVN and Mercurial)

GSoC 2018 Reports: Configuration files versioning in pkgsrc, part 4: configuration deployment, pkgtools and future improvements
Support for configuration tracking is in scripts, pkginstall scripts, that get built into binary packages and are run by pkg_add upon installation. The idea behind the proposal suggested that users of the new feature should be able to store revisions of their installed configuration files, and of package-provided defaults, both in local or remote repositories. With this capability in place, it doesn’t take much to make the scripts “pull” configuration from a VCS repository at installation time. That’s what setting VCSCONFPULL=yes in pkginstall.conf after having enabled VCSTRACKCONF does: you are free to use official, third-party prebuilt packages that have no customization in them, enable these options, and point pkgsrc to a private conf repository. If it contains custom configuration for the software you are installing, an attempt will be made to use it and install it on your system. If that fails, pkginstall will fall back to using the defaults that come inside the package. RC scripts are always deployed from the binary package, if present and if PKGRCDSCRIPTS=yes is set in pkginstall.conf or the environment. This will be part of packages, not a separate solution like configuration management tools. It doesn’t support running scripts on the target system to customize the installation, it doesn’t come with its own domain-specific language, and it won’t run as a daemon or require remote logins to work. It’s quite limited in scope, but you can define a ROLE for your system in pkginstall.conf or in the environment, and pkgsrc will look for configuration you or your organization crafted for such a role (e.g., public, standalone webserver vs. reverse proxy or node in a database cluster).

###A little bit of the one-time MacOS version still lingers in ZFS

Once upon a time, Apple came very close to releasing ZFS as part of MacOS. Apple did this work in its own copy of the ZFS source base (as far as I know), but the people in Sun knew about it, and it turns out that even today there is one little lingering sign of this hoped-for and perhaps prepared-for ZFS port in the ZFS source code. Well, sort of, because it’s not quite in code. Lurking in the function that reads ZFS directories to turn (ZFS) directory entries into the filesystem-independent format that the kernel wants is the following comment:

objnum = ZFS_DIRENT_OBJ(zap.za_first_integer);
/*
 * MacOS X can extract the object type here such as:
 * uint8_t type = ZFS_DIRENT_TYPE(zap.za_first_integer);
 */

Specifically, this is in zfs_readdir in zfs_vnops.c. ZFS maintains file type information in directories. This information can’t be used on Solaris (and thus Illumos), where the overall kernel doesn’t have this in its filesystem-independent directory entry format, but it could have been on MacOS (‘Darwin’), because MacOS is among the Unixes that support d_type. The comment itself dates all the way back to this 2007 commit, which includes the change ‘reserve bits in directory entry for file type’, which created the whole setup for this.
I don’t know if this file type support was added specifically to help out Apple’s MacOS X port of ZFS, but it’s certainly possible, and in 2007 it seems likely that this port was at least on the minds of ZFS developers. It’s interesting but understandable that FreeBSD didn’t seem to have influenced them in the same way, at least as far as comments in the source code go; this file type support is equally useful for FreeBSD, and the FreeBSD ZFS port dates to 2007 too (per this announcement). Regardless of the exact reason that ZFS picked up maintaining file type information in directory entries, it’s quite useful for people on both FreeBSD and Linux that it does so. File type information is useful for any number of things, and ZFS filesystems can (and do) provide this information on those Unixes, which helps make ZFS feel like a truly first-class filesystem, one that supports all of the expected general system features.

##Beastie Bits
- Mac-like FreeBSD Laptop
- Syncthing on FreeBSD
- New ZFS Boot Environments Tool
- My system’s time was so wrong, that even ntpd didn’t work
- OpenSSH 7.8/7.8p1 (2018-08-24)
- EuroBSDcon (Sept 20-23rd) registration Early Bird Period is coming to an end
- MeetBSD (Oct 18-20th) is coming up fast, hurry up and register!
- AsiaBSDCon 2019 Dates

##Feedback/Questions
- Will - Kudos and a Question
- Peter - Fanless Computers
- Ron - ZFS disk clone or replace or something
- Bostjan - ZFS Record Size

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 261: FreeBSDcon Flashback | BSD Now 261
Insight into TrueOS and Trident, stop evildoers with pf-badhost, Flashback to FreeBSDcon ‘99, OpenBSD’s measures against TLBleed, play Morrowind on OpenBSD in 5 steps, DragonFlyBSD developers shocked at Threadripper performance, and more.

##Headlines

###An Insight into the Future of TrueOS BSD and Project Trident

Last month, TrueOS announced that they would be spinning off their desktop offering. The team behind the new project, named Project Trident, have been working furiously towards their first release. They did take a few minutes to answer some of our questions about Project Trident and TrueOS. I would like to thank JT and Ken for taking the time to compile these answers.

It’s FOSS: What is Project Trident?

Project Trident: Project Trident is the continuation of the TrueOS Desktop. Essentially, it is the continuation of the primary “TrueOS software” that people have been using for the past 2 years. The continuing evolution of the entire TrueOS project has reached a stage where it became necessary to reorganize the project. To understand this change, it is important to know the history of the TrueOS project.
Originally, Kris Moore created PC-BSD. This was a desktop release of FreeBSD focused on providing a simple and user-friendly graphical experience for FreeBSD. PC-BSD grew and matured over many years. During the evolution of PC-BSD, many users began asking for a server-focused version of the software. Kris agreed, and TrueOS was born as a scaled-down server version of PC-BSD. In late 2016, more contributors and growth resulted in significant changes to the PC-BSD codebase. Because the new development was so markedly different from the original PC-BSD design, it was decided to rebrand the project. TrueOS was chosen as the name for this new direction for PC-BSD, as the project had grown beyond providing only a graphical front end to FreeBSD and was beginning to make fundamental changes to the FreeBSD operating system. One of these changes was moving PC-BSD from being based on each FreeBSD release to TrueOS being based on the active and less outdated FreeBSD Current. Other major changes are using OpenRC for service management and being more aggressive about addressing long-standing issues with the FreeBSD release process. TrueOS moved toward a rolling release cycle, twice a year, which tested and merged FreeBSD changes directly from the developer instead of waiting months or even years for the FreeBSD review process to finish. TrueOS also deprecated and removed obsolete technology much more regularly.
As the TrueOS Project grew, the developers found these changes were needed by other FreeBSD-based projects. These projects began expressing interest in using TrueOS rather than FreeBSD as the base for their project. This demonstrated that TrueOS needed to again evolve into a distribution framework for any BSD project to use. This allows port maintainers and source developers from any BSD project to pool their resources and use the same source repositories, while allowing every distribution to still customize, build, and release their own self-contained project. The result is a natural split of the traditional TrueOS team. There were now naturally two teams in the TrueOS project: those working on the build infrastructure and FreeBSD enhancements – the “core” part of the project – and those working on end-user experience and utility – the “desktop” part of the project. When the decision was made to formally split the projects, the obvious question that arose was what to call the “Desktop” project.
As TrueOS was already positioned to be a BSD distribution platform, the developers agreed the desktop side should pick a new name. There were other considerations too, one notable one being that we were concerned that if we continued to call the desktop project “TrueOS Desktop”, it would prevent people from considering TrueOS as the basis for their distribution because of misconceptions that TrueOS was a desktop-focused OS. It also helps to “level the playing field” for other desktop distributions like GhostBSD, so that TrueOS is not viewed as having a single “blessed” desktop version.

It’s FOSS: What features will TrueOS add to the FreeBSD base?

Project Trident: TrueOS has already added a number of features to FreeBSD:
- OpenRC replaces rc.d for service management
- LibreSSL in base
- Root NSS certificates out-of-box
- Scriptable installations (pc-sysinstall)
The full list of changes can be seen on the TrueOS repository (https://github.com/trueos/trueos/blob/trueos-master/README.md). This list does change quite regularly as FreeBSD development itself changes.

It’s FOSS: I understand that TrueOS will have a new feature that will make creating a desktop spin of TrueOS very easy. Could you explain that new feature?

Project Trident: Historically, one of the biggest hurdles for creating a desktop version of FreeBSD is that the build options for packages are tuned for servers rather than desktops. This means a desktop distribution cannot use the pre-built packages from FreeBSD and must build, use, and maintain a custom package repository. Maintaining a fork of the FreeBSD ports tree is no trivial task. TrueOS has created a full distribution framework, so now all it takes to create a custom build of FreeBSD is a single JSON manifest file. There is now a single “source of truth” for the source and ports repositories that is maintained by the TrueOS team and regularly tagged with “stable” build markers. All projects can use this framework, which makes updates trivial.

It’s FOSS: Do you think that the new focus of TrueOS will lead to the creation of more desktop-centered BSDs?

Project Trident: That is the hope. Historically, creating a desktop-centered BSD has required a lot of specialized knowledge. Not only do most people not have this knowledge, but many do not even know what they need to learn until they start troubleshooting. TrueOS is trying to drastically simplify this process to enable the wider Open Source community to experiment, contribute, and enjoy BSD-based projects.

It’s FOSS: What is going to happen to TrueOS Pico? Will Project Trident have ARM support?

Project Trident: Project Trident will be dependent on TrueOS for ARM support. The developers have talked about the possibility of supporting ARM64 and RISC-V architectures, but it is not possible at the current time. If more Open Source contributors want to help develop ARM and RISC-V support, the TrueOS project is definitely willing to help test and integrate that code.

It’s FOSS: What does this change (splitting TrueOS into Project Trident) mean for the Lumina desktop environment?

Project Trident: Long-term, almost nothing. Lumina is still the desktop environment for Project Trident and will continue to be developed and enhanced alongside Project Trident just as it was for TrueOS. Short-term, we will be delaying the release of Lumina 2.0 and will release an updated version of the 1.x branch (1.5.0) instead. This is simply due to all the extra overhead of getting Project Trident up and running.
When things settle down into a rhythm, the development of Lumina will pick up once again.

It’s FOSS: Are you planning on including any desktop environments besides Lumina?

Project Trident: While Lumina is included by default, all of the other popular desktop environments will be available in the package repo exactly as they had been before.

It’s FOSS: Any plans to include Steam to increase the userbase?

Project Trident: Steam is still unavailable natively on FreeBSD, so we do not have any plans to ship it out of the box currently. In the meantime, we highly recommend installing the Windows version of Steam through the PlayOnBSD utility.

It’s FOSS: What will happen to the AppCafe?

Project Trident: The AppCafe is the name of the graphical interface for the “pkg” utility integrated into the SysAdm client created by TrueOS. This hasn’t changed. SysAdm, the graphical client, and by extension AppCafe are still available for all TrueOS-based distributions to use.

It’s FOSS: Does Project Trident have any corporate sponsors lined up? If not, would you be open to it or would you prefer that it be community supported?

Project Trident: iXsystems is the first corporate sponsor of Project Trident and we are always open to other sponsorships as well. We would prefer smaller individual contributions from the community, but we understand that larger project needs or special-purpose goals are much more difficult to achieve without allowing larger corporate sponsorships as well. In either case, Project Trident is always looking out for the best interests of the community and will not allow intrusive or harmful code to enter the project, even if a company or individual tries to make that code part of a sponsorship deal.

It’s FOSS: BSD always seems to be lagging in terms of support for newer devices. Will TrueOS be able to remedy that with a quicker release cycle?

Project Trident: Yes! That was a primary reason for TrueOS to start tracking the CURRENT branch of FreeBSD in 2016. This allows for the changes that FreeBSD developers are making, including new hardware support, to be available much sooner than if we followed the FreeBSD release cycle.

It’s FOSS: Do you have any idea when Project Trident will have its first release?

Project Trident: Right now we are targeting a late August release date. This is because Project Trident is “kicking the wheels” on the new TrueOS distribution system. We want to ensure everything is working smoothly before we release. Going forward, we plan on having regular package updates every week or two for the end-user packages, and a new release of Trident with an updated OS version every 6 months. This will follow the TrueOS release schedule with a small time offset.

###pf-badhost: Stop the evil doers in their tracks!

pf-badhost is a simple, easy to use badhost blocker that uses the power of the pf firewall to block many of the internet’s biggest irritants. Annoyances such as SSH bruteforcers are largely eliminated. Shodan scans and bots looking for webservers to abuse are stopped dead in their tracks. When used to filter outbound traffic, pf-badhost blocks many seedy, spooky, malware-containing and/or compromised webhosts. Filtering performance is exceptional, as the badhost list is stored in a pf table. To quote the OpenBSD FAQ page regarding tables: “the lookup time on a table holding 50,000 addresses is only slightly more than for one holding 50 addresses.” pf-badhost is simple and powerful. The blocklists are pulled from quality, trusted sources.
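To make the table mechanics concrete, here is a minimal sketch of what the pf.conf side of such a setup can look like. The <pfbadhost> table name comes from the project itself; the list file path and the subnet are illustrative assumptions, and pf-badhost’s own documentation has the authoritative rules.

```
# /etc/pf.conf -- illustrative fragment, not pf-badhost's shipped ruleset
table <pfbadhost> persist file "/etc/pf-badhost.txt"

# on a LAN/NAT setup, pass local traffic first so the gateway and DNS stay reachable
pass quick from 192.168.0.0/16
pass quick to 192.168.0.0/16

# drop anything to or from addresses in the bad hosts table
block in quick from <pfbadhost>
block out quick to <pfbadhost>
```

Because the addresses live in a table rather than in thousands of individual rules, reloading the list does not require rewriting the ruleset, which is where the near-constant lookup time quoted above comes from.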
The ‘Firehol’, ‘Emerging Threats’ and ‘Binary Defense’ block lists are used, as they are popular, regularly updated lists of the internet’s most egregious offenders. The pf-badhost.sh script can easily be expanded to use additional or alternate blocklists. pf-badhost works best when used in conjunction with unbound-adblock for the ultimate badhost blocking.
Notes: If you are trying to run pf-badhost on a LAN or are using NAT, you will want to add a rule to your pf.conf, appearing BEFORE the pf-badhost rules, allowing traffic to and from your local subnet so that you can still access your gateway and any DNS servers. Conversely, adding a line to pf-badhost.sh that removes your subnet range from the <pfbadhost> table should also work. Just make sure you choose a subnet range / CIDR block that is actually in the list. 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8 are the most common home/office subnet ranges.

DigitalOcean https://do.co/bsdnow

###FLASHBACK: FreeBSDCon’99: Fans of Linux’s lesser-known sibling gather for the first time

FreeBSD, a port of BSD Unix to Intel, has been around almost as long as Linux has – but without the media hype. Its developer and user community recently got a chance to get together for the first time, and they did it in the city where BSD – the Berkeley Software Distribution – was born some 25 years ago. October 17, 1999 marked a milestone in the history of FreeBSD – the first FreeBSD conference was held in the city where it all began, Berkeley, CA. Over 300 developers, users, and interested parties attended from around the globe. This was easily 50 percent more people than the conference organizers had expected. This first conference was meant to be a gathering mostly for developers and FreeBSD advocates. The turnout was surprisingly (and gratifyingly) large. In fact, attendance exceeded expectations so much that, for instance, Kirk McKusick had to add a second, identical tutorial on FreeBSD internals, because it was impossible for everyone to attend the first! But for a first-ever conference, I was impressed by how smoothly everything seemed to go. Sessions started on time, and the sessions I attended were well-run; nothing seemed to be too cold, dark, loud, late, or off-center. Of course, the best part about a conference such as this one is the opportunity to meet with other people who share similar interests. Lunches and breaks were a good time to meet people, as was the Tuesday night beer bash. The Wednesday night reception was of a type unusual for the technical conferences I usually attend – a three-hour Hornblower dinner cruise on San Francisco Bay. Not only did we all enjoy excellent food and company, but we all got to go up on deck and watch the lights of San Francisco and Berkeley as we drifted by. Although it’s nice when a conference attracts thousands of attendees, there are some things that can only be done with smaller groups of people; this was one of them. In short, this was a tiny conference, but a well-run one.

Sessions
Although it was a relatively small conference, the number and quality of the sessions belied the size. Each of the three days of the conference featured a different keynote speaker. In addition to Jordan Hubbard, Jeremy Allison spoke on “Samba Futures” on day two, and Brian Behlendorf gave a talk on “FreeBSD and Apache: A Perfect Combo” to start off the third day. The conference sessions themselves were divided into six tracks: advocacy, business, development, networking, security, and panels.
The panels track featured three different panels, made up of three different slices of the community: the FreeBSD core team, a press panel, and a prominent user panel with representatives from such prominent commercial users as Yahoo! and USWest. I was especially interested in Apple Computer’s talk in the development track. Wilfredo Sanchez, technical lead for open source projects at Apple (no, that’s not an oxymoron!) spoke about Apple’s Darwin project, the company’s operating system road map, and the role of BSD (and, specifically, FreeBSD) in Apple’s plans. Apple and Unix have had a long and uneasy history, from the Lisa through the A/UX project to today. Personally, I’m very optimistic about the chances for the Darwin project to succeed. Apple’s core OS kernel team has chosen FreeBSD as its reference platform. I’m looking forward to what this partnership will bring to both sides. Other development track sessions included in-depth tutorials on writing device drivers, basics of the Vinum Volume Manager, Fibre Channel, development models (the open repository model), and the FreeBSD Documentation Project (FDP). If you’re interested in contributing to the FreeBSD project, the FDP is a good place to start. Advocacy sessions included “How One Person Can Make a Difference” (a timeless topic that would find a home at any technical conference!) and “Starting and Managing A User Group” (trials and tribulations as well as rewards). The business track featured speakers from three commercial users of FreeBSD: Cybernet, USWest, and Applix. Applix presented its port of Applixware Office for FreeBSD and explained how Applix has taken the core services of Applixware into open source. Commercial applications and open source were once a rare combination; we can only hope the trend away from that state of affairs will continue.

Commercial use of FreeBSD
The use of FreeBSD in embedded applications is increasing as well – and it is increasing at the same rate that hardware power is. These days, even inexpensive systems are able to run a BSD kernel. The BSD license and the solid TCP/IP stack prove significant enticements to this market as well. (Unlike the GNU General Public License, the BSD license does not require that vendors make derivative works open source.) Companies such as USWest and Verio use FreeBSD for a wide variety of different Internet services. Yahoo! and Hotmail are examples of companies that use FreeBSD extensively for more specific purposes. Yahoo!, for example, has many hundreds of FreeBSD boxes, and Hotmail has almost 2000 FreeBSD machines at its data center in the San Francisco Bay area. Hotmail is owned by Microsoft, so the fact that it runs FreeBSD is a secret. Don’t tell anyone… When asked to comment on the increasing commercial interest in BSD, Hubbard said that FreeBSD is learning the Red Hat lesson. “Walnut Creek and others with business interests in FreeBSD have learned a few things from the Red Hat IPO,” he said, “and nobody is just sitting around now, content with business as usual. It’s clearly business as unusual in the open source world today.” Hubbard had also singled out some of BSD’s commercial partners, such as Whistle Communications, for praise in his opening day keynote. These partners play a key role in moving the project forward, he said, by contributing various enhancements and major new systems, such as Netgraph, as well as by contributing paid employee time spent on FreeBSD. Even short FreeBSD-related contacts can yield good results, Hubbard said.
An example of this is the new jail() security code introduced in FreeBSD 3.x and 4.0, which was contributed by R & D Associates. A number of ISPs are also now donating the hardware and bandwidth that allows the project to provide more resource mirrors and experimental development sites.

See you next year
And speaking of corporate sponsors, thanks go to Walnut Creek for sponsoring the conference, and to Yahoo! for covering all the expenses involved in bringing the entire FreeBSD core team to Berkeley. As a fan of FreeBSD, I’m happy to see that the project has finally produced a conference. It was time: many of the 16 core team members had been working together on a regular basis for nearly seven years without actually meeting face to face. It’s been an interesting year for open source projects. I’m looking forward to the next year – and the next BSD conference – to be even better.

##News Roundup

###OpenBSD Recommends: Disable SMT/Hyperthreading in all Intel BIOSes

Two recently disclosed hardware bugs affected Intel cpus:
- TLBleed
- L1TF (the name "Foreshadow" refers to 1 of 3 aspects of this bug, more aspects are surely on the way)
Solving these bugs requires new cpu microcode, a coding workaround, *AND* the disabling of SMT / Hyperthreading. SMT is fundamentally broken because it shares resources between the two cpu instances and those shared resources lack security differentiators. Some of these side channel attacks aren't trivial, but we can expect most of them to eventually work and leak kernel or cross-VM memory in common usage circumstances, even such as javascript directly in a browser. There will be more hardware bugs and artifacts disclosed. Due to the way SMT interacts with speculative execution on Intel cpus, I expect SMT to exacerbate most of the future problems. A few months back, I urged people to disable hyperthreading on all Intel cpus. I need to repeat that: DISABLE HYPERTHREADING ON ALL YOUR INTEL MACHINES IN THE BIOS. Also, update your BIOS firmware, if you can. OpenBSD -current (and therefore 6.4) will not use hyperthreading if it is enabled, and will update the cpu microcode if possible. But what about 6.2 and 6.3? The situation is very complex, continually evolving, and is taking too much manpower away from other tasks. Furthermore, Intel isn't telling us what is coming next, and are doing a terrible job by not publically documenting what operating systems must do to resolve the problems. We are having to do research by reading other operating systems. There is no time left to backport the changes -- we will not be issuing a complete set of errata and syspatches against 6.2 and 6.3 because it is turning into a distraction. Rather than working on every required patch for 6.2/6.3, we will re-focus manpower and make sure 6.4 contains the best solutions possible. So please try to take responsibility for your own machines: Disable SMT in the BIOS menu, and upgrade your BIOS if you can. I'm going to spend my money at a more trustworthy vendor in the future.

###Get Morrowind running on OpenBSD in 5 simple steps

This article contains brief instructions on how to get one of the greatest Western RPGs of all time, The Elder Scrolls III: Morrowind, running on OpenBSD using the OpenMW open source engine recreation. These instructions were tested on a ThinkPad X1 Carbon Gen 3.
The information was adapted from this OpenMW forum thread: https://forum.openmw.org/viewtopic.php?t=3510
1. Purchase and download the DRM-free version from GOG (also considered the best version due to the high quality PDF guide that it comes with): https://www.gog.com/game/the_elder_scrolls_iii_morrowind_goty_edition
2. Install the required packages, built from the ports tree, as root. openmw is the recreated game engine, and innoextract is how we will get the game data files out of the win32 executable: pkg_add openmw innoextract
3. Move the file from GOG, setup_tes_morrowind_goty_2.0.0.7.exe, into its own directory morrowind/ due to innoextract’s default behaviour of extracting into the current directory. Then type: innoextract setup_tes_morrowind_goty_2.0.0.7.exe
4. Type openmw-wizard and follow the straightforward instructions. Note that you have a pre-existing installation, and select the morrowind/app/Data Files folder that innoextract extracted.
5. Type in openmw-launcher, toggle the settings to your preferences, and then hit play!

iXsystems https://twitter.com/allanjude/status/1034647571124367360

###My First Clang Bug

Part of the role of being a packager is compiling lots (and lots) of packages. That means compiling lots of code from interesting places and in a variety of styles. In my opinion, being a good packager also means providing feedback to upstream when things are bad. That means filing upstream bugs when possible, and upstreaming patches. One of the “exciting” moments in packaging is when tools change. So each and every major CMake update is an exercise in recompiling 2400 or more packages and adjusting bits and pieces. When a software project was last released in 2013, adjusting it to modern tools can become quite a chore (e.g. Squid Report Generator). CMake is excellent for maintaining backwards compatibility, generally accommodating old software with new policies. The most recent 3.12 release candidate had three issues filed from the FreeBSD side, all from fallout with older software. I consider the hours put into good bug reports part of being a good citizen of the Free Software world. My most interesting bug this week, though, came from one line of code somewhere in Kleopatra:

Q_UNUSED(gpg_agent_data);

That one line triggered a really peculiar link error in KDE’s FreeBSD CI system. Yup … telling the compiler something is unused made it fall over. Commenting out that line got rid of the link error, but introduced a warning about an unused function. Working with KDE-PIM’s Volker Krause, we whittled the problem down to a six-line example program — two lines if you don’t care much for coding style. I’m glad, at that point, that I could throw it over the hedge to the LLVM team with some explanatory text. Watching the process on their side reminds me ever-so-strongly of how things work in KDE (or FreeBSD for that matter): Bugzilla, Phabricator, and git combine to be an effective workflow for developers (perhaps less so for end-users). Today I got a note saying that the issue had been resolved. So brief a time for a bug. Live fast. Get squashed young.

###DragonFlyBSD Now Runs On The Threadripper 2990WX, Developer Shocked At Performance

Last week I carried out some tests of BSD vs. Linux on the new 32-core / 64-thread Threadripper 2990WX. I tested FreeBSD 11, FreeBSD 12, and TrueOS – those benchmarks will be published in the next few days. I tried DragonFlyBSD, but at the time it wouldn’t boot with this AMD HEDT processor.
But now the latest DragonFlyBSD development kernel can handle the 2990WX, and the lead DragonFly developer calls this new processor “a real beast” and is stunned by its performance potential. When I tried last week, neither the DragonFlyBSD 5.2.2 stable release nor the DragonFlyBSD 5.3 daily snapshot would boot on the 2990WX. But it turns out Matthew Dillon, the lead developer of DragonFlyBSD, picked up a rig and has it running now. So, in time for the next 5.4 stable release – or sooner for those using the daily snapshots – this 32-core / 64-thread Zen+ CPU can run on this operating system long ago forked from FreeBSD. In announcing his success in bringing up the 2990WX under DragonFlyBSD, which required a few minor changes, he shared his performance thoughts and hopes for the rig. “The cpu is a real beast, packing 32 cores and 64 threads. It blows away our dual-socket Xeon to the tune of being +50% faster in concurrent compile tests, and it also blows away our older 4-socket Opteron (which we call ‘Monster’) by about the same margin. It’s an impressive CPU. For now the new beast is going to be used to help us improve I/O performance through the filesystem, further SMP work (but DFly scales pretty well to 64 threads already), and perhaps some driver work to support the 10GbE on the mobo.” Dillon shared some results on the system as well: “The Threadripper 2990WX is a beast. It is at least 50% faster than both the quad socket Opteron and the dual socket Xeon system I tested against. The primary limitation for the 2990WX is likely its 4 channels of DDR4 memory, and like all Zen and Zen+ CPUs, memory performance matters more than CPU frequency (and costs almost no power to pump up the performance). That said, it still blows away a dual-socket Xeon with 3x the number of memory channels. That is impressive!” The well known BSD developer also added, “This puts the 2990WX at par efficiency vs a dual-socket Xeon system, and better than the dual-socket Xeon with slower memory and a power cap. This is VERY impressive. I should note that the 2990WX is more specialized with its asymmetric NUMA architecture and 32 cores. I think the sweet spot in terms of CPU pricing and efficiency is likely going to be with the 2950X (16-cores/32-threads). It is clear that the 2990WX (32-cores/64-threads) will max out 4-channel memory bandwidth for many workloads, making it a more specialized part. But still awesome… This thing is an incredible beast, I’m glad I got it.” While I have the FreeBSD vs. Linux benchmarks from a few days ago, next on my ever-growing TODO list is re-trying the newest DragonFlyBSD daily snapshot to see how its performance compares in the mix. Stay tuned for the numbers, which should be in the next day or two.

##Beastie Bits
- X11 on really small devices
- mandoc-1.14.4 released
- The pfSense Book is now available to everyone
- MWL: Burn it down! Burn it all down!
- Configuring OpenBSD: System and user config files for a more pleasant laptop
- FreeBSD Security Advisory: Resource exhaustion in TCP reassembly
- OpenBSD Foundation gets first 2018 Iridium donation
- New ZFS commit solves issue a few users reported in the feedback segment
- Project Trident should have a beta release by the end of next week
- Reminder about Stockholm BUG: September 5, 17:30-22:00
- BSD-PL User Group: September 13, 18:30-21:00

Tarsnap

##Feedback/Questions
- Malcom - Having different routes per interface
- Bostjan - ZFS and integrity of data
- Michael - Suggestion for Monitoring
- Barry - Feedback

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 260: Hacking Tour of Europe | BSD Now 260
Trip reports from the Essen Hackathon and BSDCam, CfT: ZFS native encryption and UFS trim consolidation, ZFS performance benchmarks on a FreeBSD server, how to port your OS to EC2, Vint Cerf about traceability, Remote Access console to an RPi3 running FreeBSD, and more.

##Headlines

###Essen Hackathon & BSDCam 2018 trip report

Allan and Benedict met at FRA airport and then headed to the Air Rail terminal for our train to Essen, where the Hackathon would happen over the weekend of Aug 10 - 12, 2018. Once there, we did not have to wait long until other early arrivals showed up, and soon we had about 10 people gathered for lunch. After buying some take-out pizzas and bringing them back to the Linuxhotel (there was a training still going on there so we could not get into our rooms yet), we sat in the sunny park and talked. More and more people arrived and soon, people started hacking on their laptops. Some people would not arrive until a few hours before midnight, but we already had a record appearance of 20 people in total. On Saturday, we gathered everyone in one of the seminar rooms that had tables and chairs for us. After some organizational infos, we did an introductory round and Benedict wrote down on the whiteboard what people were interested in. It was not long until groups formed to talk about SSL in base and weird ZFS scrubs that would go over 100% completion (fixed now). Other people started working on ports, fixing bugs, or wrote documentation. The day ended in a BBQ in the Linuxhotel park, which was well received by everyone. On Sunday, after attendees packed up their luggage and stored it in the seminar room, we continued hacking until lunchtime. After a quick group picture, we headed to a local restaurant for the social event (which was not open on Saturday, otherwise we would have had it then). In the afternoon, most people departed, a good half of them heading for BSDCam. Commits from the hackathon (the ones from 2018). Overall, the hackathon was well received by attendees and a lot of them liked the fact that it was close to another BSD gathering so they could nicely combine the two. Also, people thought about doing their own hackathon in the future, which is an exciting prospect. Thanks to all who attended and helped out here and there when needed. Special thanks to Netzkommune GmbH for sponsoring the social event and to the Linuxhotel for having us. Benedict was having a regular work day on Monday after coming back from the hackathon, but flew out to Heathrow on Tuesday. Allan was in London a day earlier and arrived a couple of hours before Benedict in Cambridge. He headed for the Computer Lab even though the main event would not start until Wednesday. Most people gathered at the Maypole pub on Tuesday evening for welcomes, food and drinks. On Wednesday, a lot of people met in the breakfast room of Churchill College, where most people were staying, and went to the Computer Lab, which served as the main venue for BSDCam, together. The morning was spent with introductions and collecting what most people were interested in talking about. This unconference style has worked well in the past, and soon we had 10 main sessions together for the rest of this and the following two days (full schedule). Most sessions took notes, which you can find on the FreeBSD wiki. On Thursday evening, we had a nice formal dinner at Trinity Hall. BSDCam 2018 was a great success with a lot of fruitful discussions and planning sessions. We thank the organizers of BSDCam for making it happen.
A special mention goes out to Robert Watson and his family. Even though he was not there, he had a good reason to miss it: they had their first child born at the beginning of the week. Congratulations and best wishes to all three of them!

###Call for Testing: ZFS Native Encryption for FreeBSD

A port of the ZoL (ZFS-on-Linux) feature that provides native crypto support for ZFS is ready for testing on FreeBSD. Most of the porting was done by sef@freebsd.org (Sean Eric Fagan). The original ZoL commit is here: https://github.com/zfsonlinux/zfs/pull/5769/commits/5aef9bedc801830264428c64cd2242d1b786fd49
For an overview, see Tom Caputi’s presentation from the OpenZFS Developers Summit in 2016:
Video: https://youtu.be/frnLiXclAMo
Slides: https://drive.google.com/file/d/0B5hUzsxe4cdmU3ZTRXNxa2JIaDQ/view?usp=sharing
WARNING: test in VMs or with spare disks, etc. Pools created with this code, or upgraded to this version, will no longer be importable on systems that do not support this feature. The on-disk format or other things may change before the final version, so you will likely have to ‘zfs send | zfs recv’ the data onto a new pool.
Thanks for testing to help this feature land in FreeBSD.

iXsystems

###Call for Testing: UFS TRIM Consolidation

Kirk McKusick posts to the FreeBSD mailing list looking for testers for the new UFS TRIM consolidation code. When deleting files on filesystems that are stored on flash-memory (solid-state) disk drives, the filesystem notifies the underlying disk of the blocks that it is no longer using. The notification allows the drive to avoid saving these blocks when it needs to flash (zero out) one of its flash pages. These notifications of no-longer-being-used blocks are referred to as TRIM notifications. In FreeBSD these TRIM notifications are sent from the filesystem to the drive using the BIO_DELETE command. Until now, the filesystem would send a separate message to the drive for each block of the file that was deleted. Each gigabyte of file size resulted in over 3000 TRIM messages being sent to the drive. This burst of messages can overwhelm the drive’s task queue, causing multiple-second delays for read and write requests. This implementation collects runs of contiguous blocks in the file and then consolidates them into a single BIO_DELETE command to the drive. The BIO_DELETE command describes the run of blocks as a single large block being deleted. Each gigabyte of file size can result in as few as two BIO_DELETE commands, and is typically less than ten. Though these larger BIO_DELETE commands take longer to run, they do not clog the drive task queue, so read and write commands can intersperse effectively with them. Though this new feature has been thoroughly reviewed and tested, it is being added disabled by default so as to minimize the possibility of disrupting the upcoming 12.0 release. It can be enabled by running `sysctl vfs.ffs.dotrimcons=1`. Users are encouraged to test it. If no problems arise, we will consider requesting that it be enabled by default for 12.0. This support is off by default, but I am hoping that I can get enough testing to ensure that it (a) works, and (b) is helpful enough that it will be reasonable to have it turned on by default in 12.0. The cutoff for turning it on by default in 12.0 is September 19th. So I am requesting your testing feedback in the near-term. Please let me know if you have managed to use it successfully (or not) and also if it provided any performance difference (good or bad).
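For testers, the whole exercise fits in a few commands. This is a sketch assuming a recent -CURRENT build that includes the patch; the sysctl and the gstat flag are from the post, while the workload path is just an example.

```
# enable the new TRIM consolidation (off by default)
sysctl vfs.ffs.dotrimcons=1

# delete something sizable on an SSD-backed UFS filesystem, e.g. old build output
rm -rf /usr/obj/*

# watch the volume and latency of BIO_DELETE requests while the deletes run
gstat -d
```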
To enable TRIM consolidation, use `sysctl vfs.ffs.dotrimcons=1`. There is also a diff that adds additional statistics: https://lists.freebsd.org/pipermail/freebsd-current/2018-August/070798.html
You can also watch the volume and latency of BIO_DELETE commands by running gstat with the -d flag.

##News Roundup

###ZFS performance

Aravindh Sampathkumar, a Performance Engineer and Sysadmin, posts some simple benchmarks he did on a new ZFS server: This is NOT an all-in post about ZFS performance. I built a FreeBSD+ZFS file server recently at work to serve as an offsite backup server. I wanted to run a few synthetic workloads on it and look at how it fares from a performance perspective. Mostly for curiosity and learning purposes. As stated in the notes about building this server, performance was not one of the priorities, as this server will never face our active workload. What I care about from this server is its ability to work with rsync and keep the data synchronised with our primary storage server. With that context, I ran a few write tests to see how good our solution is and what to expect from it in terms of performance.
The article then uses FIO to do some benchmarks. As the author did, make sure you match the FIO block size to the ZFS record size to avoid write amplification: either tune FIO or adjust the recordsize property in ZFS. You also want to consider compression and cache effects. Write performance: Incompressible: 1600-2600 MB/s; Compressible: 2500-6600 MB/s. Anything over 1200 MB/s is enough to keep your 10 gigabit network saturated. The increased latency that is seen with a higher number of writers working may be the result of the ZFS backpressure system (the write throttle). There is some tuning that can be done there (on FreeBSD, the dirty-data ceiling is exposed as the vfs.zfs.dirty_data_max tunable). Specifically, since this machine has 768 GB of RAM, you might allow more than 4GB of dirty data, which would mean you’d be able to write larger batches and not have to push back while you wait for a transaction group to flush when dealing with gigabytes/sec of writes.

###How to port your OS to EC2

Colin Percival reflects on his FreeBSD on EC2 maintainership efforts in his blog: I’ve been the maintainer of the FreeBSD/EC2 platform for about 7.5 years now, and as far as “running things in virtual machines” goes, that remains the only operating system and the only cloud which I work on. That said, from time to time I get questions from people who want to port other operating systems into EC2, and being a member of the open source community, I do my best to help them. I realized a few days ago that rather than replying to emails one by one it would be more efficient to post something publicly; so — for the benefit of the dozen or so people who want to port operating systems to run in EC2, and the curiosity of maybe a thousand more people who use EC2 but will never build AMIs themselves — here’s a rough guide to building EC2 images. Before we can talk about building images, there are some things you need:
Your OS needs to run on x86 hardware. 64-bit (“amd64”, “x86-64”) is ideal, but I’ve managed to run 32-bit FreeBSD on “64-bit” EC2 instances, so at least in some cases that’s not strictly necessary.
You almost certainly want to have drivers for Xen block devices (for all of the pre-Nitro EC2 instances) or for NVMe disks (for the most recent EC2 instances). Theoretically you could make do without these since there’s some ATA emulation available for bootstrapping, but if you want to do any disk I/O after the kernel finishes booting you’ll want to have a disk driver.
###How to port your OS to EC2 Colin Percival reflects on his FreeBSD on EC2 maintainership efforts in his blog: I’ve been the maintainer of the FreeBSD/EC2 platform for about 7.5 years now, and as far as “running things in virtual machines” goes, that remains the only operating system and the only cloud which I work on. That said, from time to time I get questions from people who want to port other operating systems into EC2, and being a member of the open source community, I do my best to help them. I realized a few days ago that rather than replying to emails one by one it would be more efficient to post something publicly; so — for the benefit of the dozen or so people who want to port operating systems to run in EC2, and the curiosity of maybe a thousand more people who use EC2 but will never build AMIs themselves — here’s a rough guide to building EC2 images. Before we can talk about building images, there are some things you need: Your OS needs to run on x86 hardware. 64-bit (“amd64”, “x86-64”) is ideal, but I’ve managed to run 32-bit FreeBSD on “64-bit” EC2 instances so at least in some cases that’s not strictly necessary. You almost certainly want to have drivers for Xen block devices (for all of the pre-Nitro EC2 instances) or for NVMe disks (for the most recent EC2 instances). Theoretically you could make do without these since there’s some ATA emulation available for bootstrapping, but if you want to do any disk I/O after the kernel finishes booting you’ll want to have a disk driver. Similarly, you need support for the Xen network interface (older instances), Intel 10 GbE SR-IOV networking (some newer but pre-Nitro instances), or Amazon’s “ENA” network adapters (on Nitro instances), unless you plan on having instances which don’t communicate over the network. The ENA driver is probably the hardest thing to port, since as far as I know there’s no way to get your hands on the hardware directly, and it’s very difficult to do any debugging in EC2 without having a working network. Finally, the obvious: You need to have an AWS account, and appropriate API access keys. Building a disk image Building an AMI I wrote a simple tool for converting disk images into EC2 instances: bsdec2-image-upload. It uploads a disk image to Amazon S3; makes an API call to import that disk image into an EBS volume; creates a snapshot of that volume; then registers an EC2 AMI using that snapshot. To use bsdec2-image-upload, you’ll first need to create an S3 bucket for it to use as a staging area. You can call it anything you like, but I recommend that you Create it in a “nearby” region (for performance reasons), and Set an S3 “lifecycle policy” which deletes objects automatically after 1 day (since bsdec2-image-upload doesn’t clean up the S3 bucket, and those objects are useless once you’ve finished creating an AMI). Boot configuration Odds are that your instance started booting and got as far as the boot loader launching the kernel, but at some point after that things went sideways. Now we start the iterative process of building disk images, turning them into AMIs, launching said AMIs, and seeing where they break. Some things you’ll probably run into here: EC2 instances have two types of console available to them: a serial console and a VGA console. (Or rather, emulated serial and emulated VGA.) If you can have your kernel output go to both consoles, I recommend doing that. If you have to pick one, the serial console (which shows up as the “System Log” in EC2) is probably more useful than the VGA console (which shows up as “instance screenshot”) since it lets you see more than one screen of logs at once; but there’s a catch: Due to some bizarre breakage in EC2 — which I’ve been complaining about for ten years — the serial console is very “laggy”. If you find that you’re not getting any output, wait five minutes and try again. You may need to tell your kernel where to find the root filesystem. On FreeBSD we build our disk images using GPT labels, so we simply need to specify in /etc/fstab that the root filesystem is on /dev/gpt/rootfs; but if you can’t do this, you’ll probably need to have different AMIs for Nitro instances vs. non-Nitro instances since Xen block devices will typically show up with different device names from NVMe disks. On FreeBSD, I also needed to set the vfs.root.mountfrom kernel environment variable for a while; this also is no longer needed on FreeBSD but something similar may be needed on other systems. You’ll need to enable networking, using DHCP. On FreeBSD, this means placing ifconfig_DEFAULT="SYNCDHCP" into /etc/rc.conf; other systems will have other ways of specifying network parameters, and it may be necessary to specify a setting for the Xen network device, Intel SR-IOV network, and the Amazon ENA interface so that you’ll have the necessary configuration across all EC2 instance types. (On FreeBSD, ifconfig_DEFAULT takes care of specifying the network settings which should apply for whatever network interface the kernel finds at boot time.)
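Pulling those boot-configuration hints together, a minimal sketch for a FreeBSD-style image might look like this; the GPT label and the rc.conf knob come straight from the article, while the exact fstab fields are our assumption:

```
# /etc/fstab -- find the root filesystem by GPT label, so the same AMI
# boots whether the disk shows up as a Xen block device or an NVMe disk
/dev/gpt/rootfs   /   ufs   rw   1   1

# /etc/rc.conf -- DHCP on whatever network interface the kernel finds
ifconfig_DEFAULT="SYNCDHCP"
```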
You’ll almost certainly want to turn on SSH, so that you can connect into newly launched instances and make use of them. Don’t worry about setting a password or creating a user to SSH into yet — we’ll take care of that later. EC2 configuration Now it’s time to make the AMI behave like an EC2 instance. To this end, I prepared a set of rc.d scripts for FreeBSD. Most importantly, they: Print the SSH host keys to the console, so that you can verify that they are correct when you first SSH in. (Remember, verifying SSH host keys is more important than flossing every day.) Download the SSH public key you want to use for logging in, and create an account (by default, “ec2-user”) with that key set up for you. Fetch EC2 user-data and process it via configinit to allow you to configure the system as part of the process of launching it. If your OS has an rc system derived from NetBSD’s rc.d, you may be able to use these scripts without any changes by simply installing them and enabling them in /etc/rc.conf; otherwise you may need to write your own scripts using mine as a model. Firstboot scripts A feature I added to FreeBSD a few years ago is the concept of “firstboot” scripts: These startup scripts are only run the first time a system boots. The aforementioned configinit and SSH key fetching scripts are flagged this way — so if your OS doesn’t support the “firstboot” keyword on rc.d scripts you’ll need to hack around that — but EC2 instances also ship with other scripts set to run on the first boot: FreeBSD Update will fetch and install security and critical errata updates, and then reboot the system if necessary. The UFS filesystem on the “boot disk” will be automatically expanded to the full size of the disk — this makes it possible to specify a larger size of disk at EC2 instance launch time. Third-party packages will be automatically fetched and installed, according to a list in /etc/rc.conf. This is most useful if configinit is used to edit /etc/rc.conf, since it allows you to specify packages to install via the EC2 user-data. While none of these are strictly necessary, I find them to be extremely useful and highly recommend implementing similar functionality in your systems.
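For readers who have not seen the mechanism, here is a hedged sketch of what a “firstboot”-flagged rc.d script looks like on FreeBSD; the script name and the work it does are invented for illustration (on FreeBSD such scripts run only when the /firstboot sentinel file exists, and the corresponding rc.conf enable knob is assumed to be set):

```sh
#!/bin/sh

# PROVIDE: example_firstboot
# REQUIRE: NETWORKING
# KEYWORD: firstboot

. /etc/rc.subr

name="example_firstboot"
rcvar="example_firstboot_enable"
start_cmd="example_firstboot_start"

example_firstboot_start()
{
	# one-time provisioning work goes here,
	# e.g. fetching SSH keys or installing packages
	echo "Running first-boot setup"
}

load_rc_config $name
run_rc_command "$1"
```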
Support my work! I hope you find this useful, or at the very least interesting. Please consider supporting my work in this area; while I’m happy to contribute my time to supporting open source software, it would be nice if I had money coming in which I could use to cover incidental expenses (e.g., conference travel) so that I didn’t end up paying to contribute to FreeBSD. Digital Ocean https://do.co/bsdnow ###Traceability, by Vint Cerf A recent article from the August issue of the Communications of the ACM, for your contemplation: At a recent workshop on cybersecurity in the U.K., a primary topic of consideration was how to preserve the freedom and openness of the Internet while protecting against the harmful behaviors that have emerged in this global medium. That this is a significant challenge cannot be overstated. The bad behaviors range from social network bullying and misinformation to email spam, distributed denial of service attacks, direct cyberattacks against infrastructure, malware propagation, identity theft, and a host of other ills requiring a wide range of technical and legal considerations. That these harmful behaviors can and do cross international boundaries only makes it more difficult to fashion effective responses. In other columns, I have argued for better software development tools to reduce the common mistakes that lead to vulnerabilities that are exploited. Here, I want to focus on another aspect of response related to law enforcement and tracking down perpetrators. Of course, not all harms are (or perhaps are not yet) illegal, but discovering those who cause them may still be warranted. The recent adoption and implementation of the General Data Protection Regulation (GDPR) in the European Union creates an interesting tension because it highlights the importance and value of privacy while those who do direct or indirect harm must be tracked down and their identities discovered. In passing, I mention that cryptography has sometimes been blamed for protecting the identity or actions of criminals, but it is also a tool for protecting privacy. Arguments have been made for “back doors” to cryptographic systems but I am of the opinion that such proposals carry extremely high risk to privacy and safety. It is not my intent to argue this question in this column. What is of interest to me is a concept to which I was introduced at the Ditchley workshop, specifically, differential traceability. The ability to trace bad actors to bring them to justice seems to me an important goal in a civilized society. The tension with privacy protection leads to the idea that only under appropriate conditions can privacy be violated. By way of example, consider license plates on cars. They are usually arbitrary identifiers and special authority is needed to match them with the car owners (unless, of course, they are vanity plates like mine: “Cerfsup”). This is an example of differential traceability; the police department has the authority to demand ownership information from the Department of Motor Vehicles that issues the license plates. Ordinary citizens do not have this authority. In the Internet environment there are a variety of identifiers associated with users (including corporate users). Domain names, IP addresses, email addresses, and public cryptography keys are examples among many others. Some of these identifiers are dynamic and thus ambiguous. For example, IP addresses are not always permanent and may change (for example, temporary IP addresses assigned at Wi-Fi hotspots) or may be ambiguous in the case of Network Address Translation. Information about the time of assignment and the party to whom an IP address was assigned may be needed to identify an individual user. There has been considerable debate and even a recent court case regarding requirements to register users in domain name WHOIS databases in the context of the adoption of GDPR. If we are to accomplish the simultaneous objectives of protecting privacy while apprehending those engaged in harmful or criminal behavior on the Internet, we must find some balance between conflicting but desirable outcomes. This suggests to me that the notion of traceability under (internationally?) agreed circumstances (that is, differential traceability) might be a fruitful concept to explore. In most societies today, it is accepted that we must be identifiable to appropriate authorities under certain conditions (consider border crossings, traffic violation stops as examples).
While there are conditions under which apparent anonymity is desirable and even justifiable (whistle-blowing, for example), absolute anonymity is actually quite difficult to achieve (another point made at the Ditchley workshop) and might not be absolutely desirable given the misbehaviors apparent anonymity invites. I expect this is a controversial conclusion and I look forward to subsequent discussion. ###Remote Access Console using FreeBSD on an RPi3 Our friend, and FOSDEM booth neighbour, Jorge, has posted a tutorial on how he created a remote access console for his SmartOS server and other machines in his homelab. Parts: Raspberry Pi 3 B+ NavoLabs micro POE Hat FT4232H-based USB-to-RS232 (4x) adapter Official Raspberry Pi case (optional) Heat-sink kit (optional) USB-to-TTL adaptor (optional) SanDisk 16GB microSD For the software I ended up using conserver. Below is a very brief tutorial on how to set everything up. I assume you have basic Unix skills. Get an RPi3 image, make some minor modifications for the RPi3+, and write it to the USB stick. Configure FreeBSD on the RPi3. Load the ‘muge’ Ethernet driver. Load USB serial support. Load the FTDI driver. Enable SSHd and Conserver. Configure Conserver. Set up log rotation. Start Conserver. And you’re good to go. A small bonus: a script I wrote to turn on the second LED on the RPi once the system is booted; it will then blink the LED if someone is connected to any of the consoles. There is also a followup post with some additional tips: https://blackdot.be/2018/08/freebsd-uart-and-raspberry-pi-3-b/ ##Beastie Bits Annual Penguin Races Mscgen - Message Sequence Chart generator This patch makes FreeBSD boot 500 - 800ms faster, please test on your hardware FreeBSD’s arc4random() replaced with OpenBSD ChaCha20 implementation MeetBSD Devsummit open for registrations New Podcast interview with Michael W. Lucas Tarsnap ##Feedback/Questions We need more feedback emails. Please write to feedback@bsdnow.tv Additionally, we are considering a new segment to be added to the end of the show (to make it skippable), where we have a ~15 minute deep dive on a topic. Some initial ideas are on the Virtual Memory subsystem, the Scheduler, Capsicum, and GEOM. What topics would you like to get very detailed explanations of? Many of the explanations may have accompanying graphics, and not be very suitable for audio-only listeners; that is why we are planning to put it at the very end of the episode. Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 259: Long Live Unix | BSD Now 259
The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use. Picking the contest winner Vincent Bostjan Andrew Klaus-Hendrik Will Toby Johnny David manfrom Niclas Gary Eddy Bruce Lizz Jim Random number generator ##Headlines ###The Strange Birth and Long Life of Unix They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written. A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one. Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug. After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time. The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. 
But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. 
But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971. So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (the Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix. The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran. Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix.
Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. 
For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October. Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable. The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches of the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers. Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993. As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing.
But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973. One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known. Digital Ocean http://do.co/bsdnow ###FreeBSD jails with a single public IP address Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In most setups the actual server only acts as the host system for the jails, while the applications themselves run within those independent containers. Traditionally every jail has its own IP so that the user can address the individual services. But if you’re still using IPv4 this might get you in trouble, as most hosters don’t offer more than one single public IP address per server. Create the internal network > In this case NAT (“Network Address Translation”) is a good way to expose services in different jails using the same IP address. First, let’s create an internal network (“NAT network”) at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here’s an overview: https://en.wikipedia.org/wiki/Private_network. Using pf, FreeBSD’s firewall, we will map requests on different ports of the same public IP address to our individual jails as well as provide network access to the jails themselves. First let’s check which network devices are available. In my case there’s em0, which provides connectivity to the internet, and lo0, the local loopback device:

```
	options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
	[...]
	inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255
	nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
```

> For our internal network, we create a cloned loopback device called lo1. Therefore we need to customize the /etc/rc.conf file, adding the following two lines:

```
cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.0.1-9/29"
```

> This defines a /29 network, offering IP addresses for a maximum of 6 jails:

```
$ ipcalc 192.168.0.1/29
Address:   192.168.0.1          11000000.10101000.00000000.00000 001
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   192.168.0.0/29       11000000.10101000.00000000.00000 000
HostMin:   192.168.0.1          11000000.10101000.00000000.00000 001
HostMax:   192.168.0.6          11000000.10101000.00000000.00000 110
Broadcast: 192.168.0.7          11000000.10101000.00000000.00000 111
Hosts/Net: 6                    Class C, Private Internet
```

> Then we need to restart the network. Please be aware of currently active SSH sessions as they might be dropped during restart. It’s a good moment to ensure you have KVM access to that server ;-)

```
service netif restart
```

> After reconnecting, our newly created loopback device is active:

```
lo1: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
	inet 192.168.0.1 netmask 0xfffffff8
	inet 192.168.0.2 netmask 0xffffffff
	inet 192.168.0.3 netmask 0xffffffff
	inet 192.168.0.4 netmask 0xffffffff
	inet 192.168.0.5 netmask 0xffffffff
	inet 192.168.0.6 netmask 0xffffffff
	inet 192.168.0.7 netmask 0xffffffff
	inet 192.168.0.8 netmask 0xffffffff
	inet 192.168.0.9 netmask 0xffffffff
	nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
```

Setting up pf > pf is part of the FreeBSD base system, so we only have to configure and enable it. By this moment you should already have a clue of which services you want to expose. If this is not the case, just fix that file later on. In my example configuration, I have a jail running a webserver and another jail running a mailserver:

```
# Public IP address
IP_PUB="1.2.3.4"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080

# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3
```

> Now just enable pf like this (which is the equivalent of adding pf_enable=YES to /etc/rc.conf):

```
sysrc pf_enable="YES"
```

> and start it:

```
service pf start
```

Install ezjail > Ezjail is a collection of scripts by erdgeist that allow you to easily manage your jails.

```
pkg install ezjail
```

> As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail which contains the shared base system for our jails.
> In fact, every jail that you create will use that basejail, symlinking the directories related to the base system like /bin and /sbin. This can be accomplished by running:

```
ezjail-admin install
```

> In the next step, we’ll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain resolution will work properly within our jails later on:

```
cp /etc/resolv.conf /usr/jails/newjail/etc/
```

> Last but not least, we enable ezjail and start it:

```
sysrc ezjail_enable="YES"
service ezjail start
```

Create a jail > Creating a jail is as easy as it could probably be:

```
ezjail-admin create webserver 192.168.0.2
ezjail-admin start webserver
```

> Now you can access your jail using:

```
ezjail-admin console webserver
```

> Each jail contains a vanilla FreeBSD installation. Deploy services > Now you can spin up as many jails as you want to set up your services like web, mail or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service’s IP bindings. But this is not a problem; just SSH to the host and enter your jail using ezjail-admin console. ###EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/) ##News Roundup ###OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/) > I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07GHz PowerPC G4 processor, 1.5GB RAM, 100GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully-functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004. Initial experiments > This iBook originally arrived at my door running Apple Mac OSX Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in! > After spending some time exploring the last version of OSX to support the IBM PowerPC processor architecture I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work because it's a highly proprietary component designed by Apple to work specifically with OSX and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation. > Unfortunately I found that no recent versions of mainstream Linux distributions would boot off this machine. Debian has dropped support for 32-bit PowerPC architectures and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS. > Unfortunately I'm not the biggest fan of the LXDE desktop for regular work and a lot of ported applications were old and broken because it clearly wasn't being maintained by people that use the hardware anymore.
Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf-life. Over to BSD > I discussed this problem with a few people on Mastodon and it was pointed out to me that OSX is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try. > So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop. > When I initially booted OpenBSD I was a little surprised to find the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment that was loaded was very basic. All I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded. > After a little Googling I found this blog post had some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly though because my iBook only has 1.5GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/. Final thoughts > I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OSX Leopard on the same hardware, and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP. > I was pleased to see that the command line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build the games (or an emulator) from source with SDL support. > If I wanted to use this system for heavy-duty work then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well. > In summary I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop though remains to be seen! ###The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48) > When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database. > Another challenge is authentication via remote services such as RADIUS.
How can we implement such a service when we authenticate through it and log in to it as a different user? Furthermore, imagine a scenario where RADIUS decides which account we have the right to access by sending an additional attribute. > To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item. The value of this item will be used to determine which account we want to log in to. Only the "template" user must exist in the local password database, but the credential check can be omitted by the module. > This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. The functionality doesn't exist in the login(1) used in NetBSD, and OpenBSD doesn't support PAM modules at all. In addition, what is also noteworthy is that such functionality was also in OpenSSH, but they decided to remove it and call it a security vulnerability (CVE 2015-6563). I can see how some people may have seen it that way, that’s why I recommend reading this article from an OpenPAM author and a FreeBSD security officer at the time. > Knowing the background, let's take a look at an example:

```
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
	const char *user, *password;
	int err;

	err = pam_get_user(pamh, &user, NULL);
	if (err != PAM_SUCCESS)
		return (err);

	err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
	if (err == PAM_CONV_ERR)
		return (err);
	if (err != PAM_SUCCESS)
		return (PAM_AUTH_ERR);

	err = authenticate(user, password);
	if (err != PAM_SUCCESS)
		return (err);

	return (pam_set_item(pamh, PAM_USER, "template"));
}
```

In the listing above we have an example of a PAM module. pam_get_user(3) provides a username. pam_get_authtok(3) shows us a secret given by the user. Both functions allow us to give an optional prompt which should be shown to the user. The authenticate function is our crafted function which authenticates the user. In our first scenario we wanted to keep all users in an external database. If authentication is successful we then switch to a template user which has a shell set up for a script allowing us to configure the machine. In our second scenario the authenticate function authenticates the user in RADIUS. Another step is to add our PAM module to the /etc/pam.d/system or to the /etc/pam.d/login configuration:

```
auth	sufficient	pam_template.so	no_warn allow_local
```

Unfortunately the description of all these options goes beyond this article - if you would like to know more about it you can find them in the PAM manual. The last thing we need to do is to add our template user to the system, which you can do with the adduser(8) command or by simply modifying the /etc/master.passwd file and using the pwd_mkdb(8) program:

```
$ tail -n 1 /etc/master.passwd
template:*:1000:1000::0:0:User &:/:/usr/local/bin/template_sh
$ sudo pwd_mkdb /etc/master.passwd
```

As you can see, the template user can be locked and we still can use it in our PAM module (the * character after the login). I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago. iXsystems iXsystems @ VMWorld ###ZFS file server What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats, and some not.
We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of a data loss in the primary NAS. This offsite file server would be passive - it will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, this solution would take on some passive report generation workloads as an ideal way of offloading some work from the primary NAS. The passive work is read-only. The backup server would keep snapshots on a best-effort basis dating back 10 years. However, the data on this backup server would be archived to tapes periodically. A simple guidance of priorities: Data integrity > Cost of solution > Storage capacity > Performance. Why not enterprise NAS? NetApp FAS or EMC Isilon or the like? We decided that enterprise-grade NAS like NetApp FAS or EMC Isilon are prohibitively expensive and overkill for our needs. An open source & cheaper alternative to an enterprise-grade filesystem with the level of durability we expect turned out to be ZFS. We’re already spoilt from using snapshots by a clever copy-on-write filesystem (WAFL) by NetApp. ZFS providing snapshots in an almost identical way was a big influence in the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem. FreeBSD vs Debian for ZFS This is a backup server, a long-term solution. Stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means there is a higher probability of bugs like this occurring. We’re not looking for cutting-edge features here. Perhaps Linux would be considered in the future. FreeBSD + ZFS We already utilize FreeBSD and OpenBSD for infrastructure services and we have nothing but praise for the stability that the BSDs have provided us. We’d gladly use FreeBSD and OpenBSD wherever possible. Okay, ZFS, but why not FreeNAS? IMHO, FreeNAS provides an integrated GUI management tool over FreeBSD for a novice user to set up and configure FreeBSD, ZFS, Jails and many other features. But this user-facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone who appreciates the command-line interface, and understands FreeBSD enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS. Specifications Lenovo SR630 rack server 2 x Intel Xeon Silver 4110 CPUs 768 GB of DDR4 ECC 2666 MHz RAM 4-port SAS card configured in passthrough mode (JBOD) Intel network card with 10 Gb SFP+ ports 128GB M.2 SSD for use as boot drive 2 x HGST 4U60 JBOD 120 (2 x 60) x 10TB SAS disks ###Reflection on one-year usage of OpenBSD I have used OpenBSD for more than one year, and it is time to give a summary of the experience: (1) What do I get from OpenBSD? a) A good UNIX tutorial. When I am curious about some UNIX commands’ implementation, I will refer to the OpenBSD source code, and I actually gain something every time. E.g., refreshing socket programming skills from nc; learning how to process files efficiently from cat. b) A better test bed. Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if it is possible. One reason is OpenBSD usually gives more helpful warnings. E.g., hints like this: ...... warning: sprintf() is often misused, please use snprintf() ...... Or you can refer to this post which I wrote before.
The other is that sometimes a program which runs well on Linux may crash on OpenBSD, and OpenBSD can help you find hidden bugs. c) Some handy tools. E.g., I find tcpbench useful, so I ported it to Linux for my own usage (project is here). (2) What do I give back to OpenBSD? a) Patches. Although most of them are trivial modifications, they are still my contributions. b) Writing blog posts to share experience about using OpenBSD. c) Developing programs for OpenBSD/BSD: lscpu and free. d) Porting programs to OpenBSD: E.g., I find google/benchmark a nifty tool, but it lacked OpenBSD support, so I submitted a PR and it was accepted. So you can use google/benchmark on OpenBSD now. Generally speaking, the time invested in OpenBSD is rewarding. If you are still hesitating, why not give it a shot? ##Beastie Bits BSD Users Stockholm Meetup BSDCan 2018 Playlist OPNsense 18.7 released Testing TrueOS (FreeBSD derivative) on real hardware ThinkPad T410 Kernel Hacker Wanted! Replace a pair of 8-bit writes to VGA memory with a single 16-bit write Reduce taskq and context-switch cost of zio pipe Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions Tarsnap ##Feedback/Questions Anian_Z - Question Robert - Pool question Lain - Congratulations Thomas - L2arc Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 258: OS Foundations | BSD Now 258
FreeBSD Foundation July Newsletter, a bunch of BSDCan trip reports, HardenedBSD Foundation status, FreeBSD and OSPFd, ZFS disk structure overview, and more Spectre mitigations in OpenBSD. ##Headlines FreeBSD Foundation Update, July 2018 MESSAGE FROM THE EXECUTIVE DIRECTOR We’re in the middle of summer here, in Boulder, CO. While the days are typically hot, they can also be quite unpredictable. Thanks to the Rocky Mountains, waking up to 50-degree (~10 C) foggy weather is not surprising. In spite of the unpredictable weather, many of us took some vacation this month. Whether it was extending the Fourth of July celebration, spending time with family, or relaxing and enjoying the summer weather, we appreciated our time off, while still managing to accomplish a lot! In this newsletter, Glen Barber enlightens us about the upcoming 12.0 release. I give a recap of OSCON, which Ed Maste and I attended, and Mark Johnston explains the work on his improved microcode loading project that we are funding. Finally, Anne Dickison gives us a rundown on upcoming events and information on submitting a talk for MeetBSD. Your support helps us continue this work. Please consider making a donation today. We can’t do it without you. Happy reading!! June 2018 Development Projects Update Fundraising Update: Supporting the Project July 2018 Release Engineering Update OSCON 2018 Recap Submit Your Work: MeetBSD 2018 FreeBSD Discount for 2018 SNIA Developer Conference EuroBSDcon 2018 Travel Grant Application Deadline: August 2 iXsystems ###BSDCan Trip Reports BSDCan 2018 Trip Report: Constantin Stan BSDCan 2018 Trip Report: Danilo G. Baio BSDCan 2018 Trip Report: Rodrigo Osorio BSDCan 2018 Trip Report: Dhananjay Balan BSDCan 2018 Trip Report: Kyle Evans ##News Roundup ###FreeBSD and OSPFd With FreeBSD jails deployed around the world, static routing was getting a bit out of hand. Plus, when I needed to move a jail from one data center to another, I would have to update routing tables across multiple sites. Not ideal. Enter dynamic routing… OSPF (open shortest path first) is an internal dynamic routing protocol that provides the autonomy that I needed, and it’s fairly easy to set up. This article does not cover configuration of VPN links, ZFS, or FreeBSD jails; however, it’s recommended that you use separate ZFS datasets per jail so that migration between hosts can be done with zfs send & receive.
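A hedged sketch of what that per-jail migration can look like; the pool layout, jail name, and target host are invented for the example:

```
# snapshot the jail's dataset and stream it to the other host
zfs snapshot zroot/jails/web01@migrate
zfs send zroot/jails/web01@migrate | ssh host2 zfs receive -u zroot/jails/web01
```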
In this scenario, we have five FreeBSD servers in two different data centers. Each physical server runs anywhere from three to ten jails. When jails are deployed, they are assigned a /32 IP on lo2. From here, pf handles inbound port forwarding and outbound NAT. Links between each server are provided by OpenVPN TAP interfaces. (I used TAP to pass layer 2 traffic. I seem to remember that I needed TAP interfaces due to needing GRE tunnels on top of TUN interfaces to get OSPF to communicate. I’ve heard TAP is slower than TUN so I may revisit this.) In this example, we will use 172.16.2.0/24 as the range for OpenVPN P2P links and 172.16.3.0/24 as the range of IPs available for assignment to each jail. Previously, when deploying a jail, I assigned IPs based on the following groups: Server 1: 172.16.3.0/28 Server 2: 172.16.3.16/28 Server 3: 172.16.3.32/28 Server 4: 172.16.3.48/28 Server 5: 172.16.3.64/28 When statically routing, this made routing tables a bit smaller and easier to manage. However, when I needed to migrate a jail to a new host, I had to add a new /32 to all routing tables. Now, with OSPF, this is no longer an issue, nor is it required. To get started, first we install the Quagga package. The two configuration files needed to get OSPFv2 running are /usr/local/etc/quagga/zebra.conf and /usr/local/etc/quagga/ospfd.conf. Starting with zebra.conf, we’ll define the hostname and a management password. Second, we will populate the ospfd.conf file (a sketch of both files follows below). To break this down: service advanced-vty allows you to skip the en or enable command. Since I’m the only one who uses this service, it’s one less command to type. ip ospf authentication message-digest and ip ospf message-digest-key… ignore non-authenticated OSPF communication. This is useful when communicating over the WAN and to prevent a replay attack. Since I’m using a VPN to communicate, I could exclude these. passive-interface default turns off the active communication of OSPF messages on all interfaces except for the interfaces listed as no passive-interface [interface name]. Since my ospf communication needs to leverage the VPNs, this prevents the servers from trying to send ospf data out the WAN interface (a firewall would work too). network 172.16.2.0/23 area 0.0.0.0 lists a supernet of both 172.16.2.0/24 and 172.16.3.0/24. This ensures routes for the jails are advertised along with the P2P links used by OpenVPN. The OpenVPN links are not required but can provide another IP to access your server if one of the links goes down. (See the suggested tasks below). At this point, we can enable the services in rc.conf.local and start them. We bind the management interface to 127.0.0.1 so that it’s only accessible to local telnet sessions. If you want to access this service remotely, you can bind to a remotely accessible IP. Remember telnet is not secure. If you need remote access, use a VPN. To manage the services, you can telnet to your host’s localhost address. Use 2604 for the ospf service. Remember, this is accessible by non-root users so set a good password.
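The article's configuration listings are not reproduced here, so the following is a hedged reconstruction assembled from the directives discussed above; the hostname, passwords, key number, and interface name are illustrative:

```
! /usr/local/etc/quagga/zebra.conf
hostname host1
password managementpassword

! /usr/local/etc/quagga/ospfd.conf
hostname host1
password managementpassword
service advanced-vty
!
interface tap0
 ip ospf authentication message-digest
 ip ospf message-digest-key 1 md5 sharedsecret
!
router ospf
 passive-interface default
 no passive-interface tap0
 network 172.16.2.0/23 area 0.0.0.0
```

Management access then works as described: telnet to 127.0.0.1 on port 2604 to reach the ospfd VTY.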
###A broad overview of how ZFS is structured on disk

When I wrote yesterday's entry, it became clear that I didn't understand as much as I thought about how ZFS is structured on disk (and that this matters, since I thought that ZFS copy-on-write updates touched a lot more than they actually do). So today I want to write down my new broad understanding of how this works. (All of this can be dug out of the old, draft ZFS on-disk format specification, but that spec is written in a very detailed way and things aren't always immediately clear from it.)

Almost everything in ZFS is a DMU object. All objects are defined by a dnode, and object dnodes are almost always grouped together in an object set. Object sets are themselves DMU objects; they store dnodes as basically a giant array in a 'file', which uses data blocks and indirect blocks and so on, just like anything else. Within a single object set, dnodes have an object number, which is the index of their position in the object set's array of dnodes.

(Because an object number is just the index of the object's dnode in its object set's array of dnodes, object numbers are basically always going to be duplicated between object sets (and they're always relative to an object set). For instance, pretty much every object set is going to have an object number ten, although not all object sets may have enough objects that they have an object number ten thousand. One corollary of this is that if you ask zdb to tell you about a given object number, you have to tell zdb what object set you're talking about. Usually you do this by telling zdb which ZFS filesystem or dataset you mean.)

Each ZFS filesystem has its own object set for objects (and thus dnodes) used in the filesystem. As I discovered yesterday, every ZFS filesystem has a directory hierarchy and it may go many levels deep, but all of this directory hierarchy refers to directories and files using their object number.

ZFS organizes and keeps track of filesystems, clones, and snapshots through the DSL (Dataset and Snapshot Layer). The DSL has all sorts of things: DSL directories, DSL datasets, and so on, all of which are objects and many of which refer to object sets (for example, every ZFS filesystem must refer to its current object set somehow). All of these DSL objects are themselves stored as dnodes in another object set, the Meta Object Set, which the uberblock points to. To my surprise, object sets are not stored in the MOS (and as a result do not have 'object numbers'). Object sets are always referred to directly, without indirection, using a block pointer to the object set's dnode. (I think object sets are referred to directly so that snapshots can freeze their object set very simply.)

The DSL directories and datasets for your pool's set of filesystems form a tree themselves (each filesystem has a DSL directory and at least one DSL dataset). However, just like in ZFS filesystems, all of the objects in this second tree refer to each other indirectly, by their MOS object number. Just as with files in ZFS filesystems, this level of indirection limits the amount of copy-on-write updating that ZFS has to do when something changes.

PS: If you want to examine MOS objects with zdb, I think you do it with something like 'zdb -vvv -d ssddata 1', which will get you object number 1 of the MOS, which is the MOS object directory. If you want to ask zdb about an object in the pool's root filesystem, use 'zdb -vvv -d ssddata/ 1'. You can tell which one you're getting by what zdb prints out. If it says 'Dataset mos [META]' you're looking at objects from the MOS; if it says 'Dataset ssddata [ZPL]', you're looking at the pool's root filesystem (where object number 1 is the ZFS master node).

PPS: I was going to write up what changes on a filesystem write, but then I realized that I didn't know how blocks being allocated and freed are reflected in pool structures. So I'll just say that I think that, ignoring free space management, only four DMU objects get updated: the file itself, the filesystem's object set, the filesystem's DSL dataset object, and the MOS.

(As usual, doing the research to write this up taught me things that I didn't know about ZFS.)

Digital Ocean

###HardenedBSD Foundation Status

On 09 July 2018, the HardenedBSD Foundation Board of Directors held the kick-off meeting to start organizing the Foundation. The following people attended the kick-off meeting:

Shawn Webb (in person)
George Saylor (in person)
Ben Welch (in person)
Virginia Suydan (in person)
Ben La Monica (phone)
Dean Freeman (phone)
Christian Severt (phone)

We discussed the very first steps that need to be taken to organize the HardenedBSD Foundation as a 501(c)(3) not-for-profit organization in the US. We determined we could file a 1023EZ instead of the full-blown 1023. This will help speed the process up drastically. The steps are laid out as follows:

Register a Post Office Box (PO Box) (completed on 10 Jul 2018).
Register The HardenedBSD Foundation as a tax-exempt nonstock corporation in the state of Maryland (started on 10 Jul 2018, submitted on 18 Jul 2018, granted 20 Jul 2018).
Obtain a federal tax ID (obtained 20 Jul 2018).
Close the current bank account and create a new one using the federal tax ID (completed on 20 Jul 2018).
File the 1023EZ paperwork with the federal government (started on 20 Jul 2018).
Hire an attorney to help draft the organization bylaws.

Each of the steps must be done serially and in order.

We added Christian Severt, who is on Emerald Onion's Board of Directors, to the HardenedBSD Foundation Board of Directors as an advisor. He was foundational in getting Emerald Onion their 501(c)(3) tax-exempt, not-for-profit status and has really good insight. Additionally, he's going to help HardenedBSD coordinate hosting services, figuring out the best deals for us.

We promoted George Saylor to Vice President and changed Shawn Webb's title to President and Director. This is to help resolve potential concerns both the state and federal agencies might have with an organization having only a single President role.

We hope to be granted our 501(c)(3) status before the end of the year, though that may be subject to change. We are excited for the formation of the HardenedBSD Foundation, which will open up new opportunities not otherwise available to HardenedBSD.

###More mitigations against speculative execution vulnerabilities

Philip Guenther (guenther@) and Bryan Steele (brynet@) have added more mitigations against speculative execution CPU vulnerabilities on the amd64 platform.

```
CVSROOT: /cvs
Module name: src
Changes by: guenther@cvs.openbsd.org 2018/07/23 11:54:04

Modified files:
sys/arch/amd64/amd64: locore.S
sys/arch/amd64/include: asm.h cpufunc.h frameasm.h

Log message:
Do "Return stack refilling", based on the "Return stack underflow" discussion and its associated appendix at https://support.google.com/faqs/answer/7625886 This should address at least some cases of "SpectreRSB" and earlier Spectre variants; more commits to follow.

The refilling is done in the enter-kernel-from-userspace and return-to-userspace-from-kernel paths, making sure to do it before unblocking interrupts so that a successive interrupt can't get the CPU to C code without doing this refill.

Per the link above, it also does it immediately after mwait, apparently in case the low-power CPU states of idle-via-mwait flush the RSB.

ok mlarkin@ deraadt@
```

and:

```
CVSROOT: /cvs
Module name: src
Changes by: guenther@cvs.openbsd.org 2018/07/23 20:42:25

Modified files:
sys/arch/amd64/amd64: locore.S vector.S vmm_support.S
sys/arch/amd64/include: asm.h cpufunc.h

Log message:
Also do RSB refilling when context switching, after vmexits, and when vmlaunch or vmresume fails.

Follow the lead of clang and the intel recommendation and do an lfence after the pause in the speculation-stop path for retpoline, RSB refill, and meltover ASM bits.

ok kettenis@ deraadt@
```

"Mitigation G-2" for AMD processors:

```
CVSROOT: /cvs
Module name: src
Changes by: brynet@cvs.openbsd.org 2018/07/23 17:25:03

Modified files:
sys/arch/amd64/amd64: identcpu.c
sys/arch/amd64/include: specialreg.h

Log message:
Add "Mitigation G-2" per AMD's Whitepaper "Software Techniques for Managing Speculation on AMD Processors"

By setting MSR C001_1029[1]=1, LFENCE becomes a dispatch serializing instruction.
Tested on AMD FX-4100 "Bulldozer", and Linux guest in SVM vmd(8)

ok deraadt@ mlarkin@
```

##Beastie Bits

HardenedBSD will stop supporting 10-STABLE on 10 August 2018 (https://groups.google.com/a/hardenedbsd.org/forum/#!topic/users/xvU0g-g1l5U)
GSoC 2018 Reports: Integrate libFuzzer with the Basesystem, Part 2 (https://blog.netbsd.org/tnf/entry/gsoc_2018_reports_integrate_libfuzzer1)
ZFS Boot Environments at PBUG (https://vermaden.wordpress.com/2018/07/30/zfs-boot-environments-at-pbug/)
Second Editions versus the Publishing Business (https://blather.michaelwlucas.com/archives/3229)
Theo de Raadt on "unveil(2) usage in base" (https://undeadly.org/cgi?action=article;sid=20180728063716)
rtadvd(8) has been replaced by rad(8) (https://undeadly.org/cgi?action=article;sid=20180724072205)
BSD Users Stockholm Meetup #3 (https://www.meetup.com/BSD-Users-Stockholm/events/253447019/)
Changes to NetBSD release support policy (https://blog.netbsd.org/tnf/entry/changes_to_netbsd_release_support)
The future of HAMMER1 (http://lists.dragonflybsd.org/pipermail/users/2018-July/357832.html)

Tarsnap

##Feedback/Questions

Rodriguez - A Question (http://dpaste.com/0Y1B75Q#wrap)
Shane - About ZFS Mostly (http://dpaste.com/32YGNBY#wrap)
Leif - ZFS less than 8gb (http://dpaste.com/2GY6HHC#wrap)
Wayne - ZFS vs EMC (http://dpaste.com/17PSCXC#wrap)

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Episode 257: Great NetBSD 8 | BSD Now 257
NetBSD 8.0 available, FreeBSD on Scaleway's ARM64 VPS, encrypted backups with OpenBSD, Dragonfly server storage upgrade, zpool checkpoints, g2k18 hackathon reports, and more.

##Headlines

###NetBSD v8.0 Released

The NetBSD Project is pleased to announce NetBSD 8.0, the sixteenth major release of the NetBSD operating system. This release brings stability improvements, hundreds of bug fixes, and many new features. Some highlights of the NetBSD 8.0 release are:

USB stack rework, USB3 support added.
In-kernel audio mixer (audio_system(9)).
Reproducible builds (MKREPRO, see mk.conf(5)).
Full userland debug information (MKDEBUG, see mk.conf(5)) available. While most install media do not come with them (for size reasons), the debug and xdebug sets can be downloaded and extracted as needed later. They provide full symbol information for all base system and X binaries and libraries and allow better error reporting and (userland) crash analysis.
PaX MPROTECT (W^X) memory protection enforced by default on some architectures with fine-grained memory protection and suitable ELF formats: i386, amd64, evbarm, landisk.
PaX ASLR (Address Space Layout Randomization) enabled by default on: i386, amd64, evbarm, landisk, sparc64.
Position independent executables by default for userland on: i386, amd64, arm, m68k, mips, sh3, sparc64.
A new socket layer, can(4), has been added for communication of devices on a CAN bus.
A special pseudo interface, ipsecif(4), for route-based VPNs has been added.
Parts of the network stack have been made MP-safe. The kernel option NET_MPSAFE is required to enable this.
Hardening of the network stack in general.
Various WAPBL (the NetBSD file system "log" option) stability and performance improvements.

Specific to i386 and amd64 CPUs:

Meltdown mitigation: SVS (Separate Virtual Space), enabled by default.
SpectreV2 mitigation: retpoline (support in gcc), used by default for kernels. Other hardware mitigations are also available.
SpectreV4 mitigations available for Intel and AMD.
PopSS workaround: user access to debug registers is turned off by default.
Lazy FPU saving disabled on vulnerable Intel CPUs ("eagerfpu").
SMAP support.
Improvement and hardening of the memory layout: W^X, fewer writable pages, better consistency, better performance.
(U)EFI bootloader.
Many evbarm kernels now use FDT (flat device tree) information (loadable at boot time from an external file) for device configuration; the number of kernels has decreased but the number of supported boards has vastly increased.

Lots of updates to 3rd party software included:

GCC 5.5 with support for Address Sanitizer and Undefined Behavior Sanitizer
GDB 7.12
GNU binutils 2.27
Clang/LLVM 3.8.1
OpenSSH 7.6
OpenSSL 1.0.2k
mdocml 1.14.1
acpica 20170303
ntp 4.2.8p11-o
dhcpcd 7.0.6
Lua 5.3.4

###Running FreeBSD on the ARM64 VPS from Scaleway

I've been thinking about this since 2017, but only yesterday signed up for an account and played around with the ARM64 offering. Turns out it's pretty great! KVM boots into UEFI, there's a local VirtIO disk attached, no NBD junk required. So we can definitely run FreeBSD. I managed to "depenguinate" a running instance; the notes are below. Would be great if Scaleway offered an official image instead :wink:

For some reason, unlike on x86, mounting additional volumes is not allowed on ARM64 instances. So we'll have to move the running Linux to a ramdisk using pivot_root, and then we can do whatever we want to our one and only disk.

Spin up an instance with Ubuntu Zesty and ssh in.
Prepare the system and change the root to a tmpfs:

apt install gdisk
mount -t tmpfs tmpfs /tmp
cp -r /bin /sbin /etc /dev /root /home /lib /run /usr /var /tmp
mkdir /tmp/proc /tmp/sys /tmp/oldroot
mount /dev/vda /tmp/oldroot
mount --make-rprivate /
pivot_root /tmp /tmp/oldroot
for i in dev proc sys run; do mount --move /oldroot/$i /$i; done
systemctl daemon-reload
systemctl restart sshd

Now reconnect to ssh from a second terminal (note: rm the connection file if you use ControlPersist in ssh config), then exit the old session. Kill the old sshd process, restart or stop the rest of the stuff using the old disk:

pkill -f notty
sed -ibak 's/RefuseManualStart.*$//g' /lib/systemd/system/dbus.service
systemctl daemon-reload
systemctl restart dbus
systemctl daemon-reexec
systemctl stop user@0 ntp cron systemd-logind
systemctl restart systemd-journald systemd-udevd
pkill agetty
pkill rsyslogd

Check that nothing is touching /oldroot:

lsof | grep oldroot

There will probably be an old dbus-daemon; kill it. And finally, unmount the old root and overwrite the hard disk with a memstick image:

umount -R /oldroot
wget https://download.freebsd.org/ftp/snapshots/arm64/aarch64/ISO-IMAGES/12.0/FreeBSD-12.0-CURRENT-arm64-aarch64-20180719-r336479-mini-memstick.img.xz
xzcat FreeBSD-12.0-CURRENT-arm64-aarch64-20180719-r336479-mini-memstick.img.xz | dd if=/dev/stdin of=/dev/vda bs=1M

(Look for the newest snapshot; don't copy-paste the July 19 link above if you're reading this in the future. Actually, maybe use a release instead of CURRENT…)

Now, fix the GPT: move the secondary table to the end of the disk and resize the table. It's important to resize here, as FreeBSD does not do that and silently creates partitions that won't persist across reboots:

gdisk /dev/vda
x
e
s
4
w
y

And reboot. (You might actually want to hard reboot here: for some reason on the first reboot from Linux, pressing the any-key to enter the prompt in the loader hangs the console for me.) I didn't have to go into the ESC menu and choose the local disk in the boot manager; it seems to boot from disk automatically.

Now we're in the FreeBSD EFI loader. For some reason, the (recently fixed?) serial autodetection from EFI is not working correctly. Or something. So you don't get console output by default. To fix, you have to run these commands in the boot loader command prompt:

set console=comconsole,efi
boot

(Ignore the warning about comconsole not being a valid console. Since there's at least one console (efi) that the loader thinks is valid, it sets the whole variable.) (UPD: shouldn't be necessary in the next snapshot.)

Now it's a regular installation process! When asked about partitioning, choose Shell, and manually add a partition and set up a root filesystem:

gpart add -t freebsd-zfs -a 4k -l zroot vtbd0
zpool create -R /mnt -O mountpoint=none -O atime=off zroot /dev/gpt/zroot
zfs create -o canmount=off -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/usr zroot/ROOT/default/usr
zfs create -o mountpoint=/var zroot/ROOT/default/var
zfs create -o mountpoint=/var/log zroot/ROOT/default/var/log
zfs create -o mountpoint=/usr/home zroot/home
zpool set bootfs=zroot/ROOT/default zroot
exit

(In this example, I set up ZFS with a beadm-compatible layout which allows me to use Boot Environments.)
In the post-install chroot shell, fix some configs like so:

echo 'zfs_load="YES"' >> /boot/loader.conf
echo 'console="comconsole,efi"' >> /boot/loader.conf
echo 'vfs.zfs.arc_max="512M"' >> /boot/loader.conf
sysrc zfs_enable=YES
exit

(Yeah, for some reason, the loader does not load zfs.ko's dependency opensolaris.ko automatically here. idk what even. It does on my desktop and laptop.)

Now you can reboot into the installed system!! Here's how you can set up IPv6 (and root's ssh key) auto-configuration on boot:

pkg bootstrap
pkg install curl
curl https://raw.githubusercontent.com/scaleway/image-tools/master/bases/overlay-common/usr/local/bin/scw-metadata > /usr/local/bin/scw-metadata
chmod +x /usr/local/bin/scw-metadata
echo '#!/bin/sh' > /etc/rc.local
echo 'PATH=/usr/local/bin:$PATH' >> /etc/rc.local
echo 'eval $(scw-metadata)' >> /etc/rc.local
echo 'echo $SSH_PUBLIC_KEYS_0_KEY > /root/.ssh/authorized_keys' >> /etc/rc.local
echo 'chmod 0400 /root/.ssh/authorized_keys' >> /etc/rc.local
echo 'ifconfig vtnet0 inet6 $IPV6_ADDRESS/$IPV6_NETMASK' >> /etc/rc.local
echo 'route -6 add default $IPV6_GATEWAY' >> /etc/rc.local
mkdir /run
mkdir /root/.ssh
sh /etc/rc.local

And to fix incoming TCP connections, configure the DHCP client to change the broadcast address:

echo 'interface "vtnet0" { supersede broadcast-address 255.255.255.255; }' >> /etc/dhclient.conf
killall dhclient
dhclient vtnet0

Other random notes:

keep in mind that -CURRENT snapshots come with a debugging kernel by default, which limits syscall performance by a lot; you might want to build your own with config GENERIC-NODEBUG
also disable heavy malloc debugging features by running ln -s 'abort:false,junk:false' /etc/malloc.conf (yes, that's storing config in a symlink)
you can reuse the installer's partition for swap

Digital Ocean http://do.co/bsdnow

###Easy encrypted backups on OpenBSD with base tools

Today's topic is "encrypted backups" using only OpenBSD base tools. I am planning to write a bigger article later about backups, but it's a wide topic with a lot of software to cover and a lot of explanation about the different use cases, needs, issues, and solutions. Here I will stick to explaining how to make reliable backups for an OpenBSD system (my laptop).

What we need is the dump command (see man 8 dump for its man page). It's a utility to make a backup of a filesystem; it can only make a backup of one filesystem at a time. On my laptop I only back up the /home partition, so this solution is suitable for me while still being easy.

Dump can do incremental backups, meaning that it will only save what changed since the last backup of a lower level. If you do not understand this, please refer to the dump man page.

What is very interesting with dump is that it honors the nodump flag, which is an extended attribute of an FFS filesystem. One can use the command chflags nodump /home/solene/Downloads to tell dump not to save that folder (under some circumstances). By default, dump will not save those files, EXCEPT for a level 0 backup.

Important features of this backup solution:

saves files with attributes, permissions, and flags
can recreate a partition from a dump
restores files interactively, from a list, or from their inode number (useful when you have files in lost+found)
one dump = one file

My process is to make a huge dump of level 0 and keep it on a remote server; then, once a week, I make a level 1 backup which will contain everything changed since the last dump of level 0; and every day I do a level 2 backup of my files.
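As a sketch of that rotation, the three levels might look like the commands below. The /backup file names are my own placeholders, and the excerpt above stops before the article's encryption step, so none is shown here:

```
# level 0: full dump of /home; -a auto-sizes, -u records it in /etc/dumpdates
dump -0au -f /backup/home.level0.dump /home
# weekly level 1: everything changed since the level 0
dump -1au -f /backup/home.level1.dump /home
# daily level 2: everything changed since the level 1
dump -2au -f /backup/home.level2.dump /home
# restore interactively from a dump file
restore -i -f /backup/home.level0.dump
```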
The level 2 will contain the latest files and the files that change a lot, which are often the most interesting. The level 1 backup is important because it offloads a lot of changes from the level 2. Let me explain: let's say my full backup is 60 GB, full of pictures, source files, GUI application data files, etc… A level 1 backup will contain every new picture, new project, new GUI file, etc. since the full backup, which will produce bigger and bigger dumps over time; usually it is only 100 MB to 1 GB. As I don't add new pictures or use new software every day, the level 2 takes care of most little changes to my data, like source code edits, little works on files, etc… The level 2 backup is really small; I try to keep it under 50 MB so I can easily send it to my remote server every day.

One could use more dump levels, up to level 9, but keep in mind that those are incremental. In my case, if I need to restore my whole partition, I will need to use the level 0, 1, and 2 dumps to get back to the latest backup state. If you want to restore a file deleted a few days ago, you need to remember which level holds its latest version.

History note: dump was designed to be used with magnetic tapes.

See the original article for the remainder.

##News Roundup

###Status of DFly server storage upgrades (Matt Dillon)

Last month we did some storage upgrades, particularly of internet-facing machines for package and OS distribution. Yesterday we did a number of additional upgrades, described below. All using funds generously donated by everyone!

The main repository server received a 2TB SSD to replace the HDDs it was using before. This will improve access to a number of things maintained by this server, including the mail archives, and gives the main repo server more breathing room for repository expansion. Space was at a premium before. Now there's plenty.

Monster, the quad-socket Opteron which we currently use as the database builder and repository that we export to our public grok service (grok.dragonflybsd.org), received a 512G SSD to add swap space for swapcache, to help cache the grok metadata. It now has 600GB of swapcache configured. Over the next few weeks we will also be changing the grok updates to ping-pong between the two 4TB data drives it received in the last upgrade, so we can do concurrent updates and web accesses without them tripping over each other performance-wise.

The main developer box, Leaf, received a 2TB SSD, and we are currently in the midst of migrating all the developer accounts in /home and /build from its old HDDs to its new SSD. This machine serves developer repos, developer web stuff, our home page and wiki, etc., so those will become snappier as well.

Hard drives are becoming real dinosaurs. We still have a few left from the old days, but in terms of active use the only HDDs we feel we really need to keep now are the ones we use for backups and grok data, owing to the amount of storage needed for those functions.

Five years ago when we received the blade server that now sits in the colo, we had a small 256G SSD for root on every blade, and everything else used HDDs. To make things operate smoothly, most of that 256G root SSD was assigned to swapcache (200G of it, in fact, in most cases). Even just 2 years ago, replacing all those HDDs with SSDs, even just the ones being used to actively serve data and support developers, would have been cost prohibitive.
But today it isn't, and the only HDDs we really need anywhere are for backups or certain very large bits of bulk data (aka the grok source repository and index). The way things are going, even the backup drives will probably become SSDs over the next two years.

###iX ad spot

OSCON 2018 Recap

###zpool checkpoints

In March, a very interesting feature called 'zpool checkpoints' landed in FreeBSD. Before we jump straight into the topic, let's take a step back and look at another ZFS feature called 'snapshot'. A snapshot allows us to create an image of a single file system. This gives us the option to modify data on the dataset without the fear of losing some data.

A very good example of how to use a ZFS snapshot is during an upgrade of a database schema. Let us consider a situation where we have a few scripts which change our schema. Sometimes we are unable to upgrade in one transaction (for example, when we attempt to alter a table and then update it in a single transaction). If our database is on a dataset, we can just snapshot it, and if something goes wrong, simply roll back the file system to its previous state.

The problem with snapshots is that they work only on a single dataset. If we added some dataset, we wouldn't then be able to create a snapshot which would roll back that operation. The same goes for changing the attributes of a dataset. If we change the compression on a dataset, we cannot roll it back. We would need to change that manually. Another interesting problem involves upgrading the whole operating system when the upgrade brings a new ZFS version. What if we start upgrading our datasets and our kernel begins to crash? (If you use FreeBSD, I doubt you will ever have had that experience, but still…) If we roll back to the old kernel, there is a chance the dataset will stop working, because the new kernel doesn't know how to use the new features.

Zpool checkpoints are the solution to all those problems. Instead of taking a single snapshot of a dataset, we can now take a snapshot of the whole pool. That means we will roll back not only the data but also all the metadata. If we rewind to the checkpoint, all our ZFS properties will be rolled back; the upgrade will be rolled back, and even the creation/deletion of datasets and snapshots will be rolled back.

Zpool checkpoint introduces a few simple functions:

Creating a checkpoint: zpool checkpoint <pool>
Rolling back to the checkpoint and removing it: zpool import --rewind-to-checkpoint <pool>
Mounting the pool read-only (this does not roll back the data): zpool import --read-only=on --rewind-to-checkpoint
Removing the checkpoint: zpool checkpoint --discard <pool> or zpool checkpoint -d <pool>

With this powerful feature we need to remember some safety rules:

Scrub will work only on data that isn't in the checkpoint.
You can't remove a vdev if you have a checkpoint.
You can't split a mirror.
Reguid will not work either.
You can't create a checkpoint while one of the disks is being removed…

For me, this feature is incredibly useful, especially when upgrading an operating system, or when I need to experiment with additional datasets.
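As a quick sketch of that upgrade workflow (the pool name is a placeholder; note that the rewind happens at import time, so the pool has to be exported first):

```
zpool checkpoint mypool                    # take a pool-wide checkpoint before the upgrade
# ...upgrade the OS, change properties, create or destroy datasets...

# if something went wrong, rewind the whole pool:
zpool export mypool
zpool import --rewind-to-checkpoint mypool

# if everything is fine, discard the checkpoint instead:
zpool checkpoint -d mypool
```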
If you speak Polish, I have some additional information for you. During the first Polish BSD user group meeting, I had the opportunity to give a short talk about this feature. Here you can find the video of that talk, and here is the slideshow. I would like to offer my thanks to Serapheim Dimitropoulos for developing this feature, and for being so kind in sharing with me so many of its intricacies. If you are interested in knowing more about the technical details of this feature, you should check out Serapheim's blog, and his video about checkpoints.

###g2k18 Reports

g2k18 hackathon report: Ingo Schwarze on sed(1) bugfixing with Martijn van Duren, and about other small userland stuff
g2k18 hackathon report: Kenneth Westerback on dhcpd(8) fixes, disklabel(8) refactoring and more
g2k18 Hackathon Report: Marc Espie on ports and packages progress
g2k18 hackathon report: Antoine Jacoutot on porting
g2k18 hackathon report: Matthieu Herrb on font caches and xenodm
g2k18 hackathon report: Florian Obser on rtadvd(8) -> rad(8) progress (actually, a rewrite)
g2k18 Hackathon Report: Klemens Nanni on improvements to route(8), pfctl(8), and mount(2)
g2k18 hackathon report: Carlos Cardenas on vmm/vmd progress, LACP
g2k18 hackathon report: Claudio Jeker on OpenBGPD developments
Picture of the last day of the g2k18 hackathon in Ljubljana, Slovenia

##Beastie Bits

Something blogged (on pkgsrcCon 2018)
GSoC 2018 Reports: Configuration files versioning in pkgsrc, Part 1
There should be a global 'awareness' week for developers
Polish BSD User Group – Upcoming Meeting: Aug 9th 2018
London BSD User Group – Upcoming Meeting: Aug 14th 2018
Phillip Smith's collection of reasons why ZFS is better, so that he does not have to repeat himself all the time
EuroBSDCon 2018: Sept 20-23rd in Romania – Register NOW!
MeetBSD 2018: Oct 19-20 in Santa Clara, California. Call for Papers closes on Aug 12

Tarsnap

##Feedback/Questions

Dale - L2ARC recommendations & drive age question
Todd - ZFS & S3
efraim - License Poem
Henrick - Yet another ZFS question

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 256: Because Computers | BSD Now 2^8
FreeBSD ULE vs. Linux CFS, OpenBSD on Tuxedo InfinityBook, how zfs diff reports filenames efficiently, why choose FreeBSD over Linux, PS4 double free exploit, OpenBSD's wifi autojoin, and FreeBSD jails the hard way.

Win

Celebrate our 256th episode with us. You can win a Mogics Power Bagel (not sponsored). To enter, go find the 4 episodes we did in December of 2017. In the opening, find the 4 letters in the bookshelf behind me. They spell different words in each of the 4 episodes. Send us these words in order to feedback@bsdnow.tv with the subject "bsdnow256" until August 8th, 2018 18:00 UTC and we'll randomly draw the winner on the live show. We'll then contact you to ship the item. Only one item to win. All decisions are final. Better luck next time.

##Headlines

###Battle of the Schedulers: FreeBSD ULE vs. Linux CFS

Introduction

This paper analyzes the impact on application performance of the design and implementation choices made in two widely used open-source schedulers: ULE, the default FreeBSD scheduler, and CFS, the default Linux scheduler. We compare ULE and CFS in otherwise identical circumstances. We have ported ULE to Linux, and use it to schedule all threads that are normally scheduled by CFS. We compare the performance of a large suite of applications on the modified kernel running ULE and on the standard Linux kernel running CFS. The observed performance differences are solely the result of scheduling decisions, and do not reflect differences in other subsystems between FreeBSD and Linux.

There is no overall winner. On many workloads the two schedulers perform similarly, but for some workloads there are significant and even surprising differences. ULE may cause starvation, even when executing a single application with identical threads, but this starvation may actually lead to better application performance for some workloads. The more complex load balancing mechanism of CFS reacts more quickly to workload changes, but ULE achieves better load balance in the long run.

Operating system kernel schedulers are responsible for maintaining high utilization of hardware resources (CPU cores, memory, I/O devices) while providing fast response time to latency-sensitive applications. They have to react to workload changes, and handle large numbers of cores and threads with minimal overhead [12]. This paper provides a comparison between the default schedulers of two of the most widely deployed open-source operating systems: the Completely Fair Scheduler (CFS) used in Linux, and the ULE scheduler used in FreeBSD. Our goal is not to declare an overall winner. In fact, we find that for some workloads ULE is better and for others CFS is better. Instead, our goal is to illustrate how differences in the design and the implementation of the two schedulers are reflected in application performance under different workloads.

ULE and CFS are both designed to schedule large numbers of threads on large multicore machines. Scalability considerations have led both schedulers to adopt per-core run-queues. On a context switch, a core accesses only its local run-queue to find the next thread to run. Periodically and at select times, e.g., when a thread wakes up, both ULE and CFS perform load balancing, i.e., they try to balance the amount of work waiting in the run-queues of different cores. ULE and CFS, however, differ greatly in their design and implementation choices.
FreeBSD ULE is a simple scheduler (2,950 lines of code in FreeBSD 11.1), while Linux CFS is much more complex (17,900 lines of code in the latest LTS Linux kernel, Linux 4.9). FreeBSD run-queues are FIFO. For load balancing, FreeBSD strives to even out the number of threads per core. In Linux, a core decides which thread to run next based on prior execution time, priority, and perceived cache behavior of the threads in its runqueue. Instead of evening out the number of threads between cores, Linux strives to even out the average amount of pending work.

Performance analysis

We now analyze the impact of the per-core scheduling on the performance of 37 applications. We define "performance" as follows: for database workloads and NAS applications, we compare the number of operations per second, and for the other applications we compare "execution time". The higher the "performance", the better a scheduler performs. Figure 5 presents the performance difference between CFS and ULE on a single core, with percentages above 0 meaning that the application executes faster with ULE than CFS. Overall, the scheduler has little influence on most workloads. Indeed, most applications use threads that all perform the same work, thus both CFS and ULE end up scheduling all of the threads in a round-robin fashion. The average performance difference is 1.5%, in favor of ULE. Still, scimark is 36% slower on ULE than CFS, and apache is 40% faster on ULE than CFS.

Scimark is a single-threaded Java application. It launches one compute thread, and the Java runtime executes other Java system threads in the background (for the garbage collector, I/O, etc.). When the application is executed with ULE, the compute thread can be delayed, because Java system threads are considered interactive and get priority over the computation thread.

The apache workload consists of two applications: the main server (httpd) running 100 threads, and ab, a single-threaded load injector. The performance difference between ULE and CFS is explained by different choices regarding thread preemption. In ULE, full preemption is disabled, while CFS preempts the running thread when the thread that has just been woken up has a vruntime that is much smaller than the vruntime of the currently executing thread (1ms difference in practice). In CFS, ab is preempted 2 million times during the benchmark, while it is never preempted with ULE. This behavior is explained as follows: ab starts by sending 100 requests to the httpd server, and then waits for the server to answer. When ab is woken up, it checks which requests have been processed and sends new requests to the server. Since ab is single-threaded, all requests sent to the server are sent sequentially. In ULE, ab is able to send as many new requests as it has received responses. In CFS, every request sent by ab wakes up a httpd thread, which preempts ab.

Conclusion

Scheduling threads on a multicore machine is hard. In this paper, we perform a fair comparison of the design choices of two widely used schedulers: the ULE scheduler from FreeBSD and CFS from Linux. We show that they behave differently even on simple workloads, and that no scheduler performs better than the other on all workloads.

###OpenBSD 6.3 on Tuxedo InfinityBook

Disclaimer: I came across the Tuxedo Computers InfinityBook last year at the Open! Conference, where Tuxedo had a small booth. They had previously come to my attention since they're a member of the OSB Alliance, on whose board I'm a member.
Furthermore, Tuxedo Computers are a sponsor of the OSBAR, whose organizational team I'm part of.

OpenBSD on the Tuxedo InfinityBook

I've asked the folks over at Tuxedo Computers whether they would be interested in having some tests with *BSD done, offering to test drive one of their machines and give feedback on what works and what does not - and possibly look into it. Within a few weeks they shipped me a machine, and last week the InfinityBook Pro 14" arrived. Awesome. Thanks already to the folks at Tuxedo Computers. The machine arrived accompanied by lots of swag :)

The InfinityBook is a very nice machine and allows a wide range of configurations. The configuration that was shipped to me:

Intel Core i7-8550U
1x 16GB RAM 2400MHz Crucial Ballistix Sport LT
250 GB Samsung 860 EVO (M.2 SATAIII)

I used a USB stick to boot install63.fs and re-installed the machine with OpenBSD. Full dmesg. The installation went flawlessly; the needed Intel firmware is installed automatically after installation via fw_update(1). Out of the box the graphics works, and once installed the machine presents the login.

Video

When X starts, the display is turned off for some reason. You will need to hit fn+f12 (the key with the moon on it), then the display will go on. Aside from that little nit, X works just fine and presents the expected resolution. External video is working just fine as well, either via the HDMI output or via the mini DisplayPort connector. The buttons for adjusting brightness (fn+f8 and fn+f9) are not working; instead, one has to use wsconsctl(8) to adjust the brightness.

Networking

The InfinityBook has built-in ethernet, driven by re(4), and for the wireless interface the iwm(4) driver is used. Both work as expected.

ACPI

Neither suspend nor hibernate work. Reporting of battery status is bogus as well. Some of the keyboard function keys work:

LCD on/off works (fn+f2)
Keyboard backlight dimming works (fn+f4)
Volume (fn+f5 / fn+f6) works

Sound

The azalia chipset is used for audio processing. Works as expected; volume can be controlled via buttons (fn+f5, fn+f6) or via mixerctl.

Touchpad

Can be controlled via wsconsctl(8). So far I must say that the InfinityBook makes a nice machine - and I'm enjoying working with it.

iXsystems - It's all NAS

###How ZFS makes things like 'zfs diff' report filenames efficiently

As a copy-on-write (file)system, ZFS can use the transaction group (txg) numbers that are embedded in ZFS block pointers to efficiently find the differences between two txgs; this is used in, for example, ZFS bookmarks. However, as I noted at the end of my entry on block pointers, this doesn't give us a filesystem-level difference; instead, it essentially gives us a list of inodes (okay, dnodes) that changed.

In theory, turning an inode or dnode number into the path to a file is an expensive operation; you basically have to search the entire filesystem until you find it. In practice, if you've ever run 'zfs diff', you've likely noticed that it runs pretty fast. Nor is this the only place that ZFS quickly turns dnode numbers into full paths, as it comes up in 'zpool status' reports about permanent errors. At one level, zfs diff and zpool status do this so rapidly because they ask the ZFS code in the kernel to do it for them. At another level, the question is how the kernel's ZFS code can be so fast.
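For a concrete picture of the operation in question, this is roughly what such a query looks like in use; the pool, dataset, and file names here are hypothetical, and the output is abridged:

```
zfs snapshot tank/home@before
# ...files change...
zfs diff tank/home@before tank/home
M       /tank/home/cks/notes.txt
+       /tank/home/cks/new-file
-       /tank/home/cks/old-file
```

Each of those full paths has to be reconstructed from nothing more than a changed dnode number, and it comes back almost instantly.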
The interesting and surprising answer is that ZFS cheats, in a way that makes things very fast when it works, and that almost always works in normal filesystems with normal usage patterns. The cheat is that ZFS dnodes record their parent's object number.

If you're familiar with the twists and turns of Unix filesystems, you're now wondering how ZFS deals with hardlinks, which can cause a file to be in several directories at once and so have several parents (and then it can be removed from some of the directories). The answer is that ZFS doesn't; a dnode only ever tracks a single parent, and ZFS accepts that this parent information can be inaccurate. I'll quote the comment in zfs_obj_to_pobj:

When a link is removed [the file's] parent pointer is not changed and will be invalid. There are two cases where a link is removed but the file stays around, when it goes to the delete queue and when there are additional links.

Before I get into the details, I want to say that I appreciate the brute force elegance of this cheat. The practical reality is that most Unix files today don't have extra hardlinks, and when they do, most hardlinks are done in ways that won't break ZFS's parent stuff. The result is that ZFS has picked an efficient implementation that works almost all of the time; in my opinion, the great benefit we get from having it around is more than worth the infrequent cases where it fails or malfunctions. Both zfs diff and having filenames show up in zpool status permanent error reports are very useful (and there may be other cases where this gets used).

The current details are that any time you hardlink a file to somewhere or rename it, ZFS updates the file's parent to point to the new directory. Often this will wind up with a correct parent even after all of the dust settles; for example, a common pattern is to write a file to an initial location, hardlink it to its final destination, and then remove the initial location version. In this case, the parent will be correct and you'll get the right name.

##News Roundup

###What is FreeBSD? Why Should You Choose It Over Linux?

Not too long ago I wondered if and in what situations FreeBSD could be faster than Linux, and we received a good amount of informative feedback. So far, Linux rules the desktop space and FreeBSD rules the server space. In the meantime, though, what exactly is FreeBSD? And at what times should you choose it over a GNU/Linux installation? Let's tackle these questions.

FreeBSD is a free and open source derivative of BSD (Berkeley Software Distribution) with a focus on speed, stability, security, and consistency, among other features. It has been developed and maintained by a large community ever since its initial release many years ago on November 1, 1993. BSD is the version of UNIX® that was developed at the University of California in Berkeley. And since FreeBSD is a free and open source version, the "Free" prefix is a no-brainer.

What's FreeBSD Good For?

FreeBSD offers a plethora of advanced features and even boasts some not available in some commercial operating systems. It makes an excellent Internet and Intranet server thanks to its robust network services that allow it to maximize memory and work with heavy loads to deliver and maintain good response times for thousands of simultaneous user processes. FreeBSD runs a huge number of applications with ease. At the moment, it has over 32,000 ported applications and libraries with support for desktop, server, and embedded environments.
With that being said, let me also add that FreeBSD is excellent for working with advanced embedded platforms: mail and web appliances, time servers, routers, MIPS hardware platforms, etc. You name it!

FreeBSD is available to install in several ways, and there are directions to follow for any method you want to use, be it via CD-ROM, over a network using NFS or FTP, or DVD.

FreeBSD is easy to contribute to: all you have to do is locate the section of the FreeBSD code base to modify and carefully do a neat job. Potential contributors are also free to improve on its artwork and documentation, among other project aspects.

FreeBSD is backed by the FreeBSD Foundation, a non-profit organization that you can contribute to financially, and all direct contributions are tax deductible.

FreeBSD's license allows users to incorporate the use of proprietary software, which is ideal for companies interested in generating revenues. Netflix, for example, could cite this as one of the reasons for using FreeBSD servers.

Why Should You Choose It over Linux?

From what I've gathered about both FreeBSD and Linux, FreeBSD has better performance on servers than Linux does. Its packaged applications are configured to offer better performance than Linux, and it usually runs fewer services by default. Still, there really isn't a way to certify which is faster, because the answer depends on the running hardware and applications and on how the system is tuned.

FreeBSD is reportedly more secure than Linux because of the way the whole project is developed and maintained. Unlike with Linux, the FreeBSD project is controlled by a large community of developers around the world who fall into any of these categories: core team, contributors, and committers.

FreeBSD is much easier to learn and use because there aren't a thousand and one distros to choose from with different package managers, DEs, etc.

FreeBSD is more convenient to contribute to because it is the entire OS that is preserved, not just the kernel and a repo, as is the case with Linux. You can easily access all of its versions since they are sorted by release numbers.

Apart from the many documentation pages and guides that you can find online, FreeBSD has a single official documentation wherein you can find the solution to virtually any issue you will come across. So, you're sure to find it resourceful.

FreeBSD has close to no software issues compared to Linux because it has Java, is capable of running Windows programs using Wine, and can run .NET programs using Mono.

FreeBSD's ports/packages system allows you to compile software with specific configurations, thereby avoiding conflicting dependency and version issues.

Both the FreeBSD and GNU/Linux projects are always receiving updates. The platform you decide to go with is largely dependent on what you want to use it for, your technical know-how, your willingness to learn new stuff, and ultimately your preference.

What is your take on the topic? For what reasons would you choose FreeBSD over Linux, if you would? Let us know what you think about both platforms in the comments section below.

###PS4 5.05 BPF Double Free Kernel Exploit Writeup

Introduction

Welcome to the 5.0x kernel exploit write-up. A few months ago, a kernel vulnerability was discovered by qwertyoruiopz, and an exploit was released for BPF which involved crafting an out-of-bounds (OOB) write via a use-after-free (UAF) due to the lack of proper locking. It was a fun bug, and a very trivial exploit.
Sony then removed the write functionality from BPF, so that exploit was patched. However, the core issue still remained (the lack of locking). A very similar race condition still exists in BPF past 4.55, which we will go into detail on below. The full source of the exploit can be found here. This bug is no longer accessible past 5.05 firmware, however, because the BPF driver has finally been blocked from unprivileged processes - WebKit can no longer open it. Sony also introduced a new security mitigation in 5.0x firmwares to prevent the stack pointer from pointing into user space; we'll go into more detail on this a bit further down.

Assumptions

Some assumptions are made of the reader's knowledge for the writeup. The avid reader should have a basic understanding of how memory allocators work - more specifically, how malloc() and free() allocate and deallocate memory respectively. They should also be aware that devices can be issued commands concurrently, as in, one command could be received while another one is being processed via threading. An understanding of C, x86, and exploitation basics is also very helpful, though not necessarily required.

Background

This section contains some helpful information for those newer to exploitation, or unfamiliar with device drivers or various exploit techniques such as heap spraying and race conditions. Feel free to skip to the "A Tale of Two Free()'s" section if you're already familiar with this material.

What Are Drivers?

There are a few ways that applications can directly communicate with the operating system. One of these is system calls, of which there are over 600 in the PS4 kernel; ~500 of them are FreeBSD - the rest are Sony-implemented. Another method is through something called "device drivers". Drivers are typically used to bridge the gap between software and hardware devices (USB drives, keyboard/mouse, webcams, etc.) - though they can also be used just for software purposes.

There are a few operations that a userland application can perform on a driver (if it has sufficient permissions) to interface with it after opening it. In some instances, one can read from it, write to it, or in some cases, issue more complex commands to it via the ioctl() system call. The handlers for these commands are implemented in kernel space - this is important, because any bug that could be exploited in an ioctl handler can be used as a privilege escalation straight to ring0 - typically the most privileged state. Drivers are often the weaker points of an operating system for attackers, because sometimes these drivers are written by developers who don't understand how the kernel works, or the drivers are older and thus not wise to newer attack methods.

The BPF Device Driver

If we take a look around inside of WebKit's sandbox, we'll find a /dev directory. While this may seem like the root device driver path, it's a lie. Many of the drivers that the PS4 has are not exposed to this directory, but rather only ones that are needed for WebKit's operation (for the most part). For some reason though, the BPF (a.k.a. the "Berkeley Packet Filter") device is not only exposed to WebKit's sandbox - WebKit also has the privileges to open the device as R/W. This is very odd, because on most systems this driver is root-only (and for good reason). If you want to read more into this, refer to my previous write-up for 4.55FW.

What Are Packet Filters?

Below is an excerpt from the 4.55 bpfwrite writeup.
Since the bug is directly in the filter system, it is important to know the basics of what packet filters are. Filters are essentially sets of pseudo-instructions that are parsed by bpf_filter() (which are run when packets are received). While the pseudo-instruction set is fairly minimal, it allows you to do things like perform basic arithmetic operations and copy values around inside its buffer. Breaking down the BPF VM in its entirety is far beyond the scope of this write-up; just know that the code produced by it is run in kernel mode - this is why read/write access to /dev/bpf should be privileged.

Race Conditions

Race conditions occur when two processes/threads try to access a shared resource at the same time without mutual exclusion. The problem was ultimately solved by introducing concepts such as the "mutex" or "lock". The idea is that when one thread/process tries to access a resource, it will first acquire a lock, access it, then unlock it once it's finished. If another thread/process tries to access it while the other has the lock, it will wait until the other thread is finished. This works fairly well - when it's used properly. Locking is hard to get right, especially when you try to implement fine-grained locking for performance. One single instruction or line of code outside the locking window could introduce a race condition. Not all race conditions are exploitable, but some are (such as this one) - and they can give an attacker very powerful bugs to work with.

Heap Spraying

The process of heap spraying is fairly simple - allocate a bunch of memory, fill it with controlled data in a loop, and pray your allocation doesn't get stolen from underneath you. It's a very useful technique when exploiting something such as a use-after-free(), as you can use it to get controlled data into your target object's backing memory. By extension, it's useful to do this for a double free() as well, because once we have a stale reference, we can use a heap spray to control the data. Since the object will be marked "free", the allocator will eventually provide us with control over this memory, even though something else is still using it. That is, unless something else has already stolen the pointer from you and corrupted it - then you'll likely get a system crash, and that's no fun. This is one factor that adds to the variance of exploits, and typically, the smaller the object, the more likely this is to happen.

Follow the link to read more of the article

DigitalOcean http://do.co/bsdnow

###OpenBSD gains Wi-Fi "auto-join"

In a change which is bound to be welcomed widely, -current has gained "auto-join" for Wi-Fi networks. Peter Hessler (phessler@) has been working on this for quite some time, and he wrote about it in his p2k18 hackathon report. He has committed the work from the g2k18 hackathon in Ljubljana:

CVSROOT: /cvs
Module name: src
Changes by: phessler@cvs.openbsd.org 2018/07/11 14:18:09

Modified files:
sbin/ifconfig : ifconfig.8 ifconfig.c
sys/net80211 : ieee80211_ioctl.c ieee80211_ioctl.h ieee80211_node.c ieee80211_node.h ieee80211_var.h

Log message:
Introduce 'auto-join' to the wifi 802.11 stack.

This allows a system to remember which ESSIDs it wants to connect to, any relevant security configuration, and switch to it when the network we are currently connected to is no longer available.

Works when connecting and switching between WPA2/WPA1/WEP/clear encryptions.
example hostname.if:

join home wpakey password
join work wpakey mekmitasdigoat
join open-lounge
join cafe wpakey cafe2018
join "wepnetwork" nwkey "12345"
dhcp
inet6 autoconf
up

OK stsp@ reyk@ and enthusiasm from every hackroom I've been in for the last 3 years

The usage should be clear from the commit message, but basically you 'join' all the networks you want to auto-join, as you would previously use 'nwid' to connect to one specific network. Then the kernel will join the network that's actually in range and do the rest automagically for you. When you move out of range of that network you lose connectivity, until you come in range of the original (where things will continue to work as you've been used to) or one of the other networks (where you will associate and then get a new lease). Thanks to Peter for working on this feature - something many a Wi-Fi-using OpenBSD user will be able to benefit from.

###FreeBSD Jails the hard way

There are many great options for managing FreeBSD jails. iocage, warden, and ezjail aim to streamline the process and make it quick and easy to get going. But sometimes the tools built right into the OS are overlooked. This post goes over what is involved in creating and managing jails using only the tools built into FreeBSD.

For this guide, I'm going to be putting my jails in /usr/local/jails. I'll start with a very simple, isolated jail. Then I'll go over how to use ZFS snapshots, and lastly nullfs mounts to share the FreeBSD base files with multiple jails. I'll also show some examples of how to use the templating power of jail.conf to apply similar settings to all your jails.

Full Jail

Make a directory for the jail, or a ZFS dataset if you prefer.
Download the FreeBSD base files, and any other parts of FreeBSD you want. In this example I'll include the 32-bit libraries as well.
Update your FreeBSD base install.
Verify your download. We're downloading these archives over FTP after all; we should confirm that this download is valid and not tampered with. The freebsd-update IDS command verifies the installation using a PGP key which is in your base system, which was presumably installed with an ISO that you verified using the FreeBSD signed checksums. Admittedly this step is a bit of paranoia, but I think it's prudent.
Make sure your jail has the right timezone, DNS servers, and a hostname in rc.conf.
Edit jail.conf with the details about your jail.
Start and log in to your jail.

11 commands and a config file, but this is the most tedious way to make a jail. With a little bit of templating it can be even easier. So I'll start by making a template. Making a template is basically the same as steps 1, 2 and 3 above, but with a different destination folder; I'll condense them here.

Creating a template

Create a template folder, or a ZFS dataset. If you'd like to use the zfs clone method of deploying templates, you'll need to create a ZFS dataset instead of a folder.
Update your template with freebsd-update.
Verify your install.

And that's it; now you have a fully up-to-date jail template. If you've made this template with ZFS, you can easily deploy it using ZFS snapshots.

Deploying a template with ZFS snapshots

Create a snapshot. My last freebsd-update to my template brought it to patch level 10, so I'll call my snapshot p10.
Clone the snapshot to a new jail.
Configure the jail hostname.
Add the jail definition to jail.conf; make sure you have the global jail settings from jail.conf listed in the full jail example.
Start the jail.
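Put together, those deployment steps and a matching jail.conf entry might look something like this. The dataset layout, jail name, address, and interface are assumptions for illustration, not values from the article:

```
# /etc/jail.conf: shared defaults, then one jail definition
exec.start = "/bin/sh /etc/rc";
exec.stop = "/bin/sh /etc/rc.shutdown";
exec.clean;
mount.devfs;

web1 {
    path = "/usr/local/jails/web1";
    host.hostname = "web1.example.org";
    ip4.addr = 192.168.1.50;
    interface = "em0";
}
```

And the clone-and-start sequence, again under the same assumptions:

```
zfs snapshot zroot/jails/template@p10
zfs clone zroot/jails/template@p10 zroot/jails/web1
echo 'hostname="web1"' >> /usr/local/jails/web1/etc/rc.conf
jail -c web1            # create/start the jail defined in jail.conf
jexec web1 /bin/sh      # get a shell inside it
```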
The downside of the ZFS approach is that each jail is now fully independent: if you need to update your jails, you have to update them all individually. By sharing a template using nullfs mounts, you can have only one copy of the base system that only needs to be updated once.

Follow the link to see the rest of the article about thin jails using nullfs mounts.

Simplifying jail.conf

Hopefully this has helped you understand the process of how to create and manage FreeBSD jails without tools that abstract away all the details. Those tools are often quite useful, but there is always benefit in learning to do things the hard way. And in this case, the hard way doesn't seem to be that hard after all.

##Beastie Bits

Meetup in Zurich #4, July edition (July 19) – Which you likely missed, but now you know to look for the August edition!
The next two BSD-PL User group meetings in Warsaw have been scheduled for July 30th and Aug 9th @ 1830 CEST – Submit your topic proposals now
Linux Geek Books - Humble Bundle
Extend loader(8) geli support to all architectures and all disk-like devices
Upgrading from a bootpool to a single encrypted pool – skip the gptzfsboot part, and manually update your EFI partition with loader.efi
The pkgsrc 2018Q2 for Illumos is available with 18500+ binary packages
NetBSD ARM64 Images Available with SMP for RPi3 / NanoPi / Pine64 Boards
Recently released CDE 2.3.0 running on Tribblix (Illumos)
An Interview With Tech & Science Fiction Author Michael W Lucas
A reminder: MeetBSD CFP
EuroBSDCon talk acceptances have gone out, and once the tutorials are confirmed, registration will open. That will likely have happened by the time you see this episode, so go register! See you in Romania

Tarsnap

##Feedback/Questions

Wilyarti - Adblocked on FreeBSD Continued…
Andrew - A Question and a Story
Matthew - Thanks
Brian - PCI-E Controller

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 255: What Are You Pointing At | BSD Now 255
What ZFS block pointers are, zero-day rewards offered, KDE on FreeBSD status, new FreeBSD core team, NetBSD WiFi refresh, poor man’s CI, and the power of Ctrl+T. ##Headlines What ZFS block pointers are and what’s in them I’ve mentioned ZFS block pointers in the past; for example, when I wrote about some details of ZFS DVAs, I said that DVAs are embedded in block pointers. But I’ve never really looked carefully at what is in block pointers and what that means and implies for ZFS. The very simple way to describe a ZFS block pointer is that it’s what ZFS uses in places where other filesystems would simply put a block number. Just like block numbers but unlike things like ZFS dnodes, a block pointer isn’t a separate on-disk entity; instead it’s an on-disk data format and an in-memory structure that shows up in other things. To quote from the (draft and old) ZFS on-disk specification (PDF): A block pointer (blkptr_t) is a 128 byte ZFS structure used to physically locate, verify, and describe blocks of data on disk. Block pointers are embedded in any ZFS on disk structure that points directly to other disk blocks, both for data and metadata. For instance, the dnode for a file contains block pointers that refer to either its data blocks (if it’s small enough) or indirect blocks, as I saw in this entry. However, as I discovered when I paid attention, most things in ZFS only point to dnodes indirectly, by giving their object number (either in a ZFS filesystem or in pool-wide metadata). So what’s in a block pointer itself? You can find the technical details for modern ZFS in spa.h, so I’m going to give a sort of summary. A regular block pointer contains: various metadata and flags about what the block pointer is for and what parts of it mean, including what type of object it points to. Up to three DVAs that say where to actually find the data on disk. There can be more than one DVA because you may have set the copies property to 2 or 3, or this may be metadata (which normally has two copies and may have more for sufficiently important metadata). The logical size (size before compression) and ‘physical’ size (the nominal size after compression) of the disk block. The physical size can do odd things and is not necessarily the asize (allocated size) for the DVA(s). The txgs that the block was born in, both logically and physically (the physical txg is apparently for dva[0]). The physical txg was added with ZFS deduplication but apparently also shows up in vdev removal. The checksum of the data the block pointer describes. This checksum implicitly covers the entire logical size of the data, and as a result you must read all of the data in order to verify it. This can be an issue on raidz vdevs or if the block had to use gang blocks. Just like basically everything else in ZFS, block pointers don’t have an explicit checksum of their contents. Instead they’re implicitly covered by the checksum of whatever they’re embedded in; the block pointers in a dnode are covered by the overall checksum of the dnode, for example. Block pointers must include a checksum for the data they point to because such data is ‘out of line’ for the containing object. (The block pointers in a dnode don’t necessarily point straight to data. If there’s more than a bit of data in whatever the dnode covers, the dnode’s block pointers will instead point to some level of indirect block, which itself has some number of block pointers.) There is a special type of block pointer called an embedded block pointer.
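(As an aside: you can inspect real block pointers on a live pool with zdb(8). A hedged example, where the pool, dataset and object number are hypothetical:
# zdb -ddddd tank/home 8
At that verbosity zdb walks the object's indirect block tree and prints one line per block pointer, showing its DVAs, logical and physical sizes, birth txg and checksum, i.e. exactly the fields described above.)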
Embedded block pointers directly contain up to 112 bytes of data; apart from the data, they contain only the metadata fields and a logical birth txg. As with conventional block pointers, this data is implicitly covered by the checksum of the containing object. Since block pointers directly contain the address of things on disk (in the form of DVAs), they have to change any time that address changes, which means any time ZFS does its copy on write thing. This forces a change in whatever contains the block pointer, which in turn ripples up to another block pointer (whatever points to said containing thing), and so on until we eventually reach the Meta Object Set and the uberblock. How this works is a bit complicated, but ZFS is designed to generally make this a relatively shallow change with not many levels of things involved (as I discovered recently). As far as I understand things, the logical birth txg of a block pointer is the transaction group in which the block pointer was allocated. Because of ZFS’s copy on write principle, this means that nothing underneath the block pointer has been updated or changed since that txg; if something changed, it would have been written to a new place on disk, which would have forced a change in at least one DVA and thus a ripple of updates that would update the logical birth txg. However, this doesn’t quite mean what I used to think it meant because of ZFS’s level of indirection. If you change a file by writing data to it, you will change some of the file’s block pointers, updating their logical birth txg, and you will change the file’s dnode. However, you won’t change any block pointers and thus any logical birth txgs for the filesystem directory the file is in (or anything else up the directory tree), because the directory refers to the file through its object number, not by directly pointing to its dnode. You can still use logical birth txgs to efficiently find changes from one txg to another, but you won’t necessarily get a filesystem level view of these changes; instead, as far as I can see, you will basically get a view of what object(s) in a filesystem changed (effectively, what inode numbers changed). (ZFS has an interesting hack to make things like ‘zfs diff’ work far more efficiently than you would expect in light of this, but that’s going to take yet another entry to cover.) ###Rewards of Up to $500,000 Offered for FreeBSD, OpenBSD, NetBSD, Linux Zero-Days Exploit broker Zerodium is offering rewards of up to $500,000 for zero-days in UNIX-based operating systems like OpenBSD, FreeBSD, NetBSD, but also for Linux distros such as Ubuntu, CentOS, Debian, and Tails. The offer, first advertised via Twitter earlier this week, is available as part of the company’s latest zero-day acquisition drive. Zerodium is known for buying zero-days and selling them to government agencies and law enforcement. The company runs a regular zero-day acquisition program through its website, but it often holds special drives with more substantial rewards when it needs zero-days of a specific category. BSD zero-day rewards will be on par with Linux payouts The US-based company held a previous drive with increased rewards for Linux zero-days in February, with rewards going as high as $45,000. In another zero-day acquisition drive announced on Twitter this week, the company said it was looking again for Linux zero-days, but also for exploits targeting BSD systems. This time around, rewards can go up to $500,000 for the right exploit.
Zerodium told Bleeping Computer they’ll be aligning the temporary rewards for BSD systems with their usual payouts for Linux distros. The company’s usual payouts for Linux privilege escalation exploits can range from $10,000 to $30,000. Local privilege escalation (LPE) rewards can even reach $100,000 for “an exploit with an exceptional quality and coverage,” for example a Linux kernel exploit affecting all major distributions. Payouts for Linux remote code execution (RCE) exploits can bring in from $50,000 to $500,000 depending on the targeted software/service and its market share. The highest rewards are usually awarded for LPEs and RCEs affecting CentOS and Ubuntu distros. Zero-day price varies based on exploitation chain The acquisition price of a submitted zero-day is directly tied to its requirements in terms of user interaction (no click, one click, two clicks, etc.), Zerodium said. Other factors include the exploit reliability, its success rate, the number of vulnerabilities chained together for the final exploit to work (more chained bugs means more chances for the exploit to break unexpectedly), and the OS configuration needed for the exploit to work (exploits are valued more if they work against default OS configs). Zero-days in servers “can reach exceptional amounts” “Price difference between systems is mostly driven by market shares,” Zerodium founder Chaouki Bekrar told Bleeping Computer via email. Asked about the logic behind these acquisition drives that pay increased rewards, Bekrar told Bleeping Computer the following: "Our aim is to always have, at any time, two or more fully functional exploits for every major software, hardware, or operating systems, meaning that from time to time we would promote a specific software/system on our social media to acquire new codes and strengthen our existing capabilities or extend them.” “We may also react to customers’ requests and their operational needs,” Bekrar said. It’s becoming a crowded market Since Zerodium drew everyone’s attention to the exploit brokerage market in 2015, the market has gotten more and more crowded, but also sleazier, with some companies being accused of selling zero-days to government agencies in countries with oppressive or dictatorial regimes, where they are often used against political opponents, journalists, and dissidents, instead of going after real criminals. The latest company to break into the zero-day brokerage market is Crowdfense, which recently launched an acquisition program with a $10 million prize pool, of which it has already paid out $4.5 million to researchers. Twitter Announcement Digital Ocean http://do.co/bsdnow ###KDE on FreeBSD – June 2018 The KDE-FreeBSD team (a half-dozen hardy individuals, with varying backgrounds and varying degrees of involvement depending on how employment is doing) has a status message in the #kde-freebsd channel on freenode. Right now it looks like this: http://FreeBSD.kde.org | Bleeding edge http://FreeBSD.kde.org/area51.php | Released: Qt 5.10.1, KDE SC 4.14.3, KF5 5.46.0, Applications 18.04.1, Plasma-5.12.5, Kdevelop-5.2.1, Digikam-5.9.0 It’s been a while since I wrote about KDE on FreeBSD, what with Calamares and third-party software happening as well. We’re better at keeping the IRC topic up-to-date than a lot of other sources of information (e.g. the FreeBSD quarterly reports, or the f.k.o website, which I’ll just dash off and update after writing this).
In no particular order: Qt 5.10 is here, in a FrankenEngine incarnation: we still use WebEngine from Qt 5.9 because — like I’ve said before — WebEngine is such a gigantic pain in the butt to update with all the necessary patches to get it to compile. Our collection of downstream patches to Qt 5.10 is growing, slowly. None of them are upstreamable (e.g. libressl support) though. KDE Frameworks releases are generally pushed to ports within a week or two of release. Actually, now that there is a bigger stack of KDE software in FreeBSD ports, the updates take longer because we have to do exp-runs. Similarly, Applications and Plasma releases are reasonably up-to-date. We dodged a bullet by not jumping on Plasma 5.13 right away, I see. Tobias is the person doing almost all of the drudge-work of these updates; he deserves a pint of something in Vienna this summer. The freebsd.kde.org website has been slightly updated; it was terribly out-of-date. So we’re mostly-up-to-date, and mostly all packaged up and ready to go. Much of my day is spent in VMs packaged by other people, but it’s good to have a full KDE developer environment outside of them as well. (PS. Gotta hand it to Tomasz for the amazing application for downloading and displaying a flamingo … niche use cases FTW) ##News Roundup New FreeBSD Core Team Elected Active committers to the project have elected your tenth FreeBSD Core Team. Allan Jude (allanjude) Benedict Reuschling (bcr) Brooks Davis (brooks) Hiroki Sato (hrs) Jeff Roberson (jeff) John Baldwin (jhb) Kris Moore (kmoore) Sean Chittenden (seanc) Warner Losh (imp) Let’s extend our gratitude to the outgoing Core Team members: Baptiste Daroussin (bapt) Benno Rice (benno) Ed Maste (emaste) George V. Neville-Neil (gnn) Matthew Seaman (matthew) Matthew, after having served as the Core Team Secretary for the past four years, will be stepping down from that role. The Core Team would also like to thank Dag-Erling Smørgrav for running a flawless election. To read about the responsibilities of the Core Team, refer to https://www.freebsd.org/administration.html#t-core. ###NetBSD WiFi refresh The NetBSD Foundation is pleased to announce a summer 2018 contract with Philip Nelson (phil@NetBSD.org) to update the IEEE 802.11 stack, basing the update on the FreeBSD current code. The goals of the project are: Minimizing the differences between the FreeBSD and NetBSD IEEE 802.11 stacks so future updates are easier. Adding support for the newer protocols 802.11n and 802.11ac. Improving SMP support in the IEEE 802.11 stack. Adding Virtual Access Point (VAP) support. Updating as many NIC drivers as time permits for the updated IEEE 802.11 stack and VAP changes. Status reports will be posted to tech-net@NetBSD.org every other week while the contract is active. iXsystems ###Poor Man’s CI - Hosted CI for BSD with shell scripting and duct tape Poor Man’s CI (PMCI - Poor Man’s Continuous Integration) is a collection of scripts that taken together work as a simple CI solution that runs on Google Cloud. While there are many advanced hosted CI systems today, and many of them are free for open source projects, none of them seem to offer a solution for the BSD operating systems (FreeBSD, NetBSD, OpenBSD, etc.) The architecture of Poor Man’s CI is system agnostic. However, in the implementation provided in this repository, the only supported systems are FreeBSD and NetBSD. Support for additional systems is possible. Poor Man’s CI runs on the Google Cloud.
It is possible to set it up so that the service fits within the Google Cloud “Always Free” limits. In doing so the provided CI is not only hosted, but is also free! (Disclaimer: I am not affiliated with Google and do not otherwise endorse their products.) ARCHITECTURE A CI solution listens for “commit” (or more usually “push”) events, builds the associated repository at the appropriate place in its history and reports the results. Poor Man’s CI implements this very basic CI scenario using a simple architecture, which we present in this section. Poor Man’s CI consists of the following components and their interactions: Controller: Controls the overall process of accepting GitHub push events and starting builds. The Controller runs in the Cloud Functions environment and is implemented by the files in the controller source directory. It consists of the following components: Listener: Listens for GitHub push events and posts them as work messages to the workq PubSub. Dispatcher: Receives work messages from the workq PubSub and a free instance name from the Builder Pool. It instantiates a builder instance named name in the Compute Engine environment and passes it the link of a repository to build. Collector: Receives done messages from the doneq PubSub and posts the freed instance name back to the Builder Pool. PubSub Topics: workq: Transports work messages that contain the link of the repository to build. poolq: Implements the Builder Pool, which contains the names of available builder instances. To acquire a builder name, pull a message from the poolq. To release a builder name, post it back into the poolq. doneq: Transports done messages (builder instance terminate and delete events). These messages contain the names of freed builder instances. builder: A builder is a Compute Engine instance that performs a build of a repository and shuts down when the build is complete. A builder is instantiated from a VM image and a startx (startup-exit) script. Build Logs: A Storage bucket that contains the logs of builds performed by builder instances. Logging Sink: A Logging Sink captures builder instance terminate and delete events and posts them into the doneq. BUGS The Builder Pool is currently implemented as a PubSub; messages in the PubSub contain the names of available builder instances. Unfortunately a PubSub retains its messages for a maximum of 7 days. It is therefore possible that messages will be discarded and that your PMCI deployment will suddenly find itself out of builder instances. If this happens you can reseed the Builder Pool by running the commands below. However this is a serious BUG that should be fixed. For a related discussion see https://tinyurl.com/ybkycuub.
$ ./pmci queuepost poolq builder0
$ ./pmci queuepost poolq builder1
# ... repeat for as many builders as you want
The Dispatcher is implemented as a Retry Background Cloud Function. It accepts work messages from the workq and attempts to pull a free name from the poolq. If that fails it returns an error, which instructs the infrastructure to retry. Because the infrastructure does not provide any retry controls, this currently happens immediately and the Dispatcher spins unproductively. This is currently mitigated by a “sleep” (setTimeout), but the Cloud Functions system still counts the Function as running and charges it accordingly. While this fits within the “Always Free” limits, it is something that should eventually be fixed (perhaps by the PubSub team). For a related discussion see https://tinyurl.com/yb2vbwfd.
###The Power of Ctrl-T Did you know that you can check what a process is doing by pressing CTRL+T? Has it ever happened to you that you were waiting for something long-running to finish, with no easy way to check its status? Like a dd, cp, mv and many others. All you have to do is press CTRL+T in the terminal where the process is running. This will output what’s happening and will not interrupt or mess with the process in any way. Pressing CTRL+T causes the terminal driver to send the SIGINFO signal to the process. On FreeBSD it looks like this:
ping pingtest.com
PING pingtest.com (5.22.149.135): 56 data bytes
64 bytes from 5.22.149.135: icmp_seq=0 ttl=51 time=86.232 ms
64 bytes from 5.22.149.135: icmp_seq=1 ttl=51 time=85.477 ms
64 bytes from 5.22.149.135: icmp_seq=2 ttl=51 time=85.493 ms
64 bytes from 5.22.149.135: icmp_seq=3 ttl=51 time=85.211 ms
64 bytes from 5.22.149.135: icmp_seq=4 ttl=51 time=86.002 ms
load: 1.12 cmd: ping 94371 [select] 4.70r 0.00u 0.00s 0% 2500k
5/5 packets received (100.0%) 85.211 min / 85.683 avg / 86.232 max
64 bytes from 5.22.149.135: icmp_seq=5 ttl=51 time=85.725 ms
64 bytes from 5.22.149.135: icmp_seq=6 ttl=51 time=85.510 ms
As you can see it not only outputs the name of the running command but the following parameters as well:
94371 – PID
4.70r – how long the process has been running
0.00u – user time
0.00s – system time
0% – CPU usage
2500k – resident set size of the process (RSS)
An even better example is with the following cp command:
cp FreeBSD-11.1-RELEASE-amd64-dvd1.iso /dev/null
load: 0.99 cmd: cp 94412 [runnable] 1.61r 0.00u 0.39s 3% 3100k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 15%
load: 0.91 cmd: cp 94412 [runnable] 2.91r 0.00u 0.80s 6% 3104k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 32%
load: 0.91 cmd: cp 94412 [runnable] 4.20r 0.00u 1.23s 9% 3104k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 49%
load: 0.91 cmd: cp 94412 [runnable] 5.43r 0.00u 1.64s 11% 3104k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 64%
load: 1.07 cmd: cp 94412 [runnable] 6.65r 0.00u 2.05s 13% 3104k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 79%
load: 1.07 cmd: cp 94412 [runnable] 7.87r 0.00u 2.43s 15% 3104k
FreeBSD-11.1-RELEASE-amd64-dvd1.iso -> /dev/null 95%
I pressed CTRL+T six times. Without that, the only output would have been the first line. Another example shows how the process changes states:
wget https://download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-dvd1.iso
--2018-06-17 18:47:48-- https://download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-dvd1.iso
Resolving download.freebsd.org (download.freebsd.org)... 96.47.72.72, 2610:1c1:1:606c::15:0
Connecting to download.freebsd.org (download.freebsd.org)|96.47.72.72|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3348465664 (3.1G) [application/octet-stream]
Saving to: 'FreeBSD-11.1-RELEASE-amd64-dvd1.iso'
FreeBSD-11.1-RELEASE-amd64-dvd1.iso 1%[> ] 41.04M 527KB/s eta 26m 49s load: 4.95 cmd: wget 10152 waiting 0.48u 0.72s
FreeBSD-11.1-RELEASE-amd64-dvd1.iso 1%[> ] 49.41M 659KB/s eta 25m 29s load: 12.64 cmd: wget 10152 waiting 0.55u 0.85s
FreeBSD-11.1-RELEASE-amd64-dvd1.iso 2%[=> ] 75.58M 6.31MB/s eta 20m 6s load: 11.71 cmd: wget 10152 running 0.73u 1.19s
FreeBSD-11.1-RELEASE-amd64-dvd1.iso 2%[=> ] 85.63M 6.83MB/s eta 18m 58s load: 11.71 cmd: wget 10152 waiting 0.80u 1.32s
FreeBSD-11.1-RELEASE-amd64-dvd1.iso 14%[==============> ] 460.23M 7.01MB/s eta 9m 0s
The bad news is that CTRL+T doesn’t work on Linux, but you can use it on macOS/OS X:
---> Fetching distfiles for gmp
---> Attempting to fetch gmp-6.1.2.tar.bz2 from https://distfiles.macports.org/gmp
---> Verifying checksums for gmp
---> Extracting gmp
---> Applying patches to gmp
---> Configuring gmp
load: 2.81 cmd: clang 74287 running 0.31u 0.28s
PS: If I recall correctly, Feld showed me CTRL+T. Thank you! Beastie Bits Half billion tries for a HAMMER2 bug (http://lists.dragonflybsd.org/pipermail/commits/2018-May/672263.html) OpenBSD with various Desktops OpenBSD 6.3 running twm window manager (https://youtu.be/v6XeC5wU2s4) OpenBSD 6.3 jwm and rox desktop (https://youtu.be/jlSK2oi7CBc) OpenBSD 6.3 cwm youtube video (https://youtu.be/mgqNyrP2CPs) pf: Increase default state table size (https://svnweb.freebsd.org/base?view=revision&revision=336221) *** Tarsnap Feedback/Questions Ben Sims - Full feed? (http://dpaste.com/3XVH91T#wrap) Scott - Questions and Comments (http://dpaste.com/08P34YN#wrap) Troels - Features of FreeBSD 11.2 that deserve a mention (http://dpaste.com/3DDPEC2#wrap) Fred - Show Ideas (http://dpaste.com/296ZA0P#wrap) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) iXsystems It's all NAS (https://www.ixsystems.com/blog/its-all-nas/)
Episode 254: Bare the OS | BSD Now 254
Control flow integrity with HardenedBSD, fixing bufferbloat with OpenBSD’s pf, Bareos Backup Server on FreeBSD, MeetBSD CfP, crypto simplified interface, twitter gems, interesting BSD commits, and more. ##Headlines ###Cross-DSO CFI in HardenedBSD Control Flow Integrity, or CFI, raises the bar for attackers aiming to hijack control flow and execute arbitrary code. The llvm compiler toolchain, included and used by default in HardenedBSD 12-CURRENT/amd64, supports forward-edge CFI. Backward-edge CFI support is gained via a tangential feature called SafeStack. Cross-DSO CFI builds upon ASLR and PaX NOEXEC for effectiveness. HardenedBSD supports non-Cross-DSO CFI in base for 12-CURRENT/amd64 and has it enabled for a few individual ports. The term “non-Cross-DSO CFI” means that CFI is enabled for code within an application’s codebase, but not for the shared libraries it depends on. Supporting non-Cross-DSO CFI is an important initial milestone for supporting Cross-DSO CFI, or CFI applied to both shared libraries and applications. This article discusses where HardenedBSD stands with regards to Cross-DSO CFI in base. We have made a lot of progress, yet we’re not even half-way there. Brace yourself: This article is going to be full of references to “Cross-DSO CFI.” Make a drinking game out of it. Or don’t. It’s your call. ;) Using More llvm Toolchain Components CFI requires compiling source files with Link-Time Optimization (LTO). I remembered hearing a few years back that llvm developers were able to compile the entirety of FreeBSD’s source code with LTO. Compiling with LTO produces intermediate object files as LLVM IR bitcode instead of ELF objects. In March of 2017, we started compiling all applications with LTO and non-Cross-DSO CFI. This also enabled ld.lld as the default linker in base since CFI requires lld. Commit f38b51668efcd53b8146789010611a4632cafade made the switch to ld.lld as the default linker while enabling non-Cross-DSO CFI at the same time. Building libraries in base requires applications like ar, ranlib, nm, and objdump. In FreeBSD 12-CURRENT, ar and ranlib are known as “BSD ar” and “BSD ranlib.” In fact, ar and ranlib are the same application. One is hardlinked to the other and the application changes behavior depending on argv[0] ending in “ranlib”. The ar, nm, and objdump used in FreeBSD do not support LLVM IR bitcode object files. In preparation for Cross-DSO CFI support, commit fe4bb0104fc75c7216a6dafe2d7db0e3f5fe8257 in October 2017 saw HardenedBSD switching ar, ranlib, nm, and objdump to their respective llvm components. The llvm versions do support LLVM IR bitcode object files (surprise!). There has been some fallout in the ports tree and we’ve added LLVM_AR_UNSAFE and friends to help transition those ports that dislike llvm-ar, llvm-ranlib, llvm-nm, and llvm-objdump. With ld.lld, llvm-ar, llvm-ranlib, llvm-nm, and llvm-objdump the default, HardenedBSD has effectively switched to a full llvm compiler toolchain in 12-CURRENT/amd64. Building Libraries With LTO The primary 12-CURRENT development branch in HardenedBSD (hardened/current/master) only builds applications with LTO as mentioned in the section above. My first attempt at building all static and shared libraries failed due to issues within llvm itself. I reported these issues to FreeBSD.
Ed Maste (emaste@), Dimitry Andric (dim@), and llvm’s Rafael Espindola expertly helped address these issues. Various commits within the llvm project by Rafael fully and quickly resolved the issues brought up privately in emails. With llvm fixed, I could now build nearly every library in base with LTO. I noticed, however, that if I kept non-Cross-DSO CFI and SafeStack enabled, all applications would segfault. Even simplistic applications like /bin/ls. Disabling both non-Cross-DSO CFI and SafeStack, but keeping LTO, produced a fully functioning world! I have spent the last few months figuring out why enabling either non-Cross-DSO CFI or SafeStack caused issues. This brings us to today. The Sanitizers in FreeBSD FreeBSD brought in all the files required for SafeStack and CFI. When compiling with SafeStack, llvm statically links a full sanitization framework into the application. FreeBSD includes a full copy of the sanitization framework in SafeStack, including the common C++ sanitization namespaces. Thus, libclang_rt.safestack included code meant to be shared among all the sanitizers, not just SafeStack. I had naively taken a brute-force approach to setting up the libclang_rt.cfi static library. I copied the Makefile from libclang_rt.safestack and used that as a template for libclang_rt.cfi. This approach was incorrect due to breaking the One Definition Rule (ODR). Essentially, I ended up including a duplicate copy of the C++ classes and sanitizer runtime if both CFI and SafeStack were used. In my Cross-DSO CFI development VM, I now have SafeStack disabled across-the-board and am only compiling in CFI. As of 26 May 2018, an LTO-ified world (libs + apps) works in my limited testing. /bin/ls does not crash anymore! The second major milestone for Cross-DSO CFI has now been reached. Known Issues And Limitations There are a few known issues and regressions. Note that this list of known issues essentially also constitutes a “work-in-progress” and every known issue will be fixed prior to the official launch of Cross-DSO CFI. It seems llvm does not like statically compiling applications with LTO that have a mixture of C and C++ code. /sbin/devd is one of these applications. As such, when Cross-DSO CFI is enabled, devd is compiled as a Position-Independent Executable (PIE). Doing this breaks UFS systems where /usr is on a separate partition. We are currently looking into solving this issue to allow devd to be statically compiled again. NO_SHARED is now unset in the tools build stage (aka bootstrap-tools, cross-tools). This is related to the static compilation issue above. Unsetting NO_SHARED for the tools build stage is only a band-aid until we can resolve static compilation with LTO. One goal of our Cross-DSO CFI integration work is to be able to support the cfi-icall scheme when dlopen(3) and dlsym(3)/dlfunc(3) are used. This means the runtime linker (RTLD) must be enhanced to know and care about the CFI runtime. This enhancement is not currently implemented, but is planned. When Cross-DSO CFI is enabled, SafeStack is disabled. This is because compiling with Cross-DSO CFI brings in a second copy of the sanitizer runtime, violating the One Definition Rule (ODR). Resolving this issue should be straightforward: unify the sanitizer runtime into a single common library that both Cross-DSO CFI and SafeStack can link against. When the installed world has Cross-DSO CFI enabled, performing a buildworld with Cross-DSO CFI disabled fails. This is somewhat related to the static compilation issue described above.
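For readers who want to experiment, the non-Cross-DSO flavor can be tried on a single program with a stock clang/lld toolchain. This is a generic sketch, not HardenedBSD's actual build glue, and the file names are placeholders:
$ clang -flto -fsanitize=cfi -fvisibility=hidden -fuse-ld=lld -o demo demo.c
The extra flags mirror the constraints described above: -fsanitize=cfi requires -flto (so objects become LLVM IR bitcode), hidden symbol visibility, and a linker that can consume bitcode, hence lld.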
Current Status I’ve managed to get a Cross-DSO CFI world booting on bare metal (my development laptop) and in a VM. Some applications failed to work. Curiously, Firefox still worked (which also means xorg works). I’m now working through the known issues list, researching and learning. Future Work Fixing pretty much everything in the “Known Issues And Limitations” section. ;P I need to create a static library that includes only a single copy of the common sanitizer framework code. Applications compiled with CFI or SafeStack will then only have a single copy of the framework. Next I will need to integrate support in the RTLD for Cross-DSO CFI. Applications with the cfi-icall scheme enabled that call functions resolved through dlsym(3) currently crash due to the lack of RTLD support. I need to make a design decision as to whether to only support adding cfi-icall whitelist entries with dlfunc(3) or to also whitelist cfi-icall entries with the more widely used dlsym(3). There are likely more items in the “TODO” bucket that I am not currently aware of. I’m treading in uncharted territory. I have no firm ETA for any bit of this work. We may gain Cross-DSO CFI support in 2018, but it’s looking more like 2019 or 2020. Conclusion I have been working on Cross-DSO CFI support in HardenedBSD for a little over a year now. A lot of progress is being made, yet there are still some major hurdles to overcome. This work has already helped improve llvm and I hope more commits upstream to both FreeBSD and llvm will happen. We’re getting closer to being able to send out a preliminary Call For Testing (CFT). At the very least, I would like to solve the static linking issues prior to publishing the CFT. Expect it to be published before the end of 2018. I would like to thank Ed Maste, Dimitry Andric, and Rafael Espindola for their help, guidance, and support. iXsystems FreeNAS 11.2-BETAs are starting to appear ###Bareos Backup Server on FreeBSD Ever heard about Bareos? Probably heard about Bacula. Read about the difference here – Why Bareos forked from Bacula? Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on github and licensed under the AGPLv3 license. Bareos supports ‘Always Incremental’ backups, which is interesting especially for users with big data. The time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as a Bareos subscription service that provides you access to special quality assured installation packages. I started my sysadmin job with the backup system as one of my new responsibilities, so it will be like going back to the roots. As I look at the ‘backup’ market it is more and more popular – especially in cloud oriented environments – to implement various levels of protection like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos.
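To illustrate the rotation described next (this is my own sketch, not the author's actual configuration; the names and times are invented), group A's schedule in Bareos terms could look like:
Schedule {
  Name = "Bronze-A"
  Run = Level=Full sat at 21:00
  Run = Level=Differential sun-fri at 21:00
}
Groups B and C would get the same structure shifted by one day each, so the three FULL backups land on different days and the network load stays balanced.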
I used 3 groups of A, B and C with FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group). This way you still have FULL backups quite often and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups, we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always taken against the FULL backup, so it’s always ‘one level of combining’. INCREMENTAL ones are done against the last backup of whatever type, so it’s possible to have 100+ levels of combining against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here: faster recovery. That is what backups are generally all about, recovery; some people/companies tend to forget that. The implementation of BRONZE in these three groups is not perfect, but ‘does the job’. I also made a ‘simulation’ of how these groups will overlap at the end/beginning of the month, and here is the result. Not bad for my taste. Today I will show you how to install and configure a Bareos Server based on the FreeBSD operating system. It will be the most simplified setup with all services on a single machine: bareos-dir bareos-sd bareos-webui bareos-fd I also assume that, in order to provide storage space for the backup data itself, you would mount resources from external NFS shares. To get in touch with Bareos terminology and technology, check their great Manual in HTML or PDF version, depending on which format you prefer for reading documentation. Their FAQ also provides a lot of needed answers, and this diagram may be useful for getting a grip on the Bareos world. System As every system needs to have a name, we will use the Latin word closest to backup here – replica – for our FreeBSD system hostname. The install is generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with a login prompt.
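On the FreeBSD side, the installation itself boils down to packages plus rc.conf knobs. A hedged sketch; the package and rc variable names below are my assumptions, so verify them against the ports tree:
# pkg install bareos-server bareos-webui
# sysrc bareos_dir_enable=YES
# sysrc bareos_sd_enable=YES
# sysrc bareos_fd_enable=YES
After that, the three daemons can be started with service(8) and configured under /usr/local/etc/bareos (the usual prefix for FreeBSD ports).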
Episode 253: Silence of the Fans | BSD Now 253
Fanless server setup with FreeBSD, NetBSD on pinebooks, another BSDCan trip report, transparent network audio, MirBSD's Korn Shell on Plan9, static site generators on OpenBSD, and more. ##Headlines Silent Fanless FreeBSD Desktop/Server Today I will write about a silent fanless FreeBSD desktop or server computer … or NAS … or you name it, it can have multiple purposes. It is also a very low power solution, which also means that it will not overheat. Silent means no fans at all, even for the PSU. The format of the system should also be brought to a minimum, so Mini-ITX seems the best solution here. I have chosen Intel based solutions as they are very low power (6-10W); if you prefer AMD (as I often do), the closest solution in comparable price and power is the Biostar A68N-2100 motherboard with an AMD E1-2100 CPU and 9W power. Of course AMD has even more low power SoC solutions, but finding a Mini-ITX motherboard with a decent price is not an easy task. For comparison, Intel has lots of such solutions below 6W which can be nicely filtered on the ark.intel.com page. Pity that AMD does not provide such filtering for their products. I also chose a CPU with AES instructions, as storage encryption (GELI on FreeBSD) today seems as obvious as HTTPS for web pages. Here is how the system looks powered up and working. This motherboard uses the Intel J3355 SoC which uses 10W and has AES instructions. It has two cores at your disposal, but it also supports VT-x and EPT extensions so you can even run bhyve on it. Components Now, an example system would look like the one below; here are the components with their prices. $49 CPU/Motherboard ASRock J3355B-ITX Mini-ITX $14 RAM Crucial 4 GB DDR3L 1.35V (low power) $17 PSU 12V 160W Pico (internal) $11 PSU 12V 96W FSP (external) $5 USB 2.0 Drive 16 GB ADATA $4 USB Wireless 802.11n $100 TOTAL The PSU 12V 160W Pico (internal) and PSU 12V 96W FSP (external) can be purchased on aliexpress.com or ebay.com for example; at least that is where I got them. Here is the 12V 160W Pico (internal) PSU and its optional additional cables to power the optional HDDs. Of course it has one SATA power connector and one MOLEX power connector, so an additional MOLEX-SATA power adapter for about $1 would be needed. Here is the 12V 96W FSP (external) PSU without the power cord. This gives a total silent fanless system price of about $120. That is about ONE TENTH OF THE COST of the cheapest FreeNAS hardware solution available – the FreeNAS Mini (Diskless) costs $1156, also without disks. You can put plain FreeBSD on top of it or the Solaris/Illumos distribution OmniOSce, which is server oriented. You can use a prebuilt NAS solution based on FreeBSD like FreeNAS, NAS4Free, ZFSguru, or even Solaris/Illumos based storage with the napp-it appliance. ###An annotated look at a NetBSD Pinebook’s startup Pinebook is an affordable 64-bit ARM notebook. Today we’re going to take a look at the kernel output at startup and talk about what hardware support is available on NetBSD. Pinebook comes with 2GB RAM standard. A small amount of this is reserved by the kernel and framebuffer. NetBSD uses flattened device-tree (FDT) to enumerate devices on all Allwinner based SoCs. On a running system, you can inspect the device tree using the ofctl(8) utility. Pinebook’s Allwinner A64 processor is based on the ARM Cortex-A53. It is designed to run at frequencies up to 1.2GHz. The A64 is a quad core design. NetBSD’s aarch64 pmap does not yet support SMP, so three cores are disabled for now. The interrupt controller is a standard ARM GIC-400 design.
Clock drivers for managing PLLs, module clock dividers, clock gating, software resets, etc. Information about the clock tree is exported in the hw.clk sysctl namespace (root access required to read these values).
# sysctl hw.clk.sun50ia64ccu0.mmc2
hw.clk.sun50ia64ccu0.mmc2.rate = 200000000
hw.clk.sun50ia64ccu0.mmc2.parent = pllperiph02x
hw.clk.sun50ia64ccu0.mmc2.parent_domain = sun50ia64ccu0
Digital Ocean http://do.co/bsdnow ###BSDCan 2018 Trip Report: Mark Johnston BSDCan is a highlight of my summers: the ability to have face-to-face conversations with fellow developers and contributors is invaluable and always helps refresh my enthusiasm for FreeBSD. While in a perfect world we would all be able to communicate effectively over the Internet, it’s often noted that locking a group of developers together in a room can be a very efficient way to make progress on projects that otherwise get strung out over time, and to me this is one of the principal functions of BSD conferences. In my case I was able to fix some kgdb bugs that had been hindering me for months; get some opinions on the design of a feature I’ve been working on for FreeBSD 12.0; hear about some ongoing usage of code that I’ve worked on; and do some pair-debugging of an issue that has been affecting another developer. As is tradition, on Tuesday night I dropped off my things at the university residence where I was staying, and headed straight to the Royal Oak. This year it didn’t seem quite as packed with BSD developers, but I did meet several long-time colleagues and get a chance to catch up. In particular, I chatted with Justin Hibbits and got to hear about the bring-up of FreeBSD on POWER9, a new CPU family released by IBM. Justin was able to acquire a workstation based upon this CPU, which is a great motivator for getting FreeBSD into shape on that platform. POWER9 also has some promise in the server market, so it’s important for FreeBSD to be a viable OS choice there. Wednesday morning saw the beginning of the two-day FreeBSD developer summit, which precedes the conference proper. Gordon Tetlow led the summit and did an excellent job organizing things and keeping to the schedule. The first presentation was by Deb Goodkin of the FreeBSD Foundation, who gave an overview of the Foundation’s role and activities. After Deb’s presentation, present members of the FreeBSD core team discussed the work they had done over the past two years, as well as open tasks that would be handed over to the new core team upon completion of the ongoing election. Finally, Marius Strobl rounded off the day’s presentations by discussing the state and responsibilities of FreeBSD’s release engineering team. One side discussion of interest to me was around the notion of tightening integration with our Bugzilla instance; at the moment we do not have any good means to mark a given bug as blocking a release, making it easy for bugs to slip into releases and thus lowering our overall quality. With FreeBSD 12.0 upon us, I plan to help with the triage and fixes for known regressions before the release process begins. After a break, the rest of the morning was devoted to plans for features in upcoming FreeBSD releases. This is one of my favorite discussion topics and typically takes the form of have/need/want, where developers collectively list features that they’ve developed and intend to upstream (have), features that they are missing (need), and nice-to-have features (want).
This year, instead of the usual format, we listed features that are intended to ship in FreeBSD 12.0. The compiled list ended up being quite ambitious given how close we are to the beginning of the release cycle, but many individual developers (including myself) have signed up to deliver work. I’m hopeful that most, if not all of it, will make it into the release. After lunch, I attended a discussion led by Matt Ahrens and Alexander Motin on OpenZFS. Of particular interest to me were some observations made regarding the relative quantity and quality of contributions made by different “camps” of OpenZFS users (illumos, FreeBSD and ZoL), and their respective track records of upstreaming enhancements to the OpenZFS project. In part due to the high pace of changes in ZoL, the definition of “upstream” for ZFS has become murky, and of late ZFS changes have been ported directly from ZoL. Alexander discussed some known problems with ZFS on FreeBSD that have been discovered through performance testing. While I’m not familiar with ZFS internals, Alexander noted that ZFS’ write path has poor SMP scalability on FreeBSD owing to some limitations in a certain kernel API called taskqueue(9). I would like to explore this problem further and perhaps integrate a relatively new alternative interface which should perform better. Friday and Saturday were, of course, taken up by BSDCan talks. Friday’s keynote was by Benno Rice, who provided some history of UNIX boot systems as a precursor to some discussion of systemd and the difficulties presented by a user and developer community that actively resist change. The rest of the morning was consumed by talks and passed by quickly. First was Colin Percival’s detailed examination of where the FreeBSD kernel spends time during boot, together with an overview of some infrastructure he added to track boot times. He also provided a list of improvements that have been made since he started taking measurements, and some areas we can further improve. Colin’s existing work in this area has already brought about substantial reductions in boot time; amusingly, one of the remaining large delays comes from the keyboard driver, which contains a workaround for old PS/2 keyboards. While there seems to be general agreement that the workaround is probably no longer needed on most systems, the lingering uncertainty around this prevents us from removing the workaround. This is, sadly, a fairly typical example of an OS maintenance burden, and underscores the need to carefully document hardware bug workarounds. After this talk, I got to see some rather novel demonstrations of system tracing using dwatch, a new utility by Devin Teske, which aims to provide a user-friendly interface to DTrace. After lunch, I attended talks on netdump, a protocol for transmitting kernel dumps over a network after the system has panicked, and on a VPC implementation for FreeBSD. After the talks ended, I headed yet again to the hacker lounge and had some fruitful discussions on early microcode loading (one of my features for FreeBSD 12.0). These led me to reconsider some aspects of my approach and saved me a lot of time. Finally, I continued my debugging session from Wednesday with help from a couple of other developers. Saturday’s talks included a very thorough account by Li-Wen Hsu of his work in organizing a BSD conference in Taipei last year. 
As one of the attendees, I had felt that the conference had gone quite smoothly and was taken aback by the number of details and pitfalls that Li-Wen enumerated during his talk. This was followed by an excellent talk by Baptiste Daroussin on the difficulties one encounters when deploying FreeBSD in new environments. Baptiste offered criticisms of a number of aspects of FreeBSD, some of which hit close to home as they involved portions of the system that I’ve worked on. At the conclusion of the talks, we all gathered in the main lecture hall, where Dan led a traditional and quite lively auction for charity. I managed to snag a Pine64 board and will be getting FreeBSD installed on it the first chance I get. At the end of the auction, we all headed to ByWard for dinner, concluding yet another BSDCan. Thanks to Mark for sharing his experiences at this years BSDCan ##News Roundup Transparent network audio with mpd & sndiod Landry Breuil (landry@ when wearing his developer hat) wrote in… I've been a huge fan of MPD over the years to centralize my audio collection, and i've been using it with the http output to stream the music as a radio on the computer i'm currently using… audio_output { type "sndio" name "Local speakers" mixer_type "software" } audio_output { type "httpd" name "HTTP stream" mixer_type "software" encoder "vorbis" port "8000" format "44100:16:2" } this setup worked for years, allows me to stream my home radio to $work by tunnelling the port 8000 over ssh via LocalForward, but that still has some issues: a distinct timing gap between the 'local output' (ie the speakers connected to the machine where MPD is running) and the 'http output' caused by the time it takes to reencode the stream, which is ugly when you walk through the house and have a 15s delay sometimes mplayer as a client doesn't detect the pauses in the stream and needs to be restarted i need to configure/start a client on each computer and point it at the sound server url (can do via gmpc shoutcast client plugin…) it's not that elegant to reencode the stream, and it wastes cpu cycles So the current scheme is: mpd -> http output -> network -> mplayer -> sndiod on remote machine | -> sndio output -> sndiod on soundserver Fiddling a little bit with mpd outputs and reading the sndio output driver, i remembered sndiod has native network support… and the mpd sndio output allows you to specify a device (it uses SIO_DEVANY by default). So in the end, it's super easy to: enable network support in sndio on the remote machine i want the audio to play by adding -L<local ip> to sndiod_flags (i have two audio devices, with an input coming from the webcam): sndiod_flags="-L10.246.200.10 -f rsnd/0 -f rsnd/1" open pf on port 11025 from the sound server ip: pass in proto tcp from 10.246.200.1 to any port 11025 configure a new output in mpd: audio_output { type "sndio" name "sndio on renton" device "snd@10.246.200.10/0" mixer_type "software" } and enable the new output in mpd: $mpc enable 2 Output 1 (Local speakers) is disabled Output 2 (sndio on renton) is enabled Output 3 (HTTP stream) is disabled Results in a big win: no gap anymore with the local speakers, no reencoding, no need to configure a client to play the stream, and i can still probably reproduce the same scheme over ssh from $work using a RemoteForward. 
mpd -> sndio output 2 -> network -> sndiod on remote machine | -> sndio output 1 -> sndiod on soundserver Thanks ratchov@ for sndiod :) ###MirBSD’s Korn Shell on Plan9 Jehanne Let me start by saying that I’m not really a C programmer. My last public contribution to a POSIX C program was a little improvement to Snort’s react module back in 2008. So while I know the C language well enough, I do not know anything about the subtleties of the standard library and I have little experience with POSIX semantics. This is not a big issue with Plan 9, since the C library and compiler are not standard anyway, but with Jehanne (a Plan 9 derivative of my own) I want to build a simple, loosely coupled system that can actually run useful free software ported from UNIX. So I ported RedHat’s newlib to Jehanne on top of a new system library I wrote, LibPOSIX, that provides the necessary emulations. I wrote several tests, checking they run the same on Linux and Jehanne, and then I began looking for a real-world, battle-tested application to port first. I approached MirBSD’s Korn Shell for several reasons: it is simple, powerful and well written; it has been ported to several different operating systems; it has few dependencies; and it’s the default shell in Android, so it’s really battle tested. I was very confident. I had read the POSIX standard after all! And I had a test suite! I remember, I thought “Given newlib, how hard can it be?” The porting began on September 1, 2017. It was completed by tg on January 5, 2018. 125 nights later. Turns out, my POSIX emulation was badly broken. Not just because of the usual bugs that any piece of C can have: I didn’t understand most POSIX semantics at all! iXsystems ###Static site generator with rsync and lowdown on OpenBSD ssg is a tiny POSIX-compliant shell script with few dependencies: lowdown(1) to parse markdown, rsync(1) to copy temporary files, and entr(1) to watch file changes. It renders Markdown articles into a static website. It copies the current directory to a temporary one in /tmp skipping .* and _*, renders all Markdown articles to HTML, generates an RSS feed based on links from index.html, extracts the first <h1> tag from every article to generate a sitemap and uses it as a page title, then wraps articles with a single HTML template and copies everything from the temporary directory to $DOCS/. Why not Jekyll or “$X”? ssg is one hundred times smaller than Jekyll. ssg and its dependencies are about 800KB combined. Compare that to 78MB of ruby with Jekyll and all the gems. So ssg can be installed in just a few seconds on almost any Unix-like operating system. Obviously, ssg is tailored for my needs; it has all the features I need and only those I use. Maintaining ssg helps you master your Unix-shell skills: awk, grep, sed, sh, cut, tr. As a web developer you work with lots of text: code and data. So you had better master these wonderful tools. Performance 100 pps. On modern computers ssg generates a hundred pages per second. Half of the time goes to markdown rendering and the other half to wrapping articles into the template. I heard good static site generators work twice as fast, at 200 pps, so there’s lots of performance that can still be gained. ;) ###Why does FreeBSD have virtually no (0%) desktop market share? Because someone made a horrible design decision back in 1984. In absolute fairness to those involved, it was an understandable decision, both from a research perspective and from an economic perspective, although likely not from a technology perspective. Why and what.
The decision was taken because the X Window System was intended to run on cheap hardware, and, at the time, that meant reduced functionality in the end-point device with the physical display attached to it. At the same time, another force was acting to also limit X displays to display services only, rather than rolling in both window management and specific widget instances for common operational paradigms. Mostly, common operational paradigms didn’t really exist for windowing systems because windowing systems themselves simply didn’t exist at the time, and no one really knew how people were going to use the things, so researchers didn’t want to commit future research to a set of hard constraints. So a decision was made: separate the display services from the application at the lowest level of graphics primitives currently in use at the time. The ramifications of this were pretty staggering. First, it guaranteed that all higher level graphics would live on the host side of the X protocol, instead of on the display device side of the protocol. Despite a good understanding of Moore’s law, and despite the fact that no X Terminals existed as hardware at the time (they ran as emulations on workstations that had sufficient capability), this put the higher level GUI object libraries — referred to as “widgets” — in host libraries linked into the applications. Second, it guaranteed that display organization and management paradigms would also live on the host side of the protocol — assumed, in contradiction to the previous decision, to be running on the workstation. The expectation was that, at some point, as lightweight X Terminals became available, these would migrate to a particular host computer managing compute resource login/access services. Between these early decisions reigned chaos. Specifically, the consequences of these decisions have been with us ever since: Look-and-feel are a consequence of the toolkit chosen by the application programmer, rather than a user decision which applies universally to all applications. You could call this “lack of a theme”, and — although I personally despise the idea of customizing or “theming” desktops — this meant that one paradigm chosen by the user would not apply universally across all applications, no matter who had written them. Window management style is a preference. You could call this a more radical version of “theming” — which, you will remember, I despise — but a consequence of this is that training is not universal across personnel using such systems, nor is it transferable. In other words, I can’t send someone to a class, and have them come back and use the computers in the office as a tool, with the computer itself — and the elements not specific to the application itself — disappearing into the background. Both of these ultimately render an X-based system unsuitable for desktops. I can’t pay once for training. Training that I do pay for does not easily and naturally translate between applications. Each new version may radically alter the desktop management paradigm into unrecognizability. Is there hope for the future? Well, the Linux community has been working on something called Wayland, and it is very promising… …In the same way X was “very promising” in 1984, because, unfortunately, they are making exactly the same mistakes X made in 1984, rather than correcting them, now that we have 20/20 hindsight and know what a mature widget library should look like. So Wayland is screwing up again.
But hey, it only took us, what, 25 years to get from X in 1987 to Wayland in 2012. Maybe if we try again in 2037, we can get to where Windows was in 1995. ##Beastie Bits New washing machine comes with 7 pages of open source licenses! BSD Jobs Site FreeBSD Foundation Update, May 2018 FreeBSD Journal looking for book reviewers zedenv ZFS Boot Environment Manager Tarsnap ##Feedback/Questions Wouter - Feedback Efraim - OS Suggestion kevr - Raspberry Pi2/FreeBSD/Router on a Stick Vanja - Interview Suggestion Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 252: Goes to 11.2 | BSD Now 252
FreeBSD 11.2 has been released, setting up an MTA behind Tor, running pfsense on DigitalOcean, one year of C, using OpenBGPD to announce VM networks, the power to serve, and a BSDCan trip report. ##Headlines FreeBSD 11.2-RELEASE Available FreeBSD 11.2 was released today (June 27th) and is ready for download. Highlights: OpenSSH has been updated to version 7.5p1. OpenSSL has been updated to version 1.0.2o. The clang, llvm, lldb and compiler-rt utilities have been updated to version 6.0.0. The libarchive(3) library has been updated to version 3.3.2. The libxo(3) library has been updated to version 0.9.0. Major device driver updates to: cxgbe(4) – Chelsio 10/25/40/50/100 gigabit NICs – version 1.16.63.0 supports T4, T5 and T6 ixl(4) – Intel 10 and 40 gigabit NICs, updated to version 1.9.9-k ng_pppoe(4) – the driver has been updated to add support for user-supplied Host-Uniq tags New drivers: drm-next-kmod – a driver supporting integrated Intel graphics with the i915 driver mlx5io(4) – a new IOCTL interface for Mellanox ConnectX-4 and ConnectX-5 10/20/25/40/50/56/100 gigabit NICs ocs_fc(4) – Emulex Fibre Channel 8/16/32 gigabit Host Adapters smartpqi(4) – HP Gen10 Smart Array Controller Family The newsyslog(8) utility has been updated to support RFC5424-compliant messages when rotating system logs. The diskinfo(8) utility has been updated to include two new flags: -s, which displays the disk identity (usually the serial number), and -p, which displays the physical path to the disk in a storage controller. The top(1) utility has been updated to allow filtering on multiple user names when the -U flag is used. The umount(8) utility has been updated to include a new flag, -N, which is used to forcefully unmount an NFS mounted filesystem. The ps(1) utility has been updated to display if a process is running with capsicum(4) capability mode, indicated by the flag ‘C’. The service(8) utility has been updated to include a new flag, -j, which is used to interact with services running within a jail(8). The argument to -j can be either the name or numeric jail ID. The mlx5tool(8) utility has been added, which is used to manage Connect-X 4 and Connect-X 5 devices supported by mlx5io(4). The ifconfig(8) utility has been updated to include a random option, which when used with the ether option generates a random MAC address for an interface. The dwatch(1) utility has been introduced. The efibootmgr(8) utility has been added, which is used to manipulate the EFI boot manager. The etdump(1) utility has been added, which is used to view El Torito boot catalog information. The linux(4) ABI compatibility layer has been updated to include support for musl consumers. The fdescfs(5) filesystem has been updated to support Linux®-specific fd(4) /dev/fd and /proc/self/fd behavior. Support for virtio_console(4) has been added to bhyve(8). The length of GELI passphrases entered when booting a system with encrypted disks is now hidden by default. See the configuration options in geli(8) to restore the previous behavior.
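Two of the smaller conveniences in one hedged snippet (the jail, service and interface names are examples):
# service -j myjail sshd status
# ifconfig em0 ether random
The first queries the sshd rc script inside the jail named myjail; the second assigns a randomly generated MAC address to em0.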
In addition to the usual CD/DVD ISO, Memstick, and prebuilt VM images (raw, qcow2, vhd, and vmdk), FreeBSD 11.2 is also available on: Amazon EC2, Google Compute Engine, Hashicorp/Atlas Vagrant, and Microsoft Azure. In addition to a generic ARM64 image for devices like the Pine64 and Raspberry Pi 3, specific images are provided for: GUMSTIX, BANANAPI, BEAGLEBONE, CUBIEBOARD, CUBIEBOARD2, CUBOX-HUMMINGBOARD, RASPBERRY PI 2, PANDABOARD, and WANDBOARD. Full Release Notes ###Setting up an MTA Behind Tor This article will document how to set up OpenSMTPD behind a fully Tor-ified network. Given that Tor’s DNS resolver code does not support MX record lookups, care must be taken when setting up an MTA behind a fully Tor-ified network. OpenSMTPD was chosen because it was easy to modify to force it to fall back to A/AAAA lookups when MX lookups failed with a DNS result code of NOTIMP (4). Note that as of 08 May 2018, the OpenSMTPD project is planning a configuration file language change. The proposed change has not landed. Once it does, this article will be updated to reflect both the old language and the new. The reason to use an MTA behind a fully Tor-ified network is to be able to support email behind the .onion TLD. This setup will only allow us to send and receive email to and from the .onion TLD. Requirements: a fully Tor-ified network; HardenedBSD as the operating system; a server (or VM) running HardenedBSD behind the fully Tor-ified network; and /usr/ports either empty or already pre-populated with the HardenedBSD Ports tree. Why use HardenedBSD? We get all the features of FreeBSD (ZFS, DTrace, bhyve, and jails) with enhanced security through exploit mitigations and system hardening. Tor has a very unique threat landscape and using a hardened ecosystem is crucial to mitigating risks and threats. Also note that this article reflects how I’ve set up my MTA. I’ve included configuration files verbatim. You will need to replace the text that refers to my .onion domain with yours. On 08 May 2018, HardenedBSD’s version of OpenSMTPD gained support for running an MTA behind Tor. The package repositories do not yet contain the patch, so we will compile OpenSMTPD from ports. Steps: Installation; Generating Cryptographic Key Material; Tor Configuration; OpenSMTPD Configuration; Dovecot Configuration; Testing your configuration; Optional: Webmail Access. iXsystems https://www.forbes.com/sites/forbestechcouncil/2018/06/21/strings-attached-knowing-when-and-when-not-to-accept-vc-funding/#30f9f18f46ec https://www.ixsystems.com/blog/self-2018-recap/ ###Running pfSense on a Digital Ocean Droplet I love pfSense (and OPNsense, no discrimination here). I use it for just about anything, from homelab to large scale deployments, and I’ll pass up any fancy <enter brand name fw appliance here> for a pfSense setup on decent hardware. I also love DigitalOcean: if you’ve ever used them, you know why; if you never have, head over and try, and you’ll understand why. <shameless plug: head over to JupiterBroadcasting.com, the best technology content out there, they have coupon codes to get you started with DO>. Unfortunately, while DO offers a tremendous amount of useful distros and applications, pfSense isn’t one of them. But, where there’s a will, there’s a way, and here’s how to get pfSense up and running on DO so you can have it as the gatekeeper to your kingdom.
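Before diving in: if you would rather script the droplet creation that the walkthrough below starts with than click through the DO control panel, DigitalOcean's doctl CLI can do it. A rough sketch, not from the original article; the image and size slugs change over time, so treat them as placeholders and verify with doctl compute image list-distribution and doctl compute size list:
doctl compute droplet create pfsense-gw \
  --region nyc3 \
  --size s-1vcpu-1gb \
  --image freebsd-11-x64 \
  --ssh-keys <your-key-fingerprint>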
Start by creating a FreeBSD droplet and choose your droplet size (for modest setups, I find the $5 one to be quite awesome): There are many useful things you can do with pfSense on your droplet, from OpenVPN, squid, firewalling, fancy routing, URL filtering, DNS blacklisting and much, much more. One note though, before we wrap up: You have two ways to initiate the initial setup wizard of the web-configurator: Spin up another droplet, log into it and browse your way to the INTERNAL IP address of the internal NIC you’ve set up. This is the long and tedious way, but it’s also somewhat safer as it eliminates the small window of risk the second method poses. Or: once your WAN address is all set up, your pfSense is ready to accept HTTPS connections to start the initial web-configurator setup. Thing is, there’s a default, well-known set of credentials for this initial wizard (admin:pfsense), so there is a slight window of opportunity for someone to swoop in (assuming they know you’ve installed pfSense + your WAN IP address + the exact time window between setting up the WAN interface and completing the wizard) and do <enter scary thing here>. I leave it up to you which of the paths you’d like to take; either way, once you’re done with the web-configurator wizard, you’ll have a shiny new pfSense installation at your disposal running on your favorite VPS. Hopefully this was helpful for someone. I hope to get a similar post up soon detailing how to get FreeNAS up and running on DO. Many thanks to Tubsta and his blogpost as well as to Allan Jude, Kris Moore and Benedict Reuschling for their AWESOME and inspiring podcast, BSD Now. ##News Roundup One year of C It’s now nearly a year since I started writing non-trivial amounts of C code again (the first sokol_gfx.h commit was on 14-Jul-2017), so I guess it’s time for a little retrospective. In the beginning it was more of an experiment: I wanted to see how much I would miss some of the more useful C++ features (for instance namespaces, function overloading, ‘simple’ template code for containers, …), and whether it is possible to write non-trivial codebases in C without going mad. Here are all the github projects I wrote in C:
+ sokol: a slowly growing set of platform-abstraction headers
+ sokol-samples: examples for Sokol
+ chips: 8-bit chip emulators
+ chips-test: tests and examples for the chip emulators, including some complete home computer emulators (minus sound)
All in all these are around 32k lines of code (not including 3rd party code like flextGL and HandmadeMath). I think I wrote more C code in the recent 10 months than in any other language. So one thing seems to be clear: yes, it’s possible to write a non-trivial amount of C code that does something useful without going mad (and it’s even quite enjoyable I might add). Here’s a few things I learned: Pick the right language for a problem; C is a perfect match for WebAssembly; C99 is a huge improvement over C89; The dangers of pointers and explicit memory management are overrated; Less Boilerplate Code; Less Language Feature ‘Anxiety’. Conclusion: All in all my “C experiment” is a success. For a lot of problems, picking C over C++ may be the better choice since C is a much simpler language (btw, did you notice how there are hardly any books, conferences or discussions about C despite it being a fairly popular language? Apart from the neverending bickering about undefined behaviour from the compiler people of course ;) There simply isn’t much to discuss about a language that can be learned in an afternoon.
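To make the 'C99 is a huge improvement over C89' point concrete (and as a bridge to the API design remark that follows), here is a small sketch, not taken from the article itself, of the descriptor-struct style the sokol headers are known for: C99 designated initializers plus compound literals let callers name only the fields they care about, with every other field implicitly zeroed:
#include <stdio.h>

/* A descriptor struct in the sokol style: callers set only what they need. */
typedef struct {
    int width;
    int height;
    int sample_count;   /* 0 means "use the default" */
    const char *title;
} window_desc;

static void make_window(const window_desc *desc) {
    /* Zero-valued fields fall back to defaults. */
    int samples = desc->sample_count ? desc->sample_count : 4;
    printf("%dx%d '%s' (%dx MSAA)\n", desc->width, desc->height,
           desc->title ? desc->title : "untitled", samples);
}

int main(void) {
    /* C99 designated initialization: readable, order-independent, and
       impossible in C89 without error-prone positional initializers. */
    make_window(&(window_desc){ .width = 640, .height = 480, .title = "demo" });
    return 0;
}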
I don’t like some of the old POSIX or Linux APIs as much as the next guy (e.g. ioctl(), the socket API or some of the CRT library functions), but that’s an API design problem, not a language problem. It’s possible to build friendly C APIs with a bit of care and thinking, especially when C99’s designated initialization can be used (C++ should really make sure that the full C99 language can be used from inside C++ instead of continuing to wander off in an entirely different direction). ###Configuring OpenBGPD to announce VM’s virtual networks We use BGP quite heavily at work, and even though I’m not interacting with that directly, it feels like it’s something very useful to learn at least on some basic level. The most effective and fun way of learning technology is finding some practical application, so I decided to see if it could help to improve networking management for my Virtual Machines. My setup is fairly simple: I have a host that runs bhyve VMs and I have a desktop system from where I ssh to VMs, both hosts run FreeBSD. All VMs are connected to each other through a bridge and have a common network 10.0.1/24. The point of this exercise is to be able to ssh to these VMs from the desktop without adding static routes and without adding vmhost’s external interfaces to the VMs bridge. I’ve installed openbgpd on both hosts and configured it like this: vmhost: /usr/local/etc/bgpd.conf
AS 65002
router-id 192.168.87.48
fib-update no
network 10.0.1.1/24
neighbor 192.168.87.41 {
descr "desktop"
remote-as 65001
}
Here, router-id is set to vmhost’s IP address in my home network (192.168.87/24), and fib-update no forbids routing table updates, which I initially set for testing but am keeping, as vmhost is not supposed to learn new routes from desktop anyway. network announces my VMs’ network and neighbor describes my desktop box. Now the desktop box: desktop: /usr/local/etc/bgpd.conf
AS 65001
router-id 192.168.87.41
fib-update yes
neighbor 192.168.87.48 {
descr "vmhost"
remote-as 65002
}
It’s pretty similar to vmhost’s bgpd.conf, but no networks are announced here, and fib-update is set to yes because the whole point is to get VM routes added. Both hosts have to have the openbgpd service enabled: /etc/rc.conf.local
openbgpd_enable="YES"
Conclusion: As mentioned already, a similar result could be achieved without using BGP by using either static routes or bridging interfaces differently, but the purpose of this exercise is to get some basic hands-on experience with BGP. Right now I’m looking into extending my setup in order to try a more complex BGP schema. I’m thinking about adding some software switches in front of my VMs or maybe adding a second VM host (if budget allows). You’re welcome to comment if you have some ideas how to extend this setup for educational purposes in the context of BGP and networking. As a side note, I really like openbgpd so far. Its configuration file format is clean and simple, documentation is good, error and information messages are clear, and the CLI has intuitive syntax. Digital Ocean ###The Power to Serve All people within the IT industry should know where the slogan “The Power To Serve” is exposed every day to millions of people. But maybe that is too much wishful thinking from me. But without “The Power To Serve” the IT industry today would look totally different. Companies like Apple, Juniper, Cisco and even WhatsApp would not exist in their current form.
I provide IT architecture services to make your complex IT landscape manageable and I love to solve complex security and privacy challenges. Complex challenges where people, processes and systems are heavily interrelated. For this knowledge-intensive work I often run some IT experiments. When you run experiments nowadays you have a choice: rent some cloud-based services or DIY (Do IT Yourself) on premise. Running your own development experiments on your own infrastructure can be time consuming. However, smart automation saves time and money. And by creating your own CI/CD pipeline (Continuous Integration, Continuous Deployment) you stay on top of core infrastructure developments. Even hands-on. Knowing how things work from a technical ‘hands-on’ perspective gives great advantages when it comes to solving complex business IT problems. Making a hard distinction between a business problem and an IT problem is useless. Business and IT problems are related. Sometimes causally related, but more often indirectly, through one or more non-linear feedback loops. Almost every business depends on IT systems. Bad IT often means that your customers will leave your business. One of the things I still value most in FreeBSD is FreeBSD jails. In 2015 I was lucky to attend a presentation by the legendary hacker Poul-Henning Kamp. Check his BSD bio to see what he has done for the FreeBSD community! FreeBSD jails are a lightweight way to virtualize your system without enormous overhead. Now that the development of LXC/LXD on Linux is more mature (LXD is the next-generation system container manager on Linux) there is finally a nice chroot-style alternative on Linux-based systems again. At least when you do not need the overhead and management complexity that comes with Kubernetes or Docker. FreeBSD means control and quality for me. When there is an open source package I need, I want to install it from source. It gives me more control and always some extra knowledge on how things work. So no precompiled binaries for me on my BSD systems! If a build fails on FreeBSD, most of the time that is a quality alert for me. If a complex OSS package is not available at all in the FreeBSD ports collection there should be a reason for it. Is it really that nobody in the world wants to do this dirty maintenance work? Or is there another cause why running this software on FreeBSD is not possible…? There are currently 32644 ports available on FreeBSD. So all the major programming languages, databases and middleware libraries are present. The FreeBSD organization is a mature organization, and since this is one of the largest OSS projects worldwide, learning how this community manages to keep innovating and how it creates and maintains software is a good entry point for learning how complex IT systems function. FreeBSD is of course BSD licensed. That has worked well! There is still a strong community with lots of strong commercial sponsors around it. Of course: sometimes a GPL license makes more sense. So besides FreeBSD I also love GPL software and the rationale and principles behind it. So my hope is that maybe within the next 25 years the hard battle between the BSD and GPL churches will be more rationalized and normalized. Principles are good, but as all good IT architects know: with good principles alone you never make a good system. So use requirements, and not only principles, to figure out what OSS license fits your project. There is never one size that fits all. June 19, 1993 was the day the official name for FreeBSD was agreed upon.
So this blog is written to celebrate the 25th anniversary of FreeBSD. ###Dave’s BSDCan trip report So far, only one person has bothered to send in a BSDCan trip report. Our warmest thanks to Dave for doing his part. Hello guys! During the last show, you asked for a trip report regarding BSDCan 2018. This was my first time attending BSDCan. However, BSDCan was my second BSD conference overall, my first being vBSDCon 2017 in Reston, VA. Arriving early Thursday evening and after checking into the hotel, I headed straight to the Red Lion for the registration, picked up my badge and swag and then headed towards the ‘DMS’ building for the newbies talk. The only thing is, I couldn’t find the DMS building! Fortunately I found a BSDCan veteran who was heading there themselves. My only suggestion is to include the full building name and address on the BSDCan web site, or even a link to Google Maps to help out with the navigation. The on-campus street maps didn’t have ‘DMS’ written on them anywhere. But I digress. Once I made it to the newbies talk hosted by Dan Langille and Michael W Lucas, it highlighted places to meet, an overview of what is happening, details about the ‘BSDCan widow/widower tours’ and most importantly, the 6-2-1 rule! The following morning, we were presented with tea/coffee, muffins and other goodies to help prepare us for the day ahead. The first talk, “The Tragedy of systemd”, covered what systemd did wrong and how the BSD community could improve on the ideas behind it. With the exception of Michael W Lucas’ SSH Key Management talk and Kirk McKusick’s The Evolution of FreeBSD Governance talk, I pretty much attended all of the ZFS talks, including the lunchtime BoF session hosted by Allan Jude. Coming from FreeNAS and being involved in the community, this is where my main interest and motivation lies. Since then I have been able to share some of that information with the FreeNAS community forums and chatroom. I also attended the “Speculating about Intel” lunchtime BoF session hosted by Theo de Raadt, which proved to be “interesting”. The talks ended with the wrap-up session with a few words from Dan, covering the record attendance and making it very clear there “was no cabal”. This was followed by the handing over of Groff the BSD goat to a new owner, thank-yous from the FreeBSD Foundation to various community committers and maintainers, finally ending with the charity auction, where things like a Canadian $20 bill sold for $40, a signed FreeBSD Foundation shirt originally worn by George Neville-Neil, a lost laptop charger, Michael’s used gelato spoon, various books, the last cookie and, more importantly, the second-to-last cookie! After the auction, we all headed to the Red Lion for food and drinks, sponsored by iXsystems. I would like to thank the BSDCan organizers, speakers and sponsors for a great conference. I certainly hope to attend next year! Regards, Dave (aka m0nkey) Thanks to Dave for sharing his experiences with us and our viewers ##Beastie Bits Robert Watson (from 2008) on how much FreeBSD is in Mac OS X Why Intel Skylake CPUs are sometimes 50% slower than older CPUs Kristaps Dzonsons is looking for somebody to maintain this as mentioned at this link camcontrol(8) saves the day again! Formatting floppy disks in a USB floppy disk drive 32+ great indie games now playable on OpenBSD -current; 7 currently on sale! Warsaw BSD User Group.
June 27 2018 18:30-21:00, Wheel Systems Office, Aleje Jerozolimskie 178, Warsaw Tarsnap ##Feedback/Questions Ron - Adding a disk to ZFS Marshall - zfs question Thomas - Allan, the myth perpetuator Ross - ZFS IO stats per dataset Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 251: Crypto HAMMER | BSD Now 251
DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean. ##Headlines DragonflyBSD: Towards a HAMMER1 master/slave encrypted setup with LUKS I just wanted to share my experience with setting up DragonFly master/slave HAMMER1 PFS’s on top of LUKS. So after a long time using a Synology for my NFS needs, I decided it was time to rethink my setup a little, since I had several issues with it: You cannot run NFS on top of encrypted partitions easily; I suspect I am having some data corruption (bitrot) on the ext4 filesystem; the NIC was stuck at 100 Mbps instead of 1 Gbps even after swapping cables, switches, you name it; and it’s proprietary. I have been playing with DragonFly in the past and knew about HAMMER, now I just had the perfect excuse to actually use it in production :) After setting up the OS, creating the LUKS partition and HAMMER FS was easy:
kldload dm
cryptsetup luksFormat /dev/serno/<id1>
cryptsetup luksOpen /dev/serno/<id1> fort_knox
newfs_hammer -L hammer1_secure_master /dev/mapper/fort_knox
cryptsetup luksFormat /dev/serno/<id2>
cryptsetup luksOpen /dev/serno/<id2> fort_knox_slave
newfs_hammer -L hammer1_secure_slave /dev/mapper/fort_knox_slave
Mount the 2 drives:
mount /dev/mapper/fort_knox /fort_knox
mount /dev/mapper/fort_knox_slave /fort_knox_slave
You can now put your data under /fort_knox. Now, off to setting up the replication. First get the shared-uuid of /fort_knox:
hammer pfs-status /fort_knox
Create a PFS slave “linked” to the master:
hammer pfs-slave /fort_knox_slave/pfs/slave shared-uuid=f9e7cc0d-eb59-10e3-a5b5-01e6e7cefc12
And then stream your data to the slave PFS!
hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave
After that, setting up NFS is fairly trivial, even though I had problems with the /etc/exports syntax, which is different from Linux’s. There’s a few things I wish would be better though, but nothing too problematic or without workarounds: Cannot unlock LUKS partitions at boot time afaik (an acceptable tradeoff for the added security LUKS gives me vs my old Synology setup), but this forces me to run a script to unlock LUKS, mount hammer and start mirror-stream at each boot; no S1/S3 sleep, so I made a script to shut down the system when there are no network neighbors to serve the NFS; as my system isn’t online 24/7 for energy reasons, I guess I will have to run hammer cleanup myself from time to time; and some uncertainty because hey, it’s kind of exotic, but exciting too :) Overall, I am happy. HAMMER1 and PFS are looking really good, DragonFly is a neat Unix and the community is super friendly (Matthew Dillon actually provided me with a kernel patch to fix the broken ACPI on the PC holding this setup, many thanks!), the system is still a “work in progress” but it is already serving my files as I write this post. Let’s see in 6 months how it goes in the longer run! Helpful resources: https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/ ###BSDCan 2018 Recap As promised, here is the second part of our BSDCan report, covering the conference proper. The last tutorials/devsummit of that day led directly into the conference, as people could pick up their registration packs at the Red Lion and have a drink with fellow BSD folks.
Allan and I were there only briefly, as we wanted to get back to the “Newcomers orientation and mentorship” session led by Michael W. Lucas. This session is intended for people that are new to BSDCan (maybe their first BSD conference ever?) and may have questions. Michael explained everything from the 6-2-1 rule (hours of sleep, meals per day, and number of showers that attendees should have at a minimum), to the partner and widowers program (led by his wife Liz), to the sessions that people should not miss (opening, closing, and hallway track). Old-time BSDCan folks were asked to stand up so that people could recognize them and ask them any questions they might have during the conference. The session was well attended. Afterwards, people went for dinner in groups, a big one led by Michael Lucas to his favorite Shawarma place, followed by gelato (of course). This allowed newbies to mingle over dinner and ice cream, creating a welcoming atmosphere. The next day, after Dan Langille opened the conference, Benno Rice gave the keynote presentation about “The Tragedy of Systemd”. Benedict went to the following talks: “Automating Network Infrastructures with Ansible on FreeBSD” in the DevSummit track, a good talk that connected well with his Ansible tutorial and even allowed some discussions among participants. “All along the dwatch tower”: Devin delivered a well-prepared talk. I first thought that the number of slides would not fit into the time slot, but she even managed to give a demo of her work, which was well received. The dwatch tool she wrote should make it easy for people to get started with DTrace without learning too much about the syntax at first. The visualizations were certainly nice to see, combining different tools together in a new way. ZFS BoF, led by Allan and Matthew Ahrens. SSH Key Management by Michael W. Lucas, yet another great talk where I learned a lot. I did not get to the SSH CA chapter in the new SSH Mastery book, so this was a good way to whet my appetite for it and motivated me to look into creating one for the cluster that I’m managing. The rest of the day was spent at the FreeBSD Foundation table, talking to various folks. Then, Allan and I had an interview with Kirk McKusick for National FreeBSD Day, then we had a core meeting, followed by a core dinner. Day 2: “Flexible Disk Use in OpenZFS”: Matthew Ahrens talking about the feature he is implementing to expand a RAID-Z with a single disk, as well as device removal. Allan’s talk about his efforts to implement ZSTD in OpenZFS as another compression algorithm. I liked his overview slides with the numbers comparing the algorithms for their effectiveness and his personal story about the sometimes rocky road to get the feature implemented. “zrepl - ZFS replication” by Christian Schwarz was well prepared and even had a demo to show what his snapshot replication tool can do. We covered it on the show before and people can find it under sysutils/zrepl. Feedback and help is welcome. “The Evolution of FreeBSD Governance” by Kirk McKusick was yet another great talk by him, covering the early days of FreeBSD until today, detailing some of the progress and challenges the project faced over the years in terms of leadership and governance. This is an ongoing process that everyone in the community should participate in to keep the project healthy and infused with fresh blood. Closing session and auction were funny and great as always. All in all, yet another amazing BSDCan.
Thank you Dan Langille and your organizing team for making it happen! Well done. Digital Ocean ###NomadBSD 1.1-RC1 Released The first – and hopefully final – release candidate of NomadBSD 1.1 is available! Changes The base system has been upgraded to FreeBSD 11.2-RC3 EFI booting has been fixed. Support for modern Intel GPUs has been added. Support for installing packages has been added. Improved setup menu. More software packages: benchmarks/bonnie++ DSBDisplaySettings DSBExec DSBSu mail/thunderbird net/mosh ports-mgmt/octopkg print/qpdfview security/nmap sysutils/ddrescue sysutils/fusefs-hfsfuse sysutils/fusefs-sshfs sysutils/sleuthkit www/lynx x11-wm/compton x11/xev x11/xterm Many improvements and bugfixes The image and instructions can be found here. ##News Roundup LDAP client added to -current CVSROOT: /cvs Module name: src Changes by: reyk@cvs.openbsd.org 2018/06/13 09:45:58 Log message: Import ldap(1), a simple ldap search client. We have an ldapd(8) server and ypldap in base, so it makes sense to have a simple LDAP client without depending on the OpenLDAP package. This tool can be used in an ssh(1) AuthorizedKeysCommand script. With feedback from many including millert@ schwarze@ gilles@ dlg@ jsing@ OK deraadt@ Status: Vendor Tag: reyk Release Tags: ldap_20180613 N src/usr.bin/ldap/Makefile N src/usr.bin/ldap/aldap.c N src/usr.bin/ldap/aldap.h N src/usr.bin/ldap/ber.c N src/usr.bin/ldap/ber.h N src/usr.bin/ldap/ldap.1 N src/usr.bin/ldap/ldapclient.c N src/usr.bin/ldap/log.c N src/usr.bin/ldap/log.h No conflicts created by this import ###Intel® FPU Speculation Vulnerability Confirmed Earlier this month, Philip Guenther (guenther@) committed (to amd64 -current) a change from lazy to semi-eager FPU switching to mitigate against rumored FPU state leakage in Intel® CPUs. Theo de Raadt (deraadt@) discussed this in his BSDCan 2018 session. Using information disclosed in Theo’s talk, Colin Percival developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the official announcement of the vulnerability. FPU change in FreeBSD Summary: System software may utilize the Lazy FP state restore technique to delay the restoring of state until an instruction operating on that state is actually executed by the new process. Systems using Intel® Core-based microprocessors may potentially allow a local process to infer data utilizing Lazy FP state restore from another process through a speculative execution side channel. Description: System software may opt to utilize Lazy FP state restore instead of eager save and restore of the state upon a context switch. Lazy restored states are potentially vulnerable to exploits where one process may infer register values of other processes through a speculative execution side channel that infers their value. · CVSS - 4.3 Medium CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N Affected Products: Intel® Core-based microprocessors. Recommendations: If an XSAVE-enabled feature is disabled, then we recommend either its state component bitmap in the extended control register (XCR0) is set to 0 (e.g. XCR0[bit 2]=0 for AVX, XCR0[bits 7:5]=0 for AVX512) or the corresponding register states of the feature should be cleared prior to being disabled. Also for relevant states (e.g. x87, SSE, AVX, etc.), Intel recommends system software developers utilize Eager FP state restore in lieu of Lazy FP state restore. 
Acknowledgements: Intel would like to thank Julian Stecklina from Amazon Germany, Thomas Prescher from Cyberus Technology GmbH (https://www.cyberus-technology.de/), Zdenek Sojka from SYSGO AG (http://sysgo.com), and Colin Percival for reporting this issue and working with us on coordinated disclosure. iXsystems iX Ad Spot iX Systems - BSDCan 2018 Recap ###FreeBSD gets pNFS support Merge the pNFS server code from projects/pnfs-planb-server into head. This code merge adds a pNFS service to the NFSv4.1 server. Although it is a large commit it should not affect behaviour for a non-pNFS NFS server. Some documentation on how this works can be found at: http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt and will hopefully be turned into a proper document soon. This is a merge of the kernel code. Userland and man page changes will come soon, once the dust settles on this merge. It has passed a "make universe", so I hope it will not cause build problems. It also adds NFSv4.1 server support for the "current stateid". Here is a brief overview of the pNFS service: A pNFS service separates the Read/Write operations from all the other NFSv4.1 Metadata operations. It is hoped that this separation allows a pNFS service to be configured that exceeds the limits of a single NFS server for either storage capacity and/or I/O bandwidth. It is possible to configure mirroring within the data servers (DSs) so that the data storage file for an MDS file will be mirrored on two or more of the DSs. When this is used, failure of a DS will not stop the pNFS service and a failed DS can be recovered once repaired while the pNFS service continues to operate. Although two way mirroring would be the norm, it is possible to set a mirroring level of up to four or the number of DSs, whichever is less. The Metadata server will always be a single point of failure, just as a single NFS server is. A Plan B pNFS service consists of a single MetaData Server (MDS) and K Data Servers (DS), all of which are recent FreeBSD systems. Clients will mount the MDS as they would a single NFS server. When files are created, the MDS creates a file tree identical to what a single NFS server creates, except that all the regular (VREG) files will be empty. As such, if you look at the exported tree on the MDS directly on the MDS server (not via an NFS mount), the files will all be of size 0. Each of these files will also have two extended attributes in the system attribute name space: pnfsd.dsfile - This extended attribute stores the information that the MDS needs to find the data storage file(s) on DS(s) for this file. pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime and Change attributes for the file, so that the MDS doesn't need to acquire the attributes from the DS for every Getattr operation. For each regular (VREG) file, the MDS creates a data storage file on one (or more if mirroring is enabled) of the DSs in one of the "dsNN" subdirectories. The name of this file is the file handle of the file on the MDS in hexadecimal so that the name is unique. The DSs use subdirectories named "ds0" to "dsN" so that no one directory gets too large. The value of "N" is set via the sysctl vfs.nfsd.dsdirsize on the MDS, with the default being 20. For production servers that will store a lot of files, this value should probably be much larger. It can be increased when the "nfsd" daemon is not running on the MDS, once the "dsK" directories are created.
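To make that tunable concrete, adjusting it on the MDS might look like the following; the value 64 is purely illustrative, and per the note above it should be changed while the nfsd daemon is not running:
sysctl vfs.nfsd.dsdirsize       # inspect the current value (default 20)
sysctl vfs.nfsd.dsdirsize=64    # allow ds0..ds63 subdirectories on the DSs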
For pNFS-aware NFSv4.1 clients, the FreeBSD server will return two pieces of information to the client that allow it to do I/O directly to the DS. DeviceInfo - This is relatively static information that defines what a DS is. The critical bits of information returned by the FreeBSD server are the IP address of the DS and, for the Flexible File layout, that NFSv4.1 is to be used and that it is "tightly coupled". There is a "deviceid" which identifies the DeviceInfo. Layout - This is per file and can be recalled by the server when it is no longer valid. For the FreeBSD server, there is support for two types of layout, called File and Flexible File layout. Both allow the client to do I/O on the DS via NFSv4.1 I/O operations. The Flexible File layout is a more recent variant that allows specification of mirrors, where the client is expected to do writes to all mirrors to maintain them in a consistent state. The Flexible File layout also allows the client to report I/O errors for a DS back to the MDS. The Flexible File layout supports two variants referred to as "tightly coupled" vs "loosely coupled". The FreeBSD server always uses the "tightly coupled" variant where the client uses the same credentials to do I/O on the DS as it would on the MDS. For the "loosely coupled" variant, the layout specifies a synthetic user/group that the client uses to do I/O on the DS. The FreeBSD server does not do striping and always returns layouts for the entire file. The critical information in a layout is Read vs Read/Write and the DeviceID(s) that identify which DS(s) the data is stored on. At this time, the MDS generates File Layout layouts to NFSv4.1 clients that know how to do pNFS for the non-mirrored DS case unless the sysctl vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File layouts are generated. The mirrored DS configuration always generates Flexible File layouts. For NFS clients that do not support NFSv4.1 pNFS, all I/O operations are done against the MDS which acts as a proxy for the appropriate DS(s). When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy. If the DS were on the same machine, the MDS/DS would do the RPC to itself as a proxy, and so on, until the machine ran out of some resource, such as session slots or mbufs. As such, DSs must be separate systems from the MDS. *** ###[What does {some strange unix command name} stand for?](http://www.unixguide.net/unix/faq/1.3.shtml) + awk = "Aho Weinberger and Kernighan" + grep = "Global Regular Expression Print" + fgrep = "Fixed GREP". + egrep = "Extended GREP" + cat = "CATenate" + gecos = "General Electric Comprehensive Operating Supervisor" + nroff = "New ROFF" + troff = "Typesetter new ROFF" + tee = T + bss = "Block Started by Symbol" + biff = "BIFF" + rc (as in ".cshrc" or "/etc/rc") = "RunCom" + Don Libes' book "Life with Unix" contains lots more of these tidbits. *** ##Beastie Bits + [RetroBSD: Unix for microcontrollers](http://retrobsd.org/wiki/doku.php) + [On the matter of OpenBSD breaking embargos (KRACK)](https://marc.info/?l=openbsd-tech&m=152910536208954&w=2) + [Theo's Basement Computer Paradise (1998)](https://zeus.theos.com/deraadt/hosts.html) + [Airport Extreme runs NetBSD](https://jcs.org/2018/06/12/airport_ssh) + [What UNIX shell could have been](https://rain-1.github.io/shell-2.html) *** Tarsnap ad *** ##Feedback/Questions + We need more feedback and questions. Please email feedback@bsdnow.tv + Also, many of you owe us BSDCan trip reports!
We have shared what our experience at BSDCan was like, but we want to hear about yours. What can we do better next year? What was it like being there for the first time? + [Jason writes in](https://slexy.org/view/s205jU58X2) + https://www.wheelsystems.com/en/products/wheel-fudo-psm/ + [June 19th was National FreeBSD Day](https://twitter.com/search?src=typd&q=%23FreeBSDDay) *** - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to [feedback@bsdnow.tv](mailto:feedback@bsdnow.tv) ***
Episode 250: BSDCan 2018 Recap | BSD Now 250
TrueOS becoming a downstream fork with Trident, our BSDCan 2018 recap, HardenedBSD Foundation founding efforts, VPN with OpenIKED on OpenBSD, FreeBSD on a System76 Galago Pro, and hardware accelerated crypto on Octeons. ##Headlines TrueOS to Focus on Core Operating System The TrueOS Project has some big plans in the works, and we want to take a minute and share them with you. Many have come to know TrueOS as the “graphical FreeBSD” that makes things easy for newcomers to the BSDs. Today we’re announcing that TrueOS is shifting our focus a bit to become a cutting-edge operating system that keeps all of the stability that you know and love from ZFS (OpenZFS) and FreeBSD, and adds additional features to create a fresh, innovative operating system. Our goal is to create a core-centric operating system that is modular, functional, and perfect for do-it-yourselfers and advanced users alike. TrueOS will become a downstream fork that will build on FreeBSD by integrating new software technologies like OpenRC and LibreSSL. Work has already begun which allows TrueOS to be used as a base platform for other projects, including JSON-based manifests, integrated Poudriere / pkg tools and much more. We’re planning on a six-month release cycle to keep development moving and fresh, allowing us to bring you hot new features to ZFS, bhyve and related tools in a timely manner. This makes TrueOS the perfect fit to serve as the basis for building other distributions. Some of you are probably asking yourselves “But what if I want to have a graphical desktop?” Don’t worry! We’re making sure that everyone who knows and loves the legacy desktop version of TrueOS will be able to continue using a FreeBSD-based, graphical operating system in the future. For instance, if you want to add KDE, just use sudo pkg install kde and voila! You have your new shiny desktop. Easy right? This allows us to get back to our roots of being a desktop-agnostic operating system. If you want to add a new desktop environment, you get to pick the one that best suits your use. We know that some of you will still be looking for an out-of-the-box solution similar to legacy PC-BSD and TrueOS. We’re happy to announce that Project Trident will take over graphical FreeBSD development going forward. Not much is going to change in that regard other than a new name! You’ll still have Lumina Desktop as a lightweight and feature-rich desktop environment and tons of utilities from the legacy TrueOS toolchain like sysadm and AppCafe. There will be migration paths available for those that would like to move to other FreeBSD-based distributions like Project Trident or GhostBSD. We look forward to this new chapter for TrueOS and hope you will give the new edition a spin! Tell us what you think about the new changes by leaving us a comment. Don’t forget you can ask us questions on our Twitter and be a part of our community by joining the new TrueOS Forums when they go live in about a week. Thanks for being a loyal fan of TrueOS. ###Project Trident FAQ Q: Why did you pick the name “Project Trident”? A: We were looking for a name that was unique, yet would still relate to the BSD community. Since Beastie (the FreeBSD mascot) is always pictured with a trident, it felt like that would be a great name. Q: Where can users go for technical support? A: At the moment, Project Trident will continue sharing the TrueOS community forums and Telegram channels. We are currently evaluating dedicated options for support channels in the future.
Q: Can I help contribute to the project? A: We are always looking for developers who want to join the project. If you’re not a developer you can still help; as a community project, we will be more reliant on contributions from the community in the form of how-to guides and other user-centric documentation and support systems. Q: How is the project supported financially? A: Project Trident is sponsored by the community, from both individuals and corporations. iXsystems has stepped up as the first enterprise-level sponsor of the project, and has been instrumental in getting Project Trident up and running. Please visit the Sponsors page to see all the current sponsors. Q: How can I help support the project financially? A: Several methods exist, from one-time or recurring donations via PayPal to limited-time swag t-shirt campaigns during the year. We are also looking into more alternative methods of support, so please visit the Sponsors page to see all the current methods of sponsorship. Q: Will there be any transparency of the financial donations and expenditures? A: Yes, we will be totally open with how much money comes into the project and what it is spent on. Due to concerns of privacy, we will not identify individuals and their donation amounts unless they specifically request to be identified. We will release a monthly overview in/out ledger, so that community members can see where their money is going. Relationship with TrueOS Project Trident does have very close ties to the TrueOS project, since most of the original Project Trident developers were once part of the TrueOS project before it became a distribution platform. For users of the TrueOS desktop, we have some additional questions and answers below. Q: Do we need to be at a certain TrueOS install level/release to upgrade? A: As long as you have a TrueOS system which has been updated to at least the 18.03 release you should be able to just perform a system update to be automatically upgraded to Project Trident. Q: Which members moved from TrueOS to Project Trident? A: Project Trident is being led by prior members of the TrueOS desktop team. Ken and JT (development), Tim (documentation) and Rod (Community/Support). Since Project Trident is a community-first project, we look forward to working with new members of the team. iXsystems ###BSDCan BSDCan finished Saturday last week. It started with the GoatBoF on Tuesday at the Royal Oak Pub, where people had a chance to meet and greet. Benedict could not attend due to an all-day FreeBSD Foundation meeting and a FreeBSD Journal Editorial Board meeting. The FreeBSD devsummit was held the next two days in parallel to the tutorials. Gordon Tetlow, who organized the devsummit, opened it. Deb Goodkin from the FreeBSD Foundation gave the first talk with a Foundation update, highlighting current and future efforts. Li-Wen Hsu is now employed by the Foundation to assist in QA work (Jenkins, CI/CD) and Gordon Tetlow has a part-time contract to help secteam as their secretary. Next, the FreeBSD core team (among them Allan and Benedict) gave a talk about what has happened this last term. With a core election currently running, some of these items will carry over to the next core team, but there were also some finished ones like the FCP process and the FreeBSD members initiative. People in the audience asked questions on various topics of interest.
After the coffee break, the release engineering team gave a talk about their efforts in terms of making releases happen on time and with good quality. Benedict had to give his Ansible tutorial in the afternoon, which had roughly 15 people attending. Most of them were beginners, so we could get some good discussions going, and I also learned a few new tricks. The overall feedback was positive and one person even asked what I’m going to teach next year. The second day of the FreeBSD devsummit began with Gordon Tetlow giving an insight into the FreeBSD Security team (aka secteam). He gave an overview of secteam members and responsibilities, explaining the process based on a long-past advisory. Developers were encouraged to help out secteam. NDAs and proper disclosure of vulnerabilities were also discussed, and the audience had some feedback and questions. When the coffee break was over, the FreeBSD 12.0 planning session happened. A Google doc served as a collaborative way of gathering features and things left to do. People signed up for it or were volunteered. Some features won’t make it into 12.0 as they are not 100% ready for prime time and need a few more rounds of testing and bugfixing. Still, 12.0 will have some compelling features. A 360° group picture was taken after lunch, and then people split up into the working groups for the afternoon or started hacking in the UofO Henderson residence. Benedict and Allan both attended the OpenZFS working group, led by Matt Ahrens. He presented the completed and outstanding work in FreeBSD, without spoiling too much of the ZFS presentations of various people that happened later at the conference. Benedict joined the boot code session a bit late (hallway track is the reason) when most things seemed to have already been discussed. BSDCan 2018 — Ottawa (In Pictures) iXsystems Photos from BSDCan 2018 ##News Roundup June HardenedBSD Foundation Update We at HardenedBSD are working towards starting up a 501(c)(3) not-for-profit organization in the USA. Setting up this organization will allow future donations to be tax deductible. We’ve made progress and would like to share with you the current state of affairs. We have identified, sent invitations out, and received acceptance letters from six people who will serve on the HardenedBSD Foundation Board of Directors. You can find their bios below. In the latter half of June 2018 or the beginning half of July 2018, we will meet for the first time as a board and formally begin the process of creating the documentation needed to submit to the local, state, and federal tax services. Here’s a brief introduction to those who will serve on the board: W. Dean Freeman (Advisor): Dean has ten years of professional experience with deploying and securing Unix and networking systems, including assessing systems security for government certification and assessing the efficacy of security products. He was introduced to Unix via FreeBSD 2.2.8 on an ISP shell account as a teenager. Formerly, he was the Snort port maintainer for FreeBSD while working in the Sourcefire VRT, and has contributed entropy-related patches to the FreeBSD and HardenedBSD projects – a topic on which he presented at vBSDCon 2017. Ben La Monica (Advisor): Ben is a Senior Technology Manager of Software Engineering at Morningstar, Inc. and has been developing software for over 15 years in a variety of languages. He advocates open source software and enjoys tinkering with electronics and home automation. George Saylor (Advisor): George is a Technical Director at G2, Inc. Mr.
Saylor has over 28 years of information systems and security experience in a broad range of disciplines. His core focus areas are automation and standards in the event correlation space as well as penetration and exploitation of computer systems. Mr. Saylor was also a co-founder of the OpenSCAP project. Virginia Suydan (Accountant and general administrator): Accountant and general administrator for the HardenedBSD Foundation. She has worked with Shawn Webb for tax and accounting purposes for over six years. Shawn Webb (Director): Co-founder of HardenedBSD and all-around infosec wonk. He has worked and played in the infosec industry, doing both offensive and defensive research, for around fifteen years. He loves open source technologies and likes to frustrate the bad guys. Ben Welch (Advisor): Ben is currently a Security Engineer at G2, Inc. He graduated from Pennsylvania College of Technology with a Bachelor’s in Information Assurance and Security. Ben likes long walks, beaches, candlelight dinners, and attending various conferences like BSides and ShmooCon. ###Your own VPN with OpenIKED & OpenBSD Remote connectivity to your home network is something I think a lot of people find desirable. Over the years, I’ve just established an SSH tunnel and used it as a SOCKS proxy, sending my traffic through that. It’s a nice solution for a “poor man’s VPN”, but it can be a bit clunky, and it’s not great having to expose SSH to the world, even if you make sure to lock everything down. So I set out the other day to finally do it properly. I’d come across this great post by Gordon Turner: OpenBSD 6.2 VPN Endpoint for iOS and macOS. Whilst it was exactly what I was looking for, it outlined how to set up an L2TP VPN. Really, I wanted IKEv2 for performance and security reasons (I won’t elaborate on this here; if you’re curious about the differences, there’s a lot of content out on the web explaining this). The client systems I’d be using have native support for IKEv2 (iOS, macOS, other BSD systems). But I couldn’t find any tutorials in the same vein. So, let’s get stuck in! A quick note ✍️ This guide will walk through the setup of an IKEv2 VPN using OpenIKED on OpenBSD. It will detail a “road warrior” configuration, and use a PSK (pre-shared key) for authentication. I’m sure it can be easily adapted to work on any other platforms that OpenIKED is available on, but keep in mind my steps are specifically for OpenBSD. Server Configuration: As with all my home infrastructure, I crafted this set-up declaratively. So, I had the deployment of the VM set up in Terraform (deployed on my private Triton cluster), and wrote the configuration in Ansible, then tied them together using radekg/terraform-provisioner-ansible. One of the reasons I love Ansible is that its syntax is very simplistic, yet expressive. As such, I feel it fits very well into explaining these steps with snippets of the playbook I wrote. I’ll link the full playbook a bit further down for those interested. See the full article for information on: sysctl parameters; the naughty list (optional); configuring the VPN network interface; configuring the firewall; configuring the iked service; gateway configuration; client configuration; and troubleshooting. DigitalOcean ###FreeBSD on a System76 Galago Pro Hey all, it’s been a while since I last posted, but I thought I would hammer something out here. My most recent purchase was a System76 Galago Pro. I thought, after playing with POP! OS a bit, is there any reason I couldn’t get BSD on this thing?
Turns out the answer is no, no there isn’t, and it works pretty decently. To get some accounting stuff out of the way, I tested this all on FreeBSD Head and 11.1, and all of it is valid as of May 10, 2018. Head is a fast moving target, so some of this is only bound to improve. The hardware: Intel Core i5 Gen 8; UHD Graphics 620; 16 GB DDR4 RAM; RTL8411B PCI Express Card Reader; RTL8111 Gigabit ethernet controller; Intel HD Audio; Samsung SSD 960 PRO 512GB NVMe. The caveats: There are a few things that I can’t seem to make work straight out of the box, and those are the SD card reader and the backlight, and the audio is a bit finicky. Also, the trackpad doesn’t respond to two-finger scrolling. The wiki is mostly up to date; there are a few edits that need to be made still, but there is a bug where I can’t register an account yet, so I haven’t made all the changes. Processor: It works like any other Intel processor. P-states and throttling work. Graphics: The boot menu sets itself to what looks like 1024x768, but works as you expect in a tiny window. The text console does the full 3200x1800 resolution, but the text is ultra tiny. There isn’t a font for the console that covers HiDPI screens yet. As for X Windows, it requires the drm-kmod-next package. Once installed, follow the directions from the package and it works with almost no fuss. I have it running on X with full Intel acceleration, but it is running at its full 3200x1800 resolution; to scale that down, just do xrandr --output eDP-1 --scale 0.5x0.5 and it will blow it up to roughly 200%. Due to limitations with X Windows and HiDPI it is harder to get more granular. Intel Wireless 8265: The wireless uses the iwm module; as of right now it does not seem to load automagically. Adding if_iwm_load="YES" to /boot/loader.conf will cause the module to load on boot, or kldload if_iwm loads it by hand. Battery: I seem to be getting about 5 hours out of the battery, but everything reports out of the box as expected. I could get more by throttling the CPU down speed-wise. Overall impression: It is a pretty decent experience. While not as polished as a ThinkPad, there is a lot of potential with a bit of work and polishing. The laptop itself is not bad, the keyboard is responsive. The build quality is pretty solid. My only real complaint is the trackpad is stiff to click and sort of tiny. They seem to be a bit indifferent to non-Linux OSes running on the gear, but that isn’t anything new. I won’t have any problems working through this laptop. I’m not sure at this stage if my next machine will be a System76 laptop, but they have impressed me enough to put them in the running when I go looking for my next portable machine; it hasn’t yet replaced the hole left in my heart by Lenovo messing with the ThinkPad. ###Hardware accelerated AES/HMAC-SHA on octeons In this commit, visa@ submitted code (disabled for now) to use built-in acceleration on octeon CPUs, much like AESNI for x86s. I decided to test tcpbench(1) and IPsec, before and after updating and enabling the octcrypto(4) driver. I didn't capture detailed perf stats from before the update; I had heard someone say that Edgerouter Lite boxes would only do some 6MBit/s over ipsec, so I set up a really simple ipsec.conf with
ike esp from A to B
leading to a policy of
esp tunnel from A to B spi 0xdeadbeef auth hmac-sha2-256 enc aes
going from one ERL to another (I collect octeons, so I have a bunch to test with) and let tcpbench run for a while on it.
My numbers hovered around 7Mbit/s, which coincided with what I've heard, and also that most of the CPU gets used while doing it. Then I edited /sys/arch/octeon/conf/GENERIC, removed the # from octcrypto0 at mainbus0 and recompiled. Booted into the new kernel and got an octcrypto0 line in dmesg, and it was time to rock the ipsec tunnel again. The crypto algorithm and HMAC used by default on ipsec coincides nicely with the list of accelerated functions provided by the driver. Before we get to tunnel traffic numbers, just one quick look at what systat pigs says while the ipsec is running at full steam:
PID USER NAME CPU 20\ 40\ 60\ 80\ 100\
58917 root crypto 52.25 #################
42636 root softnet 42.48 ##############
(idle) 29.74 #########
1059 root tcpbench 24.22 #######
67777 root crynlk 19.58 ######
So this indicates that the load from doing ipsec and generating the traffic is somewhat nicely evened out over the two cores in the Edgerouter, and there's even some CPU left unused, which means I can actually ssh into it and have it usable. I have had it running for almost 2 days now, moving some 2.1TB over the tunnel. Now for the new and improved performance numbers:
204452123 4740752 37.402 100.00% Conn: 1 Mbps: 37.402 Peak Mbps: 58.870 Avg Mbps: 37.402
204453149 4692968 36.628 100.00% Conn: 1 Mbps: 36.628 Peak Mbps: 58.870 Avg Mbps: 36.628
204454167 5405552 42.480 100.00% Conn: 1 Mbps: 42.480 Peak Mbps: 58.870 Avg Mbps: 42.480
204455188 5202496 40.804 100.00% Conn: 1 Mbps: 40.804 Peak Mbps: 58.870 Avg Mbps: 40.804
204456194 5062208 40.256 100.00% Conn: 1 Mbps: 40.256 Peak Mbps: 58.870 Avg Mbps: 40.256
The tcpbench numbers fluctuate up and down a bit, but the output is nice enough to actually keep tabs on the peak values. Peaking to 58.8MBit/s! Of course, as you can see, the average is lower but nice anyhow. A manyfold increase in performance, which is good enough in itself, but also moves the throughput from a speed that would make a poor but cheap gateway to something actually useful and decent for many home network speeds. Biggest problem after this gets enabled will be that my options to buy cheap used ERLs diminish. ##Beastie Bits Using FreeBSD Text Dumps llvm’s lld now the default linker for amd64 on FreeBSD Author Discoverability Pledge and Unveil in OpenBSD {pdf} EuroBSDCon 2018 CFP Closes June 17, hurry up and get your submissions in Just want to attend, but need help getting to the conference? Applications for the Paul Schenkeveld travel grant accepted until June 15th Tarsnap ##Feedback/Questions Casey - ZFS on Digital Ocean Jürgen - A Question Kevin - Failover best practice Dennis - SQL Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Episode 249: Router On A Stick | BSD Now 249
OpenZFS and DTrace updates in NetBSD, NetBSD network security stack audit, performance of MySQL on ZFS, OpenSMTPD results from p2k18, legacy Windows backup to FreeNAS, ZFS block size importance, and NetBSD as router on a stick. ##Headlines ZFS and DTrace update lands in NetBSD merge a new version of the CDDL dtrace and ZFS code. This changes the upstream vendor from OpenSolaris to FreeBSD, and this version is based on FreeBSD svn r315983. r315983 is from March 2017 (14 months ago), so there is still more work to do. In addition to the 10 years of improvements from upstream, this version also has these NetBSD-specific enhancements: dtrace FBT probes can now be placed in kernel modules; ZFS now supports mmap(). This brings NetBSD 10 years forward, and they should be able to catch up the rest of the way fairly quickly. ###NetBSD network stack security audit Maxime Villard has been working on an audit of the NetBSD network stack, a project sponsored by The NetBSD Foundation, which has served all users of BSD-derived operating systems. Over the last five months, hundreds of patches were committed to the source tree as a result of this work. Dozens of bugs were fixed, among which a good number of actual, remotely-triggerable vulnerabilities. Changes were made to strengthen the networking subsystems and improve code quality: reinforce the mbuf API, add many KASSERTs to enforce assumptions, simplify packet handling, and verify compliance with RFCs. This was done in several layers of the NetBSD kernel, from device drivers to L4 handlers. In the course of investigating several bugs discovered in NetBSD, I happened to look at the network stacks of other operating systems, to see whether they had already fixed the issues, and if so how. Needless to say, I found bugs there too. A lot of code is shared between the BSDs, so it is especially helpful when one finds a bug, to check the other BSDs and share the fix. The IPv6 Buffer Overflow: The overflow allowed an attacker to write one byte of packet-controlled data into ‘packetstorage+off’, where ‘off’ could be approximately controlled too. This allowed at least a pretty bad remote DoS/Crash. The IPsec Infinite Loop: When receiving an IPv6-AH packet, the IPsec entry point was not correctly computing the length of the IPv6 suboptions, and this, before authentication. As a result, a specially-crafted IPv6 packet could trigger an infinite loop in the kernel (making it unresponsive). In addition this flaw allowed a limited buffer overflow - where the data being written was however not controllable by the attacker. The IPPROTO Typo: While looking at the IPv6 Multicast code, I stumbled across a pretty simple yet pretty bad mistake: at one point the Pim6 entry point would return IPPROTO_NONE instead of IPPROTO_DONE. Returning IPPROTO_NONE was entirely wrong: it caused the kernel to keep iterating on the IPv6 packet chain, while the packet storage was already freed. The PF Signedness Bug: A bug was found in NetBSD’s implementation of the PF firewall, that did not affect the other BSDs. In the initial PF code a particular macro was used as an alias to a number. This macro formed a signed integer. NetBSD replaced the macro with a sizeof(), which returns an unsigned result. The NPF Integer Overflow: An integer overflow could be triggered in NPF, when parsing an IPv6 packet with large options. This could cause NPF to look for the L4 payload at the wrong offset within the packet, and it allowed an attacker to bypass any L4 filtering rule on IPv6.
The IPsec Fragment Attack: I noticed some time ago that when reassembling fragments (in either IPv4 or IPv6), the kernel was not removing the M_PKTHDR flag on the secondary mbufs in mbuf chains. This flag is supposed to indicate that a given mbuf is the head of the chain it forms; having the flag on secondary mbufs was suspicious.

What Now: not all protocols and layers of the network stack were verified, because of time constraints, and also because of unexpected events: the recent x86 CPU bugs, which I was the only one able to fix promptly. A todo list will be left when the project end date is reached, for someone else to pick up. Me perhaps, later this year? We'll see. This security audit of NetBSD's network stack is sponsored by The NetBSD Foundation, and serves all users of BSD-derived operating systems. The NetBSD Foundation is a non-profit organization, and welcomes any donations that help continue funding projects of this kind.

DigitalOcean

###MySQL on ZFS Performance

I used sysbench to create a table of 10M rows and then, using export/import tablespace, I copied it 329 times. I ended up with 330 tables for a total size of about 850GB. The dataset generated by sysbench is not very compressible, so I used lz4 compression in ZFS. For the other ZFS settings, I used what can be found in my earlier ZFS posts, but with the ARC size limited to 1GB. I then used that plain configuration for the first benchmarks.

Here are the results with the sysbench point-select benchmark, a uniform distribution and eight threads. The InnoDB buffer pool was set to 2.5GB. In both cases, the load is IO bound. The disk is doing exactly the allowed 3000 IOPS. The above graph appears to be a clear demonstration that XFS is much faster than ZFS, right? But is that really the case? The way the dataset has been created is extremely favorable to XFS, since there is absolutely no file fragmentation. Once you have all the files opened, a read IOP is just a single fseek call to an offset, and XFS doesn't need to access any intermediate inode. The above result is about as fair as saying MyISAM is faster than InnoDB based only on table scan performance results of unfragmented tables and a default configuration.

ZFS is much less affected by file-level fragmentation, especially for point access. ZFS stores the files in B-trees, in a very similar fashion to how InnoDB stores data. To access a piece of data in a B-tree, you need to access the top-level page (often called the root node) and then one block per level down to the leaf node containing the data. With no cache, reading something from a three-level B-tree thus requires 3 IOPS. The extra IOPS performed by ZFS are needed to access those internal blocks in the B-trees of the files. These internal blocks are labeled as metadata. Essentially, in the above benchmark, the ARC is too small to contain all the internal blocks of the table files' B-trees. If we continue the comparison with InnoDB, it would be like running with a buffer pool too small to contain the non-leaf pages. The test dataset I used has about 600MB of non-leaf pages, about 0.1% of the total size, which was well cached by the 3GB buffer pool. So only one InnoDB page, a leaf page, needed to be read per point-select statement.

To correctly set the ARC size to cache the metadata, you have two choices. First, you can guess values for the ARC size and experiment. Second, you can try to evaluate it by looking at the ZFS internal data. Let's review these two approaches.
You'll often read or hear the ratio of 1GB of ARC for 1TB of data, which is about the same 0.1% ratio as for InnoDB. I wrote about that ratio a few times, having nothing better to propose. Actually, I found it depends a lot on the recordsize used. The 0.1% ratio implies a ZFS recordsize of 128KB. A ZFS filesystem with a recordsize of 128KB will use much less metadata than one using a recordsize of 16KB, because it has 8x fewer leaf pages. Fewer leaf pages require fewer B-tree internal nodes, hence less metadata. A filesystem with a recordsize of 128KB is excellent for sequential access, as it maximizes compression and reduces the IOPS, but it is poor for small random access operations like the ones MySQL/InnoDB does.

In order to improve ZFS performance, I had 3 options:

- Increase the ARC size to 7GB
- Use a larger InnoDB page size, like 64KB
- Add an L2ARC

I was reluctant to grow the ARC to 7GB, which was nearly half the overall system memory. At best, the ZFS performance would only match XFS. A larger InnoDB page size would increase the CPU load for decompression on an instance with only two vCPUs; not great either. The last option, the L2ARC, was the most promising.

ZFS is much more complex than XFS and EXT4, but that also means it has more tunables/options. I used a simplistic setup and an unfair benchmark which initially led to poor ZFS results. With the same benchmark, very favorable to XFS, I added a ZFS L2ARC and that completely reversed the situation, more than tripling the ZFS results, now 66% above XFS.

Conclusion: we have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in terms of IOPS and size, when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%.
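Not from the article, but as a concrete picture of the knobs discussed above, here is a minimal sketch assuming a hypothetical pool named tank and the article's ZFS-on-Linux test environment:

```
# Small recordsize to match InnoDB's 16KB pages, lz4 as in the benchmark
# (dataset name is hypothetical):
zfs create -o recordsize=16K -o compression=lz4 tank/mysql

# Limit the ARC to 1GB as in the first benchmark run
# (ZFS-on-Linux module parameter, value in bytes):
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

# The option the article settles on: a fast ephemeral device as L2ARC
# (device name is hypothetical):
zpool add tank cache /dev/nvme0n1
```

The L2ARC then absorbs the B-tree metadata reads that the deliberately small ARC cannot hold.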
###OpenSMTPD new config

TL;DR: the OpenBSD #p2k18 hackathon took place at Epitech in Nantes. I was organizing the hackathon but managed to make progress on OpenSMTPD. As mentioned at EuroBSDCon, the one-line-per-rule config format was a design error. A new configuration grammar is almost ready and the underlying structures are simplified. The refactor removes ~750 lines of code and solves many issues that were side-effects of the design error. New features are going to be unlocked thanks to this.

Anatomy of a design error: OpenSMTPD started ten years ago out of dissatisfaction with other solutions, mainly because I considered them way too complex for me not to get things wrong from time to time. The initial configuration format was very different; I was inspired by pyr@'s hoststated, which eventually became relayd, and designed my configuration format with blocks enclosed by brackets. When I first showed OpenSMTPD to pyr@, he convinced me that PF-like one-line rules would be awesome, and it was awesome indeed. It helped us maintain our goal of simple configuration files, it helped fight feature creep, it helped us gain popularity and become a relevant MTA, it helped us get where we are now 10 years later. That being said, I believe this was a design error. A design error that could not have been predicted until we hit the wall and understood WHY this was an error.

One-line rules are semantically wrong, they are SMTP wrong, they are wrong. One-line rules make the entire daemon more complex, preventing some features from being implemented and making others more complex than they should be; they no longer serve our goals. To get to the point: we should move to two-line rules :-)

The problem with one-line rules: OpenSMTPD decides to accept or reject messages based on one-line rules such as:

accept from any for domain poolp.org deliver to mbox

which can essentially be split into three units:

- the decision: accept/reject
- the matching: from any for domain poolp.org
- the (default) action: deliver to mbox

To ensure that we meet the requirements of the transactions, the matching must be performed during the SMTP transaction, before we take a decision for the recipient. Given that the rule is atomic, that it doesn't have an identifier, and that the action is part of it, the only two ways to make sure we can remember the action to take later on at delivery time are to either save the action in the envelope, which is what we do today, or evaluate the envelope again at delivery. And this is where it gets tricky... both solutions are NOT ok.

The first solution, which we've been using for a decade, was to save the action within the envelope and kind of carve it in stone. This works fine... however it comes with the downsides that errors fixed in configuration files can't be caught up by envelopes, that the delivery action must be validated way ahead of time during the SMTP transaction, which is much trickier, that the parsing of delivery methods takes place as the _smtpd user rather than the recipient user, and that envelope structures that are passed all over OpenSMTPD carry delivery-time information, and more, and more, and more. The code becomes more complex in general, less safe in some particular places, and some areas are nightmarish to deal with because they have to deal with completely unrelated code that can't be dealt with later in the code path.

The second solution can't be done. An envelope may be the result of nested rules, for example an external client, hitting an alias, hitting a user with a .forward file resolving to a user. An envelope on disk may no longer match any rule, or it may match a completely different rule. Even if we could ensure that it matched the same rule, evaluating the ruleset again may spawn new envelopes, which would violate the transaction. Trying to imagine how we could work around this leads to more and more and more RFC violations, incoherent states, duplicate mails, etc. There is simply no way to deal with this with atomic rules: the matching and the action must be two separate units that are evaluated at two different times. Failure to do so will necessarily imply that you're either using our first solution and all its downsides, or that you are currently in a world of pain trying to figure out why everything is burning around you. The minute the action is written to an on-disk envelope, you have failed. A proper ruleset must define a set of matching patterns resolving to an action identifier that is carved in stone, AND a set of named actions that are resolved dynamically at delivery time (a sketch of what this split looks like follows below). Follow the link above to see the rest of the article.
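The article stops short of showing the new grammar. As a rough sketch of the two-line split it argues for (the action name here is illustrative; the syntax matches what later shipped in smtpd.conf(5)):

```
# Old, atomic one-line rule: decision, matching and action fused together.
accept from any for domain poolp.org deliver to mbox

# New two-line style: a named action, resolved dynamically at delivery
# time, plus a match rule that only records the action's identifier.
action "local_mail" mbox
match from any for domain "poolp.org" action "local_mail"
```

Only the identifier "local_mail" is carved into the envelope; fixing the action in the config later is picked up by already-queued mail.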
Break

##News Roundup

###Backing up a legacy Windows machine to a FreeNAS with rsync

I have some old Windows servers (10 years and counting) and I have been using rsync to back them up to my FreeNAS box. It has been working great for me. First of all, I do have my Windows servers backed up in virtualized format. However, those are only one-time snapshots that I run once in a while. These are classic ASP IIS web servers that I can easily put up on a new VM. However, many of these legacy servers generate gigabytes of data a day in their repositories, and running a VM conversion daily is not ideal. My solution was to use some sort of rsync solution just for the data repos. I've tried some applications that didn't work too well with Samba shares, and these old servers have slow I/O, so copying files to an external SATA or USB drive was not ideal either. We've moved on from Windows to Linux and do not have any Windows file servers of capacity to provide network backups. Hence, I decided to use DeltaCopy with FreeNAS. So here is a little write-up on how to set it up. I have 4 Windows 2000 servers backing up daily with this method.

First, download DeltaCopy and install it. It is open-source and pretty much free. It is basically a wrapper for cygwin's rsync. When you install it, it will ask you to install the Server services which allow you to run it as an rsync server on Windows. You don't need to do this. Instead, you will just be using the DeltaCopy Client application. But before we do that, we will need to configure our rsync service for our Windows clients on FreeNAS. In FreeNAS, go under Services, select Rsync > Rsync Modules > Add Rsync Module. Then fill out the form, giving the module a name and setting the path. In my example, I simply called it WIN and linked it to a user called backupuser. This process is much easier than trying to configure the daemon's rsyncd.conf file by hand.

Now, on the Windows client, start the DeltaCopy Client. You will create a new Profile. You will need to enter the IP of the rsync server (FreeNAS) and specify the module name, which will be called "Virtual Directory Name." When you pull down the select menu, the list of Rsync Modules you created earlier in FreeNAS will populate. You can set authentication. On the server, you can restrict by IP and do other things to lock down your rsync. Next, you will add folders (and/or files) you want to synchronize. Once the paths are set up, you can run a sync by right-clicking the profile name. Here, I made a test sync to a home folder of a virtualized Windows box. As you can see, I mounted the rsync volume on my Mac to see the progress. The rsync worked beautifully. DeltaCopy did what it was told.

Once you get everything working, the next thing to do is to set schedules. If you've done task schedules in Windows before, it is pretty straightforward. DeltaCopy has a link in the application to directly create a new task for you. I set my backups to run nightly and it has been working great. There you have it: Windows rsync to FreeNAS using DeltaCopy. The nice thing about FreeNAS is you don't have to modify /etc/rsyncd.conf files. Everything can be done in the web admin. (For the curious, a sketch of the equivalent raw rsync setup follows below.)
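For reference, the FreeNAS module form above corresponds to a hand-written rsyncd.conf stanza along these lines, and the DeltaCopy client is ultimately issuing an ordinary rsync invocation against it. This is a sketch; the dataset path and source folder are hypothetical, only the module name WIN and the user backupuser come from the example above:

```
# Roughly what the FreeNAS "WIN" module amounts to in plain rsyncd.conf:
[WIN]
    path = /mnt/tank/winbackups    # hypothetical dataset path
    uid = backupuser
    read only = no

# Under the hood, DeltaCopy's cygwin rsync runs something like:
rsync -av "/cygdrive/c/Inetpub/" backupuser@freenas.local::WIN/server1/
```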
iXsystems

###How to write ATF tests for NetBSD

I have recently started contributing to the amazing NetBSD Foundation. I had been thinking of trying out a new OS for a long time, and switching to NetBSD has been a fun change. My first contribution to the NetBSD Foundation was adding regression tests for the Address Sanitizer (ASan) in the Automated Testing Framework (ATF) which NetBSD has. I managed to complete it with the help of my really amazing mentor Kamil. This post is going to be about the ATF framework that NetBSD has and how you can add multiple tests with ease.

Intro: in ATF we will basically be talking about test programs, which are suites of test cases for a specific application or program.

The ATF suite of commands offers a variety of tools. These include:
- atf-check: the versatile command that is a vital part of the checking process (man page)
- atf-run: command used to run a test program (man page)
- atf-fail: report failure of a test case
- atf-report: used to pretty-print the output of atf-run (man page)
- atf-set: to set atf test conditions

We will be taking a better look at the syntax and usage later. Let's start with the basics. The ATF testing framework comes preinstalled with a default NetBSD installation. It is used to write tests for various applications and commands in NetBSD. One can write the test programs in either C or shell script. In this post I will be dealing with the shell script part; a minimal sketch of such a test program follows below. Follow the link above to see the rest of the article.
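To make the shape of a shell-based test program concrete, here is a minimal sketch using the atf-sh interface described above (the test name and the command under test are illustrative):

```
#!/usr/bin/atf-sh
# Minimal ATF shell test program: one test case plus the required
# atf_init_test_cases entry point that registers it.

atf_test_case echo_works
echo_works_head() {
    atf_set "descr" "echo(1) prints its argument followed by a newline"
}
echo_works_body() {
    # atf_check: expect exit status 0, stdout exactly "hello\n", no stderr
    atf_check -s exit:0 -o inline:"hello\n" -e empty echo hello
}

atf_init_test_cases() {
    atf_add_test_case echo_works
}
```

Registered cases are then discovered and executed by the test runner (atf-run piped into atf-report, or kyua on newer systems).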
###The Importance of ZFS Block Size

Warning! WARNING! Don't just do things because some random blog says so.

One of the important tunables in ZFS is the recordsize (for normal datasets) and volblocksize (for zvols). These default to 128KB and 8KB respectively. As I understand it, this is the unit of work in ZFS. If you modify one byte in a large file with the default 128KB record size, it causes the whole 128KB to be read in, one byte to be changed, and a new 128KB block to be written out. As a result, the official recommendation is to use a block size which aligns with the underlying workload: so for example if you are using a database which reads and writes 16KB chunks then you should use a 16KB block size, and if you are running VMs containing an ext4 filesystem, which uses a 4KB block size, you should set a 4KB block size.

You can see it has a 16GB total file size, of which 8.5G has been touched and consumes space - that is, it's a "sparse" file. The used space is also visible by looking at the zfs filesystem which this file resides in. Then I tried to copy the image file whilst maintaining its "sparseness", that is, only touching the blocks of the zvol which needed to be touched. The original used only 8.42G, but the copy uses 14.6GB - almost the entire 16GB has been touched! What's gone wrong? I finally realised that the difference between the zfs filesystem and the zvol is the block size. I recreated the zvol with a 128K block size, and that's better: the disk usage of the zvol is now exactly the same as for the sparse file in the filesystem dataset.

It does impact the read speed too: 4K blocks took 5:52, and 128K blocks took 3:20. Part of this is the amount of metadata that has to be read; see the MySQL benchmarks from earlier in the show. And yes, using a larger block size will increase the compression efficiency, since the compressor has more redundant data to optimize. Some of the savings, and the speedup, is because a lot less metadata had to be written. Your zpool layout also plays a big role: if you use 4Kn disks and RAID-Z2, using a volblocksize of 8k will actually result in a large amount of wasted space because of RAID-Z padding. Although, if you enable compression, your 8k records may compress to only 4k, and then all the numbers change again.
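As a rough sketch of the experiment described above (pool and zvol names are hypothetical; note that volblocksize can only be set at creation time, hence the recreate):

```
# zvols default to volblocksize=8K; a sparse copy onto such a zvol
# touches far more space than the 128K-recordsize file dataset did.
zfs create -V 16G tank/vmdisk                       # default 8K volblocksize
zfs destroy tank/vmdisk
zfs create -V 16G -o volblocksize=128K tank/vmdisk  # recreate with 128K
zfs get volblocksize,used tank/vmdisk               # compare space consumed
```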
###Using a Raspberry Pi 2 as a Router on a Stick, Starring NetBSD

Sorry we didn't answer you quickly enough!

A few weeks ago I set about upgrading my feeble networking skills by playing around with a Cisco 2970 switch. I set up a couple of VLANs and felt the urge to set up a router to route between them. The 2970 isn't a modern layer 3 switch, so what am I to do? Why not make use of the Raspberry Pi 2 that I've never used and put it to some good use as a 'router on a stick'? I could install a Linux-based OS, as I am quite familiar with it, but where's the fun in that? In my home lab I use SmartOS, which by the way is a shit hot hypervisor, but as far as I know there aren't any illumos distributions for the Raspberry Pi. On the desktop I use Solus OS, which is by far the slickest Linux-based OS that I've had the pleasure to use, but Solus' focus is purely desktop. It's looking like BSD then! I believe FreeBSD is renowned for its top-notch networking stack, and so I wrote to the BSD Now show on Jupiter Broadcasting for some help, but it seems that the FreeBSD chaps from the show are off on a jolly to some BSD conference or another (love the show by the way). It looks like me and the luvverly NetBSD are on a date this Saturday. I've always had a secret love for NetBSD. She's a beautiful, charming and promiscuous lover (looking at the supported architectures) and I just can't stop going back to her despite her misgivings (ahem, zfs). Just my type of grrrl! Let's crack on... Follow the link above to see the rest of the article.

##Beastie Bits

- BSD Jobs
- University of Aberdeen's Internet Transport Research Group is hiring
- VR demo on OpenBSD via OpenHMD with OSVR HDK2
- patch runs ed, and ed can run anything (mentions FreeBSD and OpenBSD)
- Alacritty (OpenGL-powered terminal emulator) now supports OpenBSD
- MAP_STACK Stack Register Checking committed to -current
- EuroBSDCon CfP open till June 17, 2018

Tarsnap

##Feedback/Questions

- NeutronDaemon - Tutorial request
- Kurt - Question about transferability/bi-directionality of ZFS snapshots and send/receive
- Peter - A question and much love for BSD Now
- Peter - netgraph state

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 248: Show Me The Mooney | BSD Now 248
DragonflyBSD release 5.2.1 is here, a BPF kernel exploit writeup, remote debugging the running OpenBSD kernel, an interview with Patrick Mooney, a FreeBSD buildbot setup in a jail, dumping your USB, and 5 years of gaming on FreeBSD.

##Headlines

###DragonFlyBSD: release52 (w/ stable HAMMER2, as default root)

DragonflyBSD 5.2.1 was released on May 21, 2018. Big ticket items:

- Meltdown and Spectre mitigation support: Meltdown isolation and Spectre mitigation support added. Meltdown mitigation is automatically enabled for all Intel CPUs. Spectre mitigation must be enabled manually via sysctl if desired, using the sysctls machdep.spectre_mitigation and machdep.meltdown_mitigation.
- HAMMER2: H2 has received a very large number of bug fixes and performance improvements. We can now recommend H2 as the default root filesystem in non-clustered mode. Clustered support is not yet available.
- ipfw updates: implement state-based "redirect", i.e. without using libalias. ipfw now supports all possible ICMP types. Fix ICMP_MAXTYPE assumptions (now 40 as of this release).
- Improved graphics support: the drm/i915 kernel driver has been updated to support Intel Coffeelake GPUs. Add 24-bit pixel format support to the EFI frame buffer code. Significantly improve fbio support for the "scfb" XOrg driver; this allows EFI frame buffers to be used by X in situations where we do not otherwise support the GPU. Partly implement the FBIO_BLANK ioctl for display powersaving. Syscons waits for drm modesetting at appropriate places, avoiding races.

###PS4 4.55 BPF Race Condition Kernel Exploit Writeup

Note: while this bug is primarily interesting for exploitation on the PS4, it can also potentially be exploited on other unpatched platforms using FreeBSD if the attacker has read/write permissions on /dev/bpf, or if they want to escalate from root user to kernel code execution. As such, I've published it under the "FreeBSD" folder and not the "PS4" folder.

Introduction: welcome to the kernel portion of the PS4 4.55FW full exploit chain write-up. This bug was found by qwerty, and is fairly unique in the way it's exploited, so I wanted to do a detailed write-up on how it worked. The full source of the exploit can be found here. I've previously covered the webkit exploit implementation for userland access here.

FreeBSD or Sony's fault? Why not both... Interestingly, this bug is actually a FreeBSD bug and was not (at least directly) introduced by Sony code. While this is a FreeBSD bug, however, it's not very useful for most systems, because the /dev/bpf device driver is root-owned and the permissions for it are set to 0600 (meaning the owner has read/write privileges, and nobody else does) - though it can be used for escalating from root to kernel mode code execution. However, let's take a look at the make_dev() call inside the PS4 kernel for /dev/bpf (taken from a 4.05 kernel dump):

```
seg000:FFFFFFFFA181F15B lea     rdi, unk_FFFFFFFFA2D77640
seg000:FFFFFFFFA181F162 lea     r9, aBpf              ; "bpf"
seg000:FFFFFFFFA181F169 mov     esi, 0
seg000:FFFFFFFFA181F16E mov     edx, 0
seg000:FFFFFFFFA181F173 xor     ecx, ecx
seg000:FFFFFFFFA181F175 mov     r8d, 1B6h
seg000:FFFFFFFFA181F17B xor     eax, eax
seg000:FFFFFFFFA181F17D mov     cs:qword_FFFFFFFFA34EC770, 0
seg000:FFFFFFFFA181F188 call    make_dev
```

We see UID 0 (the UID for the root user) getting moved into the register for the 3rd argument, which is the owner argument. However, the permission bits are being set to 0x1B6, which in octal is 0666. This means anyone can open /dev/bpf with read/write privileges.
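You can sanity-check that conversion with a shell one-liner, using the mode value loaded into r8d in the disassembly above:

```
# 0x1B6 is octal 666 (world read/write); the usual restrictive 0600 is 0x180
$ printf '%o\n' $((0x1B6))
666
$ printf '%o\n' $((0x180))
600
```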
I'm not sure why this is the case; qwerty speculates that perhaps bpf is used for LAN gaming. In any case, this was a poor design decision, because bpf is usually considered privileged and should not be accessible to a process that is completely untrusted, such as WebKit. On most platforms, permissions for /dev/bpf will be set to 0x180, or 0600.

Race Conditions - What are they? The class of the bug abused in this exploit is known as a "race condition". Before we get into bug specifics, it's important for the reader to understand what race conditions are and how they can be an issue (especially in something like a kernel). Often in complex software (such as a kernel), resources will be shared (or "global"). This means other threads could potentially execute code that will access some resource that could be accessed by another thread at the same point in time. What happens if one thread accesses this resource while another thread does, without exclusive access? Race conditions are introduced.

Race conditions are defined as possible scenarios where events happen in a sequence different from the one the developer intended, which leads to undefined behavior. In simple, single-threaded programs this is not an issue, because execution is linear. In more complex programs where code can be running in parallel, however, this becomes a real issue. To prevent these problems, atomic instructions and locking mechanisms were introduced. When one thread wants to access a critical resource, it will attempt to acquire a "lock". If another thread is already using this resource, generally the thread attempting to acquire the lock will wait until the other thread is finished with it. Each thread must release the lock to the resource after it is done with it; failure to do so could result in a deadlock.

While locking mechanisms such as mutexes have been introduced, developers sometimes struggle to use them properly. For example, what if a piece of shared data gets validated and processed, but while the processing of the data is locked, the validation is not? There is a window between validation and locking where that data can change, and while the developer thinks the data has been validated, it could be substituted with something malicious after it is validated, but before it is used. Parallel programming can be difficult, especially when, as a developer, you also want to factor in the fact that you don't want to put too much code in between locking and unlocking, as it can impact performance. See article for the rest.
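None of this is from the write-up, but here is a toy shell illustration of the validate-then-use window just described; the filename is hypothetical:

```
#!/bin/sh
# Toy check-then-use race (TOCTOU): the validation and the use are not
# performed atomically, so another process can swap the resource inside
# the window -- the same shape as the validate-then-lock gap above.
f=/tmp/shared_resource
if [ -r "$f" ]; then      # validation: resource exists and looks fine
    sleep 1               # window: a second process may replace "$f" here
    cat "$f"              # use: may now read substituted data
fi
```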
iXsystems

###Remote Debugging the running OpenBSD kernel

Subtitled: a way to understand the OpenBSD internals.

The Problem: a few months ago, I tried porting the FreeBSD kdb along with its gdb stub implementations to OpenBSD, as an exercise in learning the internals of a BSD operating system. The ddb code in both FreeBSD and OpenBSD looks pretty much the same, and the GDB Remote Serial Protocol looks very minimal. But sadly I got very busy and the work has stalled; I'm planning on resuming the attempt as soon as I get the chance. In the meantime, there is an alternative way to debug the OpenBSD kernel, via QEMU. What I did below is basically the same, with a few minor changes, which I hope to describe as best I can.

Installing OpenBSD on QEMU: for debugging the kernel, we need a working OpenBSD system running on QEMU. I chose to create a raw disk file to be able to easily mount it later via the host and copy the custom kernel onto it.

```
$ qemu-img create -f raw disk.raw 5G
$ qemu-system-x86_64 -m 256M \
    -drive format=raw,file=install63.fs \
    -drive format=raw,file=disk.raw
```

Custom Kernel: to debug the kernel, we need a version of the kernel with debugging symbols, and for that we have to recompile it first. The process is documented at Building the System from Source: ... Then we can copy the bsd kernel to the guest machine and keep the bsd.gdb on the host to start the remote debugging via gdb.

Remote debugging kernel: now it's time to boot the guest with the new custom kernel. Remember that the -s argument enables the gdb server on qemu, on localhost port 1234 by default:

```
$ qemu-system-x86_64 -m 256M -s \
    -net nic -net user \
    -drive format=raw,file=install63.fs \
    -drive format=raw,file=disk.raw
```

Now to finally attach to the running kernel:
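The shownotes truncate the article at this point. Based on the setup above (bsd.gdb kept on the host, QEMU's gdb stub listening on localhost:1234), the attach step presumably looks like this; a sketch of the obvious completion, not a quote from the article:

```
$ gdb bsd.gdb
(gdb) target remote localhost:1234
```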
Interview - Patrick Mooney - Software Engineer
pmooney@pfmooney.com / @pfmooney

BR: How did you first get introduced to UNIX?
AJ: What got you started contributing to an open source project?
BR: What sorts of things have you worked on in the past?
AJ: Can you tell us more about what attracted you to illumos?
BR: How did you get interested in, and started with, systems development?
AJ: When did you first get interested in bhyve?
BR: How much work was it to take the years-old port of bhyve and get it working on modern illumos?
AJ: What was the process for getting the bhyve port caught up to current FreeBSD?
BR: How usable is bhyve on illumos?
AJ: What area are you most interested in improving in bhyve?
BR: Do you think the FreeBSD and illumos versions of bhyve will stay in sync with each other?
AJ: What do you do for fun?
BR: Anything else you want to mention?

##News Roundup

###Setting up buildbot in FreeBSD Jails

In this article, I would like to present a tutorial to set up buildbot, a continuous integration (CI) software (like Jenkins, drone, etc.), making use of FreeBSD's containerization mechanism, "jails". We will cover terminology, the rationale for using both buildbot and jails together, and installation steps. At the end, you will have a working buildbot instance using its sample build configuration, ready to play around with your own CI plans (or even CD, it's very flexible!). Some hints for production-grade installations are given, but the tutorial steps are meant for a test environment (namely a virtual machine). Buildbot's configuration and detailed concepts are not in scope here.

Table of contents:
- Choosing host operating system and version for buildbot
- Create a FreeBSD playground
- Introduction to jails
- Overview of buildbot
- Set up jails
- Install buildbot master
- Run buildbot master
- Install buildbot worker
- Run buildbot worker
- Set up web server nginx to access buildbot UI
- Run your first build
- Production hints
- Finished!

Choosing host operating system and version for buildbot: we choose the released version of FreeBSD (11.1-RELEASE at the moment). There is no particular reason for it, and as a matter of fact buildbot, as a Python-based server, is very cross-platform; therefore the underlying OS platform and version should not make a large difference. It will make a difference for what you do with buildbot, however. For instance, poudriere is the de-facto standard for building packages from source on FreeBSD. Builds run in jails, which may be any FreeBSD base system version older than or equal to the host's version (the reason will be explained below). In other words, if the host is FreeBSD 11.1, build jails created by poudriere could e.g. use 9.1, 10.3, 11.0 or 11.1, but potentially not version 12 or newer, because of incompatibilities with the host's kernel (jails do not run their own kernel as full virtual machines do). To not prolong this article over the intended scope, the details of which nice things could be done or automated with buildbot are not covered.

Package names on the FreeBSD platform are independent of the OS version, since external software (as in: not part of the base system) is maintained in FreeBSD ports. So, if your chosen FreeBSD version (here: 11) is still officially supported, the packages mentioned in this post should work. In the unlikely event of package name changes before you read this article, you should be able to find the actual package names with pkg search buildbot. Other operating systems like the various Linux distributions will use different package names, but might also offer buildbot pre-packaged. If not, the buildbot installation manual offers steps to install it manually. In that case, the downside is that you will have to maintain and update the buildbot modules outside the stability and (semi-)automatic updates of your OS packages. See article for the rest.

DigitalOcean

###Dumping your USB

One of the many new features of OpenBSD 6.3 is the possibility to dump USB traffic to userland via bpf(4). This can be done with tcpdump(8) by specifying a USB bus as interface:

```
tcpdump -Xx -i usb0
tcpdump: listening on usb0, link-type USBPCAP
12:28:03.317945 bus 0 < addr 1: ep1 intr 2
	0000: 0400                                     ..
12:28:03.318018 bus 0 > addr 1: ep0 ctrl 8
	0000: 00a3 0000 0002 0004 00                   .........
[...]
```

As you might have noted, I decided to implement the existing USBPcap capture format. A capture format is required because USB packets do not include all the necessary information to properly interpret them. I first thought I would implement libpcap's DLT_USB, but then I quickly realized that this was not a standard. It is instead a FreeBSD-specific format, which has since been renamed DLT_USB_FREEBSD. But I didn't want to embrace xkcd #927, so I looked at the existing formats: DLT_USB_FREEBSD, DLT_USB_LINUX, DLT_USB_LINUX_MMAPPED, DLT_USB_DARWIN and DLT_USBPCAP. I was first a bit sad to see that nobody could agree on a common format, then I moved on and picked the simplest one: USBPcap. Implementing an already existing format gives us out-of-the-box support for all the tools supporting it. That's why having common formats lets us share our energy. In the case of USBPcap it is already supported by Wireshark, so you can already inspect your packets graphically. For that you need to first capture raw packets:

```
tcpdump -s 3303 -w usb.pcap -i usb0
tcpdump: listening on usb0, link-type USBPCAP
^C
208 packets received by filter
0 packets dropped by kernel
```

USB packets can be quite big; that's why I'm not using tcpdump(8)'s default packet size. In this case, I want to make sure I can dump the complete uaudio(4) frames. It is important to say that what is dumped to userland is what the USB stack sees. Packets sent on the wire might differ, especially when it comes to retries and timing. So this feature is not here to replace any USB analyser; however, I hope that it will help people understand how things work and what the USB stack is doing. I even found some interesting timing issues while implementing isochronous support.

###Run OpenBSD on your web server

Deploy and login to your OpenBSD server first.
As soon as you're there you can enable an httpd(8) daemon; it's already installed on OpenBSD, you just need to configure it:

www# vi /etc/httpd.conf

Add two server sections, one for www and another for the naked domain (all requests are redirected to www):

```
server "www.example.com" {
	listen on * port 80
	root "/htdocs/www.example.com"
}

server "example.com" {
	listen on * port 80
	block return 301 "http://www.example.com$REQUEST_URI"
}
```

httpd is chrooted to /var/www by default, so let's make a document root directory:

www# mkdir -p /var/www/htdocs/www.example.com

Save and check this configuration:

www# httpd -n
configuration ok

Enable the httpd(8) daemon and start it:

www# rcctl enable httpd
www# rcctl start httpd

Publish your website: copy your website content into /var/www/htdocs/www.example.com and then test it in your web browser at http://XXX.XXX.XXX.XXX/ - your web server should be up and running.

Update DNS records: if there is another HTTPS server using this domain, configure that server to redirect all HTTPS requests to HTTP. Now that your new server is ready, you can update the DNS records accordingly:

```
example.com.     300 IN A XXX.XXX.XXX.XXX
www.example.com. 300 IN A XXX.XXX.XXX.XXX
```

Verify that your DNS has propagated with $ dig example.com www.example.com and check the IP addresses in the answer sections. If they are correct, you should be able to access your new web server by its domain name. What's next? Enable HTTPS on your server.

###Modern Akonadi and KMail on FreeBSD

For quite literally a year or more, KMail and Akonadi on FreeBSD have been only marginally useful, at best. KDE4-era KMail was pretty darn good, but everything after that has had a number of FreeBSD users tearing out their hair. Sure, you can go to Trojitá, which has its own special problems and is generally "meh", or bail out entirely to webmail, but .. KMail is a really great mail client when it works. Which, on Linux desktops, is nearly always, and on FreeBSD, was nearly never. I looked at it with Dan and Volker last summer, briefly, and we got not much further than "hmm". There's a message about "The world is going to end!" which hardly makes sense; it means that a message has been truncated or corrupted while traversing a UNIX domain socket.

Now Alexandre Martins (praise be!) has wandered in with a likely solution. KDE Bug 381850 contains a suggestion, which deserves to be publicised (and tested):

```
sysctl net.local.stream.recvspace=65536
sysctl net.local.stream.sendspace=65536
```

The default FreeBSD UNIX local socket buffer space is 8kiB. Bumping the size up to 64kiB (which matches the size that Linux has by default) suddenly makes KMail and Akonadi shine again. No other changes, no recompiling, just .. bump the sysctls (perhaps also in /etc/sysctl.conf) and KMail from Area51 hums along all day without ending the world. Since changing this value may have other effects, and Akonadi shouldn't be dependent on a specific buffer size anyway, I'm looking into the Akonadi code (encouraged by Dan) to either automatically size the socket buffers, or to figure out where in the underlying code the assumption about buffer size lives. So for now, sysctl can make KMail users on FreeBSD happy, and later we hope to have things fully automatic (and if that doesn't pan out, well, pkg-message exists).

PS. Modern KDE PIM applications (Akonadi, KMail) which live in the deskutils/ category of the official FreeBSD ports were added to the official tree April 10th, so you can get your fix now from the official tree.
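To persist the workaround across reboots, as the article hints, the same values can go into /etc/sysctl.conf:

```
# /etc/sysctl.conf -- persist the larger UNIX-domain socket buffers
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536
```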
##Beastie Bits

- pkg-provides support for DragonFly (from Rodrigo Osorio)
- Memories of writing a parser for man pages
- Bryan Cantrill interview over at the DeveloperOnFire podcast
- 1978-03-25 - 2018-03-25: 40 years of BSD Mail
- My 5 years of FreeBSD gaming: a compendium of free games and engines running natively on FreeBSD
- Sequential Resilver being upstreamed to FreeBSD, from FreeNAS, where it was ported from ZFS-on-Linux
- University of Aberdeen's Internet Transport Research Group is hiring

Tarsnap ad

##Feedback/Questions

- Dave - mounting non-filesystem things inside jails
- Morgan - ZFS on Linux data loss bug
- Rene - How to keep your ISP's nose out of your browser history with encrypted DNS
- Rodriguez - Feedback question! Relating to Windows

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv