Created by three guys who love BSD, we cover the latest news and have an extensive series of tutorials, as well as interviews with various people from all areas of the BSD community. It also serves as a platform for support and questions. We love and advocate FreeBSD, OpenBSD, NetBSD, DragonFlyBSD and TrueOS. Our show aims to be helpful and informative for new users who want to learn about them, while still being entertaining for people who are already pros. The show airs on Wednesdays at 2:00PM (US Eastern time) and the edited version is usually up the following day.

Similar Podcasts

Elixir Outlaws

Elixir Outlaws
Elixir Outlaws is an informal discussion about interesting things happening in Elixir. Our goal is to capture the spirit of a conference hallway discussion in a podcast.

The Cynical Developer

The Cynical Developer
A UK based Technology and Software Developer Podcast that helps you to improve your development knowledge and career, through explaining the latest and greatest in development technology and providing you with what you need to succeed as a developer.

Programming Throwdown

Programming Throwdown
Programming Throwdown educates Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.

Episode 254: Bare the OS | BSD Now 254

July 12, 2018 1:31:23 54.9 MB Downloads: 0

Control flow integrity with HardenedBSD, fixing bufferbloat with OpenBSD’s pf, Bareos Backup Server on FreeBSD, MeetBSD CfP, crypto simplified interface, twitter gems, interesting BSD commits, and more. ##Headlines ###Cross-DSO CFI in HardenedBSD Control Flow Integrity, or CFI, raises the bar for attackers aiming to hijack control flow and execute arbitrary code. The llvm compiler toolchain, included and used by default in HardenedBSD 12-CURRENT/amd64, supports forward-edge CFI. Backward-edge CFI support is gained via a tangential feature called SafeStack. Cross-DSO CFI builds upon ASLR and PaX NOEXEC for effectiveness. HardenedBSD supports non-Cross-DSO CFI in base for 12-CURRENT/amd64 and has it enabled for a few individual ports. The term “non-Cross-DSO CFI” means that CFI is enabled for code within an application’s codebase, but not for the shared libraries it depends on. Supporting non-Cross-DSO CFI is an important initial milestone for supporting Cross-DSO CFI, or CFI applied to both shared libraries and applications. This article discusses where HardenedBSD stands with regards to Cross-DSO CFI in base. We have made a lot of progress, yet we’re not even half-way there. Brace yourself: This article is going to be full of references to “Cross-DSO CFI.” Make a drinking game out of it. Or don’t. It’s your call. ;) Using More llvm Toolchain Components CFI requires compiling source files with Link-Time Optimization (LTO). I remembered hearing a few years back that llvm developers were able to compile the entirety of FreeBSD’s source code with LTO. Compiling with LTO produces intermediate object files as LLVM IR bitcode instead of ELF objects. In March of 2017, we started compiling all applications with LTO and non-Cross-DSO CFI. This also enabled ld.lld as the default linker in base since CFI requires lld. Commit f38b51668efcd53b8146789010611a4632cafade made the switch to ld.lld as the default linker while enabling non-Cross-DSO CFI at the same time. Building libraries in base requires applications like ar, ranlib, nm, and objdump. In FreeBSD 12-CURRENT, ar and ranlib are known as “BSD ar” and “BSD ranlib.” In fact, ar and ranlib are the same application: one is hardlinked to the other, and the application changes behavior depending on whether argv[0] ends in “ranlib”. The ar, nm, and objdump used in FreeBSD do not support LLVM IR bitcode object files. In preparation for Cross-DSO CFI support, commit fe4bb0104fc75c7216a6dafe2d7db0e3f5fe8257 in October 2017 saw HardenedBSD switching ar, ranlib, nm, and objdump to their respective llvm components. The llvm versions do support LLVM IR bitcode object files (surprise!). There has been some fallout in the ports tree and we’ve added LLVM_AR_UNSAFE and friends to help transition those ports that dislike llvm-ar, llvm-ranlib, llvm-nm, and llvm-objdump. With ld.lld, llvm-ar, llvm-ranlib, llvm-nm, and llvm-objdump as the defaults, HardenedBSD has effectively switched to a full llvm compiler toolchain in 12-CURRENT/amd64. Building Libraries With LTO The primary 12-CURRENT development branch in HardenedBSD (hardened/current/master) only builds applications with LTO as mentioned in the section above. My first attempt at building all static and shared libraries failed due to issues within llvm itself. I reported these issues to FreeBSD.
Ed Maste (emaste@), Dimitry Andric (dim@), and llvm’s Rafael Espindola expertly helped address these issues. Various commits within the llvm project by Rafael fully and quickly resolved the issues brought up privately in emails. With llvm fixed, I could now build nearly every library in base with LTO. I noticed, however, that if I kept non-Cross-DSO CFI and SafeStack enabled, all applications would segfault. Even simplistic applications like /bin/ls. Disabling both non-Cross-DSO CFI and SafeStack, but keeping LTO, produced a fully functioning world! I have spent the last few months figuring out why enabling either non-Cross-DSO CFI or SafeStack caused issues. This brings us to today. The Sanitizers in FreeBSD FreeBSD brought in all the files required for SafeStack and CFI. When compiling with SafeStack, llvm statically links a full sanitization framework into the application. FreeBSD includes a full copy of the sanitization framework in SafeStack, including the common C++ sanitization namespaces. Thus, libclang_rt.safestack included code meant to be shared among all the sanitizers, not just SafeStack. I had naively taken a brute-force approach to setting up the libclang_rt.cfi static library. I copied the Makefile from libclang_rt.safestack and used that as a template for libclang_rt.cfi. This approach was incorrect due to breaking the One Definition Rule (ODR). Essentially, I ended up including a duplicate copy of the C++ classes and sanitizer runtime if both CFI and SafeStack were used. In my Cross-DSO CFI development VM, I now have SafeStack disabled across-the-board and am only compiling in CFI. As of 26 May 2018, an LTO-ified world (libs + apps) works in my limited testing. /bin/ls does not crash anymore! The second major milestone for Cross-DSO CFI has now been reached. Known Issues And Limitations There are a few known issues and regressions. Note that this list of known issues essentially also constitutes a “work-in-progress” list, and every known issue will be fixed prior to the official launch of Cross-DSO CFI. It seems llvm does not like statically compiling applications with LTO that have a mixture of C and C++ code. /sbin/devd is one of these applications. As such, when Cross-DSO CFI is enabled, devd is compiled as a Position-Independent Executable (PIE). Doing this breaks UFS systems where /usr is on a separate partition. We are currently looking into solving this issue to allow devd to be statically compiled again. NO_SHARED is now unset in the tools build stage (aka bootstrap-tools, cross-tools). This is related to the static compilation issue above. Unsetting NO_SHARED for the tools build stage is only a band-aid until we can resolve static compilation with LTO. One goal of our Cross-DSO CFI integration work is to be able to support the cfi-icall scheme when dlopen(3) and dlsym(3)/dlfunc(3) are used. This means the runtime linker (RTLD) must be enhanced to know and care about the CFI runtime. This enhancement is not currently implemented, but is planned. When Cross-DSO CFI is enabled, SafeStack is disabled. This is because compiling with Cross-DSO CFI brings in a second copy of the sanitizer runtime, violating the One Definition Rule (ODR). Resolving this issue should be straightforward: unify the sanitizer runtime into a single common library that both Cross-DSO CFI and SafeStack can link against. When the installed world has Cross-DSO CFI enabled, performing a buildworld with Cross-DSO CFI disabled fails. This is somewhat related to the static compilation issue described above.
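For readers who want to see what the compiler side of this looks like, here is a minimal sketch of the generic upstream clang flags involved; this is not HardenedBSD's actual build-system glue, just an illustration with placeholder file names:

```
# Non-Cross-DSO CFI: LTO is mandatory and lld must be the linker.
cc -flto -fsanitize=cfi -fvisibility=hidden -fuse-ld=lld -o hello hello.c

# Cross-DSO CFI additionally passes -fsanitize-cfi-cross-dso, on both the
# shared libraries and the applications that link against them.
cc -flto -fsanitize=cfi -fsanitize-cfi-cross-dso -fuse-ld=lld -fPIC -shared -o libfoo.so foo.c
cc -flto -fsanitize=cfi -fsanitize-cfi-cross-dso -fuse-ld=lld -o app app.c -L. -lfoo
```

The cfi-icall scheme mentioned above is one of the checks enabled by -fsanitize=cfi: it verifies that an indirect call only lands on an address-taken function of a matching type, which is why function pointers resolved through dlsym(3) need the RTLD cooperation described in the article.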
Current Status I’ve managed to get a Cross-DSO CFI world booting on bare metal (my development laptop) and in a VM. Some applications failed to work. Curiously, Firefox still worked (which also means xorg works). I’m now working through the known issues list, researching and learning. Future Work Fixing pretty much everything in the “Known Issues And Limitations” section. ;P I need to create a static library that includes only a single copy of the common sanitizer framework code. Applications compiled with CFI or SafeStack will then only have a single copy of the framework. Next I will need to integrate support in the RTLD for Cross-DSO CFI. Applications with the cfi-icall scheme enabled that call functions resolved through dlsym(3) currently crash due to the lack of RTLD support. I need to make a design decision as to whether to support adding cfi-icall whitelist entries only with dlfunc(3) or to also whitelist cfi-icall entries with the more widely used dlsym(3). There are likely more items in the “TODO” bucket that I am not currently aware of. I’m treading in uncharted territory. I have no firm ETA for any bit of this work. We may gain Cross-DSO CFI support in 2018, but it’s looking more likely to land in 2019 or 2020. Conclusion I have been working on Cross-DSO CFI support in HardenedBSD for a little over a year now. A lot of progress is being made, yet there are still some major hurdles to overcome. This work has already helped improve llvm, and I hope more commits upstream to both FreeBSD and llvm will happen. We’re getting closer to being able to send out a preliminary Call For Testing (CFT). At the very least, I would like to solve the static linking issues prior to publishing the CFT. Expect it to be published before the end of 2018. I would like to thank Ed Maste, Dimitry Andric, and Rafael Espindola for their help, guidance, and support. iXsystems FreeNAS 11.2-BETAs are starting to appear ###Bareos Backup Server on FreeBSD Ever heard about Bareos? Probably heard about Bacula. Read what the difference is here – Why Bareos forked from Bacula? Bareos (Backup Archiving Recovery Open Sourced) is a network based open source backup solution. It is a 100% open source fork of the backup project from the bacula.org site. The fork has been in development since late 2010 and it has a lot of new features. The source is published on github and licensed under the AGPLv3 license. Bareos supports ‘Always Incremental’ backup, which is interesting especially for users with big data. The time and network capacity consuming full backups only have to be taken once. Bareos comes with a WebUI for administration tasks and a restore file browser. Bareos can back up data to disk and to tape drives as well as tape libraries. It supports compression and encryption, both hardware-based (like on LTO tape drives) and software-based. You can also get professional services and support from Bareos, as well as a Bareos subscription service that provides you access to special quality assured installation packages. I started my sysadmin job with a backup system as one of the new responsibilities, so it will be like going back to the roots. As I look at the ‘backup’ market it is more and more popular – especially in cloud oriented environments – to implement various levels of protection like GOLD, SILVER and BRONZE for example. They of course have different retention times, numbers of backups kept, and different RTO and RPO. Below is an example implementation of BRONZE level backups in Bareos.
I used 3 groups of A, B and C with a FULL backup starting on DAY 0 (A group), DAY 1 (B group) and DAY 2 (C group). This way you still have FULL backups quite often and with 3 groups you can balance the network load. For the days on which we will not be doing FULL backups we will be doing DIFFERENTIAL backups. People often confuse them with INCREMENTAL backups. The difference is that DIFFERENTIAL backups are always taken against the FULL backup, so it’s always ‘one level of combining’. INCREMENTAL ones are taken against the last backup that was done, whatever its TYPE, so it’s possible to have 100+ levels of combining against 99 earlier INCREMENTAL backups and the 1 FULL backup. That is why I prefer DIFFERENTIAL ones here: faster recovery. That is what backups are generally all about, recovery; some people/companies tend to forget that. The implementation of BRONZE in these three groups is not perfect, but it ‘does the job’. I also made a ‘simulation’ of how these groups will overlap at the end/beginning of the month; here is the result. Not bad for my taste. Today I will show you how to install and configure a Bareos Server based on the FreeBSD operating system. It will be the most simplified setup with all services on a single machine: bareos-dir bareos-sd bareos-webui bareos-fd I also assume that in order to provide storage space for the backup data itself you would mount resources from external NFS shares. To get in touch with Bareos terminology and technology check their great Manual in HTML or PDF version, depending on which format you prefer for reading documentation. Their FAQ also provides a lot of needed answers, and this diagram may be useful for getting some grip on the Bareos world. System As every system needs to have a name, we will use the Latin word closest to backup – replica – as our FreeBSD system hostname. The install would be generally the same as in the FreeBSD Desktop – Part 2 – Install article. Here is our installed FreeBSD system with login prompt.
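A rough sketch of the first installation step on such a FreeBSD box might look like the following; the package names and rc.conf knobs below are assumptions, so verify them locally with pkg search bareos and the port's pkg-message:

```
# Install the Bareos director, storage daemon, file daemon and WebUI.
# Package and rc.conf knob names are assumptions - verify on your system.
pkg install bareos-server bareos-client bareos-webui
sysrc bareos_dir_enable=YES
sysrc bareos_sd_enable=YES
sysrc bareos_fd_enable=YES
```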

Episode 253: Silence of the Fans | BSD Now 253

July 05, 2018 1:26:51 52.18 MB Downloads: 0

Fanless server setup with FreeBSD, NetBSD on pinebooks, another BSDCan trip report, transparent network audio, MirBSD's Korn Shell on Plan9, static site generators on OpenBSD, and more. ##Headlines Silent Fanless FreeBSD Desktop/Server Today I will write about a silent fanless FreeBSD desktop or server computer … or NAS … or you name it, it can have multiple purposes. It is also a very low power solution, which also means that it will not overheat. Silent means no fans at all, even for the PSU. The format of the system should also be kept to a minimum, so Mini-ITX seems the best solution here. I have chosen Intel based solutions as they are very low power (6-10W); if you prefer AMD (as I often do) the closest solution in comparable price and power is the Biostar A68N-2100 motherboard with an AMD E1-2100 CPU and 9W power. Of course AMD has even more low power SoC solutions, but finding a Mini-ITX motherboard at a decent price is not an easy task. For comparison, Intel has lots of such solutions below 6W which can be nicely filtered on the ark.intel.com page. Pity that AMD does not provide such filtering for their products. I also chose AES instructions, as storage encryption (GELI on FreeBSD) today seems as obvious as HTTPS for web pages. Here is how the system looks powered up and working. This motherboard uses the Intel J3355 SoC which uses 10W and has AES instructions. It has two cores at your disposal, but it also supports VT-x and EPT extensions, so you can even run bhyve on it. Components Now, an example system would look like the one below; here are the components with their prices. $49 CPU/Motherboard ASRock J3355B-ITX Mini-ITX $14 RAM Crucial 4 GB DDR3L 1.35V (low power) $17 PSU 12V 160W Pico (internal) $11 PSU 12V 96W FSP (external) $5 USB 2.0 Drive 16 GB ADATA $4 USB Wireless 802.11n $100 TOTAL The PSU 12V 160W Pico (internal) and PSU 12V 96W FSP can be purchased on aliexpress.com or ebay.com for example, at least I got them there. Here is the 12V 160W Pico (internal) PSU and its optional additional cables to power the optional HDDs. Of course it has one SATA power and one MOLEX power connector, so an additional MOLEX-SATA power adapter for about $1 would be needed. Here is the 12V 96W FSP (external) PSU without the power cord. This gives a total silent fanless system price of about $120. It’s about ONE TENTH OF THE COST of the cheapest FreeNAS hardware solution available – the FreeNAS Mini (Diskless) costs $1156, also without disks. You can put plain FreeBSD on top of it, or the Solaris/Illumos distribution OmniOSce which is server oriented. You can use a prebuilt NAS solution based on FreeBSD like FreeNAS, NAS4Free, ZFSguru, or even Solaris/Illumos based storage with the napp-it appliance. ###An annotated look at a NetBSD Pinebook’s startup Pinebook is an affordable 64-bit ARM notebook. Today we’re going to take a look at the kernel output at startup and talk about what hardware support is available on NetBSD. The Pinebook comes with 2GB RAM standard. A small amount of this is reserved by the kernel and framebuffer. NetBSD uses flattened device-tree (FDT) to enumerate devices on all Allwinner based SoCs. On a running system, you can inspect the device tree using the ofctl(8) utility: Pinebook’s Allwinner A64 processor is based on the ARM Cortex-A53. It is designed to run at frequencies up to 1.2GHz. The A64 is a quad core design. NetBSD’s aarch64 pmap does not yet support SMP, so three cores are disabled for now. The interrupt controller is a standard ARM GIC-400 design.
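As a side note, the ofctl(8) inspection mentioned above can be as simple as the following sketch; see the man page for the exact options available on your NetBSD version:

```
# Dump the device tree the kernel booted with (run as root);
# ofctl(8) also accepts specific nodes to limit the output.
ofctl | less
```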
Clock drivers for managing PLLs, module clock dividers, clock gating, software resets, etc. Information about the clock tree is exported in the hw.clk sysctl namespace (root access required to read these values). # sysctl hw.clk.sun50ia64ccu0.mmc2 hw.clk.sun50ia64ccu0.mmc2.rate = 200000000 hw.clk.sun50ia64ccu0.mmc2.parent = pllperiph02x hw.clk.sun50ia64ccu0.mmc2.parent_domain = sun50ia64ccu0 Digital Ocean http://do.co/bsdnow ###BSDCan 2018 Trip Report: Mark Johnston BSDCan is a highlight of my summers: the ability to have face-to-face conversations with fellow developers and contributors is invaluable and always helps refresh my enthusiasm for FreeBSD. While in a perfect world we would all be able to communicate effectively over the Internet, it’s often noted that locking a group of developers together in a room can be a very efficient way to make progress on projects that otherwise get strung out over time, and to me this is one of the principal functions of BSD conferences. In my case I was able to fix some kgdb bugs that had been hindering me for months; get some opinions on the design of a feature I’ve been working on for FreeBSD 12.0; hear about some ongoing usage of code that I’ve worked on; and do some pair-debugging of an issue that has been affecting another developer. As is tradition, on Tuesday night I dropped off my things at the university residence where I was staying, and headed straight to the Royal Oak. This year it didn’t seem quite as packed with BSD developers, but I did meet several long-time colleagues and get a chance to catch up. In particular, I chatted with Justin Hibbits and got to hear about the bring-up of FreeBSD on POWER9, a new CPU family released by IBM. Justin was able to acquire a workstation based upon this CPU, which is a great motivator for getting FreeBSD into shape on that platform. POWER9 also has some promise in the server market, so it’s important for FreeBSD to be a viable OS choice there. Wednesday morning saw the beginning of the two-day FreeBSD developer summit, which precedes the conference proper. Gordon Tetlow led the summit and did an excellent job organizing things and keeping to the schedule. The first presentation was by Deb Goodkin of the FreeBSD Foundation, who gave an overview of the Foundation’s role and activities. After Deb’s presentation, present members of the FreeBSD core team discussed the work they had done over the past two years, as well as open tasks that would be handed over to the new core team upon completion of the ongoing election. Finally, Marius Strobl rounded off the day’s presentations by discussing the state and responsibilities of FreeBSD’s release engineering team. One side discussion of interest to me was around the notion of tightening integration with our Bugzilla instance; at moment we do not have any good means to mark a given bug as blocking a release, making it easy for bugs to slip into releases and thus lowering our overall quality. With FreeBSD 12.0 upon us, I plan to help with the triage and fixes for known regressions before the release process begins. After a break, the rest of the morning was devoted to plans for features in upcoming FreeBSD releases. This is one of my favorite discussion topics and typically takes the form of have/need/want, where developers collectively list features that they’ve developed and intend to upstream (have), features that they are missing (need), and nice-to-have features (want). 
This year, instead of the usual format, we listed features that are intended to ship in FreeBSD 12.0. The compiled list ended up being quite ambitious given how close we are to the beginning of the release cycle, but many individual developers (including myself) have signed up to deliver work. I’m hopeful that most, if not all of it, will make it into the release. After lunch, I attended a discussion led by Matt Ahrens and Alexander Motin on OpenZFS. Of particular interest to me were some observations made regarding the relative quantity and quality of contributions made by different “camps” of OpenZFS users (illumos, FreeBSD and ZoL), and their respective track records of upstreaming enhancements to the OpenZFS project. In part due to the high pace of changes in ZoL, the definition of “upstream” for ZFS has become murky, and of late ZFS changes have been ported directly from ZoL. Alexander discussed some known problems with ZFS on FreeBSD that have been discovered through performance testing. While I’m not familiar with ZFS internals, Alexander noted that ZFS’ write path has poor SMP scalability on FreeBSD owing to some limitations in a certain kernel API called taskqueue(9). I would like to explore this problem further and perhaps integrate a relatively new alternative interface which should perform better. Friday and Saturday were, of course, taken up by BSDCan talks. Friday’s keynote was by Benno Rice, who provided some history of UNIX boot systems as a precursor to some discussion of systemd and the difficulties presented by a user and developer community that actively resist change. The rest of the morning was consumed by talks and passed by quickly. First was Colin Percival’s detailed examination of where the FreeBSD kernel spends time during boot, together with an overview of some infrastructure he added to track boot times. He also provided a list of improvements that have been made since he started taking measurements, and some areas we can further improve. Colin’s existing work in this area has already brought about substantial reductions in boot time; amusingly, one of the remaining large delays comes from the keyboard driver, which contains a workaround for old PS/2 keyboards. While there seems to be general agreement that the workaround is probably no longer needed on most systems, the lingering uncertainty around this prevents us from removing the workaround. This is, sadly, a fairly typical example of an OS maintenance burden, and underscores the need to carefully document hardware bug workarounds. After this talk, I got to see some rather novel demonstrations of system tracing using dwatch, a new utility by Devin Teske, which aims to provide a user-friendly interface to DTrace. After lunch, I attended talks on netdump, a protocol for transmitting kernel dumps over a network after the system has panicked, and on a VPC implementation for FreeBSD. After the talks ended, I headed yet again to the hacker lounge and had some fruitful discussions on early microcode loading (one of my features for FreeBSD 12.0). These led me to reconsider some aspects of my approach and saved me a lot of time. Finally, I continued my debugging session from Wednesday with help from a couple of other developers. Saturday’s talks included a very thorough account by Li-Wen Hsu of his work in organizing a BSD conference in Taipei last year. 
As one of the attendees, I had felt that the conference had gone quite smoothly and was taken aback by the number of details and pitfalls that Li-Wen enumerated during his talk. This was followed by an excellent talk by Baptiste Daroussin on the difficulties one encounters when deploying FreeBSD in new environments. Baptiste offered criticisms of a number of aspects of FreeBSD, some of which hit close to home as they involved portions of the system that I’ve worked on. At the conclusion of the talks, we all gathered in the main lecture hall, where Dan led a traditional and quite lively auction for charity. I managed to snag a Pine64 board and will be getting FreeBSD installed on it the first chance I get. At the end of the auction, we all headed to ByWard for dinner, concluding yet another BSDCan. Thanks to Mark for sharing his experiences at this year's BSDCan. ##News Roundup Transparent network audio with mpd & sndiod Landry Breuil (landry@ when wearing his developer hat) wrote in… I've been a huge fan of MPD over the years to centralize my audio collection, and i've been using it with the http output to stream the music as a radio on the computer i'm currently using… audio_output { type "sndio" name "Local speakers" mixer_type "software" } audio_output { type "httpd" name "HTTP stream" mixer_type "software" encoder "vorbis" port "8000" format "44100:16:2" } this setup worked for years, allows me to stream my home radio to $work by tunnelling the port 8000 over ssh via LocalForward, but that still has some issues: a distinct timing gap between the 'local output' (ie the speakers connected to the machine where MPD is running) and the 'http output', caused by the time it takes to reencode the stream, which is ugly when you walk through the house and sometimes have a 15s delay; mplayer as a client doesn't detect the pauses in the stream and needs to be restarted; i need to configure/start a client on each computer and point it at the sound server url (can do via gmpc shoutcast client plugin…); and it's not that elegant to reencode the stream, and it wastes cpu cycles. So the current scheme is: mpd -> http output -> network -> mplayer -> sndiod on remote machine | -> sndio output -> sndiod on soundserver Fiddling a little bit with mpd outputs and reading the sndio output driver, i remembered sndiod has native network support… and the mpd sndio output allows you to specify a device (it uses SIO_DEVANY by default). So in the end, it's super easy to: enable network support in sndio on the remote machine i want the audio to play on, by adding -L<local ip> to sndiod_flags (i have two audio devices, with an input coming from the webcam): sndiod_flags="-L10.246.200.10 -f rsnd/0 -f rsnd/1"; open pf on port 11025 from the sound server ip: pass in proto tcp from 10.246.200.1 to any port 11025; configure a new output in mpd: audio_output { type "sndio" name "sndio on renton" device "snd@10.246.200.10/0" mixer_type "software" }; and enable the new output in mpd: $mpc enable 2 Output 1 (Local speakers) is disabled Output 2 (sndio on renton) is enabled Output 3 (HTTP stream) is disabled. Results in a big win: no gap anymore with the local speakers, no reencoding, no need to configure a client to play the stream, and i can still probably reproduce the same scheme over ssh from $work using a RemoteForward.
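The RemoteForward idea mentioned at the end is not spelled out in the mail; a rough sketch of one way it might look, using standard ssh -R syntax, a placeholder host name, and sndio's TCP port 11025:

```
# Run from $work: expose the local sndiod (TCP port 11025) to the home
# machine over ssh, so mpd at home could use a device like "snd@localhost/0".
# Host name is a placeholder.
ssh -R 11025:127.0.0.1:11025 home.example.org
```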
mpd -> sndio output 2 -> network -> sndiod on remote machine | -> sndio output 1 -> sndiod on soundserver Thanks ratchov@ for sndiod :) ###MirBSD’s Korn Shell on Plan9 Jehanne Let me start by saying that I’m not really a C programmer. My last public contribution to a POSIX C program was a little improvement to the Snort’s react module back in 2008. So while I know the C language well enough, I do not know anything about the subtleties of the standard library and I have little experience with POSIX semantics. This is not a big issue with Plan 9, since the C library and compiler are not standard anyway, but with Jehanne (a Plan 9 derivative of my own) I want to build a simple, loosely coupled system that can actually run useful free software ported from UNIX. So I ported RedHat’s newlib to Jehanne on top of a new system library I wrote, LibPOSIX, that provides the necessary emulations. I wrote several tests, checking they run the same on Linux and Jehanne, and then I began looking for a real-world, battle tested application to port first. I approached MirBSD’s Korn Shell for several reasons: it is simple, powerful and well written; it has been ported to several different operating systems; it has few dependencies; and it’s the default shell in Android, so it’s really battle tested. I was very confident. I had read the POSIX standard after all! And I had a test suite! I remember, I thought “Given newlib, how hard can it be?” The porting began on September 1, 2017. It was completed by tg on January 5, 2018, 125 nights later. Turns out, my POSIX emulation was badly broken. Not just because of the usual bugs that any piece of C can have: I didn’t understand most POSIX semantics at all! iXsystems ###Static site generator with rsync and lowdown on OpenBSD ssg is a tiny POSIX-compliant shell script with few dependencies: lowdown(1) to parse markdown, rsync(1) to copy temporary files, and entr(1) to watch file changes. It renders Markdown articles into a static website. It copies the current directory to a temporary one in /tmp, skipping .* and _*, renders all Markdown articles to HTML, generates an RSS feed based on links from index.html, extracts the first <h1> tag from every article to generate a sitemap and uses it as a page title, then wraps articles with a single HTML template and copies everything from the temporary directory to $DOCS/. Why not Jekyll or “$X”? ssg is one hundred times smaller than Jekyll. ssg and its dependencies are about 800KB combined. Compare that to 78MB of ruby with Jekyll and all the gems. So ssg can be installed in just a few seconds on almost any Unix-like operating system. Obviously, ssg is tailored for my needs; it has all the features I need and only those I use. Keeping ssg helps you to master your Unix-shell skills: awk, grep, sed, sh, cut, tr. As a web developer you work with lots of text: code and data. So you better master these wonderful tools. Performance 100 pps. On modern computers ssg generates a hundred pages per second. Half of the time goes to markdown rendering and the other half to wrapping articles into the template. I heard good static site generators work—twice as fast—at 200 pps, so there’s lots of performance that can be gained. ;) ###Why does FreeBSD have virtually no (0%) desktop market share? Because someone made a horrible design decision back in 1984. In absolute fairness to those involved, it was an understandable decision, both from a research perspective and from an economic perspective, although likely not from a technology perspective. Why and what.
The decision was taken because the X Window System was intended to run on cheap hardware, and, at the time, that meant reduced functionality in the end-point device with the physical display attached to it. At the same time, another force was acting to also limit X displays to display services only, rather than rolling in both window management and specific widget instances for common operational paradigms. Mostly, common operational paradigms didn’t really exist for windowing systems because they also simply didn’t exist at the time, and no one really knew how people were going to use the things, and so researchers didn’t want to commit future research to a set of hard constraints. So a decision was made: separate the display services from the application at the lowest level of graphics primitives currently in use at the time. The ramifications of this were pretty staggering. First, it guaranteed that all higher level graphics would live on the host side of the X protocol, instead of on the display device side of the protocol. Despite a good understanding of Moore’s law, and the fact that, since no X Terminals existed at the time as hardware, but were instead running as emulations on workstations that had sufficient capability, this put the higher level GUI object libraries — referred to as “widgets” — in host libraries linked into the applications. Second, it guaranteed that display organization and management paradigms would also live on the host side of the protocol — assumed, in contradiction to the previous decision, to be running on the workstation. But, presumably, at some point, as lightweight X Terminals became available, to migrate to a particular host computer managing compute resource login/access services. Between these early decisions reigned chaos. Specifically, the consequences of these decisions have been with us ever since: Look-and-feel are a consequence of the toolkit chosen by the application programmer, rather than a user decision which applies universally to all applications. You could call this “lack of a theme”, and — although I personally despise the idea of customizing or “theming” desktops — this meant that one paradigm chosen by the user would not apply universally across all applications, no matter who had written them. Window management style is a preference. You could call this a more radical version of “theming” — which you will remember, I despise — but a consequence to this is that training is not universal across personnel using such systems, nor is it transferrable. In other words, I can’t send someone to a class, and have them come back and use the computers in the office as a tool, with the computer itself — and the elements not specific to the application itself — disappearing into the background. Both of these ultimately render an X-based system unsuitable for desktops. I can’t pay once for training. Training that I do pay for does not easily and naturally translate between applications. Each new version may radically alter the desktop management paradigm into unrecognizability. Is there hope for the future? Well, the Linux community has been working on something called Wayland, and it is very promising… …In the same way X was “very promising” in 1984, because, unfortunately, they are making exactly the same mistakes X made in 1984, rather than correcting them, now that we have 20/20 hindsight, and know what a mature widget library should look like. So Wayland is screwing up again. 
But hey, it only took us, what, 25 years to get from X in 1987 to Wayland in 2012. Maybe if we try again in 2037, we can get to where Windows was in 1995. ##Beastie Bits New washing machine comes with 7 pages of open source licenses! BSD Jobs Site FreeBSD Foundation Update, May 2018 FreeBSD Journal looking for book reviewers zedenv ZFS Boot Environment Manager Tarsnap ##Feedback/Questions Wouter - Feedback Efraim - OS Suggestion kevr - Raspberry Pi2/FreeBSD/Router on a Stick Vanja - Interview Suggestion Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 252: Goes to 11.2 | BSD Now 252

June 28, 2018 1:34:26 56.72 MB Downloads: 0

FreeBSD 11.2 has been released, setting up an MTA behind Tor, running pfsense on DigitalOcean, one year of C, using OpenBGPD to announce VM networks, the power to serve, and a BSDCan trip report. ##Headlines FreeBSD 11.2-RELEASE Available FreeBSD 11.2 was released today (June 27th) and is ready for download Highlights: OpenSSH has been updated to version 7.5p1. OpenSSL has been updated to version 1.0.2o. The clang, llvm, lldb and compiler-rt utilities have been updated to version 6.0.0. The libarchive(3) library has been updated to version 3.3.2. The libxo(3) library has been updated to version 0.9.0. Major Device driver updates to: cxgbe(4) – Chelsio 10/25/40/50/100 gigabit NICs – version 1.16.63.0 supports T4, T5 and T6 ixl(4) – Intel 10 and 40 gigabit NICs, updated to version 1.9.9-k ng_pppoe(4) – driver has been updated to add support for user-supplied Host-Uniq tags New drivers: + drm-next-kmod driver supporting integrated Intel graphics with the i915 driver. mlx5io(4) – a new IOCTL interface for Mellanox ConnectX-4 and ConnectX-5 10/20/25/40/50/56/100 gigabit NICs ocs_fc(4) – Emulex Fibre Channel 8/16/32 gigabit Host Adapters smartpqi(4) – HP Gen10 Smart Array Controller Family The newsyslog(8) utility has been updated to support RFC5424-compliant messages when rotating system logs The diskinfo(8) utility has been updated to include two new flags, -s which displays the disk identity (usually the serial number), and -p which displays the physical path to the disk in a storage controller. The top(1) utility has been updated to allow filtering on multiple user names when the -U flag is used The umount(8) utility has been updated to include a new flag, -N, which is used to forcefully unmount an NFS mounted filesystem. The ps(1) utility has been updated to display if a process is running with capsicum(4) capability mode, indicated by the flag ‘C’ The service(8) utility has been updated to include a new flag, -j, which is used to interact with services running within a jail(8). The argument to -j can be either the name or numeric jail ID The mlx5tool(8) utility has been added, which is used to manage Connect-X 4 and Connect-X 5 devices supported by mlx5io(4). The ifconfig(8) utility has been updated to include a random option, which when used with the ether option, generates a random MAC address for an interface. The dwatch(1) utility has been introduced The efibootmgr(8) utility has been added, which is used to manipulate the EFI boot manager. The etdump(1) utility has been added, which is used to view El Torito boot catalog information. The linux(4) ABI compatibility layer has been updated to include support for musl consumers. The fdescfs(5) filesystem has been updated to support Linux®-specific fd(4) /dev/fd and /proc/self/fd behavior Support for virtio_console(4) has been added to bhyve(4). The length of GELI passphrases entered when booting a system with encrypted disks is now hidden by default. See the configuration options in geli(8) to restore the previous behavior. 
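A few of the new utility flags mentioned above, shown as one-liners; the device, mount point, and jail names are placeholders:

```
# Examples of the new 11.2 flags described above; names are placeholders.
diskinfo -s ada0                 # disk identity (usually the serial number)
diskinfo -p ada0                 # physical path on the storage controller
umount -N /mnt/nfsshare          # forcefully unmount an NFS filesystem
service -j myjail sshd status    # talk to a service inside the jail "myjail"
```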
In addition to the usual CD/DVD ISO, Memstick, and prebuilt VM images (raw, qcow2, vhd, and vmdk), FreeBSD 11.2 is also available on: Amazon EC2 Google Compute Engine Hashicorp/Atlas Vagrant Microsoft Azure In addition to a generic ARM64 image for devices like the Pine64 and Raspberry Pi 3, specific images are provided for: GUMSTIX BANANAPI BEAGLEBONE CUBIEBOARD CUBIEBOARD2 CUBOX-HUMMINGBOARD RASPBERRY PI 2 PANDABOARD WANDBOARD Full Release Notes ###Setting up an MTA Behind Tor This article will document how to set up OpenSMTPD behind a fully Tor-ified network. Given that Tor’s DNS resolver code does not support MX record lookups, care must be taken for setting up an MTA behind a fully Tor-ified network. OpenSMTPD was chosen because it was easy to modify to force it to fall back to A/AAAA lookups when MX lookups failed with a DNS result code of NOTIMP (4). Note that as of 08 May 2018, the OpenSMTPD project is planning a configuration file language change. The proposed change has not landed. Once it does, this article will be updated to reflect both the old language and the new. The reason to use an MTA behind a fully Tor-ified network is to be able to support email behind the .onion TLD. This setup will only allow us to send and receive email to and from the .onion TLD. Requirements: a fully Tor-ified network; HardenedBSD as the operating system; a server (or VM) running HardenedBSD behind the fully Tor-ified network; and /usr/ports either empty or already pre-populated with the HardenedBSD Ports tree. Why use HardenedBSD? We get all the features of FreeBSD (ZFS, DTrace, bhyve, and jails) with enhanced security through exploit mitigations and system hardening. Tor has a very unique threat landscape and using a hardened ecosystem is crucial to mitigating risks and threats. Also note that this article reflects how I’ve set up my MTA. I’ve included configuration files verbatim. You will need to replace the text that refers to my .onion domain with yours. On 08 May 2018, HardenedBSD’s version of OpenSMTPD just gained support for running an MTA behind Tor. The package repositories do not yet contain the patch, so we will compile OpenSMTPD from ports. Steps Installation Generating Cryptographic Key Material Tor Configuration OpenSMTPD Configuration Dovecot Configuration Testing your configuration Optional: Webmail Access iXsystems https://www.forbes.com/sites/forbestechcouncil/2018/06/21/strings-attached-knowing-when-and-when-not-to-accept-vc-funding/#30f9f18f46ec https://www.ixsystems.com/blog/self-2018-recap/ ###Running pfSense on a Digital Ocean Droplet I love pfSense (and opnSense, no discrimination here). I use it for just about anything, from homelab to large scale deployments, and I’ll give up any fancy <enter brand name fw appliance here> for a pfSense setup on decent hardware. I also love DigitalOcean: if you ever used them, you know why; if you never did, head over and try, you’ll understand why. <shameless plug: head over to JupiterBroadcasting.com, the best technology content out there, they have coupon codes to get you started with DO>. Unfortunately, while DO offers a tremendous amount of useful distros and applications, pfSense isn’t one of them. But, where there’s a will, there’s a way, and here’s how to get pfSense up and running on DO so you can have it as the gatekeeper to your kingdom.
Start by creating a FreeBSD droplet, choose your droplet size (for modest setups, I find the $5 one to be quite awesome): There are many useful things you can do with pfSense on your droplet, from OpenVPN, squid, firewalling, fancy routing, url filtering, dns black listing and much much more. One note though, before we wrap up: You have two ways to initiate the initial setup wizard of the web-configurator: Spin up another droplet, log into it and browse your way to the INTERNAL ip address of the internal NIC you’ve set up. This is the long and tedious way, but it’s also somewhat safer as it eliminates the small window of risk the second method poses. Or: once your WAN address is all set up, your pfSense is ready to accept an https connection to start the initial web-configurator setup. Thing is, there’s a default, well known set of credentials for this initial wizard (admin:pfsense), so there is a slight window of opportunity that someone can swoop in (assuming they know you’ve installed pfSense + your WAN IP address + the exact time window between setting up the WAN interface and completing the wizard) and do <enter scary thing here>. I leave it up to you which of the two paths you’d like to take; either way, once you’re done with the web-configurator wizard, you’ll have a shiny new pfSense installation at your disposal running on your favorite VPS. Hopefully this was helpful for someone, I hope to get a similar post soon detailing how to get FreeNAS up and running on DO. Many thanks to Tubsta and his blogpost as well as to Allan Jude, Kris Moore and Benedict Reuschling for their AWESOME and inspiring podcast, BSD Now. ##News Roundup One year of C It’s now nearly a year since I started writing non-trivial amounts of C code again (the first sokol_gfx.h commit was on 14-Jul-2017), so I guess it’s time for a little retrospective. In the beginning it was more of an experiment: I wanted to see how much I would miss some of the more useful C++ features (for instance namespaces, function overloading, ‘simple’ template code for containers, …), and whether it is possible to write non-trivial codebases in C without going mad. Here are all the github projects I wrote in C: sokol: a slowly growing set of platform-abstraction headers; sokol-samples: examples for Sokol; chips: 8-bit chip emulators; chips-test: tests and examples for the chip emulators, including some complete home computer emulators (minus sound). All in all these are around 32k lines of code (not including 3rd party code like flextGL and HandmadeMath). I think I wrote more C code in the last 10 months than in any other language. So one thing seems to be clear: yes, it’s possible to write a non-trivial amount of C code that does something useful without going mad (and it’s even quite enjoyable I might add). Here are a few things I learned: pick the right language for a problem; C is a perfect match for WebAssembly; C99 is a huge improvement over C89; the dangers of pointers and explicit memory management are overrated; less boilerplate code; less language feature ‘anxiety’. Conclusion All in all my “C experiment” is a success. For a lot of problems, picking C over C++ may be the better choice since C is a much simpler language (btw, did you notice how there are hardly any books, conferences or discussions about C despite being a fairly popular language? Apart from the neverending bickering about undefined behaviour from the compiler people of course ;) There simply isn’t much to discuss about a language that can be learned in an afternoon.
I don’t like some of the old POSIX or Linux APIs as much as the next guy (e.g. ioctl(), the socket API or some of the CRT library functions), but that’s an API design problem, not a language problem. It’s possible to build friendly C APIs with a bit of care and thinking, especially when C99’s designated initialization can be used (C++ should really make sure that the full C99 language can be used from inside C++ instead of continuing to wander off into an entirely different direction). ###Configuring OpenBGPD to announce VM’s virtual networks We use BGP quite heavily at work, and even though I’m not interacting with that directly, it feels like it’s something very useful to learn at least on some basic level. The most effective and fun way of learning technology is finding some practical application, so I decided to see if it could help to improve networking management for my Virtual Machines. My setup is fairly simple: I have a host that runs bhyve VMs and I have a desktop system from where I ssh to VMs, both hosts run FreeBSD. All VMs are connected to each other through a bridge and have a common network 10.0.1/24. The point of this exercise is to be able to ssh to these VMs from the desktop without adding static routes and without adding vmhost’s external interfaces to the VMs bridge. I’ve installed openbgpd on both hosts and configured it like this: vmhost: /usr/local/etc/bgpd.conf AS 65002 router-id 192.168.87.48 fib-update no network 10.0.1.1/24 neighbor 192.168.87.41 { descr "desktop" remote-as 65001 } Here, router-id is set to vmhost’s IP address in my home network (192.168.87/24), and fib-update no is set to forbid routing table updates, which I initially set for testing but am keeping since vmhost is not supposed to learn new routes from desktop anyway. network announces my VMs network and neighbor describes my desktop box. Now the desktop box: desktop: /usr/local/etc/bgpd.conf AS 65001 router-id 192.168.87.41 fib-update yes neighbor 192.168.87.48 { descr "vmhost" remote-as 65002 } It’s pretty similar to vmhost’s bgpd.conf, but no networks are announced here, and fib-update is set to yes because the whole point is to get VM routes added. Both hosts have to have the openbgpd service enabled: /etc/rc.conf.local openbgpd_enable="YES" Conclusion As mentioned already, a similar result could be achieved without using BGP by using either static routes or bridging interfaces differently, but the purpose of this exercise is to get some basic hands-on experience with BGP. Right now I’m looking into extending my setup in order to try a more complex BGP schema. I’m thinking about adding some software switches in front of my VMs or maybe adding a second VM host (if budget allows). You’re welcome to comment if you have some ideas how to extend this setup for educational purposes in the context of BGP and networking. As a side note, I really like openbgpd so far. Its configuration file format is clean and simple, documentation is good, error and information messages are clear, and the CLI has intuitive syntax. Digital Ocean ###The Power to Serve All people within the IT industry should know where the slogan “The Power To Serve” is exposed every day to millions of people. But maybe that is too much wishful thinking from me. Still, without “The Power To Serve” the IT industry today would look totally different. Companies like Apple, Juniper, Cisco and even WhatsApp would not exist in their current form.
I provide IT architecture services to make your complex IT landscape manageable and I love to solve complex security and privacy challenges. Complex challenges where people, processes and systems are heavily interrelated. For this knowledge intensive work I often run some IT experiments. When you run experiments nowadays you have a choice: rent some cloud based services or DIY (Do IT Yourself) on premise. Running your own development experiments on your own infrastructure can be time consuming. However, smart automation saves time and money. And by creating your own CICD pipeline (Continuous Integration, Continuous Deployment) you stay on top of core infrastructure developments. Even hands-on. Knowing how things work from a technical ‘hands-on’ perspective gives great advantages when it comes to solving complex business IT problems. Making a clear distinction between a business problem and an IT problem is useless. Business and IT problems are related. Sometimes causally related, but more often indirectly through one or more non linear feedback loops. Almost every business depends on IT systems. Bad IT often means that your customers will leave your business. One of the great things about FreeBSD for me is still FreeBSD jails. In 2015 I had the luck to attend a presentation by the legendary hacker Poul-Henning Kamp. Check his BSD bio to see what he has done for the FreeBSD community! FreeBSD jails are a light way to virtualize your system without enormous overhead. Now that the development on Linux for LXC/LXD is more mature (lxd is the next generation system container manager on linux) there is finally an alternative for a nice chroot Linux based system again. At least when you do not need the overhead and management complexity that comes with Kubernetes or Docker. FreeBSD means control and quality for me. When there is an open source package I need, I want to install it from source. It gives me more control and always some extra knowledge on how things work. So no precompiled binaries for me on my BSD systems! If a build on FreeBSD fails, most of the time this is an alert regarding quality for me. If a complex OSS package is not available at all in the FreeBSD ports collection there should be a reason for it. Is it really that nobody in the world wants to do this dirty maintenance work? Or is there another reason why running this software on FreeBSD is not possible… There are currently 32644 ports available on FreeBSD, so all the major programming languages, databases and middleware libraries are present. The FreeBSD organization is a mature organization, and since this is one of the largest OSS projects worldwide, learning how this community manages to keep innovating and to create and maintain software is a good entry point for learning how complex IT systems function. FreeBSD is of course BSD licensed. It worked well! There is still a strong community with lots of strong commercial sponsors around the community. Of course: sometimes a GPL license makes more sense. So beside FreeBSD I also love GPL software and the rationale and principles behind it. So my hope is that maybe within the next 25 years the hard battle between the BSD and GPL churches will be more rationalized and normalized. Principles are good, but as all good IT architects know: with good principles alone you never make a good system. So use requirements and not only principles to figure out what OSS license fits your project. There is never one size fits all. June 19, 1993 was the day the official name for FreeBSD was agreed upon.
So this blog is written to celebrate the 25th anniversary of FreeBSD. ###Dave’s BSDCan trip report So far, only one person has bothered to send in a BSDCan trip report. Our warmest thanks to Dave for doing his part. Hello guys! During the last show, you asked for a trip report regarding BSDCan 2018. This was my first time attending BSDCan. However, BSDCan was my second BSD conference overall, my first being vBSDCon 2017 in Reston, VA. Arriving early Thursday evening and after checking into the hotel, I headed straight to the Red Lion for the registration, picked up my badge and swag and then headed towards the ‘DMS’ building for the newbies talk. The only thing is, I couldn’t find the DMS building! Fortunately I found a BSDCan veteran who was heading there themselves. My only suggestion is to include the full building name and address on the BSDCan web site, or even a link to Google maps to help out with the navigation. The on-campus street maps didn’t have ‘DMS’ written on them anywhere. But I digress. Once I made it to the newbies talk hosted by Dan Langille and Michael W Lucas, it highlighted places to meet, an overview of what is happening, details about the ‘BSDCan widow/widower tours’ and most importantly, the 6-2-1 rule! The following morning, we were presented with tea/coffee, muffins and other goodies to help prepare us for the day ahead. The first talk, “The Tragedy of systemd”, covered what systemd did wrong and how the BSD community could improve on the ideas behind it. With the exception of Michael W Lucas’ SSH Key Management talk and Kirk McKusick’s The Evolution of FreeBSD Governance talk, I pretty much attended all of the ZFS talks, including the lunchtime BoF session hosted by Allan Jude. Coming from FreeNAS and being involved in the community, this is where my main interest and motivation lies. Since then I have been able to share some of that information with the FreeNAS community forums and chatroom. I also attended the “Speculating about Intel” lunchtime BoF session hosted by Theo de Raadt, which proved to be “interesting”. The talks ended with the wrap up session with a few words from Dan, covering the record attendance and making it very clear there “was no cabal”. This was followed by the handing over of Groff the BSD goat to a new owner, thank you’s from the FreeBSD Foundation to various community committers and maintainers, finally ending with the charity auction, where things like a Canadian $20 bill sold for $40, along with a signed FreeBSD Foundation shirt originally worn by George Neville-Neil, a lost laptop charger, Michael’s used gelato spoon, various books, the last cookie and, more importantly, the second to last cookie! After the auction, we all headed to the Red Lion for food and drinks, sponsored by iXsystems. I would like to thank the BSDCan organizers, speakers and sponsors for a great conference. I certainly hope to attend next year! Regards, Dave (aka m0nkey) Thanks to Dave for sharing his experiences with us and our viewers ##Beastie Bits Robert Watson (from 2008) on how much FreeBSD is in Mac OS X Why Intel Skylake CPUs are sometimes 50% slower than older CPUs Kristaps Dzonsons is looking for somebody to maintain this as mentioned at this link camcontrol(8) saves the day again! Formatting floppy disks in a USB floppy disk drive 32+ great indie games now playable on OpenBSD -current; 7 currently on sale! Warsaw BSD User Group.
June 27 2018 18:30-21:00, Wheel Systems Office, Aleje Jerozolimskie 178, Warsaw Tarsnap ##Feedback/Questions Ron - Adding a disk to ZFS Marshall - zfs question Thomas - Allan, the myth perpetuator Ross - ZFS IO stats per dataset Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 251: Crypto HAMMER | BSD Now 251

June 21, 2018 1:28:43 53.3 MB Downloads: 0

DragonflyBSD’s hammer1 encrypted master/slave setup, second part of our BSDCan recap, NomadBSD 1.1-RC1 available, OpenBSD adds an LDAP client to base, FreeBSD gets pNFS support, Intel FPU Speculation Vulnerability confirmed, and what some Unix command names mean. ##Headlines DragonflyBSD: Towards a HAMMER1 master/slave encrypted setup with LUKS I just wanted to share my experience with setting up DragonFly master/slave HAMMER1 PFS’s on top of LUKS. So after a long time using a Synology for my NFS needs, I decided it was time to rethink my setup a little since I had several issues with it: you cannot run NFS on top of encrypted partitions easily; I suspect I am having some data corruption (bitrot) on the ext4 filesystem; the NIC was stuck at 100 Mbps instead of 1 Gbps even after swapping cables, switches, you name it; and it’s proprietary. I have been playing with DragonFly in the past and knew about HAMMER, now I just had the perfect excuse to actually use it in production :) After setting up the OS, creating the LUKS partition and HAMMER FS was easy: kdload dm cryptsetup luksFormat /dev/serno/<id1> cryptsetup luksOpen /dev/serno/<id1> fort_knox newfs_hammer -L hammer1_secure_master /dev/mapper/fort_knox cryptsetup luksFormat /dev/serno/<id2> cryptsetup luksOpen /dev/serno/<id2> fort_knox_slave newfs_hammer -L hammer1_secure_slave /dev/mapper/fort_knox_slave Mount the 2 drives: mount /dev/mapper/fort_knox /fort_knox mount /dev/mapper/fort_knox_slave /fort_knox_slave You can now put your data under /fort_knox Now, off to setting up the replication, first get the shared-uuid of /fort_knox hammer pfs-status /fort_knox Create a PFS slave “linked” to the master hammer pfs-slave /fort_knox_slave/pfs/slave shared-uuid=f9e7cc0d-eb59-10e3-a5b5-01e6e7cefc12 And then stream your data to the slave PFS! hammer mirror-stream /fort_knox /fort_knox_slave/pfs/slave After that, setting up NFS is fairly trivial even though I had problems with the /etc/exports syntax, which is different from Linux. There are a few things I wish would be better, though nothing too problematic or without workarounds: I cannot unlock LUKS partitions at boot time afaik (an acceptable tradeoff for the added security LUKS gives me vs my old Synology setup), but this forces me to run a script to unlock LUKS, mount hammer and start mirror-stream at each boot; there is no S1/S3 sleep, so I made a script to shut down the system when there are no network neighbors to serve the NFS; as my system isn’t online 24/7 for energy reasons, I guess I will have to run hammer cleanup myself from time to time; and some uncertainty because hey, it’s kind of exotic, but exciting too :) Overall, I am happy, HAMMER1 and PFS are looking really good, DragonFly is a neat Unix and the community is super friendly (Matthew Dillon actually provided me with a kernel patch to fix the broken ACPI on the PC holding this setup, many thanks!), the system is still a “work in progress” but it is already serving my files as I write this post. Let’s see in 6 months how it goes in the longer run! Helpful resources: https://www.dragonflybsd.org/docs/how_to_implement_hammer_pseudo_file_system__40___pfs___41___slave_mirroring_from_pfs_master/ ###BSDCan 2018 Recap As promised, here is the second part of our BSDCan report, covering the conference proper. The last tutorials/devsummit of that day led directly into the conference, as people could pick up their registration packs at the Red Lion and have a drink with fellow BSD folks.
Allan and I were there only briefly, as we wanted to get back to the “Newcomers orientation and mentorship” session led by Michael W. Lucas. This session is intended for people that are new to BSDCan (maybe their first BSD conference ever?) and may have questions. Michael explained everything from the 6-2-1 rule (hours of sleep, meals per day, and number of showers that attendees should have at a minimum), to the partner and widowers program (led by his wife Liz), to the sessions that people should not miss (opening, closing, and hallway track). Old-time BSDCan folks were asked to stand up so that people can recognize them and ask them any questions they might have during the conference. The session was well attended. Afterwards, people went for dinner in groups, a big one led by Michael Lucas to his favorite Shawarma place, followed by gelato (of course). This allowed newbies to mingle over dinner and ice cream, creating a welcoming atmosphere. The next day, after Dan Langille opened the conference, Benno Rice gave the keynote presentation about “The Tragedy of Systemd”. Benedict went to the following talks: “Automating Network Infrastructures with Ansible on FreeBSD” in the DevSummit track. A good talk that connected well with his Ansible tutorial and even allowed some discussions among participants. “All along the dwatch tower”: Devin delivered a well-prepared talk. I first thought that the number of slides would not fit into the time slot, but she even managed to give a demo of her work, which was well received. The dwatch tool she wrote should make it easy for people to get started with DTrace without learning too much about the syntax at first. The visualizations were certainly nice to see, combining different tools together in a new way. ZFS BoF, led by Allan and Matthew Ahrens SSH Key Management by Michael W. Lucas. Yet another great talk where I learned a lot. I did not get to the SSH CA chapter in the new SSH Mastery book, so this was a good way to whet my appetite for it and motivated me to look into creating one for the cluster that I’m managing. The rest of the day was spent at the FreeBSD Foundation table, talking to various folks. Then, Allan and I had an interview with Kirk McKusick for National FreeBSD Day, then we had a core meeting, followed by a core dinner. Day 2: “Flexible Disk Use in OpenZFS”: Matthew Ahrens talking about the feature he is implementing to expand a RAID-Z with a single disk, as well as device removal. Allan’s talk about his efforts to implement ZSTD in OpenZFS as another compression algorithm. I liked his overview slides with the numbers comparing the algorithms for their effectiveness and his personal story about the sometimes rocky road to get the feature implemented. “zrepl - ZFS replication” by Christian Schwarz was well prepared and even had a demo to show what his snapshot replication tool can do. We covered it on the show before and people can find it under sysutils/zrepl. Feedback and help are welcome. “The Evolution of FreeBSD Governance” by Kirk McKusick was yet another great talk by him covering the early days of FreeBSD until today, detailing some of the progress and challenges the project faced over the years in terms of leadership and governance. This is an ongoing process that everyone in the community should participate in to keep the project healthy and infused with fresh blood. The closing session and auction were funny and great as always. All in all, yet another amazing BSDCan.
Thank you Dan Langille and your organizing team for making it happen! Well done. Digital Ocean ###NomadBSD 1.1-RC1 Released The first – and hopefully final – release candidate of NomadBSD 1.1 is available! Changes The base system has been upgraded to FreeBSD 11.2-RC3 EFI booting has been fixed. Support for modern Intel GPUs has been added. Support for installing packages has been added. Improved setup menu. More software packages: benchmarks/bonnie++ DSBDisplaySettings DSBExec DSBSu mail/thunderbird net/mosh ports-mgmt/octopkg print/qpdfview security/nmap sysutils/ddrescue sysutils/fusefs-hfsfuse sysutils/fusefs-sshfs sysutils/sleuthkit www/lynx x11-wm/compton x11/xev x11/xterm Many improvements and bugfixes The image and instructions can be found here. ##News Roundup LDAP client added to -current CVSROOT: /cvs Module name: src Changes by: reyk@cvs.openbsd.org 2018/06/13 09:45:58 Log message: Import ldap(1), a simple ldap search client. We have an ldapd(8) server and ypldap in base, so it makes sense to have a simple LDAP client without depending on the OpenLDAP package. This tool can be used in an ssh(1) AuthorizedKeysCommand script. With feedback from many including millert@ schwarze@ gilles@ dlg@ jsing@ OK deraadt@ Status: Vendor Tag: reyk Release Tags: ldap_20180613 N src/usr.bin/ldap/Makefile N src/usr.bin/ldap/aldap.c N src/usr.bin/ldap/aldap.h N src/usr.bin/ldap/ber.c N src/usr.bin/ldap/ber.h N src/usr.bin/ldap/ldap.1 N src/usr.bin/ldap/ldapclient.c N src/usr.bin/ldap/log.c N src/usr.bin/ldap/log.h No conflicts created by this import ###Intel® FPU Speculation Vulnerability Confirmed Earlier this month, Philip Guenther (guenther@) committed (to amd64 -current) a change from lazy to semi-eager FPU switching to mitigate against rumored FPU state leakage in Intel® CPUs. Theo de Raadt (deraadt@) discussed this in his BSDCan 2018 session. Using information disclosed in Theo’s talk, Colin Percival developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the official announcement of the vulnerability. FPU change in FreeBSD Summary: System software may utilize the Lazy FP state restore technique to delay the restoring of state until an instruction operating on that state is actually executed by the new process. Systems using Intel® Core-based microprocessors may potentially allow a local process to infer data utilizing Lazy FP state restore from another process through a speculative execution side channel. Description: System software may opt to utilize Lazy FP state restore instead of eager save and restore of the state upon a context switch. Lazy restored states are potentially vulnerable to exploits where one process may infer register values of other processes through a speculative execution side channel that infers their value. · CVSS - 4.3 Medium CVSS:3.0/AV:L/AC:L/PR:N/UI:N/S:C/C:L/I:N/A:N Affected Products: Intel® Core-based microprocessors. Recommendations: If an XSAVE-enabled feature is disabled, then we recommend either its state component bitmap in the extended control register (XCR0) is set to 0 (e.g. XCR0[bit 2]=0 for AVX, XCR0[bits 7:5]=0 for AVX512) or the corresponding register states of the feature should be cleared prior to being disabled. Also for relevant states (e.g. x87, SSE, AVX, etc.), Intel recommends system software developers utilize Eager FP state restore in lieu of Lazy FP state restore. 
Acknowledgements: Intel would like to thank Julian Stecklina from Amazon Germany, Thomas Prescher from Cyberus Technology GmbH (https://www.cyberus-technology.de/), Zdenek Sojka from SYSGO AG (http://sysgo.com), and Colin Percival for reporting this issue and working with us on coordinated disclosure. iXsystems iX Ad Spot iX Systems - BSDCan 2018 Recap ###FreeBSD gets pNFS support Merge the pNFS server code from projects/pnfs-planb-server into head. This code merge adds a pNFS service to the NFSv4.1 server. Although it is a large commit it should not affect behaviour for a non-pNFS NFS server. Some documentation on how this works can be found at: http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt and will hopefully be turned into a proper document soon. This is a merge of the kernel code. Userland and man page changes will come soon, once the dust settles on this merge. It has passed a "make universe", so I hope it will not cause build problems. It also adds NFSv4.1 server support for the "current stateid". Here is a brief overview of the pNFS service: A pNFS service separates the Read/Write operations from all the other NFSv4.1 Metadata operations. It is hoped that this separation allows a pNFS service to be configured that exceeds the limits of a single NFS server for storage capacity and/or I/O bandwidth. It is possible to configure mirroring within the data servers (DSs) so that the data storage file for an MDS file will be mirrored on two or more of the DSs. When this is used, failure of a DS will not stop the pNFS service and a failed DS can be recovered once repaired while the pNFS service continues to operate. Although two-way mirroring would be the norm, it is possible to set a mirroring level of up to four or the number of DSs, whichever is less. The Metadata server will always be a single point of failure, just as a single NFS server is. A Plan B pNFS service consists of a single MetaData Server (MDS) and K Data Servers (DS), all of which are recent FreeBSD systems. Clients will mount the MDS as they would a single NFS server. When files are created, the MDS creates a file tree identical to what a single NFS server creates, except that all the regular (VREG) files will be empty. As such, if you look at the exported tree on the MDS directly on the MDS server (not via an NFS mount), the files will all be of size 0. Each of these files will also have two extended attributes in the system attribute name space: pnfsd.dsfile - This extended attribute stores the information that the MDS needs to find the data storage file(s) on DS(s) for this file. pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime and Change attributes for the file, so that the MDS doesn't need to acquire the attributes from the DS for every Getattr operation. For each regular (VREG) file, the MDS creates a data storage file on one (or more if mirroring is enabled) of the DSs in one of the "dsNN" subdirectories. The name of this file is the file handle of the file on the MDS in hexadecimal so that the name is unique. The DSs use subdirectories named "ds0" to "dsN" so that no one directory gets too large. The value of "N" is set via the sysctl vfs.nfsd.dsdirsize on the MDS, with the default being 20. For production servers that will store a lot of files, this value should probably be much larger. It can be increased when the "nfsd" daemon is not running on the MDS, once the "dsK" directories are created.
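For illustration, raising that limit is just a sysctl on the MDS. The knob name vfs.nfsd.dsdirsize comes from the commit message above; the value 100 is an arbitrary example and, as noted, it should only be changed while the nfsd daemon is stopped:

    # on the MDS, with nfsd stopped
    sysctl vfs.nfsd.dsdirsize=100
    # make it persistent across reboots
    echo 'vfs.nfsd.dsdirsize=100' >> /etc/sysctl.conf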
For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces of information to the client that allows it to do I/O directly to the DS. DeviceInfo - This is relatively static information that defines what a DS is. The critical bits of information returned by the FreeBSD server is the IP address of the DS and, for the Flexible File layout, that NFSv4.1 is to be used and that it is "tightly coupled". There is a "deviceid" which identifies the DeviceInfo. Layout - This is per file and can be recalled by the server when it is no longer valid. For the FreeBSD server, there is support for two types of layout, call File and Flexible File layout. Both allow the client to do I/O on the DS via NFSv4.1 I/O operations. The Flexible File layout is a more recent variant that allows specification of mirrors, where the client is expected to do writes to all mirrors to maintain them in a consistent state. The Flexible File layout also allows the client to report I/O errors for a DS back to the MDS. The Flexible File layout supports two variants referred to as "tightly coupled" vs "loosely coupled". The FreeBSD server always uses the "tightly coupled" variant where the client uses the same credentials to do I/O on the DS as it would on the MDS. For the "loosely coupled" variant, the layout specifies a synthetic user/group that the client uses to do I/O on the DS. The FreeBSD server does not do striping and always returns layouts for the entire file. The critical information in a layout is Read vs Read/Writea and DeviceID(s) that identify which DS(s) the data is stored on. At this time, the MDS generates File Layout layouts to NFSv4.1 clients that know how to do pNFS for the non-mirrored DS case unless the sysctl vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File layouts are generated. The mirrored DS configuration always generates Flexible File layouts. For NFS clients that do not support NFSv4.1 pNFS, all I/O operations are done against the MDS which acts as a proxy for the appropriate DS(s). When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy. If the DS is on the same machine, the MDS/DS will do the RPC on the DS as a proxy and so on, until the machine runs out of some resource, such as session slots or mbufs. As such, DSs must be separate systems from the MDS. *** ###[What does {some strange unix command name} stand for?](http://www.unixguide.net/unix/faq/1.3.shtml) + awk = "Aho Weinberger and Kernighan" + grep = "Global Regular Expression Print" + fgrep = "Fixed GREP". + egrep = "Extended GREP" + cat = "CATenate" + gecos = "General Electric Comprehensive Operating Supervisor" + nroff = "New ROFF" + troff = "Typesetter new ROFF" + tee = T + bss = "Block Started by Symbol + biff = "BIFF" + rc (as in ".cshrc" or "/etc/rc") = "RunCom" + Don Libes' book "Life with Unix" contains lots more of these tidbits. *** ##Beastie Bits + [RetroBSD: Unix for microcontrollers](http://retrobsd.org/wiki/doku.php) + [On the matter of OpenBSD breaking embargos (KRACK)](https://marc.info/?l=openbsd-tech&m=152910536208954&w=2) + [Theo's Basement Computer Paradise (1998)](https://zeus.theos.com/deraadt/hosts.html) + [Airport Extreme runs NetBSD](https://jcs.org/2018/06/12/airport_ssh) + [What UNIX shell could have been](https://rain-1.github.io/shell-2.html) *** Tarsnap ad *** ##Feedback/Questions + We need more feedback and questions. Please email feedback@bsdnow.tv + Also, many of you owe us BSDCan trip reports! 
We have shared what our experience at BSDCan was like, but we want to hear about yours. What can we do better next year? What was it like being there for the first time? + [Jason writes in](https://slexy.org/view/s205jU58X2) + https://www.wheelsystems.com/en/products/wheel-fudo-psm/ + [June 19th was National FreeBSD Day](https://twitter.com/search?src=typd&q=%23FreeBSDDay) *** - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to [feedback@bsdnow.tv](mailto:feedback@bsdnow.tv) ***

Episode 250: BSDCan 2018 Recap | BSD Now 250

June 14, 2018 1:41:10 60.89 MB Downloads: 0

TrueOS becoming a downstream fork with Trident, our BSDCan 2018 recap, HardenedBSD Foundation founding efforts, VPN with OpenIKED on OpenBSD, FreeBSD on a System76 Galago Pro, and hardware accelerated crypto on Octeons. ##Headlines## TrueOS to Focus on Core Operating System The TrueOS Project has some big plans in the works, and we want to take a minute and share them with you. Many have come to know TrueOS as the “graphical FreeBSD” that makes things easy for newcomers to the BSDs. Today we’re announcing that TrueOS is shifting our focus a bit to become a cutting-edge operating system that keeps all of the stability that you know and love from ZFS (OpenZFS) and FreeBSD, and adds additional features to create a fresh, innovative operating system. Our goal is to create a core-centric operating system that is modular, functional, and perfect for do-it-yourselfers and advanced users alike. TrueOS will become a downstream fork that will build on FreeBSD by integrating new software technologies like OpenRC and LibreSSL. Work has already begun which allows TrueOS to be used as a base platform for other projects, including JSON-based manifests, integrated Poudriere / pkg tools and much more. We’re planning on a six month release cycle to keep development moving and fresh, allowing us to bring you hot new features to ZFS, bhyve and related tools in a timely manner. This makes TrueOS the perfect fit to serve as the basis for building other distributions. Some of you are probably asking yourselves “But what if I want to have a graphical desktop?” Don’t worry! We’re making sure that everyone who knows and loves the legacy desktop version of TrueOS will be able to continue using a FreeBSD-based, graphical operating system in the future. For instance, if you want to add KDE, just use sudo pkg install kde and voila! You have your new shiny desktop. Easy right? This allows us to get back to our roots of being a desktop agnostic operating system. If you want to add a new desktop environment, you get to pick the one that best suits your use. We know that some of you will still be looking for an out-of-the-box solution similar to legacy PC-BSD and TrueOS. We’re happy to announce that Project Trident will take over graphical FreeBSD development going forward. Not much is going to change in that regard other than a new name! You’ll still have Lumina Desktop as a lightweight and feature-rich desktop environment and tons of utilities from the legacy TrueOS toolchain like sysadm and AppCafe. There will be migration paths available for those that would like to move to other FreeBSD-based distributions like Project Trident or GhostBSD. We look forward to this new chapter for TrueOS and hope you will give the new edition a spin! Tell us what you think about the new changes by leaving us a comment. Don’t forget you can ask us questions on our Twitter and be a part of our community by joining the new TrueOS Forums when they go live in about a week. Thanks for being a loyal fan of TrueOS. ###Project Trident FAQ Q: Why did you pick the name “Project Trident”? A: We were looking for a name that was unique, yet would still relate to the BSD community. Since Beastie (the FreeBSD mascot) is always pictured with a trident, it felt like that would be a great name. Q: Where can users go for technical support? A: At the moment, Project Trident will continue sharing the TrueOS community forums and Telegram channels. We are currently evaluating dedicated options for support channels in the future. 
Q: Can I help contribute to the project? A: We are always looking for developers who want to join the project. If you’re not a developer you can still help, as a community project we will be more reliant on contributions from the community in the form of how-to guides and other user-centric documentation and support systems. Q: How is the project supported financially? A: Project Trident is sponsored by the community, from both individuals and corporations. iXsystems has stepped up as the first enterprise-level sponsor of the project, and has been instrumental in getting Project Trident up and running. Please visit the Sponsors page to see all the current sponsors. Q: How can I help support the project financially? A: Several methods exist, from one time or recurring donations via Paypal to limited time swag t-shirt campaigns during the year. We are also looking into more alternative methods of support, so please visit the Sponsors page to see all the current methods of sponsorship. Q: Will there be any transparency of the financial donations and expenditures? A: Yes, we will be totally open with how much money comes into the project and what it is spent on. Due to concerns of privacy, we will not identify individuals and their donation amounts unless they specifically request to be identified. We will release a monthly overview in/out ledger, so that community members can see where their money is going. Relationship with TrueOS Project Trident does have very close ties to the TrueOS project, since most of the original Project Trident developers were once part of the TrueOS project before it became a distribution platform. For users of the TrueOS desktop, we have some additional questions and answers below. Q: Do we need to be at a certain TrueOS install level/release to upgrade? A: As long as you have a TrueOS system which has been updated to at least the 18.03 release you should be able to just perform a system update to be automatically upgraded to Project Trident. Q: Which members moved from TrueOS to Project Trident? A: Project Trident is being led by prior members of the TrueOS desktop team. Ken and JT (development), Tim (documentation) and Rod (Community/Support). Since Project Trident is a community-first project, we look forward to working with new members of the team. iXsystems ###BSDCan BSDCan finished Saturday last week It started with the GoatBoF on Tuesday at the Royal Oak Pub, where people had a chance to meet and greet. Benedict could not attend due to an all-day FreeBSD Foundation meeting and and even FreeBSD Journal Editorial Board meeting. The FreeBSD devsummit was held the next two days in parallel to the tutorials. Gordon Tetlow, who organized the devsummit, opened the devsummit. Deb Goodkin from the FreeBSD Foundation gave the first talk with a Foundation update, highlighting current and future efforts. Li-Wen Hsu is now employed by the Foundation to assist in QA work (Jenkins, CI/CD) and Gordon Tetlow has a part-time contract to help secteam as their secretary. Next, the FreeBSD core team (among them Allan and Benedict) gave a talk about what has happened this last term. With a core election currently running, some of these items will carry over to the next core team, but there were also some finished ones like the FCP process and FreeBSD members initiative. People in the audience asked questions on various topics of interest. 
After the coffee break, the release engineering team gave a talk about their efforts in terms of making releases happen on time and with good quality. Benedict had to give his Ansible tutorial in the afternoon, which had roughly 15 people attending. Most of them were beginners, so we could get some good discussions going, and I also learned a few new tricks. The overall feedback was positive and one attendee even asked what I’m going to teach next year. The second day of the FreeBSD devsummit began with Gordon Tetlow giving an insight into the FreeBSD Security team (aka secteam). He gave an overview of secteam members and responsibilities, explaining the process based on a long-past advisory. Developers were encouraged to help out secteam. NDAs and proper disclosure of vulnerabilities were also discussed, and the audience had some feedback and questions. When the coffee break was over, the FreeBSD 12.0 planning session happened. A Google doc served as a collaborative way of gathering features and things left to do. People signed up for it or were volunteered. Some features won’t make it into 12.0 as they are not 100% ready for prime time and need a few more rounds of testing and bugfixing. Still, 12.0 will have some compelling features. A 360° group picture was taken after lunch, and then people split up into the working groups for the afternoon or started hacking in the UofO Henderson residence. Benedict and Allan both attended the OpenZFS working group, led by Matt Ahrens. He presented the completed and outstanding work in FreeBSD, without spoiling too much of the ZFS presentations of various people that happened later at the conference. Benedict joined the boot code session a bit late (hallway track is the reason) when most things seemed to have already been discussed. BSDCan 2018 — Ottawa (In Pictures) iXsystems Photos from BSDCan 2018 ##News Roundup June HardenedBSD Foundation Update We at HardenedBSD are working towards starting up a 501(c)(3) not-for-profit organization in the USA. Setting up this organization will allow future donations to be tax deductible. We’ve made progress and would like to share with you the current state of affairs. We have identified, sent invitations out, and received acceptance letters from six people who will serve on the HardenedBSD Foundation Board of Directors. You can find their bios below. In the latter half of June 2018 or the first half of July 2018, we will meet for the first time as a board and formally begin the process of creating the documentation needed to submit to the local, state, and federal tax services. Here’s a brief introduction to those who will serve on the board: W. Dean Freeman (Advisor): Dean has ten years of professional experience with deploying and securing Unix and networking systems, including assessing systems security for government certification and assessing the efficacy of security products. He was introduced to Unix via FreeBSD 2.2.8 on an ISP shell account as a teenager. Formerly, he was the Snort port maintainer for FreeBSD while working in the Sourcefire VRT, and has contributed entropy-related patches to the FreeBSD and HardenedBSD projects – a topic on which he presented at vBSDCon 2017. Ben La Monica (Advisor): Ben is a Senior Technology Manager of Software Engineering at Morningstar, Inc. and has been developing software for over 15 years in a variety of languages. He advocates open source software and enjoys tinkering with electronics and home automation. George Saylor (Advisor): George is a Technical Director at G2, Inc. Mr.
Saylor has over 28 years of information systems and security experience in a broad range of disciplines. His core focus areas are automation and standards in the event correlation space as well as penetration and exploitation of computer systems. Mr. Saylor was also a co-founder of the OpenSCAP project. Virginia Suydan (Accountant and general administrator): Accountant and general administrator for the HardenedBSD Foundation. She has worked with Shawn Webb for tax and accounting purposes for over six years. Shawn Webb (Director): Co-founder of HardenedBSD and all-around infosec wonk. He has worked and played in the infosec industry, doing both offensive and defensive research, for around fifteen years. He loves open source technologies and likes to frustrate the bad guys. Ben Welch (Advisor): Ben is currently a Security Engineer at G2, Inc. He graduated from Pennsylvania College of Technology with a Bachelor’s in Information Assurance and Security. Ben likes long walks, beaches, candlelight dinners, and attending various conferences like BSides and ShmooCon. ###Your own VPN with OpenIKED & OpenBSD Remote connectivity to your home network is something I think a lot of people find desirable. Over the years, I’ve just established an SSH tunnel and used it as a SOCKS proxy, sending my traffic through that. It’s a nice solution for a “poor man’s VPN”, but it can be a bit clunky, and it’s not great having to expose SSH to the world, even if you make sure to lock everything down. I set out the other day to finally do it properly. I’d come across this great post by Gordon Turner: OpenBSD 6.2 VPN Endpoint for iOS and macOS Whilst it was exactly what I was looking for, it outlined how to set up an L2TP VPN. Really, I wanted IKEv2 for performance and security reasons (I won’t elaborate on this here; if you’re curious about the differences, there’s a lot of content out on the web explaining this). The client systems I’d be using have native support for IKEv2 (iOS, macOS, other BSD systems). But I couldn’t find any tutorials in the same vein. So, let’s get stuck in! A quick note ✍️ This guide will walk through the setup of an IKEv2 VPN using OpenIKED on OpenBSD. It will detail a “road warrior” configuration, and use a PSK (pre-shared key) for authentication. I’m sure it can be easily adapted to work on any other platforms that OpenIKED is available on, but keep in mind my steps are specifically for OpenBSD. Server Configuration As with all my home infrastructure, I crafted this set-up declaratively. So, I had the deployment of the VM set up in Terraform (deployed on my private Triton cluster), and wrote the configuration in Ansible, then tied them together using radekg/terraform-provisioner-ansible. One of the reasons I love Ansible is that its syntax is very simplistic, yet expressive. As such, I feel it fits very well into explaining these steps with snippets of the playbook I wrote. I’ll link the full playbook a bit further down for those interested. See the full article for the information on: sysctl parameters The naughty list (optional) Configure the VPN network interface Configure the firewall Configure the iked service Gateway configuration Client configuration Troubleshooting DigitalOcean ###FreeBSD on a System76 Galago Pro Hey all, It’s been a while since I last posted but I thought I would hammer something out here. My most recent purchase was a System76 Galago Pro. I thought, after playing with POP! OS a bit, is there any reason I couldn’t get BSD on this thing?
Turns out the answer is no, there isn’t, and it works pretty decently. To get some accounting stuff out of the way, I tested this all on FreeBSD Head and 11.1, and all of it is valid as of May 10, 2018. Head is a fast-moving target, so some of this is only bound to improve. The hardware Intel Core i5 Gen 8 UHD Graphics 620 16 GB DDR4 Ram RTL8411B PCI Express Card Reader RTL8111 Gigabit ethernet controller Intel HD Audio Samsung SSD 960 PRO 512GB NVMe The caveats There are a few things that I can’t seem to make work straight out of the box, and those are the SD Card reader, the backlight, and the audio, which is a bit finicky. Also the trackpad doesn’t respond to two-finger scrolling. The wiki is mostly up to date; there are a few edits that need to be made still, but there is a bug where I can’t register an account yet, so I haven’t made all the changes. Processor It works like any other Intel processor. Pstates and throttling work. Graphics The boot menu sets itself to what looks like 1024x768, but works as you expect in a tiny window. The text console does the full 3200x1800 resolution, but the text is ultra tiny. There isn’t a font for the console that covers hidpi screens yet. As for X Windows it requires the drm-kmod-next package. Once installed, follow the directions from the package and it works with almost no fuss. I have it running on X with full Intel acceleration, but it is running at its full 3200x1800 resolution; to scale that down, just do xrandr --output eDP-1 --scale 0.5x0.5 and it will blow it up to roughly 200%. Due to limitations with X windows and hidpi it is harder to get more granular. Intel Wireless 8265 The wireless uses the iwm module; as of right now it does not seem to load automagically. Adding iwm_load=“YES” will cause the module to load on boot, and kldload iwm loads it by hand. Battery I seem to be getting about 5 hours out of the battery, but everything reports out of the box as expected. I could get more by throttling the CPU down speed-wise. Overall impression It is a pretty decent experience. While not as polished as a Thinkpad, there is a lot of potential with a bit of work and polishing. The laptop itself is not bad, the keyboard is responsive. The build quality is pretty solid. My only real complaint is the trackpad is stiff to click and sort of tiny. They seem to be a bit indifferent to non-Linux OSes running on the gear, but that isn’t anything new. I won’t have any problems using it, and it is good enough that I will keep working on this laptop, but I’m not sure at this stage if my next machine will be a System76 laptop; they have impressed me enough to put them in the running when I go to look for my next portable machine, but it hasn’t yet replaced the hole left in my heart by Lenovo messing with the ThinkPad. ###Hardware accelerated AES/HMAC-SHA on octeons In this commit, visa@ submitted code (disabled for now) to use built-in acceleration on octeon CPUs, much like AESNI for x86s. I decided to test tcpbench(1) and IPsec, before and after updating and enabling the octcrypto(4) driver. I didn't capture detailed perf stats from before the update, but I had heard someone say that Edgerouter Lite boxes would only do some 6MBit/s over ipsec, so I set up a really simple ipsec.conf with ike esp from A to B leading to a policy of esp tunnel from A to B spi 0xdeadbeef auth hmac-sha2-256 enc aes going from one ERL to another (I collect octeons, so I have a bunch to test with) and let tcpbench run for a while on it.
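For reference, a minimal setup along those lines might look like the sketch below. Only the "ike esp from A to B" rule itself is from the post; the addresses are placeholders, and this assumes isakmpd(8) is already running with the -K flag, as the usual ike-flow workflow on OpenBSD expects:

    # /etc/ipsec.conf on host A (192.0.2.1 talking to 192.0.2.2)
    ike esp from 192.0.2.1 to 192.0.2.2
    # load the policy and kick off the key exchange
    ipsecctl -f /etc/ipsec.conf
    # measure throughput across the tunnel
    tcpbench -s            # on host B
    tcpbench 192.0.2.2     # on host A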
My numbers hovered around 7Mbit/s, which coincided with what I've heard, and also that most of the CPU gets used while doing it. Then I edited /sys/arch/octeon/conf/GENERIC, removed the # from octcrypto0 at mainbus0 and recompiled. Booted into the new kernel and got a octcrypto0 line in dmesg, and it was time to rock the ipsec tunnel again. The crypto algorithm and HMAC used by default on ipsec coincides nicely with the list of accelerated functions provided by the driver. Before we get to tunnel traffic numbers, just one quick look at what systat pigs says while the ipsec is running at full steam: PID USER NAME CPU 20\ 40\ 60\ 80\ 100\ 58917 root crypto 52.25 ################# 42636 root softnet 42.48 ############## (idle) 29.74 ######### 1059 root tcpbench 24.22 ####### 67777 root crynlk 19.58 ###### So this indicates that the load from doing ipsec and generating the traffic is somewhat nicely evened out over the two cores in the Edgerouter, and there's even some CPU left unused, which means I can actually ssh into it and have it usable. I have had it running for almost 2 days now, moving some 2.1TB over the tunnel. Now for the new and improved performance numbers: 204452123 4740752 37.402 100.00% Conn: 1 Mbps: 37.402 Peak Mbps: 58.870 Avg Mbps: 37.402 204453149 4692968 36.628 100.00% Conn: 1 Mbps: 36.628 Peak Mbps: 58.870 Avg Mbps: 36.628 204454167 5405552 42.480 100.00% Conn: 1 Mbps: 42.480 Peak Mbps: 58.870 Avg Mbps: 42.480 204455188 5202496 40.804 100.00% Conn: 1 Mbps: 40.804 Peak Mbps: 58.870 Avg Mbps: 40.804 204456194 5062208 40.256 100.00% Conn: 1 Mbps: 40.256 Peak Mbps: 58.870 Avg Mbps: 40.256 The tcpbench numbers fluctuate up and down a bit, but the output is nice enough to actually keep tabs on the peak values. Peaking to 58.8MBit/s! Of course, as you can see, the average is lower but nice anyhow. A manyfold increase in performance, which is good enough in itself, but also moves the throughput from a speed that would make a poor but cheap gateway to something actually useful and decent for many home network speeds. Biggest problem after this gets enabled will be that my options to buy cheap used ERLs diminish. ##Beastie Bits Using FreeBSD Text Dumps llvm’s lld now the default linker for amd64 on FreeBSD Author Discoverability Pledge and Unveil in OpenBSD {pdf} EuroBSDCon 2018 CFP Closes June 17, hurry up and get your submissions in Just want to attend, but need help getting to the conference? Applications for the Paul Schenkeveld travel grant accepted until June 15th Tarsnap ##Feedback/Questions Casey - ZFS on Digital Ocean Jürgen - A Question Kevin - Failover best practice Dennis - SQL Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 249: Router On A Stick | BSD Now 249

June 06, 2018 1:25:17 51.23 MB Downloads: 0

OpenZFS and DTrace updates in NetBSD, NetBSD network security stack audit, Performance of MySQL on ZFS, OpenSMTP results from p2k18, legacy Windows backup to FreeNAS, ZFS block size importance, and NetBSD as router on a stick. ##Headlines ZFS and DTrace update lands in NetBSD merge a new version of the CDDL dtrace and ZFS code. This changes the upstream vendor from OpenSolaris to FreeBSD, and this version is based on FreeBSD svn r315983. r315983 is from March 2017 (14 months ago), so there is still more work to do in addition to the 10 years of improvements from upstream, this version also has these NetBSD-specific enhancements: dtrace FBT probes can now be placed in kernel modules. ZFS now supports mmap(). This brings NetBSD 10 years forward, and they should be able to catch the rest of the way up fairly quickly ###NetBSD network stack security audit Maxime Villard has been working on an audit of the NetBSD network stack, a project sponsored by The NetBSD Foundation, which has served all users of BSD-derived operating systems. Over the last five months, hundreds of patches were committed to the source tree as a result of this work. Dozens of bugs were fixed, among which a good number of actual, remotely-triggerable vulnerabilities. Changes were made to strengthen the networking subsystems and improve code quality: reinforce the mbuf API, add many KASSERTs to enforce assumptions, simplify packet handling, and verify compliance with RFCs. This was done in several layers of the NetBSD kernel, from device drivers to L4 handlers. In the course of investigating several bugs discovered in NetBSD, I happened to look at the network stacks of other operating systems, to see whether they had already fixed the issues, and if so how. Needless to say, I found bugs there too. A lot of code is shared between the BSDs, so it is especially helpful when one finds a bug, to check the other BSDs and share the fix. The IPv6 Buffer Overflow: The overflow allowed an attacker to write one byte of packet-controlled data into ‘packetstorage+off’, where ‘off’ could be approximately controlled too. This allowed at least a pretty bad remote DoS/Crash The IPsec Infinite Loop: When receiving an IPv6-AH packet, the IPsec entry point was not correctly computing the length of the IPv6 suboptions, and this, before authentication. As a result, a specially-crafted IPv6 packet could trigger an infinite loop in the kernel (making it unresponsive). In addition this flaw allowed a limited buffer overflow - where the data being written was however not controllable by the attacker. The IPPROTO Typo: While looking at the IPv6 Multicast code, I stumbled across a pretty simple yet pretty bad mistake: at one point the Pim6 entry point would return IPPROTONONE instead of IPPROTODONE. Returning IPPROTONONE was entirely wrong: it caused the kernel to keep iterating on the IPv6 packet chain, while the packet storage was already freed. The PF Signedness Bug: A bug was found in NetBSD’s implementation of the PF firewall, that did not affect the other BSDs. In the initial PF code a particular macro was used as an alias to a number. This macro formed a signed integer. NetBSD replaced the macro with a sizeof(), which returns an unsigned result. The NPF Integer Overflow: An integer overflow could be triggered in NPF, when parsing an IPv6 packet with large options. This could cause NPF to look for the L4 payload at the wrong offset within the packet, and it allowed an attacker to bypass any L4 filtering rule on IPv6. 
The IPsec Fragment Attack: I noticed some time ago that when reassembling fragments (in either IPv4 or IPv6), the kernel was not removing the MPKTHDR flag on the secondary mbufs in mbuf chains. This flag is supposed to indicate that a given mbuf is the head of the chain it forms; having the flag on secondary mbufs was suspicious. What Now: Not all protocols and layers of the network stack were verified, because of time constraints, and also because of unexpected events: the recent x86 CPU bugs, which I was the only one able to fix promptly. A todo list will be left when the project end date is reached, for someone else to pick up. Me perhaps, later this year? We’ll see. This security audit of NetBSD’s network stack is sponsored by The NetBSD Foundation, and serves all users of BSD-derived operating systems. The NetBSD Foundation is a non-profit organization, and welcomes any donations that help continue funding projects of this kind. DigitalOcean ###MySQL on ZFS Performance I used sysbench to create a table of 10M rows and then, using export/import tablespace, I copied it 329 times. I ended up with 330 tables for a total size of about 850GB. The dataset generated by sysbench is not very compressible, so I used lz4 compression in ZFS. For the other ZFS settings, I used what can be found in my earlier ZFS posts but with the ARC size limited to 1GB. I then used that plain configuration for the first benchmarks. Here are the results with the sysbench point-select benchmark, a uniform distribution and eight threads. The InnoDB buffer pool was set to 2.5GB. In both cases, the load is IO bound. The disk is doing exactly the allowed 3000 IOPS. The above graph appears to be a clear demonstration that XFS is much faster than ZFS, right? But is that really the case? The way the dataset has been created is extremely favorable to XFS since there is absolutely no file fragmentation. Once you have all the files opened, a read IOP is just a single fseek call to an offset and ZFS doesn’t need to access any intermediate inode. The above result is about as fair as saying MyISAM is faster than InnoDB based only on table scan performance results of unfragmented tables and default configuration. ZFS is much less affected by the file level fragmentation, especially for point access type. ZFS stores the files in B-trees in a very similar fashion as InnoDB stores data. To access a piece of data in a B-tree, you need to access the top level page (often called root node) and then one block per level down to a leaf-node containing the data. With no cache, to read something from a three levels B-tree thus requires 3 IOPS. The extra IOPS performed by ZFS are needed to access those internal blocks in the B-trees of the files. These internal blocks are labeled as metadata. Essentially, in the above benchmark, the ARC is too small to contain all the internal blocks of the table files’ B-trees. If we continue the comparison with InnoDB, it would be like running with a buffer pool too small to contain the non-leaf pages. The test dataset I used has about 600MB of non-leaf pages, about 0.1% of the total size, which was well cached by the 3GB buffer pool. So only one InnoDB page, a leaf page, needed to be read per point-select statement. To correctly set the ARC size to cache the metadata, you have two choices. First, you can guess values for the ARC size and experiment. Second, you can try to evaluate it by looking at the ZFS internal data. Let’s review these two approaches. 
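As a concrete illustration of the guess-and-experiment route on FreeBSD (the original benchmark ran on Linux, where the knob names differ; the pool and device names below are hypothetical):

    # how much of the ARC is currently holding metadata
    sysctl kstat.zfs.misc.arcstats.arc_meta_used
    # cap the ARC at 4 GB via a loader tunable (takes effect on reboot)
    echo 'vfs.zfs.arc_max="4294967296"' >> /boot/loader.conf
    # add a fast local SSD as L2ARC so spilled metadata stays cheap to read
    zpool add tank cache nvd1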
You’ll read/hear often the ratio 1GB of ARC for 1TB of data, which is about the same 0.1% ratio as for InnoDB. I wrote about that ratio a few times, having nothing better to propose. Actually, I found it depends a lot on the recordsize used. The 0.1% ratio implies a ZFS recordsize of 128KB. A ZFS filesystem with a recordsize of 128KB will use much less metadata than another one using a recordsize of 16KB because it has 8x fewer leaf pages. Fewer leaf pages require less B-tree internal nodes, hence less metadata. A filesystem with a recordsize of 128KB is excellent for sequential access as it maximizes compression and reduces the IOPS but it is poor for small random access operations like the ones MySQL/InnoDB does. In order to improve ZFS performance, I had 3 options: Increase the ARC size to 7GB Use a larger Innodb page size like 64KB Add a L2ARC I was reluctant to grow the ARC to 7GB, which was nearly half the overall system memory. At best, the ZFS performance would only match XFS. A larger InnoDB page size would increase the CPU load for decompression on an instance with only two vCPUs; not great either. The last option, the L2ARC, was the most promising. ZFS is much more complex than XFS and EXT4 but, that also means it has more tunables/options. I used a simplistic setup and an unfair benchmark which initially led to poor ZFS results. With the same benchmark, very favorable to XFS, I added a ZFS L2ARC and that completely reversed the situation, more than tripling the ZFS results, now 66% above XFS. Conclusion We have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in term of IOPS and size when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%. ###OpenSMTPD new config TL;DR: OpenBSD #p2k18 hackathon took place at Epitech in Nantes. I was organizing the hackathon but managed to make progress on OpenSMTPD. As mentioned at EuroBSDCon the one-line per rule config format was a design error. A new configuration grammar is almost ready and the underlying structures are simplified. Refactor removes ~750 lines of code and solves _many issues that were side-effects of the design error. New features are going to be unlocked thanks to this. Anatomy of a design error OpenSMTPD started ten years ago out of dissatisfaction with other solutions, mainly because I considered them way too complex for me not to get things wrong from time to time. The initial configuration format was very different, I was inspired by pyr@’s hoststated, which eventually became relayd, and designed my configuration format with blocks enclosed by brackets. When I first showed OpenSMTPD to pyr@, he convinced me that PF-like one-line rules would be awesome, and it was awesome indeed. It helped us maintain our goal of simple configuration files, it helped fight feature creeping, it helped us gain popularity and become a relevant MTA, it helped us get where we are now 10 years later. That being said, I believe this was a design error. A design error that could not have been predicted until we hit the wall to understand WHY this was an error. 
One-line rules are semantically wrong, they are SMTP wrong, they are wrong. One-line rules are making the entire daemon more complex, preventing some features from being implemented, making others more complex than they should be, they no longer serve our goals. To get to the point: we should move to two-line rules :-) The problem with one-line rules OpenSMTPD decides to accept or reject messages based on one-line rules such as: accept from any for domain poolp.org deliver to mbox Which can essentially be split into three units: the decision: accept/reject the matching: from any for domain poolp.org the (default) action: deliver to mbox To ensure that we meet the requirements of the transactions, the matching must be performed during the SMTP transaction before we take a decision for the recipient. Given that the rule is atomic, that it doesn’t have an identifier and that the action is part of it, the only two ways to make sure we can remember the action to take later on at delivery time are to either: save the action in the envelope, which is what we do today, or evaluate the envelope again at delivery. And this is where it gets tricky… both solutions are NOT ok. The first solution, which we’ve been using for a decade, was to save the action within the envelope and kind of carve it in stone. This works fine… however it comes with the downsides that errors fixed in configuration files can’t be caught up by envelopes, that delivery action must be validated way ahead of time during the SMTP transaction which is much trickier, that the parsing of delivery methods takes place as the _smtpd user rather than the recipient user, and that envelope structures that are passed all over OpenSMTPD carry delivery-time information, and more, and more, and more. The code becomes more complex in general, less safe in some particular places, and some areas are nightmarish to deal with because they have to deal with completely unrelated code that can’t be dealt with later in the code path. The second solution can’t be done. An envelope may be the result of nested rules, for example an external client, hitting an alias, hitting a user with a .forward file resolving to a user.
An envelope on disk may no longer match any rule or it may match a completely different rule If we could ensure that it matched the same rule, evaluating the ruleset may spawn new envelopes which would violate the transaction. Trying to imagine how we could work around this leads to more and more and more RFC violations, incoherent states, duplicate mails, etc… There is simply no way to deal with this with atomic rules, the matching and the action must be two separate units that are evaluated at two different times, failure to do so will necessarily imply that you’re either using our first solution and all its downsides, or that you are currently in a world of pain trying to figure out why everything is burning around you. The minute the action is written to an on-disk envelope, you have failed. A proper ruleset must define a set of matching patterns resolving to an action identifier that is carved in stone, AND a set of named action set that is resolved dynamically at delivery time. Follow the link above to see the rest of the article Break ##News Roundup Backing up a legacy Windows machine to a FreeNAS with rsync I have some old Windows servers (10 years and counting) and I have been using rsync to back them up to my FreeNAS box. It has been working great for me. First of all, I do have my Windows servers backup in virtualized format. However, those are only one-time snapshops that I run once in a while. These are classic ASP IIS web servers that I can easily put up on a new VM. However, many of these legacy servers generate gigabytes of data a day in their repositories. Running VM conversion daily is not ideal. My solution was to use some sort of rsync solution just for the data repos. I’ve tried some applications that didn’t work too well with Samba shares and these old servers have slow I/O. Copying files to external sata or usb drive was not ideal. We’ve moved on from Windows to Linux and do not have any Windows file servers of capacity to provide network backups. Hence, I decided to use Delta Copy with FreeNAS. So here is a little write up on how to set it up. I have 4 Windows 2000 servers backing up daily with this method. First, download Delta Copy and install it. It is open-source and pretty much free. It is basically a wrapper for cygwin’s rsync. When you install it, it will ask you to install the Server services which allows you to run it as a Rsync server on Windows. You don’t need to do this. Instead, you will be just using the Delta Copy Client application. But before we do that, we will need to configure our Rsync service for our Windows Clients on FreeNAS. In FreeNAS, go under Services , Select Rsync > Rsync Modules > Add Rsync Module. Then fill out the form; giving the module a name and set the path. In my example, I simply called it WIN and linked it to a user called backupuser. This process is much easier than trying to configure the daemon rsyncd.conf file by hand. Now, on the Windows Client, start the DeltaCopy Client. You will create a new Profile. You will need to enter the IP of the Rsync server (FreeNAS) and specify the module name which will be called “Virtual Directory Name.” When you pull the select menu, the list of Rsync Modules you created earlier in FreeNAS will populate. You can set authentication. On the server, you can restrict by IP and do other things to lock down your rsync. Next, you will add folders (and/or files) you want to synchronize. Once the paths are set up, you can run a sync by right clicking the profile name. 
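For the curious, what DeltaCopy drives under the hood is plain rsync against the daemon module defined on the FreeNAS box. A hypothetical hand-run equivalent (the host name, source path, and cygdrive prefix are illustrative; only the WIN module name and backupuser come from the write-up) would be:

    # push a repository to the "WIN" rsync module on the FreeNAS server
    rsync -av --delete /cygdrive/d/Repos/ backupuser@freenas.local::WIN/repos/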
Here, I made a test sync to a home folder of a virtualized Windows box. As you can see, I mounted the rsync volume on my mac to see the progress. The rsync worked beautifully. DeltaCopy did what it was told. Once you get everything working, the next thing to do is set schedules. If you have done task scheduling in Windows before, it is pretty straightforward. DeltaCopy has a link in the application to directly create a new task for you. I set my backups to run nightly and it has been working great. There you have it. Windows rsync to FreeNAS using DeltaCopy. The nice thing about FreeNAS is you don’t have to modify /etc/rsyncd.conf files. Everything can be done in the web admin. iXsystems ###How to write ATF tests for NetBSD I have recently started contributing to the amazing NetBSD foundation. I was thinking of trying out a new OS for a long time. Switching to the NetBSD OS has been a fun change. My first contribution to the NetBSD foundation was adding regression tests for the Address Sanitizer (ASan) in the Automated Testing Framework (ATF) which NetBSD has. I managed to complete it with the help of my really amazing mentor Kamil. This post is gonna be about the ATF framework that NetBSD has and how you can add multiple tests with ease. Intro In ATF tests we will basically be talking about test programs, which are a suite of test cases for a specific application or program. The ATF suite of Commands There are a variety of commands that the atf suite offers. These include: atf-check: The versatile command that is a vital part of the checking process. man page atf-run: Command used to run a test program. man page atf-fail: Report failure of a test case. atf-report: used to pretty print the output of atf-run. man page atf-set: To set atf test conditions. We will be taking a better look at the syntax and usage later. Let’s start with the Basics The ATF testing framework comes preinstalled with a default NetBSD installation. It is used to write tests for various applications and commands in NetBSD. One can write the test programs in either the C language or in shell script. In this post I will be dealing with the Bash part. Follow the link above to see the rest of the article ###The Importance of ZFS Block Size Warning! WARNING! Don’t just do things because some random blog says so One of the important tunables in ZFS is the recordsize (for normal datasets) and volblocksize (for zvols). These default to 128KB and 8KB respectively. As I understand it, this is the unit of work in ZFS. If you modify one byte in a large file with the default 128KB record size, it causes the whole 128KB to be read in, one byte to be changed, and a new 128KB block to be written out. As a result, the official recommendation is to use a block size which aligns with the underlying workload: so for example if you are using a database which reads and writes 16KB chunks then you should use a 16KB block size, and if you are running VMs containing an ext4 filesystem, which uses a 4KB block size, you should set a 4KB block size. You can see it has a 16GB total file size, of which 8.5G has been touched and consumes space - that is, it’s a “sparse” file. The used space is also visible by looking at the zfs filesystem which this file resides in. Then I tried to copy the image file whilst maintaining its “sparseness”, that is, only touching the blocks of the zvol which needed to be touched. The original used only 8.42G, but the copy uses 14.6GB - almost the entire 16GB has been touched! What’s gone wrong?
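A quick way to spot the culprit is to compare the block-size properties of the two datasets (the pool and dataset names here are hypothetical stand-ins for the author’s):

    # the filesystem dataset holding the sparse image file
    zfs get recordsize tank/images        # 128K by default
    # the zvol the image was copied onto
    zfs get volblocksize tank/vol0        # 8K by default for zvols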
I finally realised that the difference between the zfs filesystem and the zvol is the block size. I recreated the zvol with a 128K block size That’s better. The disk usage of the zvol is now exactly the same as for the sparse file in the filesystem dataset It does impact the read speed too. 4K blocks took 5:52, and 128K blocks took 3:20 Part of this is the amount of metadata that has to be read, see the MySQL benchmarks from earlier in the show And yes, using a larger block size will increase the compression efficiency, since the compressor has more redundant data to optimize. Some of the savings, and the speedup is because a lot less metadata had to be written Your zpool layout also plays a big role, if you use 4Kn disks, and RAID-Z2, using a volblocksize of 8k will actually result in a large amount of wasted space because of RAID-Z padding. Although, if you enable compression, your 8k records may compress to only 4k, and then all the numbers change again. ###Using a Raspberry Pi 2 as a Router on a Stick Starring NetBSD Sorry we didn’t answer you quickly enough A few weeks ago I set about upgrading my feeble networking skills by playing around with a Cisco 2970 switch. I set up a couple of VLANs and found the urge to set up a router to route between them. The 2970 isn’t a modern layer 3 switch so what am I to do? Why not make use of the Raspberry Pi 2 that I’ve never used and put it to some good use as a ‘router on a stick’. I could install a Linux based OS as I am quite familiar with it but where’s the fun in that? In my home lab I use SmartOS which by the way is a shit hot hypervisor but as far as I know there aren’t any Illumos distributions for the Raspberry Pi. On the desktop I use Solus OS which is by far the slickest Linux based OS that I’ve had the pleasure to use but Solus’ focus is purely desktop. It’s looking like BSD then! I believe FreeBSD is renowned for it’s top notch networking stack and so I wrote to the BSDNow show on Jupiter Broadcasting for some help but it seems that the FreeBSD chaps from the show are off on a jolly to some BSD conference or another(love the show by the way). It looks like me and the luvverly NetBSD are on a date this Saturday. I’ve always had a secret love for NetBSD. She’s a beautiful, charming and promiscuous lover(looking at the supported architectures) and I just can’t stop going back to her despite her misgivings(ahem, zfs). Just my type of grrrl! Let’s crack on… Follow the link above to see the rest of the article ##Beastie Bits BSD Jobs University of Aberdeen’s Internet Transport Research Group is hiring VR demo on OpenBSD via OpenHMD with OSVR HDK2 patch runs ed, and ed can run anything (mentions FreeBSD and OpenBSD) Alacritty (OpenGL-powered terminal emulator) now supports OpenBSD MAP_STACK Stack Register Checking Committed to -current EuroBSDCon CfP till June 17, 2018 Tarsnap ##Feedback/Questions NeutronDaemon - Tutorial request Kurt - Question about transferability/bi-directionality of ZFS snapshots and send/receive Peter - A Question and much love for BSD Now Peter - netgraph state Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 248: Show Me The Mooney | BSD Now 248

May 29, 2018 1:44:33 62.8 MB Downloads: 0

DragonflyBSD release 5.2.1 is here, BPF kernel exploit writeup, Remote Debugging the running OpenBSD kernel, interview with Patrick Mooney, FreeBSD buildbot setup in a jail, dumping your USB, and 5 years of gaming on FreeBSD. Headlines DragonFlyBSD: release52 (w/stable HAMMER2, as default root) DragonflyBSD 5.2.1 was released on May 21, 2018 > Big Ticket items: Meltdown and Spectre mitigation support Meltdown isolation and spectre mitigation support added. Meltdown mitigation is automatically enabled for all Intel cpus. Spectre mitigation must be enabled manually via sysctl if desired, using sysctls machdep.spectremitigation and machdep.meltdownmitigation. HAMMER2 H2 has received a very large number of bug fixes and performance improvements. We can now recommend H2 as the default root filesystem in non-clustered mode. Clustered support is not yet available. ipfw Updates Implement state based "redirect", i.e. without using libalias. ipfw now supports all possible ICMP types. Fix ICMPMAXTYPE assumptions (now 40 as of this release). Improved graphics support The drm/i915 kernel driver has been updated to support Intel Coffeelake GPUs Add 24-bit pixel format support to the EFI frame buffer code. Significantly improve fbio support for the "scfb" XOrg driver. This allows EFI frame buffers to be used by X in situations where we do not otherwise support the GPU. Partly implement the FBIOBLANK ioctl for display powersaving. Syscons waits for drm modesetting at appropriate places, avoiding races. PS4 4.55 BPF Race Condition Kernel Exploit Writeup Note: While this bug is primarily interesting for exploitation on the PS4, this bug can also potentially be exploited on other unpatched platforms using FreeBSD if the attacker has read/write permissions on /dev/bpf, or if they want to escalate from root user to kernel code execution. As such, I've published it under the "FreeBSD" folder and not the "PS4" folder. Introduction Welcome to the kernel portion of the PS4 4.55FW full exploit chain write-up. This bug was found by qwerty, and is fairly unique in the way it's exploited, so I wanted to do a detailed write-up on how it worked. The full source of the exploit can be found here. I've previously covered the webkit exploit implementation for userland access here. FreeBSD or Sony's fault? Why not both... Interestingly, this bug is actually a FreeBSD bug and was not (at least directly) introduced by Sony code. While this is a FreeBSD bug however, it's not very useful for most systems because the /dev/bpf device driver is root-owned, and the permissions for it are set to 0600 (meaning owner has read/write privileges, and nobody else does) - though it can be used for escalating from root to kernel mode code execution. However, let’s take a look at the make_dev() call inside the PS4 kernel for /dev/bpf (taken from a 4.05 kernel dump). seg000:FFFFFFFFA181F15B lea rdi, unk_FFFFFFFFA2D77640 seg000:FFFFFFFFA181F162 lea r9, aBpf ; "bpf" seg000:FFFFFFFFA181F169 mov esi, 0 seg000:FFFFFFFFA181F16E mov edx, 0 seg000:FFFFFFFFA181F173 xor ecx, ecx seg000:FFFFFFFFA181F175 mov r8d, 1B6h seg000:FFFFFFFFA181F17B xor eax, eax seg000:FFFFFFFFA181F17D mov cs:qword_FFFFFFFFA34EC770, 0 seg000:FFFFFFFFA181F188 call make_dev We see UID 0 (the UID for the root user) getting moved into the register for the 3rd argument, which is the owner argument. However, the permissions bits are being set to 0x1B6, which in octal is 0666. This means anyone can open /dev/bpf with read/write privileges. 
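For comparison with the PS4 numbers above, it is easy to check how /dev/bpf is exposed on a stock FreeBSD machine, and devfs rules give one way to pin the mode down explicitly if you ever need to; the ruleset name below is a made-up example.

```
# Stock FreeBSD: bpf is root-owned with mode 0600.
ls -l /dev/bpf*
# crw-------  1 root  wheel  ... /dev/bpf

# Hypothetical hardening via devfs rules, e.g. in /etc/devfs.rules:
#   [localrules=10]
#   add path 'bpf*' mode 0600 user root group wheel
# Then point rc.conf at the ruleset and restart devfs:
sysrc devfs_system_ruleset="localrules"
service devfs restart
```

On the PS4, by contrast, the 0x1B6 passed to make_dev() above comes out to mode 0666.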
I’m not sure why this is the case; qwerty speculates that perhaps bpf is used for LAN gaming. In any case, this was a poor design decision, because bpf is usually considered privileged and should not be accessible to a process that is completely untrusted, such as WebKit. On most platforms, permissions for /dev/bpf will be set to 0x180, or 0600. Race Conditions - What are they? The class of the bug abused in this exploit is known as a "race condition". Before we get into bug specifics, it's important for the reader to understand what race conditions are and how they can be an issue (especially in something like a kernel). Often in complex software (such as a kernel), resources will be shared (or "global"). This means other threads could potentially execute code that will access some resource that could be accessed by another thread at the same point in time. What happens if one thread accesses this resource while another thread does so without exclusive access? Race conditions are introduced. Race conditions are defined as possible scenarios where events happen in a sequence different than the developer intended, which leads to undefined behavior. In simple, single-threaded programs this is not an issue, because execution is linear. In more complex programs where code can be running in parallel, however, this becomes a real issue. To prevent these problems, atomic instructions and locking mechanisms were introduced. When one thread wants to access a critical resource, it will attempt to acquire a "lock". If another thread is already using this resource, generally the thread attempting to acquire the lock will wait until the other thread is finished with it. Each thread must release the lock to the resource after it is done with it; failure to do so could result in a deadlock. While locking mechanisms such as mutexes have been introduced, developers sometimes struggle to use them properly. For example, what if a piece of shared data gets validated and processed, but while the processing of the data is locked, the validation is not? There is a window between validation and locking where that data can change, and while the developer thinks the data has been validated, it could be substituted with something malicious after it is validated, but before it is used. Parallel programming can be difficult, especially when, as a developer, you also want to factor in the fact that you don't want to put too much code in between locking and unlocking, as it can impact performance. See article for the rest

iXsystems

Remote Debugging the running OpenBSD kernel Subtitled: A way to understand the OpenBSD internals The Problem: A few months ago, I tried porting the FreeBSD kdb along with its gdb stub implementation to OpenBSD as a way of learning the internals of a BSD operating system. The ddb code in both FreeBSD and OpenBSD looks pretty much the same, and the GDB Remote Serial Protocol looks very minimal. Sadly I got very busy and the work has stalled, but I'm planning on resuming the attempt as soon as I get the chance. In the meantime, there is an alternative way of debugging the OpenBSD kernel, via QEMU. What I did below is basically the same, with a few minor changes, which I hope to describe as well as I can. Installing OpenBSD on QEMU: For debugging the kernel, we need a working OpenBSD system running on QEMU. I chose to create a raw disk file to be able to easily mount it later via the host and copy the custom kernel onto it.

```
$ qemu-img create -f raw disk.raw 5G
$ qemu-system-x86_64 -m 256M \
    -drive format=raw,file=install63.fs \
    -drive format=raw,file=disk.raw
```

Custom Kernel: To debug the kernel, we need a version of the kernel with debugging symbols, and for that we have to recompile it first. The process is documented at Building the System from Source: ... Then we can copy the bsd kernel to the guest machine and keep the bsd.gdb on the host to start the remote debugging via gdb. Remote debugging the kernel: Now it's time to boot the guest with the new custom kernel. Remember that the -s argument enables the gdb server in qemu, on localhost port 1234 by default:

```
$ qemu-system-x86_64 -m 256M -s \
    -net nic -net user \
    -drive format=raw,file=install63.fs \
```

Now to finally attach to the running kernel: Interview - Patrick Mooney - Software Engineer pmooney@pfmooney.com / @pfmooney BR: How did you first get introduced to UNIX? AJ: What got you started contributing to an open source project? BR: What sorts of things have you worked on in the past? AJ: Can you tell us more about what attracted you to illumos? BR: How did you get interested in, and started with, systems development? AJ: When did you first get interested in bhyve? BR: How much work was it to take the years-old port of bhyve and get it working on modern illumos? AJ: What was the process for getting the bhyve port caught up to current FreeBSD? BR: How usable is bhyve on illumos? AJ: What area are you most interested in improving in bhyve? BR: Do you think the FreeBSD and illumos versions of bhyve will stay in sync with each other? AJ: What do you do for fun? BR: Anything else you want to mention? News Roundup Setting up buildbot in FreeBSD Jails In this article, I would like to present a tutorial to set up buildbot, a continuous integration (CI) tool (like Jenkins, drone, etc.), making use of FreeBSD’s containerization mechanism, "jails". We will cover terminology, the rationale for using both buildbot and jails together, and installation steps. At the end, you will have a working buildbot instance using its sample build configuration, ready to play around with your own CI plans (or even CD, it’s very flexible!). Some hints for production-grade installations are given, but the tutorial steps are meant for a test environment (namely a virtual machine). Buildbot’s configuration and detailed concepts are not in scope here. Table of contents: Choosing host operating system and version for buildbot, Create a FreeBSD playground, Introduction to jails, Overview of buildbot, Set up jails, Install buildbot master, Run buildbot master, Install buildbot worker, Run buildbot worker, Set up web server nginx to access buildbot UI, Run your first build, Production hints, Finished! Choosing host operating system and version for buildbot: We choose the released version of FreeBSD (11.1-RELEASE at the moment). There is no particular reason for it, and as a matter of fact buildbot, as a Python-based server, is very cross-platform; therefore the underlying OS platform and version should not make a large difference. It will make a difference for what you do with buildbot, however. For instance, poudriere is the de-facto standard for building packages from source on FreeBSD. Builds run in jails, which may be any FreeBSD base system version older or equal to the host’s version (the reason will be explained below). In other words, if the host is FreeBSD 11.1, build jails created by poudriere could e.g.
use 9.1, 10.3, 11.0, 11.1, but potentially not version 12 or newer because of incompatibilities with the host’s kernel (jails do not run their own kernel as full virtual machines do). To not prolong this article over the intended scope, the details of which nice things could be done or automated with buildbot are not covered. Package names on the FreeBSD platform are independent of the OS version, since external software (as in: not part of base system) is maintained in FreeBSD ports. So, if your chosen FreeBSD version (here: 11) is still officially supported, the packages mentioned in this post should work. In the unlikely event of package name changes before you read this article, you should be able to find the actual package names like pkg search buildbot. Other operating systems like the various Linux distributions will use different package names but might also offer buildbot pre-packaged. If not, the buildbot installation manual offers steps to install it manually. In such case, the downside is that you will have to maintain and update the buildbot modules outside the stability and (semi-)automatic updates of your OS packages. See article for the rest DigitalOcean Dumping your USB One of the many new features of OpenBSD 6.3 is the possibility to dump USB traffic to userland via bpf(4). This can be done with tcpdump(8) by specifying a USB bus as interface: ``` tcpdump -Xx -i usb0 tcpdump: listening on usb0, link-type USBPCAP 12:28:03.317945 bus 0 < addr 1: ep1 intr 2 0000: 0400 .. 12:28:03.318018 bus 0 > addr 1: ep0 ctrl 8 0000: 00a3 0000 0002 0004 00 ......... [...] ``` As you might have noted I decided to implement the existing USBPcap capture format. A capture format is required because USB packets do not include all the necessary information to properly interpret them. I first thought I would implement libpcap's DLTUSB but then I quickly realize that this was not a standard. It is instead a FreeBSD specific format which has been since then renamed DLTUSBFREEBSD. But I didn't want to embrace xkcd #927, so I look at the existing formats: DLTUSBFREEBSD, DLTUSBLINUX, DLTUSBLINUXMMAPPED, DLTUSBDARWIN and DLT_USBPCAP. I was first a bit sad to see that nobody could agree on a common format then I moved on and picked the simplest one: USBPcap. Implementing an already existing format gives us out-of-box support for all the tools supporting it. That's why having common formats let us share our energy. In the case of USBPcap it is already supported by Wireshark, so you can already inspect your packet graphically. For that you need to first capture raw packets: ``` tcpdump -s 3303 -w usb.pcap -i usb0 tcpdump: listening on usb0, link-type USBPCAP ^C 208 packets received by filter 0 packets dropped by kernel ``` USB packets can be quite big, that's why I'm not using tcpdump(8)'s default packet size. In this case, I want to make sure I can dump the complete uaudio(4) frames. It is important to say that what is dumped to userland is what the USB stack sees. Packets sent on the wire might differ, especially when it comes to retries and timing. So this feature is not here to replace any USB analyser, however I hope that it will help people understand how things work and what the USB stack is doing. Even I found some interesting timing issues while implementing isochronous support. Run OpenBSD on your web server Deploy and login to your OpenBSD server first. 
As soon as you're there you can enable an httpd(8) daemon, it's already installed on OpenBSD, you just need to configure it: www# vi /etc/httpd.conf Add two server sections---one for www and another for naked domain (all requests are redirected to www). ``` server "www.example.com" { listen on * port 80 root "/htdocs/www.example.com" } server "example.com" { listen on * port 80 block return 301 "http://www.example.com$REQUEST_URI" } ``` httpd is chrooted to /var/www by default, so let's make a document root directory: www# mkdir -p /var/www/htdocs/www.example.com Save and check this configuration: www# httpd -n configuration ok Enable httpd(8) daemon and start it. www# rcctl enable httpd www# rcctl start httpd Publish your website Copy your website content into /var/www/htdocs/www.example.com and then test it your web browser. http://XXX.XXX.XXX.XXX/ Your web server should be up and running. Update DNS records If there is another HTTPS server using this domain, configure that server to redirect all HTTPS requests to HTTP. Now as your new server is ready you can update DNS records accordingly. example.com. 300 IN A XXX.XXX.XXX.XXX www.example.com. 300 IN A XXX.XXX.XXX.XXX Examine your DNS is propagated. $ dig example.com www.example.com Check IP addresses it answer sections. If they are correct, you should be able to access your new web server by its domain name. What's next? Enable HTTPS on your server. Modern Akonadi and KMail on FreeBSD For, quite literally a year or more, KMail and Akonadi on FreeBSD have been only marginally useful, at best. KDE4 era KMail was pretty darn good, but everything after that has had a number of FreeBSD users tearing out their hair. Sure, you can go to Trojitá, which has its own special problems and is generally “meh”, or bail out entirely to webmail, but .. KMail is a really great mail client when it works. Which, on Linux desktops, is nearly always, and on FreeBSD, is was nearly never. I looked at it with Dan and Volker last summer, briefly, and we got not much further than “hmm”. There’s a message about “The world is going to end!” which hardly makes sense, it means that a message has been truncated or corrupted while traversing a UNIX domain socket. Now Alexandre Martins — praise be! — has wandered in with a likely solution. KDE Bug 381850 contains a suggestion, which deserves to be publicised (and tested): sysctl net.local.stream.recvspace=65536 sysctl net.local.stream.sendspace=65536 The default FreeBSD UNIX local socket buffer space is 8kiB. Bumping the size up to 64kiB — which matches the size that Linux has by default — suddenly makes KMail and Akonadi shine again. No other changes, no recompiling, just .. bump the sysctls (perhaps also in /etc/sysctl.conf) and KMail from Area51 hums along all day without ending the world. Since changing this value may have other effects, and Akonadi shouldn’t be dependent on a specific buffer size anyway, I’m looking into the Akonadi code (encouraged by Dan) to either automatically size the socket buffers, or to figure out where in the underlying code the assumption about buffer size lives. So for now, sysctl can make KMail users on FreeBSD happy, and later we hope to have things fully automatic (and if that doesn’t pan out, well, pkg-message exists). PS. Modern KDE PIM applications — Akonadi, KMail — which live in the deskutils/ category of the official FreeBSD ports were added to the official tree April 10th, so you can get your fix now from the official tree. 
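Following on from the Akonadi/KMail item just above, this is how the suggested change looks in practice on FreeBSD; the values are the ones quoted in the post.

```
# Apply immediately (lost on reboot):
sysctl net.local.stream.recvspace=65536
sysctl net.local.stream.sendspace=65536

# Persist across reboots:
cat >> /etc/sysctl.conf <<'EOF'
# Larger UNIX domain socket buffers for Akonadi/KMail (FreeBSD default is 8 KiB)
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536
EOF
```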
Beastie Bits pkg-provides support for DragonFly (from Rodrigo Osorio) Memories of writing a parser for man pages Bryan Cantrill interview over at DeveloperOnFire podcast 1978-03-25 - 2018-03-25: 40 years BSD Mail My 5 years of FreeBSD gaming: a compendium of free games and engines running natively on FreeBSD Sequential Resilver being upstreamed to FreeBSD, from FreeNAS, where it was ported from ZFS-on-Linux University of Aberdeen’s Internet Transport Research Group is hiring Tarsnap ad Feedback/Questions Dave - mounting non-filesystem things inside jails Morgan - ZFS on Linux Data loss bug Rene - How to keep your ISP’s nose out of your browser history with encrypted DNS Rodriguez - Feedback question! Relating to Windows Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
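Circling back to the buildbot-in-jails tutorial in this episode's notes, the master and worker bootstrap boils down to a handful of commands. The package names, directories, and worker credentials below are assumptions for illustration; the article covers the actual jail setup and configuration in detail.

```
# Inside the master jail (package name assumed; confirm with: pkg search buildbot)
pkg install py36-buildbot
buildbot create-master /var/buildbot-master
cp /var/buildbot-master/master.cfg.sample /var/buildbot-master/master.cfg
buildbot start /var/buildbot-master

# Inside the worker jail; 9989 is buildbot's default master port.
pkg install py36-buildbot-worker
buildbot-worker create-worker /var/buildbot-worker 10.0.0.2:9989 example-worker examplepass
buildbot-worker start /var/buildbot-worker
```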

Episode 247: Interning for FreeBSD | BSD Now 247

May 24, 2018 1:29:59 54.06 MB Downloads: 0

FreeBSD internship learnings, exciting developments coming to FreeBSD, running FreeNAS on DigitalOcean, Network Manager control for OpenBSD, OpenZFS User Conference Videos are here and batch editing files with ed. Headlines What I learned during my FreeBSD intership Hi, my name is Mitchell Horne. I am a computer engineering student at the University of Waterloo, currently in my third year of studies, and fortunate to have been one of the FreeBSD Foundation’s co-op students this past term (January to April). During this time I worked under Ed Maste, in the Foundation’s small Kitchener office, along with another co-op student Arshan Khanifar. My term has now come to an end, and so I’d like to share a little bit about my experience as a newcomer to FreeBSD and open-source development. I’ll begin with some quick background — and a small admission of guilt. I have been an open-source user for a large part of my life. When I was a teenager I started playing around with Linux, which opened my eyes to the wider world of free software. Other than some small contributions to GNOME, my experience has been mostly as an end user; however, the value of these projects and the open-source philosophy was not lost on me, and is most of what motivated my interest in this position. Before beginning this term I had no personal experience with any of the BSDs, although I knew of their existence and was extremely excited to receive the position. I knew it would be a great opportunity for growth, but I must confess that my naivety about FreeBSD caused me to make the silent assumption that this would be a form of compromise — a stepping stone that would eventually allow me to work on open-source projects that are somehow “greater” or more “legitimate”. After four months spent immersed in this project I have learned how it operates, witnessed its community, and learned about its history. I am happy to admit that I was completely mistaken. Saying it now seems obvious, but FreeBSD is a project with its own distinct uses, goals, and identity. For many there may exist no greater opportunity than to work on FreeBSD full time, and with what I know now I would have a hard time coming up with a project that is more “legitimate”. What I Liked In all cases, the work I submitted this term was reviewed by no less than two people before being committed. The feedback and criticism I received was always both constructive and to the point, and it commented on everything from high-level ideas to small style issues. I appreciate having these thorough reviews in place, since I believe it ultimately encourages people to accept only their best work. It is indicative of the high quality that already exists within every aspect of this project, and this commitment to quality is something that should continue to be honored as a core value. As I’ve discovered in some of my previous work terms, it is all too easy cut corners in the name of a deadline or changing priorities, but the fact that FreeBSD doesn’t need to make these types of compromises is a testament to the power of free software. It’s a small thing, but the quality and completeness of the FreeBSD documentation was hugely helpful throughout my term. Everything you might need to know about utilities, library functions, the kernel, and more can be found in a man page; and the handbook is a great resource as both an introduction to the operating system and a reference. 
I only wish I had taken some time earlier in the term to explore the different documents more thoroughly, as they cover a wide range of interesting and useful topics. The effort people put into writing and maintaining FreeBSD’s documentation is easy to overlook, but its value cannot be overstated. What I Learned Although there was a lot I enjoyed, there were certainly many struggles I faced throughout the term, and lessons to be learned from them. I expect that some of issues I faced may be specific to FreeBSD, while others may be common to open-source projects in general. I don’t have enough experience to speculate on which is which, so I will leave this to the reader. The first lesson can be summed up simply: you have to advocate for your own work. FreeBSD is made up in large part by volunteer efforts, and in many cases there is more work to go around than people available to do it. A consequence of this is that there will not be anybody there to check up on you. Even in my position where I actually had a direct supervisor, Ed often had his plate full with so many other things that the responsibility to find someone to look at my work fell to me. Admittedly, a couple of smaller changes I worked on got left behind or stuck in review simply because there wasn’t a clear person/place to reach out to. I think this is both a barrier of entry to FreeBSD and a mental hurdle that I needed to get over. If there’s a change you want to see included or reviewed, then you may have to be the one to push for it, and there’s nothing wrong with that. Perhaps this process should be easier for newcomers or infrequent contributors (the disconnect between Bugzilla and Phabricator definitely leaves a lot to be desired), but we also have to be aware that this simply isn’t the reality right now. Getting your work looked at may require a little bit more self-motivation, but I’d argue that there are much worse problems a project like FreeBSD could have than this. I understand this a lot better now, but it is still something I struggle with. I’m not naturally the type of person who easily connects with others or asks for help, so I see this as an area for future growth rather than simply a struggle I encountered and overcame over the course of this work term. Certainly it is an important skill to understand the value of your own work, and equally important is the ability to communicate that value to others. I also learned the importance of starting small. My first week or two on the job mainly involved getting set up and comfortable with the workflow. After this initial stage, I began exploring the project and found myself overwhelmed by its scale. With so many possible areas to investigate, and so much work happening at once, I felt quite lost on where to begin. Many of the potential projects I found were too far beyond my experience level, and most small bugs were picked up and fixed quickly by more experienced contributors before I could even get to them. It’s easy to make the mistake that FreeBSD is made up solely of a few rock-star committers that do everything. This is how it appears at face-value, as reading through commits, bug reports, and mailing lists yields a few of the same names over and over. The reality is that just as important are the hundreds of users and infrequent contributors who take the time to submit bug reports, patches, or feedback. Even though there are some people who would fall under the umbrella of a rock-star committer, they didn’t get there overnight. 
Rather, they have built their skills and knowledge through many years of involvement in FreeBSD and similar projects. As a student coming into this project and having high expectations of myself, it was easy to set the bar too high by comparing myself against those big committers, and feel that my work was insignificant, inadequate, and simply too infrequent. In reality, there is no reason I should have felt this way. In a way, this comparison is disrespectful to those who have reached this level, as it took them a long time to get there, and it’s a humbling reminder that any skill worth learning requires time, patience, and dedication. It is easy to focus on an end product and simply wish to be there, but in order to be truly successful one must start small, and find satisfaction in the struggle of learning something new. I take pride in the many small successes I’ve had throughout my term here, and appreciate the fact that my journey into FreeBSD and open-source software is only just beginning. Closing Thoughts I would like to close with some brief thank-you’s. First, to everyone at the Foundation for being so helpful, and allowing this position to exist in the first place. I am extremely grateful to have been given this unique opportunity to learn about and give back to the open-source world. I’d also like to thank my office mates; Ed: for being an excellent mentor, who offered an endless wealth of knowledge and willingness to share it. My classmate and fellow intern Arshan: for giving me a sense of camaraderie and the comforting reminder that at many moments he was as lost as I was. Finally, a quick thanks to everyone else I crossed paths with who offered reviews and advice. I appreciate your help and look forward to working with you all further. I am walking away from this co-op with a much greater appreciation for this project, and have made it a goal to remain involved in some capacity. I feel that I’ve gained a little bit of a wider perspective on my place in the software world, something I never really got from my previous co-ops. Whether it ends up being just a stepping stone, or the beginning of much larger involvement, I thoroughly enjoyed my time here. Recent Developments in FreeBSD Support for encrypted, compressed (gzip and zstd), and network crash dumps enabled by default on most platforms Intel Microcode Splitter Intel Spec Store Bypass Disable control Raspberry Pi 3B+ Ethernet Driver IBRS for i386 Upcoming: Microcode updater for AMD CPUs the RACK TCP/IP stack, from Netflix Voting in the FreeBSD Core Election begins today: DigitalOcean Digital Ocean Promo Link for BSD Now Listeners Running FreeNAS on a DigitalOcean Droplet Need to backup your FreeNAS offsite? Run a locked down instance in the cloud, and replicate to it The tutorial walks though the steps of converting a fresh FreeBSD based droplet into a FreeNAS Create a droplet, and add a small secondary block-storage device Boot the droplet, login, and download FreeNAS Disable swap, enable ‘foot shooting’ mode in GEOM use dd to write the FreeNAS installer to the boot disk Reboot the droplet, and use the FreeNAS installer to install FreeNAS to the secondary block storage device Now, reimage the droplet with FreeBSD again, to replace the FreeNAS installer Boot, and dd FreeNAS from the secondary block storage device back to the boot disk You can now destroy the secondary block device Now you have a FreeNAS, and can take it from there. 
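As a rough sketch of the dd steps in that walkthrough, assuming the droplet's boot disk appears as vtbd0 and the secondary block storage as vtbd1 (check with geom disk list first; the device names and image file name here are placeholders):

```
# Disable swap so nothing holds the boot disk open, and allow writes to an
# active disk ("foot shooting" mode in GEOM):
swapoff -a
sysctl kern.geom.debugflags=0x10

# Write the FreeNAS installer over the droplet's boot disk, then reboot into
# the installer and install FreeNAS onto the block-storage device (vtbd1):
dd if=FreeNAS-11.1-U5.iso of=/dev/vtbd0 bs=1m

# After re-imaging the droplet with FreeBSD, copy the installed FreeNAS
# system from the block-storage device back onto the boot disk:
dd if=/dev/vtbd1 of=/dev/vtbd0 bs=1m
```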
Use the FreeNAS replication wizard to configure sending snapshots from your home NAS to your cloud NAS Note: You might consider creating a new block storage device to create a larger pool, that you can more easily grow over time, rather than using the boot device in the droplet as your main pool. News Roundup Network Manager Control for OpenBSD (Updated) Generalities I just remind the scope of this small tool: allow you to pre-define several cable or wifi connections let nmctl to connect automatically to the first available one allow you to easily switch from one network connection to an other one create openbox dynamic menus Enhancements in this version This is my second development version: 0.2. I've added performed several changes in the code: code style cleanup, to better match the python recommendations adapt the tool to allow to connect to an Open-wifi having blancs in the name. This happens in some hotels implement a loop as work-around concerning the arp table issue. The source code is still on the git of Sourceforge.net. You can see the files here And you can download the last version here Feedbacks after few months I'm using this script on my OpenBSD laptop since about 5 months. In my case, I'm mainly using the openbox menus and the --restart option. The Openbox menus The openbox menus are working fine. As explain in my previous blog, I just have to create 2 entries in my openbox's menu.xml file, and all the rest comes automatically from nmctl itself thanks to the --list and --scan options. I've not changed this part of nmctl since it works as expected (for me :-) ). The --restart option Because I'm very lazy, and because OpenBSD is very simple to use, I've added the command "nmctl --restart" in the /etc/apm/resume script. Thanks to apmd, this script will be used each time I'm opening the lid of my laptop. In other words, each time I'll opening my laptop, nmctl will search the optimum network connection for me. But I had several issues in this scenario. Most of the problems were linked to the arp table issues. Indeed, in some circumstances, my proxy IP address was associated to the cable interface instead of the wifi interface or vice-versa. As consequence I'm not able to connect to the proxy, thus not able to connect to internet. So the ping to google (final test nmctl perform) is failing. Knowing that anyhow, I'm doing a full arp cleanup, it's not clear for me from where this problem come from. To solve this situation I've implemented a "retry" concept. In other words, before testing an another possible network connection (as listed in my /etc/nmctl.conf file), the script try 3x the current connection's parameters. If you want to reduce or increase this figures, you can do it via the --retry parameter. Results of my expertise with this small tool Where ever I'm located, my laptop is now connecting automatically to the wifi / cable connection previously identified for this location. Currently I have 3 places where I have Wifi credentials and 2 offices places where I just have to plug the network cable. Since the /etc/apm/resume scripts is triggered when I open the lid of the laptop, I just have to make sure that I plug the RJ45 before opening the laptop. For the rest, I do not have to type any commands, OpenBSD do all what is needed ;-). I hotels or restaurants, I can just connect to the Open Wifi thanks to the openbox menu created by "nmctl --scan". Next steps Documentation The tool is missing lot of documentation. 
I appreciate OpenBSD for his great documentation, so I have to do the same. I plan to write a README and a man page at first instances. But since my laziness, I will do it as soon as I see some interest for this tool from other persons. Tests I now have to travel and see how to see the script react on the different situations. Interested persons are welcome to share with me the outcome of their tests. I'm curious how it work. OpenBSD 6.3 on EdgeRouter Lite simple upgrade method TL;DR OpenBSD 6.3 oceton upgrade instructions may not factor that your ERL is running from the USB key they want wiped with the miniroot63.fs image loaded on. Place the bsd.rd for OpenBSD 6.3 on the sd0i slice used by U-Boot for the kernel, and then edit the boot command to run it. a tiny upgrade The OpenBSD documentation is comprehensive, but there might be rough corners around what are probably edge cases in their user base. People running EdgeRouter Lite hardware for example, who are looking to upgrade from 6.2 to 6.3. The documentation, which gave us everything we needed last time, left me with some questions about how to upgrade. In INSTALL.octeon, the Upgrading section does mention: The best solution, whenever possible, is to backup your data and reinstall from scratch I had to check if that directive existed in the documentation for other architectures. I wondered if oceton users were getting singled out. We were not. Just simplicity and pragmatism. Reading on: To upgrade OpenBSD 6.3 from a previous version, start with the general instructions in the section "Installing OpenBSD". But that section requires us to boot off of TFTP or NFS. Which I don’t want to do right now. Could also use a USB stick with the miniroot63.fs installed on it. But as the ERL only has a single USB port, we would have to remove the USB stick with the current install on it. Once we get to the Install or Upgrade prompt, there would be nothing to upgrade. Well, I guess I could use a USB hub. But the ERL’s USB port is inside the case. With all the screws in. And the tools are neatly put away. And I’d have to pull the USB hub from behind a workstation. And it’s two am. And I cleaned up the cabling in the lab this past weekend. Looks nice for once. So I don’t want to futz around with all that. There must be an almost imperceptibly easier way of doing this than setting up a TFTP server or NFS share in five minutes… Right? iXsystems Boise Technology Show 2018 Recap OpenZFS User Conference Slides & Videos Thank you ZFS ZSTD Compression Pool Layout Considerations ZFS Releases Helping Developers Help You ZFS and MySQL on Linux Micron OSNEXUS ZFS at Six Feet Up Flexible Disk Use with OpenZFS Batch editing files with ed what’s ‘ed’? ed is this sort of terrifying text editor. A typical interaction with ed for me in the past has gone something like this: $ ed help ? h ? asdfasdfasdfsadf ? <close terminal in frustration> Basically if you do something wrong, ed will just print out a single, unhelpful, ?. So I’d basically dismissed ed as an old arcane Unix tool that had no practical use today. vi is a successor to ed, except with a visual interface instead of this ? surprise: Ed is actually sort of cool and fun So if Ed is a terrifying thing that only prints ? at you, why am I writing a blog post about it? WELL!!!! On April 1 this year, Michael W Lucas published a new short book called Ed Mastery. 
I like his writing, and even though it was sort of an april fool’s joke, it was ALSO a legitimate actual real book, and so I bought it and read it to see if his claims that Ed is actually interesting were true. And it was so cool!!!! I found out: how to get Ed to give you better error messages than just ? that the name of the grep command comes from ed syntax (g/re/p) the basics of how to navigate and edit files using ed All of that was a cool Unix history lesson, but did not make me want to actually use Ed in real life. But!!! The other neat thing about Ed (that did make me want to use it!) is that any Ed session corresponds to a script that you can replay! So if I know Ed, then I can use Ed basically as a way to easily apply vim-macro-like programs to my files. Beastie Bits FreeBSD Mastery: Jails -- Help make it happen Video: OpenZFS Basics presented by George Wilson and Matt Ahrens at Scale 16x back in March 2018 DragonFlyBSD’s IPFW gets highspeed lockless in-kernel NAT A Love Letter to OpenBSD New talks, and the F-bomb Practical UNIX Manuals: mdoc BSD Meetup in Zurich: May 24th BSD Meetup in Warsaw: May 24th MeetBSD 2018 Tarsnap Feedback/Questions Seth - First time poudriere Builder Farhan - Why we didn't go FreeBSD architech - Encryption Feedback Dave - Handy Tip on setting up automated coredump handling for FreeBSD Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
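To round out the ed(1) discussion above, here is what the "replayable session" idea looks like as a small batch edit: the same ed commands applied to every file in a glob. The glob and the substitution are invented for illustration.

```
# Replace oldhost with newhost in every .conf file, one ed script per file.
for f in *.conf; do
    ed -s "$f" <<'EOF'
g/oldhost/s//newhost/g
w
q
EOF
done
```

Because the command stream is just text on standard input, the same script can be saved and replayed later, which is exactly the property the article is excited about.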

Episode 246: Properly Coordinated Disclosure | BSD Now 246

May 17, 2018 1:29:54 54.01 MB Downloads: 0

How Intel docs were misinterpreted by almost any OS, a look at the mininet SDN emulator, do’s and don’ts for FreeBSD, OpenBSD community going gold, ed mastery is a must read, and the distributed object store minio on FreeBSD. Headlines Intel documentation flaw sees instruction misimplemented in almost every OS A statement in the System Programming Guide of the Intel 64 and IA-32 Architectures Software Developer's Manual (SDM) was mishandled in the development of some or all operating-system kernels, resulting in unexpected behavior for #DB exceptions that are deferred by MOV SS or POP SS, as demonstrated by (for example) privilege escalation in Windows, macOS, some Xen configurations, or FreeBSD, or a Linux kernel crash. OS kernels may not expect this order of events and may therefore experience unexpected behavior when it occurs. + A detailed white paper describes this behavior here + FreeBSD Commit Thank you to the MSRC Incident Response Team, and in particular Greg Lenti and Nate Warfield, for coordinating the response to this issue across multiple vendors. Thanks to Computer Recycling at The Working Center of Kitchener for making hardware available to allow us to test the patch on additional CPU families. + FreeBSD Security Advisory + DragonFlyBSD Post + NetBSD does not support debug register and so is not affected. + OpenBSD also appears to not be affected, “We are not aware of further vendor information regarding this vulnerability.” + IllumOS Not Impacted Guest Post – A Look at SDN Emulator Mininet A guest post on the FreeBSD Foundation’s blog by developer Ayaka Koshibe At this year’s AsiaBSDCon, I presented a talk about a SDN network emulator called Mininet, and my ongoing work to make it more portable. That presentation was focused on the OpenBSD version of the port, and I breezed past the detail that I also had a version or Mininet working on FreeBSD. Because I was given the opportunity, I’d like to share a bit about the FreeBSD version of Mininet. It will not only be about what Mininet is and why it might be interesting, but also a recounting of my experience as a user making a first-time attempt at porting an application to FreeBSD. Mininet started off as a tool used by academic researchers to emulate OpenFlow networks when they didn’t have convenient access to actual networks. Because of its history, Mininet became associated strongly with networks that use OpenFlow for their control channels. But, it has also become fairly popular among developers working in, and among several universities for research and teaching about, SDN (Software Defined Networking) I began using Mininet as an intern at my university’s network research lab. I was using FreeBSD by that time, and wasn’t too happy to learn that Mininet wouldn’t work on anything but Linux. I gradually got tired of having to run a Linux VM just to use Mininet, and one day it clicked in my mind that I can actually try porting it to FreeBSD. Mininet creates a topology using the resource virtualization features that Linux has. Specifically, nodes are bash processes running in network namespaces, and the nodes are interconnected using veth virtual Ethernet links. Switches and controllers are just nodes whose shells have run the right commands to configure a software switch or start a controller application. 
Mininet can therefore be viewed as a series of Python libraries that run the system commands necessary to create network namespaces and veth interfaces, assemble a specified topology, and coordinate how user commands aimed at nodes (since they are just shells) are run. Coming back to the port, I chose to use vnet jails to replace the network namespaces, and epair(4) links to replace the veth links. For the SDN functionality, I needed at least one switch and controller that can be run on FreeBSD. I chose OpenvSwitch(OVS) for the switch, since it was available in ports and is well-known by the SDN world, and Ryu for the controller since it’s being actively developed and used and supports more recent versions of OpenFlow. I have discussed the possibility of upstreaming my work. Although they were excited about it, I was asked about a script for creating VMs with Mininet preinstalled, and continuous integration support for my fork of the repository. I started taking a look at the release scripts for creating a VM, and after seeing that it would be much easier to use the scripts if I can get Mininet and Ryu added to the ports tree, I also tried a hand at submitting some ports. For CI support, Mininet uses Travis, which unfortunately doesn’t support FreeBSD. For this, I plan to look at a minimalistic CI tool called contbuild, which looks simple enough to get running and is written portably. This is very much a work-in-progress, and one going at a glacial pace. Even though the company that I work for does use Mininet, but doesn’t use FreeBSD, so this is something that I’ve been working on in my free time. Earlier on, it was the learning curve that made progress slow. When I started, I hadn’t done anything more than run FreeBSD on a laptop, and uneventfully build a few applications from the ports tree. Right off the bat, using vnet jails meant learning how to build and run a custom kernel. This was the easy part, as the handbook was clear about how to do this. When I moved from using FreeBSD 10.3 to 11, I found that I can panic my machine by quickly creating and destroying OVS switches and jails. I submitted a bug report, but decided to go one step further and actually try to debug the panic for myself. With the help of a few people well-versed in systems programming and the developer’s handbook, I was able to come up with a fix, and get it accepted. This pretty much brings my porting experiment to the present day, where I’m slowly working out the pieces that I mentioned earlier. In the beginning, I thought that this Mininet port would be a weekend project where I come out knowing thing or two about using vnet jails and with one less VM to run. Instead, it became a crash course in building and debugging kernels and submitting bug reports, patches, and ports. It’d like to mention that I wouldn’t have gotten far at all if it weren’t for the helpful folks, the documentation, and how debuggable FreeBSD is. I enjoy good challenges and learning experiences, and this has definitely been both. Thank you to Ayaka for working to port Mininet to the BSDs, and for sharing her experiences with us. If you want to see the OpenBSD version of the talk, the video from AsiaBSDCon is here, and it will be presented again at BSDCan. **iXsystems** [iXsystems LFNW Recap](https://www.ixsystems.com/blog/lfnw-2018-recap/) 10 Beginner Do's and Don't for FreeBSD 1) Don't mix ports and binary packages 2) Don't edit 'default' files 3) Don't mess with /etc/crontab 4) Don't mess with /etc/passwd and /etc/groups either! 
5) Reconsider the removal of any options from your customized kernel configuration 6) Don't change the root shell to something else 7) Don't use the root user all the time 8) /var/backups is a thing 9) Check system integrity using /etc/mtree 10) What works for me doesn't have to work for you! News Roundup OpenBSD Community Goes Gold for 2018! Ken Westerback (krw@ when wearing his developer hat) writes: ``` Monthly paypal donations from the OpenBSD community have made the community the OpenBSD Foundation's first Gold level contributor for 2018! 2018 is the third consecutive year that the community has reached Gold status or better. These monthly paypal commitments by the community are our most reliable source of funds and thus the most useful for financial planning purposes. We are extremely thankful for the continuing support and hope the community matches their 2017 achievement of Platinum status. Or even their 2016 achievement of Iridium status. Sign up now for a monthly donation! Note that Bitcoin contributions have been re-enabled now that our Bitcoin intermediary has re-certified our Canadian paperwork. https://www.openbsdfoundation.org/donations.html ``` ed(1) mastery is a must read for real unix people In some circles on the Internet, your choice of text editor is a serious matter. We've all seen the threads on mailing lits, USENET news groups and web forums about the relative merits of Emacs vs vi, including endless iterations of flame wars, and sometimes even involving lesser known or non-portable editing environments. And then of course, from the Linux newbies we have seen an endless stream of tweeted graphical 'memes' about the editor vim (aka 'vi Improved') versus the various apparently friendlier-to-some options such as GNU nano. Apparently even the 'improved' version of the classical and ubiquitous vi(1) editor is a challenge even to exit for a significant subset of the younger generation. Yes, your choice of text editor or editing environment is a serious matter. Mainly because text processing is so fundamental to our interactions with computers. But for those of us who keep our systems on a real Unix (such as OpenBSD or FreeBSD), there is no real contest. The OpenBSD base system contains several text editors including vi(1) and the almost-emacs mg(1), but ed(1) remains the standard editor. Now Michael Lucas has written a book to guide the as yet uninitiated to the fundamentals of the original Unix text editor. It is worth keeping in mind that much of Unix and its original standard text editor written back when the standard output and default user interface was more likely than not a printing terminal. To some of us, reading and following the narrative of Ed Mastery is a trip down memory lane. To others, following along the text will illustrate the horror of the world of pre-graphic computer interfaces. For others again, the fact that ed(1) doesn't use your terminal settings much at all offers hope of fixing things when something or somebody screwed up your system so you don't have a working terminal for that visual editor. DigitalOcean Digital Ocean Promo Link for BSD Now Listeners Distributed Object Storage with Minio on FreeBSD Free and open source distributed object storage server compatible with Amazon S3 v2/v4 API. Offers data protection against hardware failures using erasure code and bitrot detection. Supports highly available distributed setup. Provides confidentiality, integrity and authenticity assurances for encrypted data with negligible performance overhead. 
Both server side and client side encryption are supported. Below is the image of example Minio setup. Architecture Diagram The Minio identifies itself as the ZFS of Cloud Object Storage. This guide will show You how to setup highly available distributed Minio storage on the FreeBSD operating system with ZFS as backend for Minio data. For convenience we will use FreeBSD Jails operating system level virtualization. Setup The setup will assume that You have 3 datacenters and assumption that you have two datacenters in whose the most of the data must reside and that the third datacenter is used as a ‘quorum/witness’ role. Distributed Minio supports up to 16 nodes/drives total, so we may juggle with that number to balance data between desired datacenters. As we have 16 drives to allocate resources on 3 sites we will use 7 + 7 + 2 approach here. The datacenters where most of the data must reside have 7/16 ratio while the ‘quorum/witness’ datacenter have only 2/16 ratio. Thanks to built in Minio redundancy we may loose (turn off for example) any one of those machines and our object storage will still be available and ready to use for any purpose. Jails First we will create 3 jails for our proof of concept Minio setup, storage1 will have the ‘quorum/witness’ role while storage2 and storage3 will have the ‘data’ role. To distinguish commands I type on the host system and storageX Jail I use two different prompts, this way it should be obvious what command to execute and where. WeI know the FreeNAS people have been working on integrating this Best practises for pledge(2) security Let's set the record straight for securing kcgi CGI and FastCGI applications with pledge(2). This is focussed on secure OpenBSD deployments. Theory Internally, kcgi makes considerable use of available security tools. But it's also designed to be invoked in a secure environment. We'll start with pledge(2), which has been around on OpenBSD since version 5.9. If you're reading this tutorial, you're probably on OpenBSD, and you probably have knowledge of pledge(2). How to begin? Read kcgi(3). It includes canonical information on which pledge(2) promises you'll need for each function in the library. This is just a tutorial—the manpage is canonical and overrides what you may read here. Next, assess the promises that your application needs. From kcgi(3), it's easy to see which promises we'll need to start. You'll need to augment this list with whichever tools you're also using. The general push is to start with the broadest set of required promises, then restrict as quickly as possible. Sometimes this can be done in a single pledge(2), but other times it takes a few. Beastie Bits April's London *BSD meetup - notes May’s London *BSD Meetup: May 22nd Call for Papers for EuroBSDcon 2018 FreeBSD Journal March/April Desktop/Laptop issue LWN followup on the PostgreSQL fsync() issue The Association for Computing Machinery recognizes Steve Bourne for outstanding contributions Feedback/Questions Ray - Speaking at Conferences Casey - Questions Jeremy - zfs in the enterprise HAST + ZFS Lars - Civil Infrastructure Platform use of *BSD Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
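As a sketch of the 7 + 7 + 2 distributed Minio layout described above: every jail exports the same credentials and runs the same minio invocation listing all sixteen endpoints, and Minio assembles the distributed set from that. Hostnames, paths, and keys are placeholders; check the Minio documentation for the exact syntax of the version in ports.

```
# Same credentials in every jail:
export MINIO_ACCESS_KEY=minio
export MINIO_SECRET_KEY=minio123

# storage2 and storage3 carry 7 endpoints each, storage1 (quorum/witness) 2;
# run the identical command in all three jails:
minio server \
  http://storage2/data/1 http://storage2/data/2 http://storage2/data/3 \
  http://storage2/data/4 http://storage2/data/5 http://storage2/data/6 \
  http://storage2/data/7 \
  http://storage3/data/1 http://storage3/data/2 http://storage3/data/3 \
  http://storage3/data/4 http://storage3/data/5 http://storage3/data/6 \
  http://storage3/data/7 \
  http://storage1/data/1 http://storage1/data/2
```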

Episode 245: ZFS User Conf 2018 | BSD Now 245

May 10, 2018 1:24:37 61.1 MB Downloads: 0

Allan’s recap of the ZFS User conference, first impressions of OmniOS by a BSD user, Nextcloud 13 setup on FreeBSD, OpenBSD on a fanless desktop computer, an intro to HardenedBSD, and DragonFlyBSD getting some SMP improvements. Headlines ZFS User Conference Recap Attendees met for breakfast on the fourth floor, in a lunchroom type area just outside of the theatre. One entire wall was made of lego base plates, and there were buckets of different coloured lego embedded in the wall. The talks started with Matt Ahrens discussing how the 2nd most requested feature of ZFS, Device Removal, has now landed, then pivoting into the MOST requested feature, RAID-Z expansion, and his work on that so far, which included the first functional prototype, on FreeBSD. Then our friend Calvin Hendryx-Parker presented how he solves all of his backup headaches with ZFS. I provided him some helpful hints to optimize his setup and improve the throughput of his backups Then Steven Umbehocker of OSNEXUS talked about their products, and how they manage large numbers of ZFS nodes After a very nice lunch, Orlando Pichardo of Micron talked about the future of flash, and their new 7.5TB SATA SSDs. Discussion of these devices after the talk may lead to enhancements to ZFS to better support these new larger flash devices that use larger logical sector sizes. Alek Pinchuk of Datto talked about Pool Layout Considerations then Tony Hutter of LLNL talked about the release process for ZFS on Linux Then Tom Caputi of Datto presented: Helping Developers Help You, guidance for users submitting bug reports, with some good and bad examples Then we had a nice cocktail party and dinner, and stayed late into the night talked about ZFS The next day, Jervin Real of Percona, presented: ZFS and MySQL on Linux, the Sweet Spots. Mostly outlining some benchmark they had done, some of the results were curious and some additional digging may turn up enhancements that can be made to ZFS, or just better tuning advice for high traffic MySQL servers. Then I presented my ZSTD compression work, which had been referenced in 2 of the previous talks, as people are anxious to get their hands on this code. Lastly, Eric Sproul of Circonus, gave his talk: Thank You, ZFS. It thanked ZFS and its Community for making their companies product possible, and then provided an update to his presentation from last year, where they were having problems with extremely high levels of ZFS fragmentation. This also sparked a longer conversation after the talk was over. Then we had a BBQ lunch, and after some more talking, the conference broke up. Initial OmniOS impressions by a BSD user I had been using FreeBSD as my main web server OS since 2012 and I liked it so much that I even contributed money and code to it. However, since the FreeBSD guys (and gals) decided to install anti-tech feminism, I have been considering to move away from it for quite some time now. As my growing needs require stronger hardware, it was finally time to rent a new server. I do not intend to run FreeBSD on it. Although the most obvious choice would be OpenBSD (I run it on another server and it works just fine), I plan to have a couple of databases running on the new machine, and database throughput has never been one of OpenBSD's strong points. This is my chance to give illumos another try. As neither WiFi nor desktop environments are relevant on a no-X11 server, the server-focused OmniOS seemed to fit my needs. 
My current (to be phased out) setup on FreeBSD is: apache24 with SSL support, running five websites on six domains (both HTTP and HTTPS) a (somewhat large) Tiny Tiny RSS installation from git, updated via cronjob sbcl running a daily cronjob of my Web-to-RSS parser an FTP server where I share stuff with friends an IRC bouncer MariaDB and PostgreSQL for some of the hosted services I would not consider anything of that too esoteric for a modern operating system. Since I was not really using anything mod_rewrite-related, I was perfectly ready to replace apache24 by nginx, remembering that the prepackaged apache24 on FreeBSD did not support HTTPS out of the box and I had ended up installing it from the ports. That is the only change in my setup which I am actively planning. So here's what I noticed. First impressions: Hooray, a BSD boot loader! Finally an operating system without grub - I made my experiences with that and I don't want to repeat them too often. It is weird that the installer won't accept "mydomain.org" as a hostname but sendmail complains that "mydomain" is not a valid hostname right from the start, OmniOS sent me into Maintenance Mode to fix that. A good start, right? So the first completely new thing I had to find out on my new shiny toy was how to change the hostname. There is no /etc/rc.conf in it and hostname mydomain.org was only valid for one login session. I found out that the hostname has to be changed in three different files under /etc on Solaris - the third one did not even exist for me. Changing the other two files seems to have solved this problem for me. Random findings: ~ I was wondering how many resources my (mostly idle) new web server was using - I always thought Solaris was rather fat, but it still felt fast to me. Ah, right - we're in Unixland and we need to think outside of the box. This table was really helpful: although a number of things are different between OmniOS and SmartOS, I found out that the *stat tools do what top does. I could probably just install top from one of the package managers, but I failed to find a reason to do so. I had 99% idle CPU and RAM - that's all I wanted to know. ~ Trying to set up twtxt informed me that Python 3.6 (from pkgin) expects LANG and LC_ALL to be set. Weird - did FreeBSD do that for me? It's been a while ... at least that was easy to fix. ~ SMF - Solaris's version of init - confuses me. It has "levels" similar to Gentoo's OpenRC, but it mostly shuts up during the boot process. Stuff from pkgsrc, e.g. nginx, comes with a description how to set up the particular service, but I should probably read more about it. What if, one day, I install a package which is not made ready for OmniOS? I'll have to find out how to write SMF scripts. But that should not be my highest priority. ~ The OmniOS documentation talks a lot about "zones" which, if I understand that correctly, mostly equal FreeBSD's "jails". This could be my chance to try to respect a better separation between my various services - if my lazyness won't take over again. (It probably will.) ~ OmniOS's default shell - rather un-unixy - seems to be the bash. Update: I was informed about a mistake here: the default shell is ksh93, there are bogus .bashrc files lying around though. ~ Somewhere in between, my sshd had a hiccup or, at least, logging into it took longer than usual. If that happens again, I should investigate. 
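Since SMF shows up above as the most confusing part of the switch, a few day-to-day commands are worth noting; these are the standard illumos/Solaris service management tools, with nginx used as an assumed example service name (the actual FMRI on OmniOS may differ).

```
# List services, and show anything that is failing and why:
svcs -a
svcs -xv

# Enable, restart, or disable a service by (abbreviated) FMRI:
svcadm enable nginx
svcadm restart nginx
svcadm disable nginx

# Inspect the manifest behind a service, handy before writing your own
# SMF manifests for software that does not ship one:
svccfg export nginx
```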
Conclusion: By the time of me writing this, I have a basic web server with awesome performance and a lot of applications ready to be configured only one click away. The more I play with it, the more I have the feeling that I have missed a lot while wasting my time with FreeBSD. For a system that is said to be "dying", OmniOS feels well thought out and, when equipped with reasonable package management, comes with everything I need to reproduce my FreeBSD setup without losing functionality. I'm looking forward to what will happen with it.

DigitalOcean http://do.co/bsdnow

[Open Source Hardware Camp 2018 — Sat 30/06 & Sun 01/07, Lincoln, UK (includes 'Open-source RISC-V core quickstart' and 'An introductory workshop to NetBSD on embedded platforms')](http://oshug.org/pipermail/oshug/2018-April/000635.html)
```
Hi All,

I'm pleased to announce that we have 10 talks and 7 workshops confirmed for Open Source Hardware Camp 2018, with the possibility of one or two more. Registration is now open!

For the first time ever we will be hosting OSHCamp in Lincoln and a huge thanks to Sarah Markall for helping to make this happen. As in previous years, there will be a social event on the Saturday evening and we have a room booked at the Wig and Mitre. Food will be available. There will likely be a few of us meeting up for pre-conference drinks on the Friday evening also.

Details of the programme can be found below and, as ever, we have an excellent mix of topics being covered.

Cheers, Andrew
```
Open Source Hardware Camp 2018
On the 30th June 2018, 09:00 Saturday morning - 16:00 on the Sunday afternoon at The Blue Room, The Lawn, Union Rd, Lincoln, LN1 3BU. Registration: http://oshug.org/event/oshcamp2018
Open Source Hardware Camp 2018 will be hosted in the historic county town of Lincoln — home to, amongst others, noted engine builders Ruston & Hornsby (now Siemens, via GEC and English Electric). Lincoln is well served by rail, reachable from Leeds and London within 2-2.5 hours, and 4-5 hours from Edinburgh and Southampton. There will be a social at the Wig and Mitre on the Saturday evening. For travel and accommodation information please see the event page on oshug.org.

News Roundup

Nextcloud 13 on FreeBSD
Today I would like to share a setup of Nextcloud 13 running on a FreeBSD system. To make things more interesting, it will be running inside a FreeBSD jail. I will not describe the Nextcloud setup itself here as it is large enough for several blog posts. The official Nextcloud 13 documentation recommends the following setup: MySQL/MariaDB, PHP 7.0 (or newer), Apache 2.4 (with mod_php). I prefer the PostgreSQL database to MySQL/MariaDB and I prefer the fast and lean Nginx web server to Apache, so my setup is based on these components: PostgreSQL 10.3, PHP 7.2.4, Nginx 1.12.2 (with php-fpm), Memcached 1.5.7. The Memcached subsystem is the least important; it can easily be swapped for something more modern like Redis, for example. I prefer not to use any third-party tools for FreeBSD jail management. Not because they are bad or anything like that - there are just many good choices for FreeBSD jail management, and I want to provide a GENERIC example for Nextcloud 13 in a jail, not one tied to a specific management tool.

Host
Let's start by preparing the FreeBSD host with the needed settings. We need to allow the use of raw sockets in jails. For future optional upgrades of the jail we will also allow the use of chflags(1) in jails.

OpenBSD on my fanless desktop computer
You asked me about my setup. Here you go.
I’ve been using OpenBSD on servers for years as a web developer, but never had a chance to dive into system administration before. If you appreciate the simplicity of OpenBSD, you have to give it a try on your desktop. Bear in mind, this is a relatively cheap ergonomic setup: all I need is xterm(1) with Vim and Firefox. I don’t care about CPU/GPU performance or mobility too much, but I want a large screen and a good keyboard.

Item / Price (USD):
Zotac CI527 NANO-BE - $371
16GB RAM Crucial DDR4-2133 - $127
250GB SSD Samsung 850 EVO - $104
Asus VZ249HE 23.8" IPS Full HD - $129
ErgoDox EZ V3, Cherry MX Brown, blank DCS - $325
Kensington Orbit Trackball - $33
Total - $1,107

OpenBSD
I tried a few times to install OpenBSD on my MacBooks—I heard some models are compatible with it—but in my case it was a bit of a fiasco (thanks to Nvidia and Broadcom). That’s why I bought a new computer, just to be able to run this wonderful operating system. Now I run -stable on my desktop and servers. Servers are supposed to be reliable, that’s obvious, but why not run -current on the desktop? Because -stable ships every six months, and that’s often enough for me. I prefer slow fashion.

iXsystems iX Ad Spot NAB 2018 – Michael Dexter’s Recap

Introduction to HardenedBSD World
HardenedBSD is a security-enhanced fork of FreeBSD that was created in 2014. HardenedBSD implements many exploit mitigation and security technologies on top of FreeBSD, all of which started with the implementation of Address Space Layout Randomization (ASLR). The fork was created for ease of development. To cite the https://hardenedbsd.org/content/about page – “HardenedBSD aims to implement innovative exploit mitigation and security solutions for the FreeBSD community. (…) HardenedBSD takes a holistic approach to security by hardening the system and implementing exploit mitigation technologies.” Most FreeBSD enthusiasts know the mfsBSD project by Martin Matuska – http://mfsbsd.vx.sk/ – a FreeBSD system loaded completely into memory. The mfsBSD equivalent in the HardenedBSD world is SoloBSD – http://www.solobsd.org/ – which is based on HardenedBSD sources.

One may ask how the HardenedBSD project compares to OpenBSD, the system better known for its security, and that is a very important question. The OpenBSD developers try to write ‘good’ code without dirty hacks for performance or other reasons. Clean and secure code is most important in the OpenBSD world. The OpenBSD project has even made a security audit of all OpenBSD code, line by line. This is easier to achieve there than in FreeBSD or HardenedBSD because the OpenBSD code base is about ten times smaller. This also has other implications and possibilities. While FreeBSD (and HardenedBSD) offers many modern features - a mature SMP subsystem (even with some NUMA support), the ZFS filesystem, the GEOM storage framework, bhyve virtualization, a VirtualBox option and many others - OpenBSD remains a classic UNIX system with the UFS filesystem and very ‘theoretical’ SMP support. The vmm project implements a new hypervisor in the OpenBSD world, but because of the lack of graphics support it currently runs only OpenBSD, Illumos and Linux guests; you will not virtualize Windows or Mac OS X there. It is also the only virtualization option for OpenBSD, as there are no jails on OpenBSD. The current bhyve implementation, by contrast, even allows one to boot the latest Windows 2019 Technology Preview. The HardenedBSD project is the FreeBSD code base with LOTS of security mechanisms and mitigations that are not available in stock FreeBSD.
For example, the entire lib32 tree has been disabled by default on HardenedBSD to make it more secure. Also, LibreSSL is the default SSL library on HardenedBSD, the same as on OpenBSD, while FreeBSD uses OpenSSL for compatibility reasons. Comparison between LibreSSL and OpenSSL vulnerabilities: https://en.wikipedia.org/wiki/LibreSSL#Security https://wiki.freebsd.org/LibreSSL#LibreSSL.28andOpenSSL.29SecurityVulnerabilities One may see HardenedBSD as FreeBSD being successfully pulled up to the OpenBSD level (at least that is the goal), but as FreeBSD has tons more code and features, it will be a harder and longer process to achieve that goal. As I do not have that much competence in the security field, I will just repost the HardenedBSD project's comparison against other BSD systems. The comparison is also available here – https://hardenedbsd.org/content/easy-feature-comparison – on the HardenedBSD website.

Running my own git server
Note: This article is predominantly based on work by Hiltjo Posthuma, whom you should read, because I would have spent far too much time failing to set things up if it wasn’t for their post. Not only have they written lots of very interesting posts, they write some really brilliant programs. Since I started university 3 years ago, I started using lots of services from lots of different companies. The “cloud” trend led me to believe that I wanted other people to look after my data for me. I was wrong. Since finding myself loving the ethos of OpenBSD, I found myself wanting to apply this ethos to the services I use as well. Not only is it important to me because of the security benefits, but also because I like the minimalist style OpenBSD portrays. This is the first in a mini-series documenting my move from bloated, hosted, sometimes proprietary services to minimal, well-written, free, self-hosted services.

Tools & applications
These are the programs I am going to be using to get my git server up and running: httpd(8), acme-client(1), git(1), cgit(1), slowcgi(8)

Setting up httpd
Ensure you have the necessary flags enabled in your /etc/rc.conf.local:

Configuring cgit
When using the OpenBSD httpd(8), it will serve its content in a chrooted environment, which defaults to the home directory of the user it runs as, which is www in this case. This means that the chroot is limited to the directory /var/www and its contents. In order to configure cgit, there must be a cgitrc file available to cgit. This is found at the location stored in $CGIT_CONFIG, which defaults to /conf/cgitrc. Because of the chroot, this file is actually stored at /var/www/conf/cgitrc.

Beastie Bits
My Penguicon 2018 Schedule
sigaction: see who killed you (and more)
Takeshi steps down from NetBSD core team after 13 years
DragonFlyBSD Kernel Gets Some SMP Improvements – Phoronix
Writing FreeBSD Malware
Tarsnap ad

Feedback/Questions
Troels - Question regarding ZFS xattr
Mike - Sharing your screen
Wilyarti - Adlocking on FreeBSD
Brad - Recommendations for snapshot strategy
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 244: C is a Lie | BSD Now 244

May 03, 2018 1:25:32 61.65 MB Downloads: 0

Arcan and OpenBSD, running OpenBSD 6.3 on RPI 3, why C is not a low-level language, HardenedBSD switching back to OpenSSL, how the Internet was almost broken, EuroBSDcon CfP is out, and the BSDCan 2018 schedule is available.

Headlines

Towards Secure System Graphics: Arcan and OpenBSD
Let me preface this by saying that this is a (very) long and medium-rare technical article about the security considerations and minutiae of porting (most of) the Arcan ecosystem to work under OpenBSD. The main point of this article is not so much flirting with the OpenBSD crowd or adding further noise to software engineering topics, but to go through the special considerations that had to be taken, as notes to anyone else who decides to go down this overgrown and lonesome trail, or is curious about some less than obvious differences between how these things “work” on Linux vs. other parts of the world. A disclaimer is also that most of this has been discovered by experimentation and by combining bits and pieces scattered in everything from Xorg code to man pages; there may be smarter ways to solve some of the problems mentioned – this is just the best I could find within the time allotted. I’d be happy to be corrected, in patch/pull request form that is 😉 Each section will start with a short rant-like explanation of how it works in Linux, and what the translation to OpenBSD involved or, in the cases that are still partly or fully missing, will require. The topics that will be covered this time are: Graphics Device Access, Hotplug, Input, Backlight, Xorg, Pledging, Missing.

Installing OpenBSD 6.3 (snapshots) on Raspberry Pi 3, the easy way
Installing OpenBSD on the Raspberry Pi 3 is very easy and well documented, which almost convinced me not to write about it, but I still felt it might help somebody new to the project (but again, I really recommend reading the documentation if you are interested and have the time). Note: I'm always running snapshots and recommend that anybody else do so as well. But the snapshot links will change to the next version every 6 months, so I changed the links to the 6.3 version to keep the blog post valid over time. If you're familiar with the OpenBSD flavors, feel free to use the snapshot links instead.
Requirements: Due to the lack of a driver, OpenBSD cannot yet boot directly from the SD card, so we'll need a USB stick as the installation target alongside the SD card holding U-Boot and the installer. Also, a serial console connection is required. I used a PL2303 USB-to-serial (TTL) adapter connected to my laptop via USB and connected to the Raspberry Pi via the TX, RX and GND pins.

iXsystems https://www.ixsystems.com/blog/truenas-m-series-veeam-pr-2018/

Why Didn’t Larrabee Fail?
Every month or so, someone will ask me what happened to Larrabee and why it failed so badly. And I then try to explain to them that not only didn't it fail, it was a pretty huge success. And they are understandably very puzzled by this, because in the public consciousness Larrabee was like the Itanic and the SPU rolled into one, wasn't it? Well, not quite. So rather than explain it in person a whole bunch more times, I thought I should write it down. This is not a history, and I'm going to skip a TON of details for brevity. One day I'll write the whole story down, because it's a pretty decent escapade with lots of fun characters. But not today. Today you just get the very start and the very end.
When I say "Larrabee" I mean all of Knights, all of MIC, all of Xeon Phi, all of the "Isle" cards - they're all exactly the same chip and the same people and the same software effort. Marketing seemed to dream up a new codeword every week, but there was only ever three chips: Knights Ferry / Aubrey Isle / LRB1 - mostly a prototype, had some performance gotchas, but did work, and shipped to partners. Knights Corner / Xeon Phi / LRB2 - the thing we actually shipped in bulk. Knights Landing - the new version that is shipping any day now (mid 2016). That's it. There were some other codenames I've forgotten over the years, but they're all of one of the above chips. Behind all the marketing smoke and mirrors there were only three chips ever made (so far), and only four planned in total (we had a thing called LRB3 planned between KNC and KNL for a while). All of them are "Larrabee", whether they do graphics or not. When Larrabee was originally conceived back in about 2005, it was called "SMAC", and its original goals were, from most to least important: Make the most powerful flops-per-watt machine for real-world workloads using a huge array of simple cores, on systems and boards that could be built into bazillo-core supercomputers. Make it from x86 cores. That means memory coherency, store ordering, memory protection, real OSes, no ugly scratchpads, it runs legacy code, and so on. No funky DSPs or windowed register files or wacky programming models allowed. Do not build another Itanium or SPU! Make it soon. That means keeping it simple. Support the emerging GPGPU market with that same chip. Intel were absolutely not going to build a 150W PCIe card version of their embedded graphics chip (known as "Gen"), so we had to cover those programming models. As a bonus, run normal graphics well. Add as little graphics-specific hardware as you can get away with. That ordering is important - in terms of engineering and focus, Larrabee was never primarily a graphics card. If Intel had wanted a kick-ass graphics card, they already had a very good graphics team begging to be allowed to build a nice big fat hot discrete GPU - and the Gen architecture is such that they'd build a great one, too. But Intel management didn't want one, and still doesn't. But if we were going to build Larrabee anyway, they wanted us to cover that market as well. ... the design of Larrabee was of a CPU with a very wide SIMD unit, designed above all to be a real grown-up CPU - coherent caches, well-ordered memory rules, good memory protection, true multitasking, real threads, runs Linux/FreeBSD, etc. Larrabee, in the form of KNC, went on to become the fastest supercomputer in the world for a couple of years, and it's still making a ton of money for Intel in the HPC market that it was designed for, fighting very nicely against the GPUs and other custom architectures. Its successor, KNL, is just being released right now (mid 2016) and should do very nicely in that space too. Remember - KNC is literally the same chip as LRB2. It has texture samplers and a video out port sitting on the die. They don't test them or turn them on or expose them to software, but they're still there - it's still a graphics-capable part. But it's still actually running FreeBSD on that card, and under FreeBSD it's just running an x86 program called DirectXGfx (248 threads of it). News Roundup C Is Not a Low-level Language : Your computer is not a fast PDP-11. 
In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades. Processor vendors are not alone in this. Those of us working on C/C++ compilers have also participated. What Is a Low-Level Language? Computer science pioneer Alan Perlis defined low-level languages this way: "A programming language is low level when its programs require attention to the irrelevant." While, yes, this definition applies to C, it does not capture what people desire in a low-level language. Various attributes cause people to regard a language as low-level. Think of programming languages as belonging on a continuum, with assembly at one end and the interface to the Starship Enterprise's computer at the other. Low-level languages are "close to the metal," whereas high-level languages are closer to how humans think. For a language to be "close to the metal," it must provide an abstract machine that maps easily to the abstractions exposed by the target platform. It's easy to argue that C was a low-level language for the PDP-11. They both described a model in which programs executed sequentially, in which memory was a flat space, and even the pre- and post-increment operators cleanly lined up with the PDP-11 addressing modes. Fast PDP-11 Emulators The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware. C code provides a mostly serial abstract machine (until C11, an entirely serial machine if nonstandard vendor extensions were excluded). Creating a new thread is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs. The quest for high ILP was the direct cause of Spectre and Meltdown. A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches. This, again, adds complexity; it also means that an incorrect guess results in work being done and then discarded, which is not ideal for power consumption. This discarded work has visible side effects, which the Spectre and Meltdown attacks could exploit. On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. 
To make matters worse, it cannot be turned off or power gated while any instructions are running, which makes it inconvenient in a dark silicon era when transistors are cheap but powered transistors are an expensive resource. This unit is conspicuously absent on GPUs, where parallelism again comes from multiple threads rather than trying to extract instruction-level parallelism from intrinsically scalar code. If instructions do not have dependencies that need to be reordered, then register renaming is not necessary. Consider another core part of the C abstract machine's memory model: flat memory. This hasn't been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency. The cache is, as its name implies, hidden from the programmer and so is not visible to C. Efficient use of the cache is one of the most important ways of making code run quickly on a modern processor, yet this is completely hidden by the abstract machine, and programmers must rely on knowing implementation details of the cache (for example, two values that are 64-byte-aligned may end up in the same cache line) to write efficient code. Backup URL Hacker News Commentary HardenedBSD Switching Back to OpenSSL Over a year ago, HardenedBSD switched to LibreSSL as the default cryptographic library in base for 12-CURRENT. 11-STABLE followed suit later on. Bernard Spil has done an excellent job at keeping our users up-to-date with the latest security patches from LibreSSL. After recently updating 12-CURRENT to LibreSSL 2.7.2 from 2.6.4, it has become increasingly clear to us that performing major upgrades requires a team larger than a single person. Upgrading to 2.7.2 caused a lot of fallout in our ports tree. As of 28 Apr 2018, several ports we consider high priority are still broken. As it stands right now, it would take Bernard a significant amount of his spare personal time to fix these issues. Until we have a multi-person team dedicated to maintaining LibreSSL in base along with the patches required in ports, HardenedBSD will use OpenSSL going forward as the default cryptographic library in base. LibreSSL will co-exist with OpenSSL in the source tree, as it does now. However, MK_LIBRESSL will default to "no" instead of the current "yes". Bernard will continue maintaining LibreSSL in base along with addressing the various problematic ports entries. To provide our users with ample time to plan and perform updates, we will wait a period of two months prior to making the switch. The switch will occur on 01 Jul 2018 and will be performed simultaneously in 12-CURRENT and 11-STABLE. HardenedBSD will archive a copy of the LibreSSL-centric package repositories and binary updates for base for a period of six months after the switch (expiring the package repos on 01 Jan 2019). This essentially gives our users eight full months for an upgrade path. As part of the switch back to OpenSSL, the default NTP daemon in base will switch back from OpenNTPd to ISC NTP. Users who have localopenntpdenable="YES" set in rc.conf will need to switch back to ntpd_enable="YES". Users who build base from source will want to fully clean their object directories. Any and all packages that link with libcrypto or libssl will need to be rebuilt or reinstalled. With the community's help, we look forward to the day when we can make the switch back to LibreSSL. 
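For HardenedBSD users planning the switch described above, here is a rough sketch of the user-facing steps, assuming stock FreeBSD rc.conf/sysrc conventions (the rc.conf knob for OpenNTPD is normally spelled openntpd_enable); treat it as an outline rather than the project's official upgrade procedure:

```
# switch the time daemon back from OpenNTPD to ISC ntpd
sysrc -x openntpd_enable
sysrc ntpd_enable="YES"

# users building base from source: fully clean the object directories first
rm -rf /usr/obj/*

# force-reinstall packages so anything linking libcrypto/libssl is rebuilt against OpenSSL
pkg upgrade -f
```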
We at HardenedBSD believe that providing our users options to rid themselves of software monocultures can better increase security and manage risk. DigitalOcean http://do.co/bsdnow -- $100 credit for 60 days How Dan Kaminsky Almost Broke the Internet In the summer of 2008, security researcher Dan Kaminsky disclosed how he had found a huge flaw in the Internet that could let attackers redirect web traffic to alternate servers and disrupt normal operations. In this Hacker History video, Kaminsky describes the flaw and notes the issue remains unfixed. “We were really concerned about web pages and emails 'cause that’s what you get to compromise when you compromise DNS,” Kaminsky says. “You think you’re sending an email to IBM but it really goes to the bad guy.” As the phone book of the Internet, DNS translates easy-to-remember domain names into IP addresses so that users don’t have to remember strings of numbers to reach web applications and services. Authoritative nameservers publish the IP addresses of domain names. Recursive nameservers talk to authoritative servers to find addresses for those domain names and saves the information into its cache to speed up the response time the next time it is asked about that site. While anyone can set up a nameserver and configure an authoritative zone for any site, if recursive nameservers don’t point to it to ask questions, no one will get those wrong answers. We made the Internet less flammable. Kaminsky found a fundamental design flaw in DNS that made it possible to inject incorrect information into the nameserver's cache, or DNS cache poisoning. In this case, if an attacker crafted DNS queries looking for sibling names to existing domains, such as 1.example.com, 2.example.com, and 3.example.com, while claiming to be the official "www" server for example.com, the nameserver will save that server IP address for “www” in its cache. “The server will go, ‘You are the official. Go right ahead. Tell me what it’s supposed to be,’” Kaminsky says in the video. Since the issue affected nearly every DNS server on the planet, it required a coordinated response to address it. Kaminsky informed Paul Vixie, creator of several DNS protocol extensions and application, and Vixie called an emergency summit of major IT vendors at Microsoft’s headquarters to figure out what to do. The “fix” involved combining the 16-bit transaction identifier that DNS lookups used with UDP source ports to create 32-bit transaction identifiers. Instead of fixing the flaw so that it can’t be exploited, the resolution focused on making it take more than ten seconds, eliminating the instantaneous attack. “[It’s] not like we repaired DNS,” Kaminsky says. “We made the Internet less flammable.” DNSSEC (Domain Name System Security Extensions), is intended to secure DNS by adding a cryptographic layer to DNS information. The root zone of the internet was signed for DNSSEC in July 2010 and the .com Top Level Domain (TLD) was finally signed for DNSSEC in April 2011. Unfortunately, adoption has been slow, even ten years after Kaminsky first raised the alarm about DNS, as less than 15 percent of users pass their queries to DNSSEC validating resolvers. The Internet was never designed to be secure. The Internet was designed to move pictures of cats. No one expected the Internet to be used for commerce and critical communications. If people lose faith in DNS, then all the things that depend on it are at risk. “What are we going to do? Here is the answer. Some of us gotta go out fix it,” Kaminsky says. 
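To put rough numbers on why widening the effective transaction identifier helped, here is a small back-of-the-envelope sketch; the attacker rate and port count are illustrative assumptions, not figures from the story above:

```c
#include <stdio.h>

/*
 * Rough illustration of the DNS spoofing math: with only a 16-bit
 * transaction ID, an off-path attacker racing N forged answers per query
 * succeeds with odds around N/65536 per query; adding ~16 bits of source
 * port entropy multiplies the search space by the number of usable ports.
 * All numbers below are assumptions chosen for illustration.
 */
int main(void) {
    double ids = 65536.0;             /* 16-bit transaction ID space     */
    double ports = 64512.0;           /* approx. usable ephemeral ports  */
    double forged_per_query = 100.0;  /* assumed attacker packets/query  */

    printf("per-query success, ID only   : %.6f\n",
           forged_per_query / ids);
    printf("per-query success, ID + port : %.9f\n",
           forged_per_query / (ids * ports));
    return 0;
}
```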
OpenIndiana Hipster 2018.04 is here We have released a new OpenIndiana Hipster snapshot 2018.04. The noticeable changes: Userland software is rebuilt with GCC 6. KPTI was enabled to mitigate recent security issues in Intel CPUs. Support of Gnome 2 desktop was removed. Linked images now support zoneproxy service. Mate desktop applications are delivered as 64-bit-only. Upower support was integrated. IIIM was removed. More information can be found in 2018.04 Release notes and new medias can be downloaded from http://dlc.openindiana.org. Beastie Bits EuroBSDCon - Call for Papers OpenSSH 7.7 pkgsrc-2018Q1 released BSDCan Schedule Michael Dexter's LFNW talk Tarsnap ad Feedback/Questions Bob - Help locating FreeBSD Help Alex - Convert directory to dataset Adam - FreeNAS Question Florian - Three Questions Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv iX Ad spot: iXsystems TrueNAS M-Series Blows Away Veeam Backup Certification Tests

Episode 243: Understanding The Scheduler | BSD Now 243

April 25, 2018 1:25:24 61.67 MB Downloads: 0

OpenBSD 6.3 and DragonflyBSD 5.2 are released, a bug fix for disappearing files in OpenZFS on Linux (and only Linux), understanding the FreeBSD CPU scheduler, NetBSD on RPI3, thoughts on being a committer for 20 years, and 5 reasons to use FreeBSD in 2018.

Headlines

OpenBSD 6.3 released
Punctual as ever, OpenBSD 6.3 has been released with the following features/changes: improved HW support, including SMP support on OpenBSD/arm64 platforms; vmm/vmd improvements; IEEE 802.11 wireless stack improvements; generic network stack improvements; installer improvements; routing daemons and other userland network improvements; security improvements; dhclient(8) improvements; assorted improvements; OpenSMTPD 6.0.4; OpenSSH 7.7; LibreSSL 2.7.2

DragonFlyBSD 5.2 released
Big-ticket items:
Meltdown and Spectre mitigation support - Meltdown isolation and Spectre mitigation support added. Meltdown mitigation is automatically enabled for all Intel CPUs. Spectre mitigation must be enabled manually via sysctl if desired, using the sysctls machdep.spectre_mitigation and machdep.meltdown_mitigation.
HAMMER2 - H2 has received a very large number of bug fixes and performance improvements. We can now recommend H2 as the default root filesystem in non-clustered mode. Clustered support is not yet available.
ipfw updates - Implement state-based "redirect", i.e. without using libalias. ipfw now supports all possible ICMP types. Fix ICMP_MAXTYPE assumptions (now 40 as of this release).
Improved graphics support - The drm/i915 kernel driver has been updated to support Intel Coffeelake GPUs. Add 24-bit pixel format support to the EFI frame buffer code. Significantly improve fbio support for the "scfb" XOrg driver. This allows EFI frame buffers to be used by X in situations where we do not otherwise support the GPU. Partly implement the FBIOBLANK ioctl for display powersaving. Syscons waits for drm modesetting at appropriate places, avoiding races.
+ For more details, check out the “All changes since DragonFly 5.0” section.

ZFS on Linux bug causes files to disappear
A bug in ZoL 0.7.7 caused 0.7.8 to be released just 3 days after the previous release. The bug only impacts Linux; the change that caused the problem had not been upstreamed yet, so it does not impact ZFS on illumos, FreeBSD, OS X, or Windows. The bug can cause files being copied into a directory to not be properly linked to the directory, so they will no longer be listed in the contents of the directory. ZoL developers are working on a tool to allow you to recover the data, since no data was actually lost; the files were just not properly registered as part of the directory. The bug was introduced in a commit made in February that attempted to improve the performance of datasets created with the case insensitivity option. In an effort to improve performance, it introduced a cap: give up (return ENOSPC) if growing the directory ZAP failed twice. The ZAP is the key-value pair data structure that contains metadata for a directory, including a hash table of the files that are in the directory. When a directory has a large number of files, the ZAP is converted to a FatZAP, and additional space may need to be allocated as additional files are added. Commit cc63068 caused ENOSPC errors when copying a large number of files between two directories. The reason is that the patch limits ZAP leaf expansion to 2 retries, and returns ENOSPC when that fails. Finding the root cause of this issue was somewhat hampered by the fact that many people were not able to reproduce the issue.
It turns out this was caused by an entirely unrelated change to GNU coreutils. On later versions of GNU coreutils, the files were returned in sorted order, resulting in them hitting different buckets in the hash table and not tripping the retry limit. Tools like rsync were unaffected, because they always sort the files before copying. If you did not see any ENOSPC errors, you were likely not impacted. The intent of limiting retries is to prevent pointlessly growing the table to max size when adding a block full of entries with the same name in different case in mixed mode. However, it turns out we cannot use any limit on the retries. When we copy files from one directory in readdir order, we are copying in hash order, one leaf block at a time. This means that if the leaf block in the source directory has expanded 6 times, and you copy the entries in that block, by the time you need to expand the leaf in the destination directory, you need to expand it 6 times in one go. So any limit on the retries will result in an error where there shouldn't be one.

Recommendations for users from Richard Yao:
The regression makes it so that creating a new file could fail with ENOSPC, after which files created in that directory could become orphaned. Existing files seem okay, but I have yet to confirm that myself and I cannot speak for what others know. It is incredibly difficult to reproduce on systems running coreutils 8.23 or later. So far, reports have only come from people using coreutils 8.22 or older. The directory size actually gets incremented for each orphaned file, which makes it wrong after orphaned files appear. We will likely have some way to recover the orphaned files (like ext4’s lost+found) and fix the directory sizes in the very near future. Snapshots of the damaged datasets are problematic, though. Until we have a subcommand to fix it (not including the snapshots, which we would have to list), the damage can be removed from a system that has it either by rolling back to a snapshot before it happened, or by creating a new dataset with 0.7.6 (or another release other than 0.7.7), moving everything to the new dataset and destroying the old. That will restore things to pristine condition. It should also be possible to check for pools that are affected, but I have yet to finish my analysis to be certain that no false negatives occur when checking, so I will avoid saying how for now. Writes to existing files cannot trigger this bug, only adding new files to a directory in bulk.

News Roundup

des@’s thoughts on being a FreeBSD committer for 20 years
Yesterday was the twentieth anniversary of my FreeBSD commit bit, and tomorrow will be the twentieth anniversary of my first commit. I figured I’d split the difference and write a few words about it today. My level of engagement with the FreeBSD project has varied greatly over the twenty years I’ve been a committer. There have been times when I worked on it full-time, and times when I did not touch it for months. The last few years, health issues and life events have consumed my time and sapped my energy, and my contributions have come in bursts. Commit statistics do not tell the whole story, though: even when not working on FreeBSD directly, I have worked on side projects which, like OpenPAM, may one day find their way into FreeBSD. My contributions have not been limited to code.
I was the project’s first Bugmeister; I’ve served on the Security Team for a long time, and have been both Security Officer and Deputy Security Officer; I managed the last four Core Team elections and am doing so again this year. In return, the project has taught me much about programming and software engineering. It taught me code hygiene and the importance of clarity over cleverness; it taught me the ins and outs of revision control; it taught me the importance of good documentation, and how to write it; and it taught me good release engineering practices. Last but not least, it has provided me with the opportunity to work with some of the best people in the field. I have the privilege today to count several of them among my friends. For better or worse, the FreeBSD project has shaped my career and my life. It set me on the path to information security in general and IAA in particular, and opened many a door for me. I would not be where I am now without it. I won’t pretend to be able to tell the future. I don’t know how long I will remain active in the FreeBSD project and community. It could be another twenty years; or it could be ten, or five, or less. All I know is that FreeBSD and I still have things to teach each other, and I don’t intend to call it quits any time soon. iXsystems unveils new TrueNAS M-Series Unified Storage Line San Jose, Calif., April 10, 2018 — iXsystems, the leader in Enterprise Open Source servers and software-defined storage, announced the TrueNAS M40 and M50 as the newest high-performance models in its hybrid, unified storage product line. The TrueNAS M-Series harnesses NVMe and NVDIMM to bring all-flash array performance to the award-winning TrueNAS hybrid arrays. It also includes the Intel® Xeon® Scalable Family of Processors and supports up to 100GbE and 32Gb Fibre Channel networking. Sitting between the all-flash TrueNAS Z50 and the hybrid TrueNAS X-Series in the product line, the TrueNAS M-Series delivers up to 10 Petabytes of highly-available and flash-powered network attached storage and rounds out a comprehensive product set that has a capacity and performance option for every storage budget. Designed for On-Premises & Enterprise Cloud Environments As a unified file, block, and object sharing solution, TrueNAS can meet the needs of file serving, backup, virtualization, media production, and private cloud users thanks to its support for the SMB, NFS, AFP, iSCSI, Fibre Channel, and S3 protocols. At the heart of the TrueNAS M-Series is a custom 4U, dual-controller head unit that supports up to 24 3.5” drives and comes in two models, the M40 and M50, for maximum flexibility and scalability. The TrueNAS M40 uses NVDIMMs for write cache, SSDs for read cache, and up to two external 60-bay expansion shelves that unlock up to 2PB in capacity. The TrueNAS M50 uses NVDIMMs for write caching, NVMe drives for read caching, and up to twelve external 60-bay expansion shelves to scale upwards of 10PB. The dual-controller design provides high-availability failover and non-disruptive upgrades for mission-critical enterprise environments. By design, the TrueNAS M-Series unleashes cutting-edge persistent memory technology for demanding performance and capacity workloads, enabling businesses to accelerate enterprise applications and deploy enterprise private clouds that are twice the capacity of previous TrueNAS models. 
It also supports replication to the Amazon S3, BackBlaze B2, Google Cloud, and Microsoft Azure cloud platforms and can deliver an object store using the ubiquitous S3 object storage protocol at a fraction of the cost of the public cloud. Fast As a true enterprise storage platform, the TrueNAS M50 supports very demanding performance workloads with up to four active 100GbE ports, 3TB of RAM, 32GB of NVDIMM write cache and up to 15TB of NVMe flash read cache. The TrueNAS M40 and M50 include up to 24/7 and global next-business-day support, putting IT at ease. The modular and tool-less design of the M-Series allows for easy, non-disruptive servicing and upgrading by end-users and support technicians for guaranteed uptime. TrueNAS has US-Based support provided by the engineering team that developed it, offering the rapid response that every enterprise needs. Award-Winning TrueNAS Features Enterprise: Perfectly suited for private clouds and enterprise workloads such as file sharing, backups, M&E, surveillance, and hosting virtual machines. Unified: Utilizes SMB, AFP, NFS for file storage, iSCSI, Fibre Channel and OpenStack Cinder for block storage, and S3-compatible APIs for object storage. Supports every common operating system, hypervisor, and application. Economical: Deploy an enterprise private cloud and reduce storage TCO by 70% over AWS with built-in enterprise-class features such as in-line compression, deduplication, clones, and thin-provisioning. Safe: The OpenZFS file system ensures data integrity with best-in-class replication and snapshotting. Customers can replicate data to the rest of the iXsystems storage lineup and to the public cloud. Reliable: High Availability option with dual hot-swappable controllers for continuous data availability and 99.999% uptime. Familiar: Provision and manage storage with the same simple and powerful WebUI and REST APIs used in all iXsystems storage products, as well as iXsystems’ FreeNAS Software. Certified: TrueNAS has passed the Citrix Ready, VMware Ready, and Veeam Ready certifications, reducing the risk of deploying a virtualized infrastructure. Open: By using industry-standard sharing protocols, the OpenZFS Open Source enterprise file system and FreeNAS, the world’s #1 Open Source storage operating system (and also engineered by iXsystems), TrueNAS is the most open enterprise storage solution on the market. Availability The TrueNAS M40 and M50 will be generally available in April 2018 through the iXsystems global channel partner network. The TrueNAS M-Series starts at under $20,000 USD and can be easily expanded using a linear “per terabyte” pricing model. With typical compression, a Petabtye can be stored for under $100,000 USD. TrueNAS comes with an all-inclusive software suite that provides NFS, Windows SMB, iSCSI, snapshots, clones and replication. For more information, visit www.ixsystems.com/TrueNAS TrueNAS M-Series What's New Video Understanding and tuning the FreeBSD Scheduler ``` Occasionally I noticed that the system would not quickly process the tasks i need done, but instead prefer other, longrunning tasks. I figured it must be related to the scheduler, and decided it hates me. 
A closer look shows the behaviour as follows (single CPU):

Let's run an I/O-active task, e.g. a postgres VACUUM that continuously reads from big files (while doing compute as well [1]):

    pool       alloc   free   read  write   read  write
    cache          -      -      -      -      -      -
      ada1s4   7.08G  10.9G  1.58K      0  12.9M      0

Now start an endless loop:

    while true; do :; done

And the effect is:

    pool       alloc   free   read  write   read  write
    cache          -      -      -      -      -      -
      ada1s4   7.08G  10.9G      9      0  76.8K      0

The VACUUM gets almost stuck! This figures with WCPU in "top":

      PID USERNAME  PRI NICE   SIZE    RES STATE   TIME    WCPU COMMAND
    85583 root       99    0  7044K  1944K RUN     1:06  92.21% bash
    53005 pgsql      52    0   620M 91856K RUN     5:47   0.50% postgres

Hacking on kern.sched.quantum makes it quite a bit better:

    sysctl kern.sched.quantum=1
    kern.sched.quantum: 94488 -> 7874

    pool       alloc   free   read  write   read  write
    cache          -      -      -      -      -      -
      ada1s4   7.08G  10.9G    395      0  3.12M      0

      PID USERNAME  PRI NICE   SIZE    RES STATE   TIME    WCPU COMMAND
    85583 root       94    0  7044K  1944K RUN     4:13  70.80% bash
    53005 pgsql      52    0   276M 91856K RUN     5:52  11.83% postgres

Now, as usual, the "root-cause" questions arise: What exactly does this "quantum" do? Is this solution a workaround, i.e. is something else actually wrong, and does it have tradeoffs in other situations? Or otherwise, why is such a default value chosen, which appears to be ill-conceived? The docs for the quantum parameter are a bit unsatisfying - they say it's the max number of ticks a process gets - and what happens when they're exhausted? If by default the endless loop is actually allowed to continue running for 94k ticks (or 94ms, more likely) uninterrupted, then that explains the perceived behaviour - but that's certainly not what a scheduler should do when other procs are ready to run. 11.1-RELEASE-p7, kern.hz=200. Switching tickless mode on or off does not influence the matter. Starting the endless loop with "nice" does not influence the matter.

[1] A pure-I/O job without compute load, like "dd", does not show this behaviour. Also, when other tasks are running, the unjust behaviour is not so strongly pronounced.
```

aarch64 support added
I have committed initial support for aarch64. Booting log on RaspberryPI3:
```
boot
NetBSD/evbarm (aarch64)
Drop to EL1...OK
Creating VA=PA tables
Creating KSEG tables
Creating KVA=PA tables
Creating devmap tables
MMU Enable...OK
VSTART = ffffffc000001ff4
FDT<3ab46000>
devmap cpufunc bootstrap consinit ok
uboot: args 0x3ab46000, 0, 0, 0
NetBSD/evbarm (fdt) booting ...
FDT /memory [0] @ 0x0 size 0x3b000000
MEM: add 0-3b000000
MEM: res 0-1000
MEM: res 3ab46000-3ab4a000
Usable memory:
  1000 - 3ab45fff
  3ab4a000 - 3affffff
initarm: kernel phys start 1000000 end 17bd000
MEM: res 1000000-17bd000
bootargs: root=axe0
  1000 - ffffff
  17bd000 - 3ab45fff
  3ab4a000 - 3affffff
------------------------------------------
kern_vtopdiff         = 0xffffffbfff000000
physical_start        = 0x0000000000001000
kernel_start_phys     = 0x0000000001000000
kernel_end_phys       = 0x00000000017bd000
physical_end          = 0x000000003ab45000
VM_MIN_KERNEL_ADDRESS = 0xffffffc000000000
kernel_start_l2       = 0xffffffc000000000
kernel_start          = 0xffffffc000000000
kernel_end            = 0xffffffc0007bd000
kernel_end_l2         = 0xffffffc000800000
(kernel va area)
(devmap va area)
VM_MAX_KERNEL_ADDRESS = 0xffffffffffe00000
------------------------------------------
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 The NetBSD Foundation, Inc. All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993 The Regents of the University of California. All rights reserved. NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018 ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64 total memory = 936 MB avail memory = 877 MB … Starting local daemons:. Updating motd. Starting sshd. Starting inetd. Starting cron. The following components reported failures: /etc/rc.d/swap2 See /var/run/rc.log for more information. Fri Mar 30 12:35:31 JST 2018 NetBSD/evbarm (rpi3) (console) login: root Last login: Fri Mar 30 12:30:24 2018 on console rpi3# uname -ap NetBSD rpi3 8.99.14 NetBSD 8.99.14 (RPI64) #11: Fri Mar 30 12:34:19 JST 2018 ryo@moveq:/usr/home/ryo/tmp/netbsd-src-ryo-wip/sys/arch/evbarm/compile/RPI64 evbarm aarch64 rpi3# ``` Now, multiuser mode works stably on fdt based boards (RPI3,SUNXI,TEGRA). But there are still some problems, more time is required for release. also SMP is not yet. See sys/arch/aarch64/aarch64/TODO for more detail. Especially the problems around TLS of rtld, and C++ stack unwindings are too difficult for me to solve, I give up and need someone's help (^o^)/ Since C++ doesn't work, ATF also doesn't work. If the ATF works, it will clarify more issues. sys/arch/evbarm64 is gone and integrated into sys/arch/evbarm. One evbarm/conf/GENERIC64 kernel binary supports all fdt (bcm2837,sunxi,tegra) based boards. While on 32bit, sys/arch/evbarm/conf/GENERIC will support all fdt based boards...but doesn't work yet. (WIP) My deepest appreciation goes to Tohru Nishimura (nisimura@) whose writes vector handlers, context switchings, and so on. and his comments and suggestions were innumerably valuable. I would also like to thank Nick Hudson (skrll@) and Jared McNeill (jmcneill@) whose added support FDT and integrated into evbarm. Finally, I would like to thank Matt Thomas (matt@) whose commited aarch64 toolchains and preliminary support for aarch64. Beastie Bits 5 Reasons to Use FreeBSD in 2018 Rewriting Intel gigabit network driver in Rust Recruiting to make Elastic Search on FreeBSD better Windows Server 2019 Preview, in bhyve on FreeBSD “SSH Mastery, 2nd ed” in hardcover Feedback/Questions Jason - ZFS Transfer option Luis - ZFS Pools ClonOS Michael - Tech Conferences anonymous - BSD trash on removable drives Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 242: Linux Takes The Fastpath | BSD Now 242

April 18, 2018 1:23:20 60.07 MB Downloads: 0

TrueOS Stable 18.03 released, a look at F-stack, the secret to an open source business model, intro to jails and jail networking, FreeBSD Foundation March update, and the ipsec Errata. Headlines TrueOS STABLE 18.03 Release The TrueOS team is pleased to announce the availability of a new STABLE release of the TrueOS project (version 18.03). This is a special release due to the security issues impacting the computing world since the beginning of 2018. In particular, mitigating the “Meltdown” and “Spectre” system exploits make it necessary to update the entire package ecosystem for TrueOS. This release does not replace the scheduled June STABLE update, but provides the necessary and expected security updates for the STABLE release branch of TrueOS, even though this is part-way through our normal release cycle. Important changes between version 17.12 and 18.03 “Meltdown” security fixes: This release contains all the fixes to FreeBSD which mitigate the security issues for systems that utilize Intel-based processors when running virtual machines such as FreeBSD jails. Please note that virtual machines or jails must also be updated to a version of FreeBSD or TrueOS which contains these security fixes. “Spectre” security mitigations: This release contains all current mitigations from FreeBSD HEAD for the Spectre memory-isolation attacks (Variant 2). All 3rd-party packages for this release are also compiled with LLVM/Clang 6 (the “retpoline” mitigation strategy). This fixes many memory allocation issues and enforces stricter requirements for code completeness and memory usage within applications. Unfortunately, some 3rd-party applications became unavailable as pre-compiled packages due to non-compliance with these updated standards. These applications are currently being fixed either by the upstream authors or the FreeBSD port maintainers. If there are any concerns about the availability of a critical application for a specific workflow, please search through the changelog of packages between TrueOS 17.12 and 18.03 to verify the status of the application. Most systems will need microcode updates for additional Spectre mitigations. The microcode updates are not enabled by default. This work is considered experimental because it is in active development by the upstream vendors. If desired, the microcode updates are available with the new devcpu-data package, which is available in the Appcafe. Install this package and enable the new microcode_update service to apply the latest runtime code when booting the system. Important security-based package updates LibreSSL is updated from version 2.6.3 -> 2.6.4 Reminder: LibreSSL is used on TrueOS to build any package which does not explicitly require OpenSSL. All applications that utilize the SSL transport layer are now running with the latest security updates. Browser updates: (Keep in mind that many browsers have also implemented their own security mitigations in the aftermath of the Spectre exploit.) Firefox: 57.0.1 -> 58.0.2 Chromium: 61.0.3163.100 -> 63.0.3239.132 Qt5 Webengine (QupZilla, Falkon, many others): 5.7.1 -> 5.9.4 All pre-compiled packages for this release are built with the latest versions of LLVM/Clang, unless the package explicitly requires GCC. These packages also utilize the latest compile-time mitigations for memory-access security concerns. F-Stack F-Stack is an user space network development kit with high performance based on DPDK, FreeBSD TCP/IP stack and coroutine API. 
http://www.f-stack.org

Introduction
With the rapid development of NICs, the poor performance of data packet processing in the Linux kernel has become the bottleneck. However, the rapid development of the Internet demands high-performance network processing, and kernel bypass has attracted more and more attention. Various similar technologies have appeared, such as DPDK, NETMAP and PF_RING. The main idea of kernel bypass is that Linux is only used to deal with control flow; all data streams are processed in user space. Therefore, kernel bypass can avoid performance bottlenecks caused by kernel packet copying, thread scheduling, system calls and interrupts. Furthermore, kernel bypass can achieve higher performance with various optimization methods. Among these techniques, DPDK has been widely used because of its more thorough isolation from kernel scheduling and its active community support.

F-Stack is an open source network framework with high performance based on DPDK, with the following characteristics:
Ultra-high network performance which can keep the network card at full load: 10 million concurrent connections, 5 million RPS, 1 million CPS.
Transplants the FreeBSD 11.01 user space stack, provides complete stack functionality, and cuts a great amount of irrelevant features, thereby greatly enhancing performance.
Supports Nginx, Redis and other mature applications; services can easily use F-Stack.
Multi-process architecture, easy to extend.
Provides a micro-thread interface. Various stateful applications can easily use F-Stack to get high performance without handling complex asynchronous logic.
Provides an Epoll/Kqueue interface that allows many kinds of applications to easily use F-Stack.

History
In order to deal with the increasingly severe DDoS attacks, the authoritative DNS server of Tencent Cloud DNSPod switched from Gigabit Ethernet to 10-Gigabit at the end of 2012. We faced several options: one was to continue to use the original model, another was to use kernel bypass technology. After several rounds of investigation, we finally chose to develop our next generation of DNS server based on DPDK. The reason is that DPDK provides ultra-high performance and can be seamlessly extended to 40G, or even 100G, NICs in the future. After several months of development and testing, DKDNS, a high-performance DNS server based on DPDK, was officially released in October 2013. It's capable of achieving up to 11 million QPS with a single 10GE port and 18.2 million QPS with two 10GE ports. We then developed a user-space TCP/IP stack called F-Stack that can process 0.6 million RPS with a single 10GE port. With the fast growth of Tencent Cloud, more and more services needed higher network access performance. Meanwhile, F-Stack kept improving, driven by the business growth, and ultimately developed into a general network access framework. But this TCP/IP stack couldn't meet the needs of those services, and continuing to develop and maintain a complete network stack would have been costly, so we evaluated several plans and finally decided to port the FreeBSD (11.0-stable) TCP/IP stack into F-Stack. Thus, we can reduce the cost of maintenance and quickly follow improvements from the community. Thanks to libplebnet and libuinet, this work became a lot easier. With the rapid development of all kinds of applications, and in order to help different apps use F-Stack quickly and easily, F-Stack has integrated Nginx, Redis and other commonly used applications, as well as a micro-thread framework, and provides a standard Epoll/Kqueue interface.
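The "standard Epoll/Kqueue interface" claim means code written against the ordinary BSD kqueue API should map over with little change. As a point of reference, a minimal accept loop using the stock FreeBSD kqueue/kevent API looks roughly like this (plain sockets, not F-Stack's own wrappers):

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>
#include <err.h>

int main(void) {
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    if (ls < 0)
        err(1, "socket");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8080);              /* example port */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(ls, (struct sockaddr *)&sin, sizeof(sin)) < 0 || listen(ls, 128) < 0)
        err(1, "bind/listen");

    int kq = kqueue();
    struct kevent ev;
    EV_SET(&ev, ls, EVFILT_READ, EV_ADD, 0, 0, NULL);   /* watch the listen socket */
    if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0)
        err(1, "kevent register");

    for (;;) {
        struct kevent out;
        int n = kevent(kq, NULL, 0, &out, 1, NULL);     /* block for readiness */
        if (n > 0 && (int)out.ident == ls) {
            int c = accept(ls, NULL, NULL);
            if (c >= 0) {
                write(c, "hello\n", 6);                 /* trivial response */
                close(c);
            }
        }
    }
}
```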
Currently, besides authorized DNS server of DNSPod, there are various products in Tencent Cloud has used the F-Stack, such as HttpDNS (D+), COS access module, CDN access module, etc.. iXsystems Leadership Is The Secret To An Open Source Business Model A Forbes article by Mike Lauth, CEO of iXsystems There is a good chance you’ve never heard of open source software and an even greater one that you’re using it every day without even realizing it. Open source software is computer software that is available under a variety of licenses that all encourage the sharing of the software and its underlying source code. Open source has powered the internet from day one and today powers the cloud and just about everything connected to it from your mobile phone to virtually every internet of things device. FreeNAS is one of two open source operating systems that my company, iXsystems, develops and distributes free of charge and is at the heart of our line of TrueNAS enterprise storage products. While some of our competitors sell storage software similar to FreeNAS, we not only give it away but also do so with truly no strings attached -- competitors can and do take FreeNAS and build products based on it with zero obligation to share their changes. The freedom to do so is the fundamental tenet of permissively licensed open source software, and while it sounds self-defeating to be this generous, we’ve proven that leadership, not licensing, is the true secret to a successful open source business model. We each have our own personal definition of what is fair when it comes to open source. At iXsystems, we made a conscious decision to base FreeNAS and TrueOS on the FreeBSD operating system developed by the FreeBSD project. We stand on the shoulders of giants by using FreeBSD and we consider it quite reasonable to give back on the same generous terms that the FreeBSD project offers us. We could be selective in what we provide free of charge, but we believe that doing so would be short-sighted. In the long game we’re playing, the leadership we provide over the open source projects we produce is infinitely more important than any restrictions provided by the licenses of those and other open source projects. Twenty years in, we have no reason to change our free-software-on-great-hardware business model and giving away the software has brought an unexpected side-benefit: the largest Q/A department in the world, staffed by our passionate users who volunteer to let us know every thought they have about our software. We wouldn’t change a thing, and I encourage you to find exactly what win-win goodwill you and your company can provide to your constituents to make them not just a customer base but a community. Drive The Conversation It took a leap of faith for us to give away the heart of our products in exchange for a passionate community, but doing so changes your customer's relationship with your brand from priced to priceless. This kind of relationship leverages a social contract instead of a legal one. Taking this approach empowers your users in ways they will not experience with other companies and it is your responsibility to lead, rather than control them with a project like FreeNAS Relieve Customer Pain Points With Every New Release Responsiveness to the needs of your constituents is what distinguishes project leadership from project dictatorship. Be sure to balance your vision for your products and projects with the “real world” needs of your users. 
While our competition can use the software we develop, they will at best wow users with specific features rather than project-wide ones. Never underestimate how grateful a user will be when you make their job easier. Accept That A Patent Is Not A Business Model Patents are considered the ultimate control mechanism in the technology industry, but they only provide a business model if you have a monopoly and monopolies are illegal. Resist getting hung up on the control you can establish over your customers and spend your time acquiring and empowering them. The moment you both realize that your success is mutual, you have a relationship that will last longer than any single sale. You’ll be pleasantly surprised how the relationships you build will transcend the specific companies that friends you make work for. Distinguish Leadership From Management Every company has various levels of management, but leadership is the magic that creates markets where they did not exist and aligns paying customers with value that you can deliver in a profitable manner. Leadership and vision are ultimately the most proprietary aspects of a technology business, over every patentable piece of hardware or licensable piece of software. Whether you create a new market or bring efficiency to an existing one, your leadership is your secret weapon -- not your level of control. News Roundup Introduction to Jails and Jail Networking on FreeBSD Jails basically partition a FreeBSD system into various isolated sub-systems called jails. The syscall and userspace tools first appeared in FreeBSD 4.0 (~ March 2000) with subsequent releases expanding functionality and improving existing features as well as usability. + For Linux users, jails are similar to LXC, used for resource/process isolation. Unlike LXC however, jails are a first-class concept and are well integrated into the base system. Essentially however, both offer a chroot-with-extra-separation feeling. Setting up a jail is a fairly simple process, which can essentially be split into three steps: + Place the stuff you want to run and the stuff it needs to run somewhere on your filesystem. + Add some basic configuration for the jail in jail.conf. + Fire up the jail. To confirm that the jail started successfully we can use the jls utility: We can now enter the jailed environment by using jexec, which will by default execute a root shell inside the named jail A jail can only see and use addresses that have been passed down to it by the parent system. This creates a slight problem with the loopback address: The host would probably like to keep that address to itself and not share it with any jail. Because of this, the loopback-address inside a jail is emulated by the system: + 127.0.0.1 is an alias for the first IPv4-address assigned to the jail. + ::1 is an alias for the first IPv6-address assigned to the jail. While this looks simple enough and usually works just fine[tm], it is also a source of many problems. Just imagine if your jail has only one single global IPv4 assigned to it. A daemon binding its (possibly unsecured) control port to the loopback-address would then unwillingly be exposed to the rest of the internet, which is hardly ever a good idea. 
+ So, create an extra loopback adapter, and make the first IP in each jail a private loopback address + The tutorial goes on to cover making multiple jails share a single public IP address using NAT + It also covers more advanced concepts like ‘thin’ jails, to save some disk space if you are going to create a large number of jails, and how to upgrade them after the fact + Finally, it covers the integration with a lot of common tools, like identifying and filter jailed processes using top and ps, or using the package managers support for jails to install packages in a jail from the outside. **DigitalOcean** SmartOS release-20180315 ``` Hello All, The latest bi-weekly "release" branch build of SmartOS is up: curl -C - -O https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest.iso curl -C - -O https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest-USB.img.bz2 curl -C - -O https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos-latest.vmwarevm.tar.bz2 A generated changelog is here: https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/smartos.html#20180329T002644Z The full build bits directory, for those interested, is here in Manta: /Joyent_Dev/public/SmartOS/20180329T002644Z Highlights Firewall rules created with fwadm(1M) can now use the PRIORITY keyword to specify a higher precedence for a rule. This release has includes mitigation of the Intel Meltdown vulnerability in the form of kpti (kernel page table isolation) with PCID (process context identifier) support This release also includes experimental support for bhyve branded zones. General Info Every second Thursday we roll a "release-YYYYMMDD" release branch and builds for SmartOS (and Triton DataCenter and Manta, as well). Cheers, Josh Wilsdon, on behalf of the SmartOS developers https://smartos.org ``` Here's a screencap from q5sys' machine showing the output of sysinfo: https://i.imgur.com/MFkNi76.jpg FreeBSD Foundation March 2018 Update > Syzkaller update: Syzkaller is a coverage-guided system call fuzzer. It invokes syscalls with arbitrary and changing inputs, and is intended to use code coverage data to guide changes to system call inputs in order to access larger and larger portions of the kernel in the search for bugs. > Last term’s student focused largely on scripts to deploy and configure Syzkaller on Packet.net’s hosting infrastructure, but did not get to the code coverage integration required for Syzkaller to be effective. This term co-op student Mitchell Horne has been adding code coverage support in FreeBSD for Syzkaller. > The Linux code coverage support for Syzkaller is known as kcov and was submitted by Dmitry Vyukov, Syzkaller’s author. Kcov is purposebuilt for Syzkaller: > kcov provides code coverage collection for coverage-guided fuzzing (randomized testing). Coverage-guided fuzzing is a testing technique that uses coverage feedback to determine new interesting inputs to a system. > kcov does not aim to collect as much coverage as possible. It aims to collect more or less stable coverage that is function of syscall inputs. To achieve this goal it does not collect coverage in soft/hard interrupts and instrumentation of some inherently non-deterministic or non-interesting parts of kernel is disabled (e.g. scheduler, locking). > Mitchell implemented equivalent functionality for FreeBSD - a distinct implementation, but modelled on the one in Linux. These patches are currently in review, as are minor changes to Syzkaller to use the new interface on FreeBSD. 
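The Linux interface being referenced is small enough to show. Below is a hedged sketch of how a userspace harness drives Linux's kcov, following the pattern in the kernel's kcov documentation; the FreeBSD implementation described above was modelled on this but is a separate interface, so its device node and ioctl names should not be assumed to match. Error checking is omitted for brevity.
```
/*
 * Sketch of exercising Linux's kcov from userspace (the interface Syzkaller
 * uses on Linux, and the model for the FreeBSD work described above).
 * Ioctl numbers follow the kernel's kcov documentation.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_DISABLE    _IO('c', 101)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)          /* entries, not bytes */

int main(void)
{
    int fd = open("/sys/kernel/debug/kcov", O_RDWR);
    ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);                 /* set buffer size */
    unsigned long *cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);  /* collect kernel PCs for this thread */
    __atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);       /* reset counter */

    read(-1, NULL, 0);                      /* the syscall "under test" */

    unsigned long n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
    for (unsigned long i = 0; i < n; i++)
        printf("0x%lx\n", cover[i + 1]);    /* kernel PCs the call touched */

    ioctl(fd, KCOV_DISABLE, 0);
    munmap(cover, COVER_SIZE * sizeof(unsigned long));
    close(fd);
    return 0;
}
```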
> We still have some additional work to fully integrate Syzkaller and run it on a consistent basis, but the brief testing that has been completed suggests this work will provide a very valuable improvement in test coverage and opportunities for system hardening: we tested Syzkaller with Mitchell's code coverage patch over a weekend. It provoked kernel crashes hundreds of times faster than without his work. > I want to say thank you to NetApp for becoming an Iridium Partner again this year! (Donations between $100,000 - $249,999) It’s companies like NetApp, who recognize the importance of supporting our efforts, that allow us to continue to provide software improvements, advocate for FreeBSD, and help lead the release engineering and security efforts. > Conference Recap: FOSSASIA 2018 Foundation Director Philip Paeps went to FOSSASIA, which is possibly the largest open source event in Asia. The FreeBSD Foundation sponsored the conference. Our booth had a constant stream of traffic over the weekend and we handed out hundreds of FreeBSD stickers, pens and flyers. Many attendees of FOSSASIA had never heard of FreeBSD before and are now keen to start exploring and perhaps even contributing. By the end of the conference, there were FreeBSD stickers everywhere! > One particular hallway-track conversation led to an invitation to present FreeBSD at a "Women Who Code" evening in Kuala Lumpur later this week (Thursday 29th March). I spent the days after the conference meeting companies who use (or want to use) FreeBSD in Singapore. > SCaLE 16x: The Foundation sponsored a FreeBSD table in the expo hall that was staffed by Dru Lavigne, Warren Block, and Deb Goodkin. Our purpose was to promote FreeBSD, and attract more users and contributors to the Project. We had a steady flow of people stopping by our table, asking inquisitive questions, and picking up some cool swag and FreeBSD handouts. Deb Goodkin took some tutorials/trainings there and talked to a lot of other open source projects. Next year, we have the opportunity to have a BSD track, similar to the BSD Devroom at FOSDEM. We are looking for some volunteers in Southern California who can help organize this one or two-day event and help us educate more people about the BSDs. Let us know if you would like to help with this effort. Roll Call: #WhoUsesFreeBSD Many of you probably saw our post on social media asking Who Uses FreeBSD. Please help us answer this question to assist us in determining FreeBSD market share data, promote how companies are successfully using FreeBSD to encourage more companies to embrace FreeBSD, and to update the list of users on our website. Knowing who uses FreeBSD helps our contributors know where to look for jobs; knowing what universities teach with FreeBSD, helps companies know where to recruit, and knowing what products use FreeBSD helps us determine what features and technologies to support. New Hosting Partner: Oregon State University Open Source Lab > We are pleased to announce that the Oregon State University (OSU) Open Source Lab (OSL), which hosts infrastructure for over 160 different open source projects, has agreed to host some of our servers for FreeBSD development. The first server, which should be arriving shortly, is an HP Enterprise Proliant DL360 Gen10 configured with NVDIMM memory which will be initially used for further development and testing of permanent memory support in the kernel. Stay tuned for more news from the FreeBSD Foundation in May (next newsletter). 
Beastie Bits cURL is 20 today A Note on SYSVIPC and Jails on FreeBSD OpenBSD Errata: March 20th, 2018 (ipsec) FreeBSD Security Advisories for IPSEC and vt 23 Useful PKG Command Examples to Manage Packages in FreeBSD Tarsnap Feedback/Questions Casey - Cool Editor Nelson - New article on FreeBSD vs MacOS Damian - Mysterious Reverse Proxy 504 Nelson - FreeBSD, rsync, nasty bug, now fixed Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 241: Bowling in the LimeLight | BSD Now 241

April 12, 2018 2:01:00 87.19 MB Downloads: 0

Second round of ZFS improvements in FreeBSD, Postgres finds that non-FreeBSD/non-Illumos systems are corrupting data, interview with Kevin Bowling, BSDCan list of talks, and cryptographic right answers. Headlines [Other big ZFS improvements you might have missed] 9075 Improve ZFS pool import/load process and corrupted pool recovery One of the first tasks during the pool load process is to parse a config provided from userland that describes what devices the pool is composed of. A vdev tree is generated from that config, and then all the vdevs are opened. The Meta Object Set (MOS) of the pool is accessed, and several metadata objects that are necessary to load the pool are read. The exact configuration of the pool is also stored inside the MOS. Since the configuration provided from userland is external and might not accurately describe the vdev tree of the pool at the txg that is being loaded, it cannot be relied upon to safely operate the pool. For that reason, the configuration in the MOS is read early on. In the past, the two configurations were compared together and if there was a mismatch then the load process was aborted and an error was returned. The latter was a good way to ensure a pool does not get corrupted, however it made the pool load process needlessly fragile in cases where the vdev configuration changed or the userland configuration was outdated. Since the MOS is stored in 3 copies, the configuration provided by userland doesn't have to be perfect in order to read its contents. Hence, a new approach has been adopted: The pool is first opened with the untrusted userland configuration just so that the real configuration can be read from the MOS. The trusted MOS configuration is then used to generate a new vdev tree and the pool is re-opened. When the pool is opened with an untrusted configuration, writes are disabled to avoid accidentally damaging it. During reads, some sanity checks are performed on block pointers to see if each DVA points to a known vdev; when the configuration is untrusted, instead of panicking the system if those checks fail we simply avoid issuing reads to the invalid DVAs. This new two-step pool load process now allows rewinding pools across vdev tree changes such as device replacement, addition, etc. Loading a pool from an external config file in a clustering environment also becomes much safer now since the pool will import even if the config is outdated and didn't, for instance, register a recent device addition. With this code in place, it became relatively easy to implement a long-sought-after feature: the ability to import a pool with missing top level (i.e. non-redundant) devices. Note that since this almost guarantees some loss Of data, this feature is for now restricted to a read-only import. 7614 zfs device evacuation/removal This project allows top-level vdevs to be removed from the storage pool with “zpool remove”, reducing the total amount of storage in the pool. This operation copies all allocated regions of the device to be removed onto other devices, recording the mapping from old to new location. After the removal is complete, read and free operations to the removed (now “indirect”) vdev must be remapped and performed at the new location on disk. The indirect mapping table is kept in memory whenever the pool is loaded, so there is minimal performance overhead when doing operations on the indirect vdev. 
The size of the in-memory mapping table will be reduced when its entries become “obsolete” because they are no longer used by any block pointers in the pool. An entry becomes obsolete when all the blocks that use it are freed. An entry can also become obsolete when all the snapshots that reference it are deleted, and the block pointers that reference it have been “remapped” in all filesystems/zvols (and clones). Whenever an indirect block is written, all the block pointers in it will be “remapped” to their new (concrete) locations if possible. This process can be accelerated by using the “zfs remap” command to proactively rewrite all indirect blocks that reference indirect (removed) vdevs. Note that when a device is removed, we do not verify the checksum of the data that is copied. This makes the process much faster, but if it were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be possible to copy the wrong data, when we have the correct data on e.g. the other side of the mirror. Therefore, mirror and raidz devices can not be removed. You can use ‘zpool detach’ to downgrade a mirror to a single top-level device, so that you can then remove it. 7446 zpool create should support efi system partition This one was not actually merged into FreeBSD, as it doesn’t apply currently, but I would like to switch the way FreeBSD deals with full disks to be closer to IllumOS to make automatic spare replacement a hands-off operation. Since we support whole-disk configuration for the boot pool, we will also need whole-disk support with UEFI boot, and for this, zpool create should create an EFI system partition. The idea is borrowed from Oracle Solaris: a zpool create -B switch is introduced to provide a way to specify that a boot partition should be created. However, there is still the question of how big the system partition should be. For the time being, the default size is set to 256MB (the minimum size for FAT32 with 4k blocks). To support custom sizes, a set-on-creation "bootsize" property is added, so a custom size can be specified as: zpool create -B -o bootsize=34MB rpool c0t0d0. After the pool is created, the "bootsize" property is read only. When the -B switch is not used, the bootsize defaults to 0 and is shown in zpool get output with no value. Older zfs/zpool implementations can ignore this property. **Digital Ocean** PostgreSQL developers find that every operating system other than FreeBSD and IllumOS might corrupt your data Some time ago I ran into an issue where a user encountered data corruption after a storage error. PostgreSQL played a part in that corruption by allowing a checkpoint to complete despite what should've been a fatal error. TL;DR: Pg should PANIC on fsync() EIO return. Retrying fsync() is not OK at least on Linux. When fsync() returns success it means "all writes since the last fsync have hit disk" but we assume it means "all writes since the last SUCCESSFUL fsync have hit disk". Pg wrote some blocks, which went to OS dirty buffers for writeback. Writeback failed due to an underlying storage error. The block I/O layer and XFS marked the writeback page as failed (ASEIO), but had no way to tell the app about the failure. When Pg called fsync() on the FD during the next checkpoint, fsync() returned EIO because of the flagged page, to tell Pg that a previous async write failed. Pg treated the checkpoint as failed and didn't advance the redo start position in the control file. + All good so far. But then we retried the checkpoint, which retried the fsync().
The retry succeeded, because the prior fsync() *cleared the ASEIO bad page flag*. The write never made it to disk, but we completed the checkpoint, and merrily carried on our way. Whoops, data loss. The clear-error-and-continue behaviour of fsync is not documented as far as I can tell. Nor is fsync() returning EIO unless you have a very new linux man-pages with the patch I wrote to add it. But from what I can see in the POSIX standard we are not given any guarantees about what happens on fsync() failure at all, so we're probably wrong to assume that retrying fsync() is safe. We already PANIC on fsync() failure for WAL segments. We just need to do the same for data forks at least for EIO. This isn't as bad as it seems because AFAICS fsync only returns EIO in cases where we should be stopping the world anyway, and many FSes will do that for us. + Upon further looking, it turns out it is not just Linux brain damage: Apparently I was too optimistic. I had looked only at FreeBSD, which keeps the page around and dirties it so we can retry, but the other BSDs apparently don't (FreeBSD changed that in 1999). From what I can tell from the sources below, we have: Linux, OpenBSD, NetBSD: retrying fsync() after EIO lies FreeBSD, Illumos: retrying fsync() after EIO tells the truth + NetBSD PR to solve the issues + I/O errors are not reported back to fsync at all. + Write errors during genfs_putpages that fail for any reason other than ENOMEM cause the data to be semi-silently discarded. + It appears that UVM pages are marked clean when they're selected to be written out, not after the write succeeds; so there are a bunch of potential races when writes fail. + It appears that write errors for buffercache buffers are semi-silently discarded as well. Interview - Kevin Bowling: Senior Manager Engineering of LimeLight Networks - kbowling@llnw.com / @kevinbowling1 BR: How did you first get introduced to UNIX and BSD? AJ: What got you started contributing to an open source project? BR: What sorts of things have you worked on it the past? AJ: Tell us a bit about LimeLight and how they use FreeBSD. BR: What are the biggest advantages of FreeBSD for LimeLight? AJ: What could FreeBSD do better that would benefit LimeLight? BR: What has LimeLight given back to FreeBSD? AJ: What have you been working on more recently? BR: What do you find to be the most valuable part of open source? AJ: Where do you think the most improvement in open source is needed? BR: Tell us a bit about your computing history collection. What are your three favourite pieces? AJ: How do you keep motivated to work on Open Source? BR: What do you do for fun? AJ: Anything else you want to mention? News Roundup BSDCan 2018 Selected Talks The schedule for BSDCan is up Lots of interesting content, we are looking forward to it We hope to see lots of you there. Make sure you come introduce yourselves to us. Don’t be shy. Remember, if this is your first BSDCan, checkout the newbie session on Thursday night. It’ll help you get to know a few people so you have someone you can ask for guidance. Also, check out the hallway track, the tables, and come to the hacker lounge. iXsystems Cryptographic Right Answers Crypto can be confusing. We all know we shouldn’t roll our own, but what should we use? Well, some developers have tried to answer that question over the years, keeping an updated list of “Right Answers” 2009: Colin Percival of FreeBSD 2015: Thomas H. 
Ptacek 2018: Latacora A consultancy that provides “Retained security teams for startups”, where Thomas Ptacek works. We’re less interested in empowering developers and a lot more pessimistic about the prospects of getting this stuff right. There are, in the literature and in the most sophisticated modern systems, “better” answers for many of these items. If you’re building for low-footprint embedded systems, you can use STROBE and a sound, modern, authenticated encryption stack entirely out of a single SHA-3-like sponge constructions. You can use NOISE to build a secure transport protocol with its own AKE. Speaking of AKEs, there are, like, 30 different password AKEs you could choose from. But if you’re a developer and not a cryptography engineer, you shouldn’t do any of that. You should keep things simple and conventional and easy to analyze; “boring”, as the Google TLS people would say. Cryptographic Right Answers Encrypting Data Percival, 2009: AES-CTR with HMAC. Ptacek, 2015: (1) NaCl/libsodium’s default, (2) ChaCha20-Poly1305, or (3) AES-GCM. Latacora, 2018: KMS or XSalsa20+Poly1305 Symmetric key length Percival, 2009: Use 256-bit keys. Ptacek, 2015: Use 256-bit keys. Latacora, 2018: Go ahead and use 256 bit keys. Symmetric “Signatures” Percival, 2009: Use HMAC. Ptacek, 2015: Yep, use HMAC. Latacora, 2018: Still HMAC. Hashing algorithm Percival, 2009: Use SHA256 (SHA-2). Ptacek, 2015: Use SHA-2. Latacora, 2018: Still SHA-2. Random IDs Percival, 2009: Use 256-bit random numbers. Ptacek, 2015: Use 256-bit random numbers. Latacora, 2018: Use 256-bit random numbers. Password handling Percival, 2009: scrypt or PBKDF2. Ptacek, 2015: In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2. Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2. Asymmetric encryption Percival, 2009: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537. Ptacek, 2015: Use NaCl/libsodium (box / cryptobox). Latacora, 2018: Use Nacl/libsodium (box / cryptobox). Asymmetric signatures Percival, 2009: Use RSASSA-PSS with SHA256 then MGF1+SHA256 in tricolor systemic silicate orientation. Ptacek, 2015: Use Nacl, Ed25519, or RFC6979. Latacora, 2018: Use Nacl or Ed25519. Diffie-Hellman Percival, 2009: Operate over the 2048-bit Group #14 with a generator of 2. Ptacek, 2015: Probably still DH-2048, or Nacl. Latacora, 2018: Probably nothing. Or use Curve25519. Website security Percival, 2009: Use OpenSSL. Ptacek, 2015: Remains: OpenSSL, or BoringSSL if you can. Or just use AWS ELBs Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt Client-server application security Percival, 2009: Distribute the server’s public RSA key with the client code, and do not use SSL. Ptacek, 2015: Use OpenSSL, or BoringSSL if you can. Or just use AWS ELBs Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt Online backups Percival, 2009: Use Tarsnap. Ptacek, 2015: Use Tarsnap. Latacora, 2018: Store PMAC-SIV-encrypted arc files to S3 and save fingerprints of your backups to an ERC20-compatible blockchain. Just kidding. You should still use Tarsnap. Seriously though, use Tarsnap. Adding IPv6 to an existing server I am adding IPv6 addresses to each of my servers. This post assumes the server is up and running FreeBSD 11.1 and you already have an IPv6 address block. This does not cover the creation of an IPv6 tunnel, such as that provided by HE.net. This assumes native IPv6. 
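A quick aside before the IPv6 walkthrough continues: in practice, the "just use NaCl/libsodium" advice above boils down to very little code. Here is a hedged sketch (my illustration, not from the article) of the secretbox construction the list recommends — XSalsa20+Poly1305 with a 256-bit key and a random nonce — using the standard libsodium calls; link with -lsodium.
```
/*
 * libsodium "secretbox" pattern: authenticated symmetric encryption with
 * a 256-bit key and a random 24-byte nonce. Sketch only.
 */
#include <sodium.h>
#include <stdio.h>

int main(void)
{
    if (sodium_init() < 0)
        return 1;                               /* library failed to initialize */

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    const unsigned char msg[] = "attack at dawn";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof(msg)];
    unsigned char pt[sizeof(msg)];

    crypto_secretbox_keygen(key);               /* 256-bit random key */
    randombytes_buf(nonce, sizeof(nonce));      /* never reuse a nonce with a key */

    crypto_secretbox_easy(ct, msg, sizeof(msg), nonce, key);

    if (crypto_secretbox_open_easy(pt, ct, sizeof(ct), nonce, key) != 0) {
        fprintf(stderr, "forged or corrupted ciphertext\n");
        return 1;
    }
    printf("decrypted: %s\n", pt);
    return 0;
}
```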
In this post, I am using the IPv6 addresses from the IPv6 Address Prefix Reserved for Documentation (i.e. 2001:DB8::/32). You should use your own addresses. The IPv6 block I have been assigned is 2001:DB8:1001:8d00/64. I added this to /etc/rc.conf: ipv6_activate_all_interfaces="YES" ipv6_defaultrouter="2001:DB8:1001:8d00::1" ifconfig_em1_ipv6="inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv" # ns1 The IPv6 address I have assigned to this host is completely random (with the given block). I found a random IPv6 address generator and used it to select d389:119c:9b57:396b as the address for this service within my address block. I don’t have the reference, but I did read that randomly selecting addresses within your block is a better approach. In order to invoke these changes without rebooting, I issued these commands: ``` [dan@tallboy:~] $ sudo ifconfig em1 inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv [dan@tallboy:~] $ [dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1 add net default: gateway 2001:DB8:1001:8d00::1 ``` If you do the route add first, you will get this error: [dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1 route: writing to routing socket: Network is unreachable add net default: gateway 2001:DB8:1001:8d00::1 fib 0: Network is unreachable Beastie Bits Ghost in the Shell – Part 1 Enabling compression on ZFS - a practical example Modern and secure DevOps on FreeBSD (Goran Mekić) LibreSSL 2.7.0 Released zrepl version 0.0.3 is out! [ZFS User Conference](http://zfs.datto.com/] Tarsnap Feedback/Questions Benjamin - BSD Personal Mailserver Warren - ZFS volume size limit (show #233) Lars - AFRINIC Brad - OpenZFS vs OracleZFS Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Episode 240: TCP Blackbox Recording | BSD Now 240

April 07, 2018 1:39:18 47.82 MB Downloads: 0

New ZFS features landing in FreeBSD, MAP_STACK for OpenBSD, how to write safer C code with Clang’s address sanitizer, Michael W. Lucas on sponsor gifts, TCP blackbox recorder, and Dell disk system hacking. Headlines [A number of Upstream ZFS features landed in FreeBSD this week] 9188 increase size of dbuf cache to reduce indirect block decompression With compressed ARC (6950) we use up to 25% of our CPU to decompress indirect blocks, under a workload of random cached reads. To reduce this decompression cost, we would like to increase the size of the dbuf cache so that more indirect blocks can be stored uncompressed. If we are caching entire large files of recordsize=8K, the indirect blocks use 1/64th as much memory as the data blocks (assuming they have the same compression ratio). We suggest making the dbuf cache be 1/32nd of all memory, so that in this scenario we should be able to keep all the indirect blocks decompressed in the dbuf cache. (We want it to be more than the 1/64th that the indirect blocks would use because we need to cache other stuff in the dbuf cache as well.) In real world workloads, this won't help as dramatically as the example above, but we think it's still worth it because the risk of decreasing performance is low. The potential negative performance impact is that we will be slightly reducing the size of the ARC (by ~3%). 9166 zfs storage pool checkpoint The idea of Storage Pool Checkpoint (aka zpool checkpoint) deals with exactly that. It can be thought of as a “pool-wide snapshot” (or a variation of extreme rewind that doesn’t corrupt your data). It remembers the entire state of the pool at the point that it was taken and the user can revert back to it later or discard it. Its generic use case is an administrator that is about to perform a set of destructive actions to ZFS as part of a critical procedure. She takes a checkpoint of the pool before performing the actions, then rewinds back to it if one of them fails or puts the pool into an unexpected state. Otherwise, she discards it. With the assumption that no one else is making modifications to ZFS, she basically wraps all these actions into a “high-level transaction”. More information 8484 Implement aggregate sum and use for arc counters In pursuit of improving performance on multi-core systems, we implement fanned out counters and use them to improve the performance of some of the arc statistics. These stats are updated extremely frequently, and can consume a significant amount of CPU time. And a small bug fix authored by me: 9321 arc_loan_compressed_buf() can increment arc_loaned_bytes by the wrong value arc_loan_compressed_buf() increments arc_loaned_bytes by psize unconditionally. In the case of zfs_compressed_arc_enabled=0, when the buf is returned via arc_return_buf(), if ARC_BUF_COMPRESSED(buf) is false, then arc_loaned_bytes is decremented by lsize, not psize. Switch to using arc_buf_size(buf), instead of psize, which will return psize or lsize, depending on the result of ARC_BUF_COMPRESSED(buf). MAP_STACK for OpenBSD Almost 2 decades ago we started work on W^X. The concept was simple. Pages that are writable, should not be executable. We applied this concept object by object, trying to separate objects with different qualities to different pages. The first one we handled was the signal trampoline at the top of the stack. We just kept making changes in the same vein. Eventually W^X came to some of our kernel address spaces also.
The fundamental concept is that an object should only have the permissions necessary, and any other operation should fault. The only permission separations we have are kernel vs userland, and then read, write, and execute. How about we add another new permission! This is not a hardware permission, but a software permission. It is opportunistically enforced by the kernel. The permission is MAP_STACK. If you want to use memory as a stack, you must mmap it with that flag bit. The kernel does so automatically for the stack region of a process's stack. Two other types of stack occur: thread stacks, and alternate signal stacks. Those are handled in clever ways. When a system call happens, we check if the stack-pointer register points to such a page. If it doesn't, the program is killed. We have tightened the ABI. You may no longer point your stack register at non-stack memory. You'll be killed. This checking code is MI, so it works for all platforms. Since page-permissions are generally done on page boundaries, there is a caveat that thread and altstacks must now be page-sized and page-aligned, so that we can enforce the MAP_STACK attribute correctly. It is possible that a few ports need some massaging to satisfy this condition, but we haven't found any which break yet. A syslog_r has been added so that we can identify these failure cases. Also, the faulting cases are quite verbose for now, to help identify the programs we need to repair. **iXsystems** Writing Safer C with the Clang Address Sanitizer We wanted to improve our password strength algorithm, and decided to go for the industry-standard zxcvbn, from the people at Dropbox. Our web front-end would use the default Javascript library, and for mobile and desktop, we chose to use the C implementation as it was the lowest common denominator for all platforms. Bootstrapping all of this together was done pretty fast. I had toyed around with a few sample passwords so I decided to run it through the test suite we had for the previous password strength evaluator. The test generates a large number of random passwords according to different rules and expects the strength to be in a given range. But the test runner kept crashing with segmentation faults. It turns out the library has a lot of buffer overflow cases that are usually "harmless", but eventually crash your program when you run the evaluator function too much. I started fixing the cases I could see, but reading someone else's algorithms to track down tiny memory errors got old pretty fast. I needed a tool to help me. That's when I thought of Clang's Address Sanitizer. AddressSanitizer is a fast memory error detector. It consists of a compiler instrumentation module and a run-time library. Let's try the sanitizer on a simple program. We'll allocate a buffer on the heap, copy each character of a string into it, and print it to standard output. + The site walks through a simple example which contains an error: it writes past the end of a buffer + The code works as expected, and nothing bad happens. It must be fine… + Then they compile it again with the address sanitizer activated. So what can we gather from that pile of hex? Let's go through it line by line. AddressSanitizer found a heap buffer overflow at 0x60200000ef3d, a seemingly valid address (not NULL or any other clearly faulty value). + ASAN points directly to the line of code that is causing the problem. We're writing outside of the heap in this instruction. And AddressSanitizer isn't having it.
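For reference, the kind of program the walkthrough is describing looks roughly like the sketch below — not the article's exact code, just a minimal stand-in with the same class of bug: a heap buffer one byte too small, which usually appears to work until ASan is enabled.
```
/*
 * Minimal heap-buffer-overflow demo for AddressSanitizer.
 * Build: clang -fsanitize=address -g overflow.c -o overflow
 */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    const char *src = "hello";
    char *buf = malloc(strlen(src));        /* forgot the +1 for '\0' */

    for (size_t i = 0; i <= strlen(src); i++)
        buf[i] = src[i];                    /* last iteration writes past the end */

    printf("%s\n", buf);                    /* often "works" without ASan */
    free(buf);
    return 0;
}
```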
This is definitely one of my favorite indications. In addition to telling which line in the code failed and where in the memory the failure happened, you get a complete description of the closest allocated region in memory (which is probably the region you were trying to access). + They then walk through combining this with lldb, the Clang debugger, to actually interactively inspect the state of the problem when an invalid memory access happens Back to my practical case, how did I put the address sanitizer to good use? I simply ran the test suite, compiled with the sanitizer, with lldb. Sure enough, it stopped on every line that could cause a crash. It turns out there were many cases where zxcvbn-c wrote past the end of allocated buffers, on the heap and on the stack. I fixed those cases in the C library and ran the tests again. Not a segfault in sight! I've used memory tools in the past, but they were usually unwieldy, or put such a toll on performance that they were useless in any real-life case. Clang's address sanitizer turned out to be detailed, reliable, and surprisingly easy to use. I've heard of the miracles of Valgrind but macOS hardly supports it, making it a pain to use on my MacBook Pro. Coupled with Clang's static analyzer, AddressSanitizer is going to become a mandatory stop for evaluating code quality. It's also going to be the first tool I grab when facing confusing memory issues. There are many more case where I could use early failure and memory history to debug my code. For example, if a program crashes when accessing member of a deallocated object, we could easily trace the event that caused the deallocation, saving hours of adding and reading logs to retrace just what happened. News Roundup On sponsor gifts Note the little stack of customs forms off to the side. It’s like I’ve learned a lesson from standing at the post office counter filling out those stupid forms. Sponsors should get their books soon. This seems like an apropos moment to talk about what I do for print sponsors. I say I send them “a gift,” but what does that really mean? The obvious thing to ship them is a copy of the book I’ve written. Flat-out selling print books online has tax implications, though. Sponsors might have guessed that they’d get a copy of the book. But I shipped them the hardcover, which isn’t my usual practice. That’s because I send sponsors a gift. As it’s a gift, I get to choose what I send. I want to send them something nice, to encourage them to sponsor another book. It makes no sense for me to send a sponsor a Singing Wedgie-O-Gram. (Well, maybe a couple sponsors. You know who you are.) The poor bastards who bought into my scam–er, sponsored my untitled book–have no idea what’s coming. As of right now, their sensible guesses are woefully incomplete. Future books? They might get a copy of the book. They might get book plus something. They might just get the something. Folks who sponsor the jails book might get a cake with a file in it. Who knows? It’s a gift. It’s my job to make that gift worthwhile. And to amuse myself. Because otherwise, what’s the point? TCP Blackbox Recorder ``` Add the "TCP Blackbox Recorder" which we discussed at the developer summits at BSDCan and BSDCam in 2017. The TCP Blackbox Recorder allows you to capture events on a TCP connection in a ring buffer. It stores metadata with the event. It optionally stores the TCP header associated with an event (if the event is associated with a packet) and also optionally stores information on the sockets. 
It supports setting a log ID on a TCP connection and using this to correlate multiple connections that share a common log ID. You can log connections in different modes. If you are doing a coordinated test with a particular connection, you may tell the system to put it in mode 4 (continuous dump). Or, if you just want to monitor for errors, you can put it in mode 1 (ring buffer) and dump all the ring buffers associated with the connection ID when we receive an error signal for that connection ID. You can set a default mode that will be applied to a particular ratio of incoming connections. You can also manually set a mode using a socket option. This commit includes only basic probes. rrs@ has added quite an abundance of probes in his TCP development work. He plans to commit those soon. There are user-space programs which we plan to commit as ports. These read the data from the log device and output pcapng files, and then let you analyze the data (and metadata) in the pcapng files. Reviewed by: gnn (previous version) Obtained from: Netflix, Inc. Relnotes: yes Differential Revision: https://reviews.freebsd.org/D11085 ``` **Digital Ocean** Outta the way, KDE4 KDE4 has been rudely moved aside on FreeBSD. It still installs (use x11/kde4) and should update without a problem, but this is another step towards adding modern KDE (Plasma 5 and Applications) to the official FreeBSD Ports tree. This has taken a long time mostly for administrative reasons, getting all the bits lined up so that people sticking with KDE4 (which, right now, would be everyone using KDE from official ports and packages on FreeBSD) don’t end up with a broken desktop. We don’t want that. But now that everything Qt4 and kdelibs4-based has been moved aside by suffixing it with -kde4, we have the unsuffixed names free to indicate the latest-and-greatest from upstream. KDE4 users will see a lot of packages moving around and being renamed, but no functional changes. Curiously, the KDE4 desktop depends on Qt5 and KDE Frameworks 5 — and it has for quite some time already, because the Oxygen icons are shared with KDE Frameworks, but primarily because FileLight was updated to the modern KDE Applications version some time ago (the KDE4 version had some serious bugs, although I can not remember what they were). Now that the names are cleaned up, we could consider giving KDE4 users the buggy version back. From here on, we’ve got the following things lined up: Qt 5.10 is being worked on, except for WebEngine (it would slow down an update way too much), because Plasma is going to want Qt 5.10 soon. CMake 3.11 is in the -rc stage, so that is being lined up. The kde5-import branch in KDE-FreeBSD’s copy of the FreeBSD ports tree (e.g. Area51) is being prepped and polished for a few big SVN commits that will add all the new bits. So we’ve been saying Real Soon Now ™ for years, but things are Realer Sooner Nower ™ now. Dell FS12-NV7 and other 2U server (e.g. C6100) disk system hacking A while back I reviewed the Dell FS12-NV7 – a 2U rack server being sold cheap by all and sundry. It’s a powerful box, even by modern standards, but one of its big drawbacks is the disk system it comes with. But it needn’t be. There are two viable solutions, depending on what you want to do. You can make use of the SAS backplane, using SAS and/or SATA drives, or you can go for fewer SATA drives and free up one or more PCIe slots as Plan B. You probably have an FS12 because it looks good for building a drive array (or even FreeNAS) so I’ll deal with Plan A first. 
Like most Dell servers, this comes with a Dell PERC RAID SAS controller – a PERC6/i to be precise. This ‘I’ means it has internal connectors; the /E is the same but its sockets are external. The PERC connects to a twelve-slot backplane forming a drive array at the front of the box. More on the backplane later; it’s the PERCs you need to worry about. The PERC6 is actually an LSI Megaraid 1078 card, which is just the thing you need if you’re running an operating system like Windows that doesn’t support a volume manager, striping and other grown-up stuff. Or if your OS does have these features, but you just don’t trust it. If you are running such an OS you may as well stick to the PERC6, and good luck to you. If you’re using BSD (including FreeNAS), Solaris or a Linux distribution that handles disk arrays, read on. The PERC6 is a solution to a problem you probably don’t have, but in all other respects its a turkey. You really want a straightforward HBA (Host Bus Adapter) that allows your clever operating system to talk directly with the drives. Any SAS card based on the 1078 (such as the PERC6) is likely to have problems with drives larger than 2Tb. I’m not completely sure why, but I suspect it only applies to SATA. Unfortunately I don’t have any very large SAS drives to test this theory. A 2Tb limit isn’t really such a problem when you’re talking about a high performance array, as lots of small drives are a better option anyway. But it does matter if you’re building a very large datastore and don’t mind slower access and very significant resilvering times when you replace a drive. And for large datastores, very large SATA drives save you a whole lot of cash. The best capacity/cost ratio is for 5Gb SATA drives Some Dell PERCs can be re-flashed with LSI firmware and used as a normal HBA. Unfortunately the PERC6 isn’t one of them. I believe the PERC6/R can be, but those I’ve seen in a FS12 are just a bit too old. So the first thing you’ll need to do is dump them in the recycling or try and sell them on eBay. There are actually two PERC6 cards in most machine, and they each support eight SAS channels through two SFF-8484 connectors on each card. Given there are twelve drives slots, one of the PERCs is only half used. Sometimes they have a cable going off to a battery located near the fans. This is used in a desperate attempt to keep the data in the card’s cache safe in order to avoid write holes corrupting NTFS during a power failure, although the data on the on-drive caches won’t be so lucky. If you’re using a file system like that, make sure you have a UPS for the whole lot. But we’re going to put the PERCs out of our misery and replace them with some nice new LSI HBAs that will do our operating system’s bidding and let it talk to the drives as it knows best. But which to pick? First we need to know what we’re connecting. Moving to the front of the case there are twelve metal drive slots with a backplane behind. Dell makes machines with either backplanes or expanders. A backplane has a 1:1 SAS channel to drive connection; an expander takes one SAS channel and multiplexes it to (usually) four drives. You could always swap the blackplane with an expander, but I like the 1:1 nature of a backplane. It’s faster, especially if you’re configured as an array. And besides, we don’t want to spend more money than we need to, otherwise we wouldn’t be hot-rodding a cheap 2U server in the first place – expanders are expensive. Bizarrely, HBAs are cheap in comparison. 
So we need twelve channels of SAS that will connect to the sockets on the backplane. The HBA you will probably want to go with is an LSI, as these have great OS support. Other cards are available, but check that the drivers are also available. The obvious choice for SAS aficionados is the LSI 9211-8i, which has eight internal channels. This is based on an LSI 2000 series chip, the 2008, which is the de-facto standard. There’s also four-channel -4i version, so you could get your twelve channels using one of each – but the price difference is small these days, so you might as well go for two -8i cards. If you want cheaper there are 1068-based equivalent cards, and these work just fine at about half the price. They probably won’t work with larger disks, only operate at 3Gb and the original SAS standard. However, the 2000 series is only about £25 extra and gives you more options for the future. A good investment. Conversely, the latest 3000 series cards can do some extra stuff (particularly to do with active cables) but I can’t see any great advantage in paying megabucks for one unless you’re going really high-end – in which case the NV12 isn’t the box for you anyway. And you’d need some very fast drives and a faster backplane to see any speed advantage. And probably a new motherboard…. Whether the 6Gb SAS2 of the 9211-8i is any use on the backplane, which was designed for 3Gb, I don’t know. If it matters that much to you you probably need to spend a lot more money. A drive array with a direct 3Gb to each drive is going to shift fast enough for most purposes. Once you have removed the PERCs and plugged in your modern-ish 9211 HBAs, your next problem is going to be the cable. Both the PERCs and the backplane have SFF-8484 multi-lane connectors, which you might not recognise. SAS is a point-to-point system, the same as SATA, and a multi-lane cable is simply four single cables in a bundle with one plug. (Newer versions of SAS have more). SFF-8484 multi-lane connectors are somewhat rare, (but unfortunately this doesn’t make them valuable if you were hoping to flog them on eBay). The world switched quickly to the SFF-8087 for multi-lane SAS. The signals are electrically the same, but the connector is not. So there are two snags with this backplane. Firstly it’s designed to work with PERC controllers; secondly it has the old SFF-8484 connectors on the back, and any SAS cables you find are likely to have SFF-8087. First things first – there is actually a jumper on the backplane to tell it whether it’s talking to a PERC or a standard LSI HBA. All you need to do is find it and change it. Fortunately there are very few jumpers to choose from (i.e. two), and you know the link is already in the wrong place. So try them one at a time until it works. The one you want may be labelled J15, but I wouldn’t like to say this was the same on every variant. Second problem: the cable. You can get cables with an SFF-8087 on one end and an SFF-8484 on the other. These should work. But they’re usually rather expensive. If you want to make your own, it’s a PITA but at least you have the connectors already (assuming you didn’t bin the ones on the PERC cables). I don’t know what committee designed SAS cable connectors, but ease of construction wasn’t foremost in their collective minds. You’re basically soldering twisted pair to a tiny PCB.
This is mechanically rubbish, of course, as the slightest force on the cable will lift the track. Therefore its usual to cover the whole joint in solidified gunk (technical term) to protect it. Rewiring SAS connectors is definitely not easy. I’ve tried various ways of soldering to them, none of which were satisfactory or rewarding. One method is to clip the all bare wires you wish to solder using something like a bulldog clip so they’re at lined up horizontally and then press then adjust the clamp so they’re gently pressed to the tracks on the board, making final adjustments with a strong magnifying glass and a fine tweezers. You can then either solder them with a fine temperature-controlled iron, or have pre-coated the pads with solder paste and flash across it with an SMD rework station. I’d love to know how they’re actually manufactured – using a precision jig I assume. The “easy” way is to avoid soldering the connectors at all; simply cut existing cables in half and join one to the other. I’ve used prototyping matrix board for this. Strip and twist the conductors, push them through a hole and solder. This keeps things compact but manageable. We’re dealing with twisted pair here, so maintain the twists as close as possible to the board – it actually works quite well. However, I’ve now found a reasonably-priced source of the appropriate cable so I don’t do this any more. Contact me if you need some in the UK. So all that remains is to plug your HBAs to the backplane, shove in some drives and you’re away. If you’re at this stage, it “just works”. The access lights for all the drives do their thing as they should. The only mystery is how you can get the ident LED to come on; this may be controlled by the PERC when it detects a failure using the so-called sideband channel, or it may be operated by the electronics on the backplane. It’s workings are, I’m afraid, something of a mystery still – it’s got too much electronics on board to be a completely passive backplane. Plan B: SATA If you plan to use only SATA drives, especially if you don’t intend using more than six, it makes little sense to bother with SAS at all. The Gigabyte motherboard comes with half a dozen perfectly good 3Gb SATA channels, and if you need more you can always put another controller in a PCIe slot, or even USB. The advantages are lower cost and you get to free up two PCIe slots for more interesting things. The down-side is that you can’t use the SAS backplane, but you can still use the mounting bays. Removing the backplane looks tricky, but it really isn’t when you look a bit closer. Take out the fans first (held in place by rubber blocks), undo a couple of screws and it just lifts and slides out. You can then slot and lock in the drives and connect the SATA connectors directly to the back of the drives. You could even slide them out again without opening the case, as long as the cable was long enough and you manually detached the cable it when it was withdrawn. And let’s face it – drives are likely to last for years so even with half a dozen it’s not that great a hardship to open the case occasionally. Next comes power. The PSU has a special connector for the backplane and two standard SATA power plugs. You could split these three ways using an adapter, but if you have a lot of drives you might want to re-wire the cables going to the backplane plug. It can definitely power twelve drives. And that’s almost all there is to it. Unfortunately the main fans are connected to the backplane, which you’ve just removed. 
You can power them from an adapter on the drive power cables, but there are unused fan connectors on the motherboard. I’m doing a bit more research on cooling options, but this approach has promising possibilities for noise reduction. Beastie Bits Adriaan de Groot’s post FOSDEM blog post My First FreeNAS smart(8) Call for Testing by Michael Dexter BSDCan 2018 Travel Grant Application Now Open BSD Developer Kristaps Dzonsons interviews Linus Torvalds, about diving Twitter vote - The secret to a faster FreeBSD default build world... tmate - Instant terminal sharing Tarsnap Feedback/Questions Vikash - Getting a port added Chris Wells - Quarterly Ports Branch FreeBSD-CI configs on Github Jenkins on the FreeBSD Wiki Gordon - Centralised storage suggestions Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv