hey, uhh, when was your last...

If "... STI screening," turn to page 58.
If "... backup of that hard drive," turn to page 35.

We all know the NSA spies on basically all Internet users in the U.S. Why do the courts have to pretend it’s a secret? eff.org/deeplinks/2019/04/gove

My son just held a pizza crust up to his ear to pretend call his mother. He had to report he was at work at Google, where he punched boxes all day.

Thank you, #Microsoft, for sharing how many #Google services are looming in the #Chrome/#Chromium browser. I hope all projects that use Chromium take note and also start removing these. (slide was presented at BlinkOn conference)

I can't believe I can still be surprised about the shitty things facebook is willing to do eff.org/deeplinks/2019/04/face . It's so upsetting how easily they can find engineers that aren't morally opposed to implementing this garbage

I’m in a Lyft and the driver is trying to pick between playing the Tim Ferriss podcast and the Joe Rogan podcast and I’m taking a Lyft to soma on the weekend so this is what I deserve.

Me, a simpleton: Let me back up the keys in the phone's TA partition so I can factory restore later

Android, practiced in the dark arts: shame if something were to happen to your hardware attestation keystore

This article is amazing. If there’s one place the security field should have been trying to get things right, this is it. As always, we’ve been looking the other way. wired.com/story/eva-galperin-s


Microsoft is shutting down its ebook store, and deleting all its customers' libraries:


The only reason they can do this is DRM, which means you never really own anything you've paid for.

Physical books are the most obvious alternative, but there are also DRM-free ebook shops. @libreture has a good selection here:


You can find more book-related alternatives here:


(via @yogthos )

#eBooks #Books #Bookshops #Bookstores

Patagonia "is now signing up only fellow B corporations, as well as members of 1% for the Planet, an eco-conscious business association, as corporate customers for branded logowear"


so goes midtown uniform: instagram.com/p/Bt3oqGBHAgW/

I'm finally setting up a custom style for search results. A rule similar to this blacklists all subdomains of a given domain:

/* hide every result whose domain ends in .example.com */
.result[data-domain$=".example.com"] {
  display: none;
}
So far I have groups for sites that use ad-blocker-blockers, and low quality sites.
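A grouped rule can look like this; the domains below are placeholders, and the `.result`/`data-domain` selectors assume the same search-page markup as the rule above:

```css
/* Group: sites that use ad-blocker-blockers (placeholder domains) */
.result[data-domain$=".paywall-blocker.example"],
.result[data-domain$=".content-farm.example"] {
  display: none;
}
```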

Well that's appropriate: I'm eyeball-deep in a storage machine build and an article extolling a NAS pops up in my RSS feed. Per usual I read it, and per usual it's full of hardware RAID and simple filesystems with a fancy UI.

There is a time/place for such things but there are important points to keep in mind when building storage, especially if you care about your data being safe and recoverable over time.

Some bullet points:
✅ Hardware RAID only protects you from disk failures
✅ Hardware RAID is NOT portable
✅ Software RAID is portable but ONLY protects you from disk failures
✅ All but a few filesystems suffer from bitrot and this can NOT be avoided
✅ CPU/RAM/Cable/Controller Card (even the built in ones) CAN and WILL fail taking your data with them
✅ btrfs and zfs can keep bitrot at bay, IF properly configured
✅ btrfs failure modes are catastrophic and horrifying
✅ zfs failure modes are catastrophic but are less horrific than btrfs's
✅ ext4 and exFAT are the two most reliable "dumb" filesystems you can pick
✅ ntfs is OK but isn't reliably compatible with anything outside Windows land
✅ hfs[+] (Apple's filesystems) are hot garbage and may the deities come to your aid if you suffer any form of failure
✅ vfat/fat32 are great at rotting from the inside out, slightly less painful than hfs[+] for rot and failures
✅ lvm and lvm2 are handy but wholly useless for dynamic provisioning
✅ software raid + lvm2 + ext4 is NOT a durable solution; lots of moving parts, many failure modes, and not-so-subtle problems lie down this path. bitrot is very real here
✅ zfs and btrfs CAN be used with 1Gb RAM machines
✅ zfs and btrfs do NOT suffer bitrot IF configured properly
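To make the "IF properly configured" bullet concrete, here's a minimal sketch of a self-healing zfs setup. The pool name and device paths are placeholder assumptions, and these commands need root plus the zfs tools installed:

```shell
# Create a mirrored pool: every block is checksummed and stored on
# both disks, so a scrub can detect AND repair bitrot from the
# surviving good copy.
zpool create tank mirror /dev/sdb /dev/sdc

# Run this periodically (weekly cron is common): reads every block,
# verifies checksums, and rewrites anything that fails verification.
zpool scrub tank

# Shows per-device read/write/checksum error counters.
zpool status -v tank
```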

There is a pattern here: if you try to use a tech stack that's not entirely unified (RAID + lvm2 + filesystem folks, I'm looking at you) you're going to have a LOT of moving parts, no data durability (checksums exist in zfs and btrfs at the filesystem level for a reason!), and your failure modes will be entertaining at best. If any one of those 3 pieces of the stack fails you're into a world of pain recovering that layer and hoping the others remain unaffected.

That and the bitrot problem: computers are made up of a LOT of hardware components working together to do things. If ANY hardware related to your disk stack (CPU, RAM, disk controller, power cable, data cable, south bridge, north bridge, usb controller, etc.) goes awry it'll surface higher up the "stack" in the software as some form of failure.

The traditional RAID + lvm2 + filesystem stack makes some assumptions about those moving parts generally working and really only the disks failing. That's a recipe for problems if something other than the disk goes bad along the way. Nevermind cascading disk failures... The software/hardware tools will tell you the disks are problematic but any replacements will immediately go bad. That's not a fun, fast or easy process to sort out. Nevermind you're likely going to incur subtle bitrot problems along the way that only a filesystem check will be able to see. It won't be able to correct rot coming from subtly broken (or confused) RAID or lvm2 either. The traditional filesystems aren't structured to recover data that's subtly corrupted at lower layers.
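For the curious, the three independent layers of that traditional stack look roughly like this (device and volume names are placeholders; this is the stack being critiqued, not a recommendation):

```shell
# Layer 1: software RAID (mdadm) -- knows about disks, not files
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Layer 2: LVM -- knows about the md device, not the RAID beneath it
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -n data -l 100%FREE storage

# Layer 3: filesystem -- trusts everything below it to be correct
mkfs.ext4 /dev/storage/data
```

Each layer only sees the one directly beneath it, which is exactly why corruption introduced at one level sails through the others undetected.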

Enter zfs. [Editor's note: btrfs has some of zfs' features but falls short in many ways] ZFS assumes ALL THE THINGS will break. Yeah, it has a "it'll break and it's my job to handle that mess" attitude towards data durability. You get choices: from no durability through wildly durable (think 99.999% reliable). The default is to checksum all the data being read/written to the disk and to tell you if reads/writes or checksums fail. Basically it's assuming something WILL go wrong (I promise it will at some point) and telling you about it. If you set up your disks with zfs in a way where >1 copy of the data exists (you can have >1 copy even with a single disk BTW) it'll fix up the data on the fly and report a checksum error. If there are enough errors for a disk, it'll drop the disk offline so you can investigate. If you have >1 disk you can do some really neat things to avoid disk controller failure problems, cabling issues and more. ECC RAM also protects you from RAM problems. You've got choices to reduce the pain.
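The single-disk ">1 copy" trick mentioned above is one property away; pool name and device path are again placeholder assumptions:

```shell
# Even with one disk, store two copies of every block. Halves your
# usable space, but lets zfs REPAIR bitrot instead of just reporting it.
zpool create tank /dev/sdb
zfs set copies=2 tank

# After a scrub, the CKSUM column shows what was caught (and fixed).
zpool scrub tank
zpool status tank
```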

To say zfs is durable and resilient understates its capabilities greatly. Nevermind you can deploy FreeNAS and similar systems pretty cheaply and easily these days. You can even re-purpose an old computer OR a Raspberry Pi 3B+ for zfs storage (notes on the latter coming soon to a post near you).

When you see the whiz bang, pretty appliance think about the above. Even a Synology is a shiny layer on top of the traditional software raid + lvm2 + ext4 model. It's not as durable or resilient as they want you to believe. A computer running FreeNAS has better durability and resiliency and will cost you the same.

Thank you for reading.

you guys would tell me if i been cancelled right

We now offer a live demo of #FreedomBox!🎉🎉🎉

The demo is a custom version of FreedomBox which comes with a few apps pre-configured and automatically resets itself every 30 minutes. Hop on the demo server and try it out!

Demo: freedombox.org/demo/

My son hasn't bathed for days


A place for the XOXO Festival community. Share your dreams, your struggles, your cat photos, or whatever else strikes your fancy, and see what everyone else is sharing.
