chsmc.org

From Open Social by overreacted.io:

However, the closed social web has also excluded us from the web. The web we create is no longer meaningfully ours. We’re just rows in somebody else’s database.

Building a private & personal cloud computer

An irony of the personal computing revolution is that, while everyone has a supercomputer in their pocket, a majority of our actual computing has moved to machines in the cloud that we neither own nor control.

This surely has some benefits: apps and data are available anywhere and instantly in sync across devices, users are less likely to lose any of their data if their physical device is lost or damaged, and consumers don’t have to worry about the maintenance and upkeep of complicated software.

I’ve written before about the importance of taking ownership of our digital lives. There is indeed a power waiting to be claimed by those who are willing to trade some of those aforementioned benefits for a computing environment that they control.

The web browser is something of a great equalizer here: a universal surface for running and accessing software on any device; the stability of the platform and the ubiquity of its distribution mechanism are rivaled perhaps only by the postal service. You would expect, then, that the web would be the perfect platform for private, personal software that is owned and controlled by individuals. Unfortunately, that’s often not the case!

Running personal software on the web is a trade-off between privacy and ownership. Don’t want anyone to access your private data? You’ll need to implement some sort of authorization scheme and security mechanisms to ensure your data is only accessible to those you trust. Don’t want to manage all of that yourself? Then you’ll need to use a platform owned by someone else that does it for you.

Why is it so hard to scale down on the web? What if there were a better middle ground? Well, dear reader, I’m here to tell you that I’ve found one.

Imagine: a computer you own, connected to a private cloud, accessible anywhere (for free!), running software on your behalf. All of this is easier than it sounds to set up and maintain if you use the right tools. In this piece I want to share how I’ve set this up for myself using a spare Mac mini.

CHIP magazine, April 1982

What is a cloud computer good for? Lots of things! Here are just a few examples:

  • Run your own software and host your own web apps.
  • Host web apps for your family and friends.
  • Use Homebridge to do cool smart home stuff.
  • Store and stream your own media library.
  • Set up automations to organize emails, files, and more.
  • Serve as a Time Machine backup destination.
  • Run your own local LLMs or MCP servers to plug into other AI tools.
  • Have access to a full computing environment on the go from a tablet or smartphone.
  • Anything else you might dream of doing with a computer that’s always on and available.

Hardware-wise there are lots of options for hosting your own cloud computer, but I happen to think Apple’s latest M4 Mac mini is a superb choice, especially if you’re already familiar with macOS. It’s shockingly small, whisper-quiet, plenty powerful, and well priced starting at $599.

This guide will focus on how you can take an off-the-shelf Mac mini (or really any Mac) and run your own web software that is accessible through a web browser to you and you alone. We’ll make use of just a few free-to-use dependencies.

Configure your Mac

Running a Mac mini as a headless server means we won’t have it connected to a display or keyboard during normal operations. You will, however, need to connect those accessories for these initial steps to set up remote access. I’ve outlined the steps below, and once you have these settings configured you can disconnect the monitor and keyboard permanently.

It’s worth noting that, as a side effect of making your Mac remotely accessible, we’ll need to change some settings that make your machine less secure against anyone with physical access to it. Be sure your device is kept somewhere secure, out of reach of any bad actors.

  • Turn on remote management and login
    • Before we enable the ability to access our cloud computer from outside our home network, we first want to set up local remote access so that we can connect to the Mac mini from other devices.
    • Navigate to System Settings → General → Sharing
Toggle on Remote Management, Remote Login, and Remote Application Scripting. (We’ll cover File Sharing below.)
Configure energy settings
    • Navigate to System Settings → Energy
    • Toggle on the following settings:
      • Prevent automatic sleeping when the display is off
      • Wake for network access
      • Start up automatically after a power failure
  • Automatic login
    • This makes sure that if the machine reboots it will automatically bypass the user selection screen and log in as whichever user you choose.
    • Navigate to System Settings → Users & Groups
    • Select a profile next to “Automatically log in as”
  • Never require password to log in
    • Navigate to System Settings → Lock Screen
    • Select “Never” for “Require password after screen saver begins or display is turned off.”
    • Select “Never” for “Start Screen Saver when inactive”
  • Enable file sharing
    • This allows us to access the filesystem of our Mac mini and any connected drives remotely.
    • Navigate to System Settings → General → Sharing
    • Toggle on File Sharing
Disable automatic updates (optional)
    • This step is optional, but I prefer to update my Mac mini server manually during a monthly maintenance routine.
    • Navigate to System Settings → General → Software Update
    • Click the info icon next to “Automatic updates” to configure your preference.
  • Enable Time Machine server (optional)
    • If you’d like to use your cloud computer as a Time Machine backup destination for other Macs, you can do so by following these instructions.

Once these settings are configured you should be able to connect to and control your Mac server from another Mac, which is helpful if you choose not to keep a monitor and keyboard connected to your server.

To remote in to your server, open Finder on another Mac and press Command-K to “Connect to Server.” Enter vnc:// followed by the Mac mini’s local name or IP (e.g., vnc://macmini.local or vnc://192.168.1.42) and choose “Share Screen.” If you’ve set automatic login, it’ll drop you straight into the desktop; otherwise, sign in with your user credentials.
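Since we toggled on Remote Login earlier, you can also get a command line on the server over SSH. A minimal sketch (the username is a placeholder; use your own account name and your machine’s local name or IP):

# Connect to the Mac mini over SSH (enabled by the Remote Login setting)
ssh yourusername@macmini.local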

This works for connecting to your Mac over your home’s local network, but what about accessing it remotely when you’re away from home? Next we’ll set up a tool that takes care of that for us.

Set up Tailscale

A cloud computer is only useful if you can access it from anywhere, at any time. Ideally we’d like to be able to access software running on our home server over the internet just like any other web app.

This is where Tailscale comes in. Making your Mac accessible over the open internet is a dangerous endeavor that’s easy to get wrong and comes with a host of security issues. Tailscale works around all of that by giving us a virtual private network over which our machines can connect securely, without allowing access from the open internet.

Thanks to Tailscale’s client apps on Mac and iOS, we can connect to our VPN from anywhere and access our cloud computer as if it were any other web server (even though it’s not exposed publicly).

Lucky for us, Tailscale is a breeze to set up and completely free for the purposes described here. There are other tools out there that serve the same purpose, but I’ve been very happy with Tailscale. It’s one of those magic bits of software that just works without any fuss.

Getting set up with Tailscale is simple:

  1. Go to tailscale.com and create an account.
  2. Download/install the client application onto your Mac mini server and sign in.
  3. That’s it!

Well, almost. You’ll also want to do the same on any other devices that you use so that they can connect to the VPN and access your cloud computer. And you’ll want to ensure that the Tailscale app is set to launch on login via its settings panel.
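As an aside, if you prefer the command line, the Mac client can also be installed via Homebrew. A sketch, assuming you have Homebrew installed (the cask name may differ depending on when you’re reading this, so check brew search tailscale first):

# Install the Tailscale app, then launch it and sign in
brew install --cask tailscale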

Voila! You now have access to your Mac from anywhere in the world. Tailscale assigns each device on the network a unique domain name that looks something like machine-name.tail1234.ts.net.

This URL can be used to access your Mac mini remotely via screen sharing, so you can use another device to connect and manage your cloud computer on the go.

The Tailscale dashboard lists your machines and the addresses associated with each one. It's safe for me to share these because they're only accessible on my private Tailnet, which my devices access via the Tailscale client.

You can now spin up a web server on your Mac and access it on your other devices using the assigned domain name combined with the port. Later I’ll discuss how to set up custom domain names for your services so you don’t need to remember what’s running on which port.
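To make that concrete, here’s a minimal sketch using Python’s built-in web server and the example machine name from above (the port is an arbitrary placeholder, and this assumes python3 is available on the server):

# On the Mac mini: serve the current directory over HTTP on port 8080
python3 -m http.server 8080

# From any other device signed in to your tailnet
curl http://machine-name.tail1234.ts.net:8080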

You can also now screen share and remotely control your Mac from anywhere in the world. Simply repeat the steps mentioned above, but replace the local IP or hostname in the vnc:// address with the ones provided by Tailscale.

Manage and access your software

Now that your cloud computer is set up and accessible remotely, it’s time to put it to use!

Any app running on your machine that is exposed on a local port will be accessible over your Tailnet, but we need to make sure that those apps are always running and restart automatically if, say, our machine reboots or the app crashes. For this, we’ll use a process manager called PM2.

Start by installing PM2 globally using npm:

npm install pm2@latest -g

Next, we’ll create a PM2 configuration file that specifies the apps we want to run and manage. Execute the following command to generate an ecosystem.config.js file:

pm2 init simple

That will create a JavaScript config file called ecosystem.config.js in the current working directory. It doesn’t matter too much where you store this file (mine lives in the home directory). The file contents will look something like this:

module.exports = {
  apps: [
    {
      name: "app1",
      script: "./app.js"
    }
  ]
}

The important bit here is the apps array that contains objects representing the services we want to run and manage. Each app, at the very least, will have a name and a script that should be executed to start the app.

Here’s an example config for one of my apps:

{
  name: 'api',
  cwd: '/Users/voyager/Repositories/api',
  script: 'yarn start',
}

This tells PM2 to run yarn start within the directory specified by cwd (current working directory) whenever starting the process.

Once your PM2 config file is ready to go, run pm2 start ecosystem.config.js to start up all of the apps defined therein. Swap the start for stop to stop all of your apps. Check the PM2 docs for all of the available options and commands.

Now that we’ve told PM2 about our apps, we need to make sure it’s configured to start them whenever our Mac reboots. PM2 makes this easy—run the following command:

pm2 startup

This will output a terminal command that you’ll need to copy, paste, and run. Once that’s done, PM2 itself will be set up to run whenever your machine boots.

Finally, we need to tell PM2 which apps should be restored on reboot. Luckily, the config file we created defines all of these, so we’ll just start PM2 with that config and then save the list of current processes:

pm2 start ecosystem.config.js
pm2 save

If you ever make any changes to your config file, you’ll want to repeat this step to ensure those changes are picked up by the launch daemon.

PM2 comes with a great terminal UI for observing the status of your running apps which you can access by running pm2 monit. I like to keep this running in a terminal window on my Mac mini so I can see the status of my apps at a glance.

Custom domains (optional)

The last step in perfecting this setup is to assign custom domains for the apps hosted on our cloud computer. This is nice because we won’t need to memorize or keep track of which app is running on which port number.

There are many ways to get this to work, but the solution I found to be the most straightforward uses a combination of Cloudflare as a domain registrar and a reverse proxy tool called Caddy.

You’ll need a domain name registered on Cloudflare that you’d like to use for your cloud computer. Separate apps/services running on the machine will be assigned to subdomains.

Configure DNS

The first step in this process is to point our domain on Cloudflare to the IP address of our server on Tailscale. Navigate to the Machines tab in your Tailscale dashboard, find your server in the list, and copy the IP address assigned to it by Tailscale. We’ll need that address for the next step.

Next we’ll update the DNS configuration of our domain to have it route to our machine on Tailscale. In the DNS records panel for the domain, create a new A record pointing to the IP address you copied from Tailscale. Use the domain name for the “name” field, and be sure to uncheck the option to proxy through Cloudflare’s servers.
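To make that concrete, here’s a sketch of how the records might look in Cloudflare’s DNS panel (the IP is a placeholder from Tailscale’s 100.x.y.z range; and note, as an assumption on my part, that resolving subdomains like the ones we’ll configure below may require per-subdomain records or a wildcard):

# Type    Name           Content       Proxy status
# A       chsmc.tools    100.64.0.1    DNS only
# A       *              100.64.0.1    DNS only (optional wildcard for subdomains)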

Install and configure Caddy

Caddy will proxy incoming requests to specific ports on our machine, and will take care of setting up an HTTPS certificate for us.

You’ll need to download and install Caddy with the Cloudflare DNS plugin, which can be found here. Select the appropriate OS/platform, search for and select the caddy-dns/cloudflare plugin, right-click the Download button, copy the URL to your clipboard, and use it within the following commands.

# Download Caddy with the Cloudflare plugin
# Replace the URL with the one you got from the Caddy downloads page
sudo curl -o /usr/local/bin/caddy "https://caddyserver.com/api/download?os=darwin&arch=arm64&p=github.com%2Fcaddy-dns%2Fcloudflare&idempotency=89062609982188"

# Make Caddy executable
sudo chmod 755 /usr/local/bin/caddy

# Verify that Caddy is installed
caddy --version

Once Caddy is installed, we need to create a Caddyfile to specify which URLs will be reverse proxied to which port running on our machine. Create a file named Caddyfile, stick it anywhere on your filesystem, and update it to look something like this:

(cloudflare) {
	tls {
		dns cloudflare CLOUDFLARE_API_TOKEN
	}
}

lab.chsmc.tools {
	reverse_proxy http://localhost:3000
	import cloudflare
}

api.chsmc.tools {
	reverse_proxy http://localhost:8000
	import cloudflare
}

You’ll need to replace the CLOUDFLARE_API_TOKEN string with an actual API token acquired from Cloudflare. Navigate to Cloudflare’s API token page in profile settings and generate a token that has zone DNS edit permissions. This step allows Caddy to connect to Cloudflare via its API to ensure SSL certificates are automatically set up for our domain.

In the example above, subdomains of the root domain (for me, chsmc.tools) are reverse proxied to specific ports on localhost. You’ll of course want to update this to match your domain and the ports on which your processes are running.
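One optional refinement: rather than pasting the token directly into the Caddyfile, Caddy can read it from an environment variable using its {env.*} placeholder syntax. A sketch of the modified snippet (this assumes CLOUDFLARE_API_TOKEN is set in the environment Caddy runs under):

(cloudflare) {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}
}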

Finally, we’ll update our PM2 config to include Caddy as one of its managed processes. In the apps array within the PM2 config file we created earlier, add the following:

{
  name: "caddy",
  cwd: "/Users/chase/Documents",
  script: "caddy stop || true && caddy start"
}

Make sure the cwd property points to wherever you saved your Caddyfile.
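Before leaving Caddy’s management to PM2, it’s worth a quick manual smoke test from that same directory. A sketch:

# Check the Caddyfile for syntax errors
caddy validate --config ./Caddyfile

# Run Caddy in the foreground to watch the logs (Ctrl-C to stop)
caddy run --config ./Caddyfile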


While it requires some initial setup, I’ve been very pleased with this solution both in terms of its usefulness and how little maintenance has been required after getting up and running. I’ve had virtually no downtime or cases where my server was inaccessible for any reason, and I only do maintenance about once a month to update the OS, the software running on the device, etc.

It’s also pleasing to know that all of this is running on a machine in my home. There’s something really satisfying about the knowledge that an increasing part of my daily toolkit is code I wrote running locally on a happy, humming machine in my apartment.

The point of all this isn’t to replace the open web: it’s to create a low-friction space, a laboratory, where we can experiment with and run software without the headache of sign-up flows, hosting providers, authorization, dependency overload, or vendor lock-in.

I started by using my cloud computer to run a couple of scripts, but over time I’ve built up such an arsenal of tools that my little machine feels in many ways like more of a companion than the computer I carry in my pocket every day.

For me, perhaps even more than AI coding tools and app builders, this bit of kit has made software feel more malleable and approachable than ever.

From Shame Is the Enemy of Joy by Heather Havrilesky:

Shame is that persistent voice that tells you that you can’t take risks and you need to hide who you are. According to shame, your most natural self is inherently embarrassing or inadequate. But shame whispers in quieter tones, too, telling you that you can’t age and also have a beautiful life, you can’t be poor and also be luminous and unstoppable, you can’t make art if no one knows about it, you can’t experiment and play and laugh and dance if no one approves of how you do it, you can’t feel romance under your skin if you’re not loved unconditionally by one person.

All designers have an origin story, and mine is rooted in a childhood obsession with fictional and futuristic interfaces. I watched science fiction not for the story or characters, but for a glimpse at what a computer could be.

One of my favorites was Star Trek and the LCARS (Library Computer Access/Retrieval System) interface for the ship’s computers. Here is an interface that breaks free from the rectangle! It uses lots of colors and sounds! It’s fantastic.

So I was delighted to see that Jim Robertus has brought LCARS to the web as a free-to-use, responsive HTML and CSS template. It even comes with sound effects.

I can’t wait to find a reason to build something with this.

From The Future of Media Is a Bank by dirt-media on Notion:

Paying attention will increasingly include paying something—even a very micro amount of money—as an attestation that something was viewed. We’ll see novel combinations of taste, limited digital space, and reputation as the read receipts of the post-AI era, when friction doesn’t just signify a barrier to consumption but, rather, the presence of friction indicates that something was consumed.

Regardless of your opinion on traditional media, three things have become abundantly clear:

  1. Media is no longer the core product of any media company
  2. Subscriptions as we know them will soon be obsolete
  3. The rise of AI agents will create net new audiences for media

Cue the rise of agentic capital. A new class of audience using autonomous AI agents trained on their taste to extend their own attention and spending. Suddenly, one person can be worth more than one impression. The average entertainment budget can be more efficiently split across creators, leading to all sorts of opportunities.

From Tokyo Walk, TBOT Cover, Aloneness by Craig Mod:

Subverting habits means replacing habits. What I’ve learned from my walks is that every day — every step — on the road is a chance for self-renewal, to cast off some small micron of a past, shittier, scared, low-self-worth, less-kind self, and replace it with a more patient, more empathetic, higher value bizarro self. Someone you could have been earlier in life, given a different set of circumstances. Micron by micron, atom by atom, it adds up (one hopes!).

Someone found the real Spotify accounts of famous politicians, journalists, and media/tech figures and scraped their listening data for more than a year. They’ve published some of the data online as the Panama Playlists.

The Panama Papers revealed hidden bank accounts. This reveals hidden tastes.

Scroll in wonder and/or horror!

File this one away as another excellent example of culture surveillance, which I’d argue would make for an excellent entry in an updated edition of the New Liberal Arts.

Antibuildings

We all have our own antilibrary, the books we buy with the best intentions of reading but never quite get around to. For architects, a similar concept might be the sketches and plans that never leave the drawing board: antibuildings?

I’d venture to say there’s likely as much to learn in studying the antibuildings of great architects as there is in studying those works that have been fully realized.

Frank Lloyd Wright left behind a treasure trove of antibuildings (582 that we know of!), and artist David Romero has been creating digital models based on their plans as part of a project called Hooked On The Past. That includes The Illinois: FLW’s ambitious plan for a mile-high skyscraper in downtown Chicago that would have been nearly twice the height of the Burj Khalifa.

Colossal has a great interview with David about his work that features some additional structures I didn’t see on the official website or Flickr page.

Reanimating a ghost

Mary Oliver once said that “attention is the beginning of devotion.” I want to highlight a few examples of this in practice that have recently crossed my desk.

First: the YouTube channel of Baumgartner Restoration, with which I’ve recently been obsessed. The videos feature the proprietor, Julian Baumgartner, narrating the painstaking process of restoring and conserving fine works of art.

There is some peculiar pleasure in seeing a centuries-old painting transformed under steady, gloved hands. It’s not just the ASMR crackle of varnish being carefully removed, or the delicate touch with which he inpaints a lost eyelash on a Madonna’s cheek. It’s the sense that you’re witnessing a dialogue across time: a conversation between the artist, the restorer, and the persistent materiality of the canvas itself.

As a generalist it’s incredible to see the combination of disciplines that go into this sort of work: chemistry, material science, fine arts, woodworking, history, and more. Plus it’s just really relaxing to watch!

One of the foundational principles in art restoration is reversibility: the notion that any intervention made to a work should be removable without harming the original. It’s a kind of humility encoded into the restorer’s practice, a tacit acknowledgment that today’s best solution might be tomorrow’s regrettable overstep. You see this restraint in Julian’s work: the materials he uses are chosen not just for their compatibility with the painting, but for their ability to be taken away if future conservators, armed with better tools or new information, decide to try again.

But what happens when the artwork in need of restoration isn’t made of oil and pigment, but of code and pixels?

A friend recently shared with me the Guggenheim’s ambitious digital restoration of Shu Lea Cheang’s “Brandon,” an early web-based artwork from 1998 (thanks, Celine!).

Brandon isn’t a painting hanging on a wall; it’s a sprawling, interactive digital narrative exploring gender, identity, and the malleability of self in cyberspace. The work, like much of the early web, was built on now-obsolete technologies like Java applets, deprecated HTML, and server-side scripts that no longer run on modern browsers. Restoring this piece isn’t about cleaning and repairing a surface but rather reconstructing an experience, reanimating a ghost in the machine.

Not to mention the piece is made up of 65,000 lines of code and over 4,500 files (!!).

The restoration of Brandon focused on migrating Java applets and outdated HTML to modern technologies. Java applets were replaced with GIFs, JavaScript, and new HTML, while nonfunctional HTML elements were replaced with CSS or resuscitated with JavaScript.

Don’t miss the two part interview with the conservators behind the project. (And please, if I am ever rendered unconscious, do not resuscitate me with JavaScript.)

In the physical world conservation is tactile, direct: a kind of respectful negotiation with entropy. In the digital realm it’s more like detective work, piecing together lost fragments of code, emulating vanished environments, and making decisions about what constitutes authenticity.

There’s an odd poetry in this sort of work. Just as a restorer must decide how much to intervene—when to fill in a crack, when to let the passage of time show—so too must digital conservators choose what to preserve: the look and feel of a Netscape-era interface? The original bugs and quirks? The social context of a work that once existed in the wilds of early internet culture? The restoration of Brandon becomes not just a technical project, but a philosophical one, asking what it means to keep an artwork alive when its very medium is in flux.

In both cases the act of restoration is, at heart, an act of care. A refusal to let things slip quietly into oblivion. It’s love as an active verb, the intentional transfer of energy.

And perhaps, as our lives become more entangled with the digital, we’ll find ourselves needing new ways to honor not just the objects we can touch but the experiences, stories, and communities that flicker across our screens. I personally feel very grateful that there are organizations and individuals taking on this sort of work.

Both of these examples remind us that conservation is less about freezing the past than about paying attention to it, and keeping it in conversation with the present. In that dialogue we might discover new forms of devotion: ways to care, to remember, and to imagine what else might be possible.

Beyond the gamut

It’s not every day that you get to experience a whole new color, and yet: scientists recently produced the perception of an entirely new color by firing lasers at individual cone cells. No, really!

Most of us don’t have precise eye-lasers at home, but luckily there’s a workaround to approximate the effect. A biological cheat code, if you will.

This blog post over at dynomight.net features an animation you can stare at for a bit and, eventually, you might see the new mystery color. It works pretty reliably for me, and it is kind of wild. My brain doesn’t expect a screen to be able to produce such a saturated color.

You might describe the color as a sort of HDR cyan, but luckily the authors of the paper gave it a much better name: olo.

Olo! To quote the paper, “olo lies beyond the gamut.”

Why do we hallucinate this specific color?

M cones are most sensitive to 535 nm light, while L cones are most sensitive to 560 nm light. But M cones are still stimulated quite a lot by 560 nm light—around 80% of maximum. This means you never (normally) get to experience having just one type of cone firing.

Because M and L cones overlap in the wavelength of light they capture, we don’t get to see the full range of either cone without lasers or mind tricks. Kinda like that myth about not being able to use 100% of our brain power, but in this case: cone power.

From The Browser Is a Printing Press by robinrendle.com:

• I’ve always seen the browser as a printing press.
• Because of that, I’ve always seen myself as a publisher first and then everything else second.

From The Stream by thesephist.com:

Most of modern software is industrially produced for mass-market use, so people get used to lazily thinking that is all that is possible. If you can write software, you can build your own kitchen where you can cook your own food. You can host your friends and feed them too. Maybe it will inspire your friends to cook for themselves and invest in their own metaphorical kitchens, too. Hopefully it can be something that grows with you and becomes a part of your every day.

From The Digital Physical by Craig Mod:

There’s a feeling of thinness that I believe many of us grapple with working digitally. It's a product of the ethereality inherent to computer work. The more the entirety of the creation process lives in bits, the less solid the things we’re creating feel in our minds.

From Things Become Other Things by Craig Mod:

Thirty years later and I'm still operating on scarcity, still trying to put in the distance between then and now. As if there would never be enough steps. As if that town could reach out and grab me and pull me back at any moment.

Each year you invited me and each year I begged my mother to let me go. Just to observe. I'd be an adoptee there, too, but one swaddled in a vitality we just didn't have (despite my mom's concerted and genuine efforts), our tiny family, my quiet grandparents. I would have cut off a leg to sit in the corner of your home, soundless, motionless. To bask in whatever shape your lives took on. To try to understand a fullness I had never known, to wear it like a suit, even if just for a moment. These are the simple dreams of the adopted.

From Overtourism in Japan, and How it Hurts Small Businesses by Craig Mod:

I don’t know if there’s some Platonic or deontic mode of travel, but in my opinion, the most rewarding point of travelling is: to sit with, and spend time with The Other (even if the place / people aren’t all that different). To go off the beaten track a bit, just a bit, to challenge yourself, to find a nook of quietude, and to try to take home some goodness (a feeling, a moment) you might observe off in the wilds of Iwate or Aomori. That little bundle of goodness, filtered through your own cultural ideals — that’s good globalism at work. With an ultimate goal of doing all this without imposing on or overloading the locals. To being an additive part of the economy (financially and culturally), to commingling with regulars without displacing them.

From Quality Is a Trap by Eric Bailey:

Hopefully I’ve illustrated how pointless it is to try and talk about quality by showing how malleable and variable a term it is.

This slipperiness is also something worth keeping in mind if and when you need to contend with other people bringing up the term. Remember that it is a proxy phrase, often born of an inability or unwillingness to articulate other concerns.

Like “interesting,” “quality” is a neutral word. It is a proxy phrase, and can almost always be replaced by more concrete, constructive, and actionable things that contribute more towards the conversation.

From Platform reality by Robin Sloan:

There’s one platform for which none of this is true, and that’s the web platform, because it offers the grain of a medium — book, movie, album — rather than the seduction of a casino. The web platform makes no demands because it offers nothing beyond the opportunity to do good work. Certainly it offers no attention — that, you have to find on your own. Here is your printing press.

From Craig Mod on the Creative Power of Walking by Craig Mod, May 9:

The phone, the great teleportation device, the great murderer of boredom. And yet, boredom: the great engine of creativity. I now believe with all my heart that it’s only in the crushing silences of boredom—without all that black-mirror dopamine — that you can access your deepest creative wells. And for so many people these days, they’ve never so much as attempted to dip in a ladle, let alone dive down into those uncomfortable waters made accessible through boredom.

From No Good Alone by rayne fisher-quann:

Your job is not to lock the doors and chisel at yourself like a marble statue in the darkness until you feel quantifiably worthy of the world outside. Your job, really, is to find people who love you for reasons you hardly understand, and to love them back, and to try as hard as you can to make it all easier for each other.

These were sad and difficult times in which we all learned that it is often impossible for us as individuals to save someone we love from the sum of their suffering, especially so when you’re ignoring your own needs in the process. But to extrapolate that reality into the idea that we shouldn’t want to tend to our loved ones, to receive them as flawed and imperfect people and care for them anyway, is a grave miscorrection. We all exist to save each other. There is barely anything else worth living for.

But even outside of the material barriers imposed by this kind of standard, I am troubled by its implication: it insists that healing is a mountain to be climbed alone, and that relationships are the reward we get once we’ve reached the summit. When we insist that we could only ever effectively love someone who’s been perfectly “healed” — who will not struggle, accidentally hurt us, trigger us, say the wrong thing, do the wrong thing, or participate in any other uncomfortable display of humanity — we are reinforcing, and perhaps projecting, our own beliefs that we have to be perfect in order to be loved.

From all's unfair in love by john d. zhang:

Such insistence on forcing love into a meritocracy-shaped mold doesn’t only do a disservice to everyone who dates, it reinforces the idea that any negative, even traumatic, experiences could and should have been avoided, had we done things differently. It’s not quite victim-blaming, but it sure reeks of it.

Here’s something that checks all kinds of boxes for me: Lori Emerson has a gorgeous new book coming soon (April) called Other Networks: A Radical Technology Sourcebook, which you can preorder now from publisher Mexican Summer.

Other Networks is writer and researcher Lori Emerson’s speculative index of communications networks that existed before or outside of the internet: digital as well as analog, IRL as well as imagined, state-sponsored systems of control as well as homebrew communities in the footnotes of hacker culture.

You would be hard pressed to purposely conceive of a book more squarely aimed at my niche interests. And the book itself is a beautiful hardcover tome, rife with archival imagery as well as original artwork. Instant preorder material right here.

By the way, I discovered Mexican Summer and Lori’s book by way of Claire Evans on Bluesky, a site I am spending more time on as of late. Join me, won’t you?

I really enjoyed this writeup from Matt Webb about extending AIs using Anthropic’s proposed Model Context Protocol. In its own words, MCP is “an open protocol that standardizes how applications provide context to LLMs”.

Back in 2023 I wrote a bit about an early attempt at something similar by OpenAI and was pretty excited about the potential. MCP takes things to another level by making it an open protocol. Anyone can host an MCP server, or create a custom client that works with any language model.

Protocols are cool! And it’s fun to explore them. So I wanted to get a sense of MCP for myself.

I was pleasantly surprised by how easy it was to get started with and see the potential of MCP servers. You don’t even have to build your own as there are lots that have been built and shared by the community. Here’s a great list of reference servers by Anthropic, and there are also over a thousand open source servers available.

If you do want to build your own, I recommend checking out this video from Cloudflare on how to get started using their open source workers-mcp package.

But to quickly get a sense of the potential of MCP, I recommend checking out an existing server first. I decided to start by exploring the Filesystem MCP server, which is exactly what it sounds like: a server that gives an LLM access to your filesystem through various tools like read_file, list_directory, search_files, etc. This is a great place to jump in.

Adding the server to Claude’s desktop application (one of several clients that currently support the protocol) is as simple as dropping this into the app’s config file:

"filesystem": {
  "command": "npx",
  "args": [
    "-y",
    "@modelcontextprotocol/server-filesystem",
    "/Users/chase/Notes"
  ]
}

All this is really doing is allowing the Claude app to run an npm package that implements an MCP server. Neat!
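For context, that snippet lives under the top-level mcpServers key in Claude’s claude_desktop_config.json. A minimal sketch of the whole file, using the same example values as above:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/chase/Notes"
      ]
    }
  }
}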

After restarting the Claude app, I was off to the races. I use Obsidian as my personal knowledge base, and the great thing about it is that it stores notes as plain text on the filesystem. Combined with the filesystem MCP server, I could now ask Claude about my own notes.

Here’s a screenshot from my very first time using the filesystem MCP server in Claude. I asked it to find my log file (the file I use for running notes throughout a year) and summarize the entries from the past month:

What’s fascinating here is the chain of thought the model goes through, and how it uses the tools exposed via the MCP server to solve problems. It starts by searching for files with a .log extension, doesn’t find anything, and thus broadens its search parameters and tries again. It’s then able to find and recognize my Log.md file, read its contents, and summarize them for me. Neat!

I’m really excited about the potential here to make computers more malleable for the masses. There’s been a lot said about the ability for non-technical folks to create their own apps using LLMs, but the ability for those LLMs to manipulate data and interact with APIs themselves might even reduce the need for a lot of dedicated apps entirely.

From ritual humiliation eggplant parm by arielle gordon:

I was down bad. I had forgotten what it felt like. Scarier still, I had forgotten how much I loved the feeling. If you do it right, being down bad is transcendent. It’s an existential response to the elusive search for meaning: your purpose, now, is to bend your world towards the object of your affection.

From The Shape of the Internet by Meghna Rao:

The internet has an original shape, and it’s not the bells and whistles of platforms that we see today. It is an architecture that incited near-spiritual practitioners, one that appeared to be unique from the top-down or bottom-up control of the real world, the control instead running horizontally on protocols that were necessary to adhere to—HTTPS, HTML, and so on—in order to connect various parts of the web together securely, for various webpages to speak with each other, in order for anything online be able to operate at all.

From Issue 55: Personal Computing Paves the Way by Bram Adams:

The issue is that we are now deluged with data, our interpreter antenna is going haywire trying to calculate, to store, to relate, to understand. LLMs have changed the math entirely in this endeavor, particularly thanks to the ability to store, reference, and transform data that we find to be important, not data that others tell us is important.

From 🌻 the post-literate society by Jasmine Sun:

2016 was a turning point for oral culture. Peak Trump, peak Twitter, the death of the text and the fact. When we all lost our minds to the collective unconscious, the birth of a worldwide “vibe” that could shift together as one. And at the risk of sounding hyperbolic: I think there is a correlation between oral culture and authoritarianism, between a less literate population and strongman leaders. When people don’t evaluate ideas separate from their speakers, power gravitates to the most magnetic voice in a room.

We could all be archivists

Contrary to oft-repeated wisdom, the internet isn’t written in ink. Physical ink on paper is often a far better method for carrying data forward into the future. Manuscripts that are hundreds and even thousands of years old are still with us, and still being discovered every day. Will the same be true of our own data a hundred years from now?

Physical collections benefit from their form: by taking up space in the real world they demand attention and care. Digital collections more easily fall into the trap of “out of site, out of mind”. How many online services have you signed up for, added data to over time, and then later forgotten about? How much of our data, the traces of our lives online, are permanently lost?

It’s amazing how fragile we’ve let our data become. When I hear about someone who loses a device and, with it, their entire un-backed-up photo collection, I consider it a tragedy. Photo albums used to be sacred heirlooms, passed down through generations to remind us all that we come from certain people and places. Now we turn over all of that data to a custodian like Apple or Google, and we don’t think about whether their stewardship will continue throughout or beyond our lifetimes. Will Apple exist and still be storing my photo library in 100 years? Even if the data exists somewhere, will there be a way for me to access and view it?

I worry about this especially for those who aren’t chronically online or attached to their devices, who might not understand the effects that fragmentation and walled gardens might have on them in the future, and who don’t have the foresight or knowledge to protect their data for the long haul.

Even digital artifacts that are preserved are still lossy in an important way. When looking back at the work of creatives from the past, we can trace their process through a series of physical artifacts that lead up to a final work. Digital files are often “flattened” representations of a creative process, capturing the final state but missing the messy middle. Another way our digital legacies are flattened is through the loss of metadata. Traditional filesystems lack standard ways of capturing the context around files: why they were created, by whom, how they relate to other files, what topics they pertain to, etc.

It feels more important than ever in a future with LLMs that we not only control our data, but that we all maintain our own sort of wildlife preserves made up of content unspoiled by computer generation. Over time I expect original, unique datasets will become a commodity for those looking to train models.

Managing our data has only gotten more difficult as personal computing has gotten more sophisticated. So much of our digital lives have moved from our machines and into the cloud. Our documents, photos, and music used to exist on our devices where they could be backed up and preserved, but now they exist more and more in privately-owned corporate silos.

It’s no surprise that we turn to these tools. Organizing and browsing the masses of data we generate is not a task well served by modern operating systems. People love online tools like Notion, Airtable, and Google’s suite of apps because they make it easy for consumers to organize data in a way that makes sense to them. They make it searchable, shareable, and available everywhere. But this power comes at a cost: we hand our data over to privately-owned silos whose long term existence is far from guaranteed.

Sure, you could store all of the same data on your computer as you could in a tool like Notion. But I think the metadata those tools allow for is what is so important to preserve. A folder full of files is limiting compared to the database-ness of something like Notion.

In order to properly organize, retrieve, and preserve massive amounts of data (which we all generate nowadays simply by being online) we need ways of tagging, commenting on, sorting, filtering, slicing, linking, searching, etc. A folder is static, representing one way of looking at data, but most information is useful in many contexts.

The rise of graph and database-like features in popular tools like Notion or Obsidian is a sign that the simple filesystem has failed us. And that failure has pushed us towards other solutions which require sacrificing ownership of our data.

If an average consumer wanted to organize information like they might in Notion while maintaining ownership and storing their data locally, I literally do not know of a solution that doesn’t involve administering a database. That’s crazy, right?

Personal computers could feel like this

I’d love to see these sorts of use cases solved for at an operating system level. Third party apps have been a great way to experiment with new computing primitives, but at some point those primitives need to exist without the compromise of giving away control of our data. A simple, hierarchical file structure just doesn’t cut it when it comes to organizing and making use of the massive amounts of data we accrue simply by being a human on the internet.

What might that look like? I’m not sure, but taking cues from relational and graph databases is probably a good place to start. Imagine databases a la Notion as a first-class feature of your operating system: a built-in GUI for browsing and organizing a vast repository of data, plus programmatic ways for first- and third-party apps to hook in.

One place we might take inspiration from is UserLand Frontier, an object database and scripting environment for both native and web applications. Frontier made it easy to create your own software, using your data, on your terms.

At the center of Frontier was its object database, a set of nested tables that could contain data, scripts, bespoke UIs, and, of course, other tables. The object database could be browsed visually via an app, and accessed easily in scripts where you could persist data to disk as easily as setting a variable.

Frontier was first and foremost a developer tool, but I think the ideas contained therein are powerful for average consumers as well. I keep using Notion as an example, but it demonstrates perfectly how these ideas could resonate beyond developers.

Brent Simmons, a developer who used to work at UserLand, wrote about the history of the company and gives a great summary of Frontier.

It’s inherently geeky, since it’s a developer tool. But at the same time it’s more accessible than text editor + command line + Ruby/Python/whatever. It can give more people a taste of what power on the internet is like — the power to create your own things, to re-de-centralize, to not rely on Twitter and Facebook and Apple and Microsoft and Google for everything.

Our computers should be databases! We should be able to script them, access them using browser APIs, browse them via a first party application, etc. They should accrue data and knowledge over the course of our lifetimes, becoming more useful as we use them. They should be ours, something we can control and back up and preserve long after we’re gone.

Bespoke software, created on the fly, is becoming increasingly common thanks to AI. But software is only as useful as the data it’s able to operate on.

All of our emails, recipes, playlists, text messages, Letterboxd reviews, TikTok likes, documents, music, photos, browser histories, favorite essays, ebooks, PDFs, and anything else you can imagine should be something we can own, organize, and eventually leave behind for those that come after us. An archive for each of us.

From Century-Scale Storage by Maxwell Neely-Cohen:

One day, someone will find the flash drive on the ransacked floor of a house, the forgotten server in the ruin of a data center, the file in the bowel of a database. It will matter. Even if their contents had been damaged or forgotten, actions of previous care can bear fruit decades later. They are the difference between recovery and despair.

Preserving digital data also requires preserving the means to access that data, just as preserving a book requires preserving the language in which it is written.

One of my favorite albums from last year was Mk.gee’s Two Star and the Dream Police, so I was delighted to recently discover this collaboration between Mk.gee and another artist I like called Dijon. If the electricity of their creative partnership in this video doesn’t get you excited, I don’t know what will.

Regarding books and their sellers

Happy New Year! I’ve returned from holiday travels and am settling back into work for 2025. Here are three quick, bookish recommendations from links that have crossed my desk recently.

First, a newsletter: Katie Clapham’s Receipt from the Bookshop is a new favorite of mine.

Katie runs an independent bookshop in Lancashire, and every Friday when she opens the shop she also starts a new draft of the newsletter. Throughout the day she fills it up with commentary on running a bookshop in these modern times, quips from the shoppers, witty observations, a record of books sold and purchased, etc. At the end of the day, she closes the shop and sends the newsletter.

One of my favorite genres of art is “totally mundane but fascinating and engaging for reasons that are hard to explain,” and Receipt from the Bookshop fits that bill perfectly.

More than just being fascinating, though, it’s a good reminder for us all of how creativity and fulfillment as a writer can come from mundanity. There is beauty in the mundane! We can find it if we look hard enough.

Whatever this is: more of it please.


Books are powerful cultural artifacts, and so much of human history is wound around them. It should be no surprise then that notebooks carry a similar significance.

Well, that’s just what Roland Allen’s book, The Notebook: A History of Thinking on Paper, is all about. I picked it up recently after seeing it recommended by a few folks in their 2024 reading recaps.

The cast of historical characters that pop up throughout the book is a lot of fun, and the author does a great job of showing just how critical notebooks were to the development of civilization and culture as we know it today.

Another truly great example of something unassuming (notebooks) being explored with a contagious enthusiasm. There is, repeatedly, poetry in the mundane.

By the way, I love reading books which tell history non-linearly through the lens of ultra-niche subtopics. Another great example that comes to mind is Fallen Glory: The Lives and Deaths of History’s Greatest Buildings by James Crawford. Books and buildings both make great portals back in time.


Finally, a celebration of the book collectors, dealers, sellers, and conservators who preserve the art of books and bring the most important ones along into the future.

While reading The Notebook: A History, I stumbled upon a 2019 documentary that resonated on the same frequency.

The Booksellers, which (from what I can tell) has recently been made free to watch on YouTube, focuses primarily on rare book dealers in New York City and their bookshops, and it’s a visual feast for book lovers. But more importantly it’s an homage to those still dedicating their lives to preserving the written word and the book as a form.

There’s a clear and present danger to the world of books that is felt palpably in the documentary, with many sellers and collectors worried about a diminishing market for book collectors. There are also those in the film who see a bright future. It’s nice to hear both takes.

As for myself, I am a huge fan of collecting physical books, and maintain a digital version of my collection which you can browse if you’d like. Whether you collect books or not, I recommend giving The Booksellers a watch.

From Emotional Buffer Zone by annie's blog:

Why not let myself feel?

Because feelings bring in unpleasantries. My feelings may not line up with how I want to be. My feelings seem chaotic and I want to be calm. My feelings make me vulnerable and I want to be in control. My feelings are childish and I want to be mature. My feelings are unpredictable and I want to know. My feelings don’t give a flying fuck about goals, plans, opinions, consensus, and I want to achieve, be cool, be approved.

From Losers by Heather Havrilesky:

You’re mad, bro, because you miss skinning your knees on the concrete, you miss the exfoliating properties of gravel and tar, you’re dying to get dragged, to meet a difficult new friend, to change your mind, to feel uncertain, to fall in love, to speak before you know what you mean, to ask better questions, to get lost and not know your way home.