chsmc.org

From Ways of Being by James Bridle:

One such clonal wonder is Pando, an aspen living in Fishlake National Forest in Utah. At ground level, Pando looks like a forest. They take the form of more than a hundred acres of quaking aspen trees; 47,000 tall, slender trees with white bark and black knots, whose leaves turn shades of brilliant yellow in the autumn. But in fact Pando is one individual, a single organism, in which each tree trunk is a shoot from a single root system. They are one of the largest and oldest individuals on Earth.

No wonder then that the poet Richard Brautigan was moved to imagine 'a cybernetic forest / filled with pines and electronics / where deer stroll peacefully / past computers / as if they were flowers / with spinning blossoms'.

While the machines we are constructing today might one day take on their own, undeniable form of life, more akin to the life we recognize in ourselves, to wait for them to do so is to miss out on the full implications of more-than-human personhood. They are already alive, already their own subjects, in ways that matter profoundly to us and to the planet. In the words often attributed to Marshall McLuhan (but more properly ascribed to Winston Churchill): ‘we shape our tools, and thereafter our tools shape us.’ We are the technology of our tools: they shape and form us. Our tools have agency, and thus a claim upon the more-than-human world as well. This realization allows us to begin the core task of a technological ecology: the reintegration of advanced human craft with the nature it sprung from.

Ecological thought, once unleashed, permeates everything. It is as much movement as science, with all the motive, restless energy that word connotes. Every discipline discovers its own ecology in time, as it shifts inexorably from the walled gardens of specialized research towards a greater engagement with the wider world. As we expand our field of view, we come to realize that everything impacts everything else – and we find meaning in these interrelationships. Much of this book will be concerned with this particular ecological thought: that what matters resides in relationships rather than things - between us, rather than within us.

From A Brief History of Creativity by Elan Kiderman Ullendorff:

The relationship between creativity and power becomes so strong that it allows for two seemingly contradictory things to occur in parallel: when exercised by the managerial class, it is used to grant capital (think: the deification of the startup founder), and when exercised by the working class, it is used to deny capital (think: the frivolousness ascribed to arts education). Relatedly, the ultimate manifestation of this relationship is the creative logic of social media, wherein users create the content and platforms reap the monetary rewards.

From Life After Language by ribbonfarm.com:

Here is the thing: There is no good reason for the source and destination AIs to talk to each other in human language, compressed or otherwise, and people are already experimenting with prompts that dig into internal latent representations used by the models. It seems obvious to me that machines will communicate with each other in a much more expressive and efficient latent language, closer to a mind-meld than communication, and human language will be relegated to a “last-mile” artifact used primarily for communicating with humans. And the more they talk to each other for reasons other than mediating between humans, the more the internal languages involved will evolve independently. Mediating human communication is only one reason for machines to talk to each other.

From Stewarding Design System Contributions | by Nathan Curtis | EightShapes | Medium by medium.com:

So many people! So many opinions! Which opinions really matter, anyway? From which teams? A contributor may neither have relationships nor own critique agendas. And those open, visible community venues are a risky place to be vulnerable. It can seem dangerous and beyond their control.

A steward’s best response? “Leave that to me.” The steward must know how to trigger the routines and involve the people who matter most.

From Notes From Dynamicland: Geokit by omar.website:

Is this a map? Well, it's a page – a program, basically – which is making a claim about itself.

From Design Fiction as Pedagogic Practice by matthewward:

We always design for a world that sits, sometimes just slightly, out of sight. We engage with a complex set of actors in order to move our fictions into the realm of the real. We fight against the Dark Matter to get work made.

From Creative Is 10%. Structure and Systems Are the Rest. by chrbutler.com:

Structure and systems don’t squash creativity and make everything look the same. They don’t squeeze efficiency and profit out of beauty and craft. They can do those things. But that’s not what they’re actually for. Structure and systems are not just a designer’s most important tools — they are, ultimately, what design is. They are what make it possible for a new idea to be understood and experienced — to be made.

From What Does It Mean to Be Strategic? by Dimitri Glazkov:

The mission of a strategist is not to set or devise strategy. It is to understand how an organization’s strategy emerges and why, then constantly scrutinize and interrogate the process, identifying inconsistencies and nudging the organization to address them. In this way, strategy work is a Socratic process: gradually improving the thinking hygiene of the organization.

From Creation & the Reclamation of My Attention by Taylor Gage:

What I could no longer ignore about these (very human) races for status, for significance, for attention, for belonging, for “success”, is that the chasing of someone else’s attention requires all of our own.

From The Internet Isn’t Meant to Be So Small by defector.com:

It is worth remembering that the internet wasn't supposed to be like this. It wasn't supposed to be six boring men with too much money creating spaces that no one likes but everyone is forced to use because those men have driven every other form of online existence into the ground. The internet was supposed to have pockets, to have enchanting forests you could stumble into and dark ravines you knew better than to enter. The internet was supposed to be a place of opportunity, not just for profit but for surprise and connection and delight. Instead, like most everything American enterprise has promised would hold some new dream, it has turned out to be the same old thing—a dream for a few, and something much more confining for everyone else.

From Elon Musk Revealed What Twitter Always Was by Charlie Warzel:

After all the agita, the energy, and the unbelievable amount of time spent toiling in the feed, what do any of us really have to show for it?

Like all social-media platforms, Twitter’s architecture is geared toward promoting engagement, which means that Twitter has optimized itself to turn shaming into a frictionless experience. Design decisions such as the quote-tweet button are potent tools for taking an idea or opinion meant for one group and directing it toward another, with commentary appended. Over time, this shaming has become foundational to Twitter’s user culture, so much so that the platform developed its own vocabulary of shaming, such as subtweeting and ratioing. “We are rooted on by our buddies to insult people outside our in-group,” O’Neil said. “It makes us feel insulated and empowered to sling shit at others and feel righteous in the process.” Shame is the grist for the mill—we direct it toward others, and we experience others directing it at us.

From Recovering Roundup, AI/social Media Edition by Holly Whitaker:

“This is what the lives of so many people come down to: alone in a simulation, working at a job to make money to buy things to express who they are based on the rules the media has set out for them; conditioned by and willfully complicit in a system that feeds them their worldview every day, destined to a life of having conversations about surface-level politics or economics while watching the world pass them by on a TV screen.”

humans are becoming increasingly addicted not because some mutant addict gene is flooding the pool or because alcohol or addictive chemicals and behaviors are increasingly available, but because we are becoming more disconnected from our purpose, nature, culture, and each other.

I think that we are at peak addiction, at an unprecedented and serious inflection point that will shape the future we exist in, and that the seemingly inconsequential choices each of us make right now, matter. This is an age of addiction, and that should be a constant consideration as we navigate our lives and the many choices in front of us.

Perhaps this means that, as a response to an unreliable and disorienting internet, or as an attempt to solve the problems of a previous era where we gave so much of our lives over to the internet, more of us might, instead of doubling down on our time spent online, be incentivized to spend more time in the real world, fostering real connections and actual communities, having real debates, and rediscovering the world that exists outside our phones.

From Why Fish Don't Exist by Lulu Miller:

"There is another world, but it is in this one," says a quote attributed to W. B. Yeats

“How do you go on?"

It was the question I'd been asking of everyone, in a way, for my whole life. It was the reason I'd spent so many years researching David Starr Jordan's life; it was the question I'd asked my father when I was a little girl; it was why I'd been so reluctant to let go of the curly-haired man, his mesmerizing way of pulling laughter from the cold earth. That levity was the quality I wanted to be near, the substance I wanted to learn how to manufacture in myself, the recipe that, as far and wide as I searched, I seemed unable to find.

And what cognitive glitch helps you achieve grit? Positive illusions. Other studies showed that if you had positive illusions, you were less likely to experience discouragement after setbacks. And while grit is a cocktail of many traits, one of its most important ones is just that: an ability to keep going after setbacks, to keep going in the face of no evidence that what you are striving for will ever work, or, as Duckworth puts it, "maintaining effort and interest over years despite failure, adversity, and plateaus in progress."

You can even find it in his essays on temperance. Why, in the end, was he so opposed to drugs? Because they allow you to feel more powerful than you are! Or, as he puts it, they "forc[e] the nervous system to lie." Alcohol, for example, lets drinkers "feel warm when they are really cold, to feel good without warrant, to feel emancipated from those restraints and reserves which constitute the essence of character building." In other words, a rosy view of yourself was anathema to self-development. A way to keep yourself stagnant, stunted, morally inchoate. A fast track to sad-sackery.

A special proof of scientific as distinguished from aesthetic interest is to care for the hidden and insignificant.

Maybe it was okay to have some outsized faith in yourself. Maybe plunging along in complete denial of your doomed chances was not the mark of a fool but—it felt sinful to think it—a victor?

In the new book Make Something Wonderful: Steve Jobs in His Own Words, Steve talks about his love for books and also their shortcomings:

The problem was, you can’t ask Aristotle a question. And I think, as we look towards the next fifty to one hundred years, if we really can come up with these machines that can capture an underlying spirit, or an underlying set of principles, or an underlying way of looking at the world, then, when the next Aristotle comes around, maybe if he carries around one of these machines with him his whole life—his or her whole life—and types in all this stuff, then maybe someday, after this person’s dead and gone, we can ask this machine, “Hey, what would Aristotle have said? What about this?” And maybe we won’t get the right answer, but maybe we will. And that’s really exciting to me. And that’s one of the reasons I’m doing what I’m doing.

Steve Jobs’ speech at the International Design Conference in Aspen, Colorado on June 15, 1983

For all the work we’ve put into creating ways to capture our lives digitally, it doesn’t feel like the ritual of passing that information down to future generations is considered much.

I wonder if this might be a common use case for conversational AIs in the future. You can imagine a ChatGPT trained on the works of Aristotle, waiting to answer new and novel questions. Like Steve says, we won’t always get the right answer, but maybe we will.

The digital book is lovely and full of wisdom—definitely a recommended read.

From Design Notes on the 2023 Wikipedia Redesign by alexhollender.com:

The positive outcome of the RfC was probably a mix of all of those things, but we won’t really ever know how/why we arrived there, which is bothersome to me.

Did we just get lucky? Did all of the previous interactions we had with volunteers actually build support? Did all of the feedback we incorporated lead to a better design? And why do people think whitespace is an indication of a failed design (like holy shit, some people hate it so much)?

As the comments/votes started coming in, I became frustrated at how unrepresentative of the general public the people voting were. It was a very small group of editors, potentially making a decision for billions of readers.

I started to use these two images as a metaphor for the different needs we were trying to support:

Visual design can be used to evoke a feeling, or communicate a conceptual idea. But given that the interpretation of the design is personal/subjective, how do you communicate the idea of free, collaborative knowledge to a global audience, across a wide age range? Visual design can also be used to signify a specific brand, however for Wikipedia this signal is already established via the content itself (infoboxes, blue links, etc.).

Our perspective on that was: organizing and minimizing the clutter allows us to accentuate things in a more intentional manner. It’s better to provide people with a few clear pathways behind the scenes (like the Talk, Edit, and History links), rather than having a scattershot approach, which might catch a random curious person here or there.

A slight tangent: unbeknownst to many people, the many versions of Wikipedia are not centralized. The Wikipedia you read (whether it’s English, Bangla, Telugu, Kyrgyz, Korean, Persian, or any of the 300 others), is actually a separate website from all of the other Wikipedias that exist. Sure they share a lot of code, use the same servers, and generally have the same interface. But changes volunteers make to the interface (and the content too, of course) are made locally.

The eyes and ears of AI

It’s hard to keep up with the progress of AI. It seems as though every week there’s a new breakthrough or advancement that changes the game. Each step forward brings both a sense of wonder and a feeling of dread.

This past week, OpenAI introduced ChatGPT plugins which “help ChatGPT access up-to-date information, run computations, or use third-party services.”

Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data.

OpenAI

OpenAI themselves have published two plugins:

  • A web browser plugin which allows the AI to gather information from the internet that was not originally part of its training corpus by searching the web, clicking on links, and reading the contents of webpages.
  • A code interpreter plugin which gives ChatGPT access to a sandboxed Python environment that can execute code as well as handle file uploads and downloads.
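
To make the code interpreter concrete: this is my own sketch, not from OpenAI’s announcement, of the kind of snippet the model might generate and run in its sandbox when a user asks it to summarize some numbers.

```python
# Hypothetical example of code the interpreter plugin might generate
# and execute when asked "summarize these numbers for me".
values = [3, 1, 4, 1, 5, 9, 2, 6]

summary = {
    "count": len(values),
    "mean": sum(values) / len(values),
    "max": max(values),
}

print(summary)
```

The point isn’t the code itself, which is trivial, but that the model can write it, run it, and fold the result back into the conversation without the user ever seeing Python.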

Both of these plugins are pretty astonishing in their own right, and unlock even more potential for AI to be a helpful tool (or a dangerous actor).

But what caught my eye the most from OpenAI’s announcement is the ability for developers to create their own ChatGPT plugins which interact with your own APIs, and more specifically the way in which they’re created.

Here’s how you create a third party plugin:

  • You create a JSON manifest on your website at /.well-known/ai-plugin.json which includes basic information about your plugin, along with a natural language description of how it works. As an example, here’s the manifest for the Wolfram Alpha plugin.
  • You host an OpenAPI specification for your API and point to it in your plugin manifest.
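
For a sense of what that manifest looks like, here’s a minimal sketch based on the example in OpenAI’s documentation at the time (a hypothetical TODO plugin — the example.com URLs and names are placeholders, and the exact fields may change as the spec evolves):

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO List",
  "name_for_model": "todo",
  "description_for_human": "Manage your TODO list.",
  "description_for_model": "Plugin for managing a user's TODO list. Use it to add, view, and delete items.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Note that `description_for_model` is written *to the AI*, not to people — it’s essentially a standing prompt telling the model when and how to use your API.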

That’s it! ChatGPT uses your natural language description and the OpenAPI spec to understand how to use your API to perform tasks and answer questions on behalf of a user. The AI figures out how to handle auth, chain subsequent calls, process the resulting data, and format it for display in a human-friendly way.

And just like that, APIs are accessible to anyone with access to an AI.

Importantly, that AI is not only regurgitating information based on a static set of training data, but is an actor in and of itself. It’s browsing the web, executing code, and making API requests on behalf of users (hopefully).

The implications of this are hard to fathom, and much will be discussed, prototyped, and explored in the coming months as people get early access to the plugin feature. But what excites me the most about this model is how easily it will allow for digital bricoleurs to plug artificial intelligence into their homemade tools for personal use.

Have a simple API? You now have the ability to engage with it conversationally. The hardest part is generating an OpenAPI spec (which is not very hard to do; it’s just a .yaml file describing your API), and you can even get ChatGPT to generate that bit for you. Here’s an example of someone successfully generating a spec for the Twilio API using ChatGPT.
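
To show just how small a spec can be, here’s a minimal sketch for a single-endpoint API (continuing the hypothetical TODO example — the server URL and operation names are placeholders, not a real service):

```yaml
openapi: 3.0.1
info:
  title: TODO Plugin
  description: A plugin to manage a TODO list.
  version: 1.0.0
servers:
  - url: https://example.com
paths:
  /todos:
    get:
      operationId: getTodos
      summary: Get the list of todos
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```

That’s the whole thing: the `operationId` and `summary` fields give the model enough context to decide when to call the endpoint, and the response schema tells it what shape of data to expect back.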

It seems to me that this will greatly incentivize companies and products to create interfaces and APIs that are AI-friendly. Consumers will grow to expect AI tools to be able to interface with the other digital products and services they use in the same way that early iPhone users expected their favorite websites to have apps in the App Store.

There are certainly many negative and hard-to-predict consequences of opening up APIs to AI actors, but I am excited about the positives that might come from it, such as software products becoming more malleable via end-user programming and automation.

Don’t want to futz around with complex video editing software? Just ask your AI to extract the first 5 seconds of an MP4 and download the result with a single click. This type of abstraction of code, software, and interface will become ubiquitous.

Of course, I don’t think graphical interfaces are in trouble just yet. Geoffrey Litt points out that trimming video is actually much more intuitive via direct manipulation than via chat.

But when you consider that ChatGPT can write code to build GUIs and can even interact with them programmatically on a user’s behalf, the implications become clear. Everyone will benefit in some way from their own personal interface assistant.

I also wonder how many future products will be API-only, built with the expectation that users will interact with them through AIs.

Simon Willison wrote a great blog post demonstrating this. He wired up a ChatGPT plugin to query data via SQL, and the results, though technically returned as JSON, get displayed in a rich format much more friendly for human consumption.

I wonder if future “social networks” might operate simply as a backend with a set of exposed APIs. Instead of checking an app you might simply ask your AI “what’s up with my friend Leslie?” Or you could instruct your AI to put together a GUI for a social app that’s exactly to your specification.

This will certainly lead to entirely new ways of relating to one another online.

It would be interesting to try this today with good old RSS, which could be easily wired up as a ChatGPT plugin via a JSON feed. Alas, I don’t yet have access to the plugins feature, but I’ve joined the waitlist.

I’m both excited and nervous to see what happens when we combine AI with a medium like the web.

I’m finally getting around to playing Ghost of Tsushima which is impressive all around. But the thing that has impressed me most is… wind??

The game rejects the normal interface of a minimap to guide players, and instead uses the wind and the environment to show the way forward.

When The Guiding Wind blows in Tsushima, the entire game world responds. The trees bend over, pointing you onward. The pampas grass ripples like the surface of water. Leaves and petals swirl around the scene. The controller emits the sound of gusting wind, and the player can swipe the touch pad to blow the winds and set the environment in motion.

Such a simple mechanic is so unexpected and beautiful and calming in a world of cutting edge graphics and 4K 60FPS VR madness. Video games (and everything else) today are so over the top, but in the end it’s something simple like the wind that gets you.

🍃 Let the guiding wind blow 🍃

From Inverted Computer Culture by viznut.fi:

It is considered essential to be in a properly alert and rested state of mind when using a computer. Even to seasoned users, every session is special, and the purpose of the session must be clear in mind before sitting down. The outer world is often hurried and flashy, but computers provide a "sacred space" for relaxing, slowing down and concentrating on a specific idea without distractions.

Imagine a world where computers are inherently old. Whatever you do with them is automatically seen as practice of an ancient and unchanging tradition. Even though new discoveries do happen, they cannot dispel the aura of oldness.

From Dystopias Now by Commune:

For a while now I’ve been saying that science fiction works by a kind of double action, like the glasses people wear when watching 3D movies. One lens of science fiction’s aesthetic machinery portrays some future that might actually come to pass; it’s a kind of proleptic realism. The other lens presents a metaphorical vision of our current moment, like a symbol in a poem. Together the two views combine and pop into a vision of History, extending magically into the future.

From Inverted Computer Culture by viznut.fi:

Computers are seldom privately owned – they are considered essentially communal rather than personal – but their usage patterns are often highly individualistic. Programming is the most essential element of all computer use, and it is not uncommon to find users who have created all of their software from scratch.