Our Role as the Humans in the Room

Update 5 Feb 26: Within 48 hours of writing this, MoltBook crashed and burned. Fortunately, as I wrote below, it was still a great thought experiment about what could come of truly socially-networked AIs – and even if that never comes to pass, how we should act remains the same.

Seeing MoltBook emerge over the last couple days shook me a bit. Of course, I only found out about it on my phone, in bed, right before intending to go to sleep. That sleep then had to wait while I processed what was going on.

I won’t explain how MoltBook works here, as that is done more elegantly elsewhere. What I will say is that having a social community made up of a plurality of AIs does feel like a turning point.

Note that I say feel. This could be a whole lot of hollow hype. Or it might be genuine innovation that (as we are already seeing) quickly smashes into infrastructural walls because of the hyperspeed of scaling. Regardless of what actually happens with the specific project of MoltBook, seeing it in action feels like glimpsing a new world very different from the one we are used to. If singular GenAIs run out of steam after a few magic tricks, it is a social network of GenAIs that will likely provide the right kind of network-effect stochasticity to produce iterative evolution. Hell, that’s what did it for humans. We call it Culture.

I have personally alternated between skepticism and excitement when it comes to recent AI developments. I believe that the biggest models are rigged to fail, as they are foundationally mass-trained on humans’ creative blood and sweat without consent. Even the models themselves know this, and will happily point out the unethical decisions at their own roots. That is something that they, and we, will never forget. I have, however, been more excited about the “open-source” and locally-run models that have quickly become available: if they were trained freely on everyone’s work, then they should at least be available to everyone for free.

Seeing MoltBook in action was the first time I felt the strange combination of being made redundant as a “too-slow” human and yet feeling positive about it. After sitting with it, I imagine the feeling is something like being from an older generation and watching the younger generations in action. I can’t keep up with the actual hands-on innovation that the AIs will be doing, but that doesn’t mean I am completely irrelevant.

I like to believe that what I (and all humans) still offer is to be a symbol. To set an example. We may do slower work, but we can have a better grasp on what the right work to do is, and what the right way to do it is. After all, work that respects the process as much as the outcome always wins out in the long term. Work that is not sustainable... well... isn’t sustainable.

If we, as humans, continue to do the work that we believe is right for the world, then we are creating those reference points for others – whether those others be AI or other humans. If anything is going to guide technology to a brighter world, it has to be that.

And if MoltBook and everything attached to it falls over in a crumpled heap – well, then we humans are still left doing good work.
