David Sloan Wilson and I have just finished a paper exploring the worldviews that create selection pressures at every level of AI development.
We examine how four extractive worldviews (technological determinism, hyper-individualism, market fundamentalism, and anthropocentric dominance) create what we call “the cancer pattern”: success at the individual and organisational levels systematically undermines the societal and planetary systems we all depend on. More importantly, we map out four prosocial alternative worldviews and show how Ostrom’s Core Design Principles translate into concrete governance mechanisms for human-AI relations at every scale, from individual capability enhancement to civilisational coordination.
What excites me most about this work is seeing how the ProSocial framework we’ve been developing for human groups applies directly to designing AI systems that genuinely enhance human flourishing rather than eroding it. The principles aren’t abstract ideals but testable, implementable mechanisms that are already working in places like Taiwan’s vTaiwan.
I’ve created a 7.5-minute NotebookLM audio overview. Have a listen, then decide whether you would like to dive into the executive summary or the complete paper. I would love to hear what you think in the comments.


