It's proving harder than I thought to maintain the daily Morning Pages routine, especially when I’m traveling for work. But the good thing about this project is that it’s flexible—there's no real damage done by missing a few days, and I can always pick up where I left off.
Lately, I’ve been deep into Harari’s new book, Nexus. It’s fascinating how he examines history through the lens of information networks, focusing on their possibilities and limitations. His insights into how both democratic and totalitarian systems might handle AI—and the dangers he sees for each—are particularly thought-provoking.
One thing I’m wrestling with is the unpredictability of how this new form of intelligence will shape history, especially when it comes to consciousness and intentional action: will AI ever act with purpose the way humans do, driven by survival, pleasure, or pain? Those drives don’t seem easily transferable to AI. So the big question remains: by what principles will AI guide its actions if and when it operates autonomously?
Since all of AI’s input comes from human hands, it likely carries human motivations and biases. But there’s also a chance that AI will recognize new patterns or develop its own rationale for strategic behavior. Harari hints at this in Nexus, but so far his conclusions feel too vague to draw any solid predictions from.
I'm left wondering: Could AI eventually break free from the human-driven cycle of motivations, or will it evolve in ways we can’t even begin to understand yet?
This post is part of the "Morning Pages" project, an experiment in daily creative writing and content generation using AI tools. The thoughts and reflections shared here are edited for publication with the help of ChatGPT, while accompanying visuals are created using DALL·E 3. Both tools contribute to exploring the intersection of technology, creativity, and communication.