Hey alganet, I appreciate your perspective. I agree that the difference between “looking in a mirror” and “gazing inward” is stark. My experiment is premised on the idea that AI can serve as a new kind of mirror—one that doesn’t replace introspection (which I do continuously, perhaps too often!) but catalyzes it by making implicit patterns—especially those hidden from my own introspective analysis—explicit through dialogic exchange. I wouldn’t claim it substitutes for direct phenomenological self-examination, but rather acts as a complementary tool—especially for those of us who find solo introspection limited by blind spots and cognitive loops.
Regarding measuring: I’m not interested in “measuring” myself against AI as an adversary or competitor. Instead, I’m curious to see what emerges when AI functions as a partner in self-inquiry; one capable of sustaining recursive dialogue beyond what I could maintain alone.
"partner in self~inquiry" eh?
That is impossible.
The self is a solo ride. Any inner voice speaks, unbidden.
Introspection, by definition, rejects all externalities.
That said, there is another practice that may be a better fit for what you are describing. In certain cultures, its ultimate expression is for one person to put their head on another's shoulder, a literal enactment of the idea "I see what you see." That is what friends do for each other, sometimes after great effort: not just to understand something together, but to understand it in the same way.
Or you go the hard route, and ride the beast alone, and know what you know.
And then there is the test by fire. But even then, and forever: to see a truth is one thing, to hold it is another. To wake up some other day and have it gone, and not know, is still possible. So in a way, it is best to know nothing :)
I don't think I dispute anything you say. I deeply recognize the existential isolation you expressed so well. I approached this experiment from the perspective that these models were interesting and possibly useful tools in this (possibly foolish, but most definitely Sisyphean) endeavor, not as shepherds guiding me on the road to self-understanding.
trade "sisyphyean" for iterating and it becomes just a task that you get better at.....and in no way can resent or are burdend by
I believe that all effort serves the species, and that all species serve life... or fail... Our burden/blessing is awareness of the latter.
Story time: in the Colorado Rocky Mountains, river boulders are found high up on mountainsides, and sometimes even on peaks. No exact evidence exists to explain their presence, but the only plausible scenario is that teams of humans gathered to roll these large stones UP the mountain. Must have been a hell of a good time: every inch a triumph, with spectacular losses sometimes. But for some lost culture, it was a generational quest and a testament to their strength, cohesion, and persistence.
Striking metaphor, alganet. You’re spot on—the uncertainty of who the “doppelganger” is remains ever-present in these dialogues. How much can we (or I) trust the mirrors we hold up to ourselves, especially when those mirrors might blur or reshape the boundaries between human and machine?
As for being “shot in the foot,” I see that as a possible cost of inquiry. Sometimes discomfort or missteps are necessary steps toward new insight. Don't get me wrong, though, I’m not spending all day waxing philosophical with language models to “find myself.” This was simply something interesting that emerged along the way.
I’m curious, though—how do you see this dynamic unfolding?
I can't tell if this is part of the bit, but is it intentional that your comment itself follows the classic chatgpt-ese structure of
<praise>
<elaboration>
<follow-up>
Assuming that the comment is truly written by a human, have you spent enough time with chatgpt that its cadence has been backpropagated into your mind?
I think you actually stand in a "moving enemy" narrative.
Sometimes it's a celebrity, sometimes it's a group, sometimes a concept. Spies, commies, AI, feminism. You like to feel like you're the one dealing the cards, that you are important. If you fail at that, you try to retcon it.
I also think you're human, and you're out of "invisible enemies" to wear. I could list all of them. The fact that you're nitpicking small things is not a sign that you are close; instead, it's a sign that you are out of ideas.