I have referred multiple times to this trend of not valuing people, but without ever discussing what valuing people means.
I obviously mean valuing individuals and their welfare, but some of these stories have been making me think of the value of humanity collectively, even with (or especially with) all of our flaws.
I had to search a bit for two articles, because they irritated me so much that I didn't save the links.
This one was less frustrating than the other. Shannon Vallor discusses transhumanism and the tendency to elevate technology, including the hope that AI might come up with something more moral than us, and she disputes those hopes.
I sympathize with frustration over human choices. I also know that human flaws get replicated by artificial intelligence, and that the replication may not bring along sympathy and sentiment, areas where humans still frequently come through (though perhaps less so among the humans with the largest influence on technology).
I couldn't find the article I absolutely hated, but another writer's reaction is here:
https://siobhanbrier.com/932/review-of-confessions-of-a-viral-ai-writer/
The original piece was about Vauhini Vara using ChatGPT to write about her sister's death. This included ChatGPT inventing a memory of something that never happened, but that Vara wished had.
I have not read the original piece, but Siobhan Brier, in the linked article, has; she found herself skipping the ChatGPT parts, even though Vara expressed a preference for them.
I see some sense in that. Brier was looking for the human and did not find it in ChatGPT. Vara felt like she was finding something better than human, perhaps, but I think two important factors were at work there.
First, Vara was already familiar with her own words and feelings and was looking for something new. Second, it was very clear that she had not worked out her feelings about her sister's death; the reason she used ChatGPT was that she could not write about it herself. In that way, perhaps it functioned as a kind of therapy, helping her get unstuck.
It is not unheard of for therapy to go badly because the therapist has an idea in their head, whether from their training or their own experience, and ends up not helping you in the way you need.
Their training could still help them realize when that is happening.
I know we are in an imperfect world, but I can't help thinking that Vara might have done better talking to a friend, someone in a support group, or a family member, or just writing on her own and taking it down the paths she needed to follow. That might not have produced something ready for publication, but is something whose readers keep wanting to skip the ChatGPT parts really "ready" for publication?
Writing on your own can be a struggle, I know, but there is strength to be found in the struggling, and I don't know that AI can provide that.
Then, for those who are struggling with human relationships (possibly needing some maturity and development), is customizing a companion the best option? Will they do better in a world where they can, instead of learning respect and mutual regard with living beings, go for the "ultimate personalized girlfriend experience"?
I will not link to that, but here's the story of a guy who created his own AI board members, immediately hit on one, and then had her tell him it was okay:
https://futurism.com/investor-ai-employee-sexually-harasses-it
With kindness and grace for each other, we can be beautiful in our imperfections, and create beauty.
That's what I hope to see.
That is going to require something more genuine than AI can provide.
But for more signs of bad ideas and opportunities for abuse:
AI Steve did lose the election.