Thursday, June 19, 2025

Personal update

In previous posts I have referenced a school issue I was having.

My school uses a third party to proctor tests. I understand the need for them to have ways of making sure that no other software is running and to see the whole room so they can guard against cheating. Getting through that setup and the identity verification with whoever happens to pick up at a large center is a stressful process even when it works, but it frequently doesn't work, and whether it does seems to come down mostly to luck.

I have been very unlucky. 

It is also not really their fault that webcams are often flimsy technology, but still, if you meet the stated technical requirements, things should work... getting into everything wrong with that would be a long tangent. I think I am going to want to submit some detailed feedback soon, though.

Anyway, I started school in July, almost a year ago.

It's a process getting back into being a student. I agonized too much over assignments and time management and figuring things out.

As I was starting to hit my stride, I had a scheduled test and could not connect. That happened in December.

Perhaps the most frustrating part of it was that it was going to be my last proctored test. Everything else is going to be papers or presentations or submissions like that. 

I could not get this test done.

I got various suggestions about what the problem might be, but none of them really worked. I kept getting different answers about my options.

Technically the classes are sequential, so it would have been easy for me to not progress at all. I was allowed to take some other classes, but it was never guaranteed. There was one alternative option that was only allowed to students near graduation, so the question of whether I was going to be able to get closer to graduation was a big one.

Even with the opportunity to take additional classes, I could never only think about that class; there was always the test issue and scheduling the next attempt and failing and growing ever more frantic and discouraged.

Yesterday I was able to connect through the alternate service and I passed the test.

Previously I had been sure I could pass, but I was so stressed out yesterday I started having doubts. I was afraid to (and therefore did not) scratch an itch, just in case that would mess something up.

Having it done is a huge relief.

It is also not the end of the frustration. 

Now I can concentrate on moving forward, but I am behind schedule by two classes. There is a good chance I will not graduate when I intended, even though my original plan was slightly accelerated.

On the other hand, if I have actually hit my stride and I now have a clear path forward, who knows what I can do? 

Onward! 

Tuesday, June 17, 2025

Wrong; start again

I know I said I was going to start on protests this week. It seemed like a reasonable plan, but before getting to that I found that a different, recently stated plan wasn't working at all.

This particular plan was that in addition to not using AI myself (and writing about it to try and discourage other people from doing it), I would delete uses of AI from my feed.

https://sporkful.blogspot.com/2025/06/rejecting-ai-as-much-as-possible.html 

This went wrong in two ways.

The first is that when you ex out a post on Facebook, it assumes that the issue is the poster, not the content. I was getting options to snooze various people for 30 days, or to report the post. 

There is an option to select preferences, but that only covers whether you want to see political or sensitive content, along with a chance to look at your snoozes and designate some people as favorites so they come up more.

Also, that doesn't work with ads. If you ex out an ad, it will tell you that you won't see that particular ad again, but you don't get any additional choices.

Therefore, instead of meaning that I got to see the non-AI stuff that my friends posted, it just meant some people were less likely to come up, even without the snooze.  

Even worse, the algorithmic pursuit of engagement started flooding my timeline with movie and television content instead of friends!

I am not quite ready to adopt replying to friends' posts encouraging them to quit sharing AI. Frankly, I don't think a lot of them even realize that's how the images in their shared item were generated. When you think about it a lot, you kind of develop an eye for it and you realize how ubiquitous it is, but a lot of people aren't thinking about it.

So, yeah, that didn't work out; that's life and you try again.

Right now my plan involves having Facebook closed a lot more often, even when I am on the computer. I will still post my songs, articles, blogs, and selfies, and wish people happy birthdays, and do a check then, but I can't keep scrolling through movie and television content and get anything done or stay in touch with people.

There is probably a later segment of that plan where, when I tune in, I will look up specific people's pages and see how they are doing, perhaps getting a rotation of some kind going. 

My goals are not changing, but my methods must and so they will.  

Friday, June 13, 2025

Science reading list

This actually starts with some books related to the environment that I have been meaning to get to for a while, but I like to combine things.

Books that seemed like they could accompany the environmental books well included...

The Science Class You Wish You Had by David and Arnold Brody
Great Feuds in Science: Ten of the Liveliest Disputes Ever by Hal Hellman
Lab Girl by Hope Jahren 

The Science Class was more of a historical chronicle, going through seven discoveries and the context that led to them. As many of the discoveries were related to the feuds, there was quite a bit of repetition between the two.

If you want to read one, I recommend Liveliest Disputes for two reasons. In addition to having ten feuds instead of seven discoveries -- so, three additional sciences -- there is more of a unified voice, with one enthusiast sharing his interests. 

In The Science Class, two magazine writers talked to a lot of academics and then felt smart about putting it together. What they do really well, though, is give that context. 

They do reference Isaac Newton's quote about "standing on the shoulders of giants" but that applied to many before and after Newton. The existing body of knowledge is what you have to build on. 

That is not only how some discoveries get made, but how some are arrived at independently around the same time period, like with Newton and Leibniz and calculus (there was a feud) or similar thoughts on natural selection between Darwin and Wallace (not a feud).

One interesting point made in The Science Class is that the era of big science names is probably over. When things happen like CRISPR or a new COVID vaccine, there is usually not a single name associated with it, unless that is a company name.

Of course, with those particular examples, you could come up with specific names, like Emmanuelle Charpentier and Jennifer Doudna for CRISPR or June Almeida identifying the first coronavirus. 

It is certainly true that there is not as much of a sense of foundational knowledge with these fairly recent discoveries, compared to what you would feel with Newton or Einstein, when there were still very new concepts. 

There is also much greater specialization now, which is going to mean more people along the way and more specific (and perhaps limited) applications. If Charpentier, Doudna, and Almeida's names are less known, it's not necessarily because of sexism. 

Still, sexism in science's track record is not great.

Lab Girl has some examples. It's not the focus, but I am afraid I have already seen so many examples that when a few more appear, it's not at all surprising.

However, if we think about how many contributors were needed to get to where we are, and about how much further we need to go, we can't afford to lose anyone's contributions.  

If we are throwing good minds away for reasons of sexism or racism -- not even acknowledging the possibility of these being good minds for those reasons -- it is stupidly tragic. 

Thursday, June 12, 2025

People needing people

I had thought that I would essentially move on from AI to capitalism, but with current events I was feeling the need to spend some time on protesting. 

Today's post could work as a transition, or as its own thing.

https://x.com/CoraCHarrington/status/1933188476213571990 

I’ve felt this since folks went all in on Grammarly, like it wasn’t flattening everyone’s voice. Going from letting software automatically check everything and going along with its edits to letting software just write the thing for you, may not be so large a gap to many people.

I saw this because I follow Cora, and also because it goes along with what I have been writing about, but the thread she is responding to has its own points:

https://x.com/roryisconfused/status/1933050364103962815 

A big divide in attitudes towards AI, I think, is in whether you can easily write better than it and it all reads like stilted, inauthentic kitsch; or whether you’re amazed by it because it makes you seem more articulate than you’ve ever sounded in your life

The second post in that thread, where kids may not feel the need to develop their own skills, is a big concern, but there is another point here that may just be my common thread through everything.

I don't worry about expressing myself, mostly. I mean, I do worry about getting things in the exact right order so that it is all logical and coherent, but I feel like I have a good sense of myself, I know whether I understand a topic or not, and I am confident in my ability to use words (if not in my ability to pronounce words correctly when I only know them from reading). 

That combination of things helps me now, but they didn't come automatically. 

There was a lot of reading and looking things up, and lots of writing spent trying to understand myself, and quite a bit of connecting with people and nature, and there was growth. I value that for the experiences along the way and where they have led me.

There are so many things that can get in the way of reaching that level of comfort. 

Right now the more obvious problem is people who seem to believe in their own expertise without any logical basis for it (is there a level where they do know?), but there are also people who do not value themselves enough. It would be easier for them to feel the allure of AI.

It's not about whether they have good grammar skills or good emotional intelligence; those things can be learned. It's that there is a beating heart behind it, with unique experiences and a point of view. That's a wonderful starting place.

Maybe the point of having people with no regard for study or research is that it diminishes effort in general. It shouldn't. People can learn, they can innovate, they can grow, and they are valuable even before all of that.

If I care about you, I want to hear from you, not an artificial intelligence that quite often has been influenced by terrible people.

There were also some interesting things in Rory's thread about how a lot of writing well comes from reading well, which is sad given recent stories about Generation Z not wanting to read to their kids and not liking reading in the third person. I can't help but wonder if this is related to No Child Left Behind and Teaching to the Test sucking all of the fun out of reading.

There are problems there, though I don't believe they are insurmountable.

Regardless, whether I am worrying about the environment or politics or technology, the thing that keeps coming back is whether we care about and value people. 

We aren't going to be able to fix anything else if we don't fix that.  

Wednesday, June 11, 2025

Rejecting AI as much as possible

Going back to that original hope -- rejecting Artificial Intelligence as much as possible -- what does that mean?

I am becoming more aware of the difficulty of opting out.

I had stopped using Google -- and told them about it -- because of their acquiescence on renaming the Gulf of Mexico. I'd also noticed that their search engine was no longer as helpful, but I was thinking of that as part of a general downhill trend. 

https://sporkful.blogspot.com/2025/02/corporate-communications.html 

I have been using Bing. 

Bing is not that different from Google; the different ways of grouping results and such are all kind of following the same pattern. That pattern includes getting AI results at the top.

I felt very good about scrolling right past those and going on to actual articles and entities and web pages. 

That is reasonable for seeking better sources to avoid the replication of errors. It is good for valuing the creators of content... valuing humans.

Those things are important to me, but so is not killing the planet. If Copilot (Microsoft's AI tool) is running whether I am using the results or not, there is still damage being done.

I thought of this because I saw some complaints from people about not being able to opt out. That was a reasonable concern, but I wasn't sure if it was true.

That is a more complicated question.

I was able to find directions for turning Copilot off in Bing and in Windows, except that the Bing instructions didn't work. For Windows, if you don't have Professional, it involves a registry edit, which I have not completely ruled out, but it's a little intimidating.
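For anyone curious what that registry edit would even look like, here is a minimal sketch in Python using the standard winreg module. The key path and value name below are my assumptions based on the "Turn off Windows Copilot" policy setting, not something I can vouch for on every Windows build, so treat it as illustrative only and back up your registry before trying anything like it.

    import winreg  # standard library, Windows only

    # Assumed per-user policy location; verify before relying on it.
    KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

    def turn_off_copilot():
        # Create (or open) the policy key and set the DWORD value
        # that is supposed to disable Copilot for the current user.
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0,
                              winreg.REG_DWORD, 1)

    if __name__ == "__main__":
        turn_off_copilot()
        print("Policy value set; sign out and back in for it to take effect.")

If the key or value name turns out to be different on a given machine, the change would still have this same shape: one key, one value, written once.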

I did submit feedback on it.

I am also going to check out some other search engines. It's very possible that they are all that way, but it's at least worth looking.

It's not perfect. 

Still, there is so much that can be done.

One thing that has become clearer to me is that I need to make the rejections more explicit.

It is not just that I am not going to use AI applications to see what I would look like as an elf or as people in four different decades, or that I won't click on those romance novels with AI covers (I wasn't going to anyway), but that I will delete them from my feed.

When someone puts up a picture of Muppets or Simpsons characters or anyone else in front of something culturally relevant, I am deleting that from my feed.

If they are using AI, I am going to keep voting "No" and hope that more posts by actual humans show up in my feed.

If I can't be as thorough and effective as I would like, I will make up for it by being exceedingly stubborn. 

Tuesday, June 10, 2025

Playing nice when you have to

Now that I have spent six posts on how terrible AI is, there is something else to note:

Among the people that I love, there is one going back to school to study AI so he does not become obsolete, and another being advised that if you don't learn AI you will be replaced by someone who does. One is in the tech sector, but the other is in banking.

Even in my course of study, when we were learning about technology there were lots of good things said about AI.

I will note here that I have seen discussions about the difference between machine learning and generative AI. My criticisms have been primarily focused on generative AI. 

When we are looking at environmental damage, I believe we need to consider both. However, I am not trying to get you fired.

We live in a vastly imperfect society that just keeps getting worse. There may be times when it's right to martyr yourself, but often the right thing is to survive; that might require some compromises.

How do we ethically work in this world?

First of all, knowledge makes sense. It makes sense to know how others in your specific job and in your field might use artificial intelligence and what the perceived benefits are. 

I hope, though, that one result of these posts can be also being aware of the downsides and potential flaws in using AI.

It replicates errors. It perpetuates bias. It is killing the planet.

Maybe you can be the one who asks -- in a company that has at least pretended a commitment to the environment -- how to compensate for the extra energy use.

Maybe you can be the one who encourages scrupulous proofing and checking of everything that gets generated via AI.

Maybe you can raise the questions.

And maybe that will get you in trouble. There will be people who are so high on the rush of technology and job elimination that raising some of these questions can be dangerous. You will have to use your best judgment.

One thing I believe, though, is that it is better to know and understand more. 

Take information and do good things with it, as best as you can.

Related posts:  

https://sporkful.blogspot.com/2025/05/the-scuffle.html 

https://sporkful.blogspot.com/2025/05/for-arts-sake.html 

https://sporkful.blogspot.com/2025/05/garbage-in.html 

https://sporkful.blogspot.com/2025/06/ai-lies.html

https://sporkful.blogspot.com/2025/06/ais-human-cost.html 

https://sporkful.blogspot.com/2025/06/reasonable-questions.html 

Friday, June 06, 2025

In my garden: May's daily songs

I didn't have any clear ideas for songs for Asian-American Pacific Islander Heritage Month, though I did do daily articles for it.

I had been thinking about doing a month themed with songs about flowers and fruits and vegetables for a while, and that's what I decided on.

Does it reflect my gardening hopes well? Not particularly. There are things in there that I would not grow. A lot of them are wild or they tend to grow in different climates.

It was still fun looking, and I found new songs from familiar artists.

I will also add that you can easily do a month of just "rose" songs. My desire to not be too repetitive meant I only did three: "Monarchy of Roses", "Kiss From A Rose", and "Every Rose Has Its Thorn".

"Monarcy of Roses" was one of my favorite new ones, along with "(Nothing But) Flowers". 

There were still repeats. I have definitely used "Build Me Up Buttercup" and "Love Grows (Where My Rosemary Goes)" before. I think I have used "Green Onions" at least twice before, and I will surely use it again. I love those funky onions.

I hope to be planting soon. 

Think green thoughts!

Daily songs

5/1 “Waltz of the Flowers” by Tchaikovsky, performed by London Symphony Orchestra
5/2 “San Francisco (Be Sure to Wear Flowers in Your Hair)” by Scott McKenzie
5/3 “Where Have All the Flowers Gone” by The Kingston Trio
5/4 “Scarborough Fair” by Simon & Garfunkel
5/5 “Edelweiss” from The Sound of Music
5/6 “Forget Me Nots” by Patrice Rushen
5/7 “Wildflower” by Skylark
5/8 “Build Me Up Buttercup” by The Foundations 
5/9 “Love Grows (Where My Rosemary Goes)” by Edison Lighthouse
5/10 “Poison Ivy” by The Coasters
5/11 “Blueberry Hill” by Fats Domino
5/12 “Vegetables” by The Beach Boys
5/13 “Green Onions” by Booker T. and the MGs
5/14 “Listen to the Flower People” by Spinal Tap
5/15 “Tangerine” by Led Zeppelin
5/16 “Every Rose Has Its Thorn” by Poison
5/17 “Fading Like a Flower” by Roxette
5/18 “Kiss From A Rose” by Seal
5/19 “Peaches” by The Presidents of the United States of America
5/20 “Monarchy of Roses” by Red Hot Chili Peppers
5/21 “Lotus Flower” by Radiohead
5/22 “Pineapple Head” by Crowded House
5/23 “Oranges on Appletrees” by A-ha
5/24 “Sunflower” by Vampire Weekend
5/25 “Watermelon Man” by Herbie Hancock
5/26 “Bleeding the Orchid” by Smashing Pumpkins
5/27 “Amaryllis” by Shinedown
5/28 “Wildflowers” by Tom Petty and the Heartbreakers
5/29 “Tulips” by Bloc Party
5/30 “(Nothing But) Flowers” by Talking Heads
5/31 “The Garden Song” by John Denver

Thursday, June 05, 2025

Reasonable questions

I remember a time when the business world was looking for English majors. I also remember reading an argument once that there should only be essay tests for English majors, because writing ability would not necessarily be important for other applications.

I'm not saying that these mindsets were close together.

As it is, it is not uncommon that, regardless of how much you know about math and science, some of it may not be very useful without the ability to communicate it to others. 

I have been thinking about those things because of artificial intelligence, of course, where Grammarly and ChatGPT and automatic suggestions in word processing programs are all trying to guess and shape what you say.

However, I have also been thinking of it because of my schoolwork. One of the things I have studied has been Universal Design for Learning:

https://udlguidelines.cast.org/ 

One of its recommendations is to have multiple means of expression. If students have the option of reporting their research in not just a written essay, but perhaps in a slideshow or a video presentation, that may help more students to convey their learning. If what you want to know is that they understand the human digestive system -- not their ability to follow the standard five paragraph format -- then the essay may hold back some students who understand the digestive system really well.

That doesn't mean that things like vocabulary and expository ability aren't important, but maybe they don't need to come up every single time in every single class. There has to be some kind of balance.

Personally, I find the word suggestions annoying. If I don't know what I want to say, the program is unlikely to guess correctly for me. I don't mind the automatic spelling check. Typos happen.  

Expressing my thoughts and spelling are also both things that come easily to me, which I know affects my thinking on the issue.

I have found some of my school assignments very difficult to get started on. Help might be more desirable there, except that in the struggle I do learn more about it. 

I am in school for the purpose of learning. 

There are people who don't feel that way. Schools put measures in place to try and prevent cheating and encourage original work, but sometimes it is hard to feel confident.

One concern I have is whether we are getting a populace that won't value or desire expertise. There are some signs.

Educators can and are working on better defining what the learning goals are, how to effectively accomplish them, and how to assess whether they have been successful. That will help, but if too many people don't care, then what?

We have to decide on values and then stick to them. There is room for disagreement.

There is one area where I kind of feel ridiculous, but I am adhering to it anyway.

Since getting on Facebook, I have been very conscientious about wishing people a happy birthday; the reminder is right there, and if I am seeing it we have agreed to friendship, at least in the social media sense.

Some time ago, Facebook started automatically populating the birthday wish, giving a few additional options in case you didn't like the main one. There are always little emojis too.

I am erasing that every time and doing my own birthday wish. 

It is a less grammatically correct one, because Facebook always puts the comma before the name. I know that's correct, but it doesn't feel natural to me so I had not been doing it. (That's assuming I use the name, because if you are the age of my parents or I used to call you a nickname and now you are going by your full name... there are some neuroses at play, I know.)

Spending that extra time so that your birthday wish is less fancy is part of me being me. I will continue to do so. Even if I accidentally hit "Enter" I will go to your page and edit it. That's the kind of weirdo I am.

That is one way I stay human. 

Wednesday, June 04, 2025

AI's human cost

I have referred multiple times to this trend where we don't value people, but without talking about what valuing people means.

I obviously mean valuing individuals and their welfare, but some of these stories have been making me think of the value of humanity collectively, even with (or especially with) all of our flaws.

I had to search a bit for two articles because they irritated me so much that I didn't save a link.

https://www.vox.com/future-perfect/384517/shannon-vallor-data-ai-philosophy-ethics-technology-edinburgh-future-perfect-50 

This one was less frustrating than the other. Shannon Vallor discusses transhumanism and the tendency to elevate technology, like maybe AI can come up with something more moral than us, while disputing those hopes.

I sympathize with frustration with human choices. I also know that the human flaws get replicated by artificial intelligence. That replication may not bring along sympathy and sentiment, areas in which humans still frequently come through (though perhaps less so with the humans having the largest influence on technology).

I couldn't find the article I absolutely hated, but another writer's reaction is here:

https://siobhanbrier.com/932/review-of-confessions-of-a-viral-ai-writer/ 

The original piece was about Vauhini Vara using ChatGPT to write about her sister's death. This included ChatGPT telling her a memory of something that never happened, but that Vara wished had happened.

I have not read the original piece, but in the linked article Siobhan Brier has; she found herself skipping the ChatGPT parts, though Vara expressed her preference for those.

I see some sense in that. Brier was looking for the human and did not find it in ChatGPT. Vara felt like she was finding something better than human, perhaps, but I think there were two important factors with that.

Obviously Vara was already more aware of her own words and feelings and was looking for something new. In addition, it was very clear that she had not worked out her feelings about her sister's death; the reason she used ChatGPT was that she could not write about it. In that way, perhaps it functioned as a type of therapy, helping her to get unstuck.  

It is not unheard of for therapy to go badly because the therapist has an idea in their head -- whether from their training or their own experience -- where they are not helping you in the way you need.

Their training could still help them realize when that is happening.

I know we are in an imperfect world, but I can't help but think that Vara might have done better talking to a friend or someone in a support group or a family member, or just writing on her own, taking it down the paths that she needed to follow. It might not have produced something ready for publication, but is something where readers keep wanting to skip the ChatGPT parts really "ready" for publication?

There can be struggles in getting through writing on your own, I know, but there is strength to be found in the struggling that I don't know that AI can provide.

Then, for those who are struggling with human relationships (possibly needing some maturity and development), is customizing a companion the best option there? Will they do better in a world where they can -- instead of learning about respect and mutual regard with living beings -- go for the "ultimate personalized girlfriend experience"?

I will not link to that, but here's the story of a guy who created his own AI board members, immediately hit on one of them, then had her tell him it was okay:

https://futurism.com/investor-ai-employee-sexually-harasses-it 

With kindness and grace for each other, we can be beautiful in our imperfections, and create beauty.

That's what I hope to see.

That is going to require something more genuine than AI can provide. 

But for more signs of bad ideas and opportunities for abuse:

https://www.nbcnews.com/tech/tech-news/ai-candidate-running-parliament-uk-says-ai-can-humanize-politics-rcna156991 

AI Steve did lose the election.

https://theconversation.com/ai-scam-calls-imitating-familiar-voices-are-a-growing-problem-heres-how-they-work-208221 

Tuesday, June 03, 2025

AI lies

“The result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed.” -- Hannah Arendt

Sadly, I am not sure that the constant falsehoods regurgitated by artificial intelligence are even deliberate. I think a lot of them are just the normal failures of technology, exacerbated by the landscape in which it came to be. 

The damage is the same, though, and it doesn't have to be this way.

Let's look at some examples.

https://arstechnica.com/tech-policy/2025/05/judge-initially-fooled-by-fake-ai-citations-nearly-put-them-in-a-ruling/

A lawyer used AI to generate a legal brief for a case. Nine of the twenty-seven citations had errors, including two that simply didn't exist.

The judge found it pretty convincing, but still did his own research and discovered the... well, fraud implies a level of intent that I don't think was there. I suspect the reason for the use of AI was simply laziness, but it's still not a good justification.

Estimates are that chatbots hallucinate as often as 27% of the time, with factual errors in up to 46% of what they generate.

https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) 

It does make sense that laziness would result in shoddy work.

Of course, looking up enough legal cases to come up with 27 citations for a single brief does sound tedious, but might this happen in other areas too?

Why, yes.

https://www.sciencebase.com/science-blog/vegetative-electron-microscopy.html 

This article may give us the source for one of those hallucinations. An old paper had the words "vegetative" (in reference to cells) and "electron microscopy" in parallel columns and they were put together. 

"Vegetative electron microscopy" is not a thing, but now it is getting cited a lot. 

I really liked this quote (in the current article about AI, not the old one about vegetative cells):

... in a world where scientific endeavour is being derailed by moronic politicians and their henchmen, we need a stronger science base, not one polluted with such nonsense as vegetative electron microscopy. It leads to distrust in scientists and in science, it gives those who peddle pseudoscience, disinformation, misinformation, and fake, greater leverage to shake off the facts and replace them with ill-informed, politically-driven opinion. 

We need human understanding and diligent minds. 

This technology is not going to solve climate change. Even if you can use AI to run simulations and save time that way, you need a coherent mind with innovative thoughts setting it up.

If the past few years have shown us anything, it's that some people will bite at any false information that supports what they want to believe. This is a trend that doesn't need any help, but it's getting it.

https://www.axios.com/2025/05/23/google-ai-videos-veo-3 

"Google's new AI video tool floods internet with real-looking clips"

All for fun, right? 

One more thing:

https://www.cbsnews.com/news/sextortion-generative-ai-scam-elijah-heacock-take-it-down-act 

They didn't even have real images of 16-year-old Elijah Heacock. He still took the threats seriously, and he still killed himself.

One interesting thing in this article is that it mentions legislation supported by Trump to reduce sextortion. What about that provision in the "big beautiful bill" prohibiting regulation on AI for ten years? 

Again, that is a vector from which I do not expect coherent thought. But here, among us, we can think about this and we need to think about it.