
Tuesday, August 26, 2025

AI & Nuclear 2

(cont'd from yesterday's post)

"Staggering" amounts of energy are needed for the coming build-out of artificial intelligence. Here's a comparison from the CNBC video below to illustrate it: just one ChatGPT query takes about 10x the power required for an ordinary Google search.

Even when new power plants come online, whether publicly owned or built privately by the tech companies, the electricity grid itself is a concern: the average transformer is 38 years old.

Even water becomes a challenge. It's often used to cool down the servers so they can operate 24/7. One alternative is to use big air conditioning units instead of water.

It all adds up to huge consumption of resources.

Monday, August 25, 2025

AI & nuclear 1

Remember the "Metaverse," that virtual reality envisioned by Mark Zuckerberg? After billions of dollars of investment, the losses were enormous. The Metaverse virtual reality platforms still exist, but they are no longer the company's primary focus.

Instead, Meta has turned to artificial intelligence. Their new superintelligence research lab will be led by the founder of Scale AI (they paid $14.3 billion for a 49% stake in his company).

As MIT commented, building out AI will require "staggering" amounts of energy. So Meta is actively looking for nuclear developers with whom they can partner (read: invest $) to produce new power. That could include either big conventional plants or the emerging SMRs (small modular reactors).

But of course fusion (rather than today's fission) would be the breakthrough solution if the theory ever becomes practical. Meta has an agreement with Helion, as does Microsoft, for the delivery of fusion-produced nuclear power by 2028.

Can Helion do it? It remains to be seen. We're hopeful. They've started on Microsoft's nuclear fusion plant.

Friday, August 22, 2025

Nuclear now 3

(cont'd from yesterday's post)

And why is nuclear power a concern for AI builders? Because artificial intelligence is being developed fast, and it's going to require much more power very soon. 

Last February the U.S. president and OpenAI announced the "Stargate" initiative, a collaboration between private companies and the government to develop artificial intelligence and the data centers needed to run it.

Running AI is energy-intensive. MIT says, "The energy resources required to power this artificial intelligence revolution are staggering . . ."

What sort of plants will supply the power? It's less likely to be coal, windmills, or solar. Even relatively clean natural gas, being a fossil fuel, is considered somewhat "dirty." Indeed, "Meta and Microsoft are working to fire up new nuclear power plants."

More nuclear looks to be a common, clean choice (image).


from Technology Review

Thursday, August 7, 2025

Grok talks

Is faith in the Christian God the most reasonable conclusion you could come to? 

Since artificial intelligence is seemingly going beyond mere human intelligence, this Christian believer had the idea of submitting that question to AI. 

He specifically told Grok 4 to consider only "strict logic, mathematical probability and observational science" (not religious faith or ideology). Based on scientific data, he asked it to answer some factual questions.

Eventually Grok 4 concluded that . . it is.


(cont'd tomorrow)

Thursday, July 24, 2025

Trades shortage

Last week the Pennsylvania Energy and Innovation Summit brought together leaders in AI, energy, and government.

Mike Rowe (of the "Dirty Jobs" show), a well-known advocate for the skilled trades, was on one of the panels and had plenty to say. Apparently he predicted the trades shortage 17 years ago, and it's really serious now. Yesterday's post showed that it has made affordable housing harder to find and more expensive.

His points: 

  • Taking shop class out of high school was a shortsighted, "harebrained" idea
  • "We've been telling kids for 15 years to learn to code. Well, AI is coming for the coders.  It's not coming for the welders or the steamfitters . ."
  • "We need 500,000 electricians in the next couple years. Not hyperbole!"
  • Companies building nuclear subs for the Navy are short 80k to 90k skilled workers now
  • His foundation awards trade school scholarships

"America is lending money [for college] it doesn't have -- to kids who can't pay it back -- to train them for jobs which no longer exist. That's nuts."

Friday, July 18, 2025

Brain on AI - 2

(cont'd from yesterday's post)

Those teachers saw it happening, and it makes sense. Now we know it's happening and even have an early name for it: cognitive debt, the decline in brain function that shows up as "diminished critical inquiry, increased vulnerability to manipulation," and "decreased creativity."

What happens when a significant number of people "farm out their own thinking to machines?"

This could be another deep divide between groups of people, just when we really need to unite in virtue and elevated purpose. It could wind up that one group will think through issues and make decisions; the other will follow whatever looks easy (image), happy to leave the thinking to others and to AI. 


 

No one will be forced into one group or the other. Every person will choose . . while they're young and in school. Will they choose the hard work of learning and understanding and skill-building? Some won't. Some will choose to let AI do it. 

 from "Your Brain on ChatGPT" 

Thursday, July 17, 2025

Brain on AI

Yup, many students and employees choose to let AI do their work. You can see the obvious temptation.

A study has now been done on the results of this trend. The Massachusetts Institute of Technology (MIT) compared three groups of participants writing an essay: one using only their own brains, one using a search engine, and one using AI to do it.

Comparisons were made of brain activity with predictable results: "the cohort using AI showed large comparative decreases in neural brainwave activity, linguistic and thought development, and overall learning skills."

Historian and professor Niall Ferguson says: "the shortcuts that ChatGPT offers are a disaster for your neural development! You'll just be dumber."

from The Stream

(cont'd tomorrow)

Monday, June 23, 2025

Hopeful AI 3

Are you using artificial intelligence yet at work, at school, or for personal questions? If not, it's a good idea to get familiar with it. As you have probably heard, it's only going to grow and have huge impact.

Here's a suggestion for you. Get started. Grok, as well as other AIs, can be used for free (so far, anyway). I asked it this question: "Should I remove a second stem which has started to grow on my 6-foot spruce tree?" It answered thoroughly:

"For a young 6-foot spruce tree with an additional stem or trunk (often called a double leader or co-dominant stem), it’s generally recommended to remove one of the stems to promote the tree’s long-term health and structural stability. Here’s why and how to approach it:

"Why Remove One Stem?
1. Structural Integrity: Two competing stems can create a weak crotch (the point where they join), increasing the risk of splitting under wind, snow, or ice, especially as the tree grows taller.
2. Aesthetic Shape: Spruce trees typically have a single, straight leader for their characteristic conical shape. A double leader can lead to an unbalanced or irregular form.
3. Resource Competition: Two stems compete for nutrients, water, and sunlight, potentially weakening the tree’s overall growth."

There was much more, including detail about the method. The answer was practical and helpful. This is the most basic use of AI, and probably reliable. 

Wednesday, June 11, 2025

AI in school 3

(cont'd from yesterday's post)

Studying and learning are hard work. What if you, as a student, could delegate writing assignments to artificial intelligence? That would mean less work and more free time, and it's super tempting. So students do it all the time . . even in writing classes.

A 20-year veteran of teaching gave up the profession because of that fact of life. Many students will do whatever they can to spare themselves the discomfort of difficult work, and thinking things through, then communicating it in your own carefully constructed writing, is difficult.

She was trying to teach writing at the college level, which includes reading students' work and returning feedback. Sadly, she discovered that most of her instructive feedback was wasted, because it was directed at the AI that had "written" the assignment. So she quit.

If a student doesn't discipline himself to learn how to think and write, he's going to be at a disadvantage for the rest of his life compared with people who did learn it.

How is our education system going to handle this?

from Time

Tuesday, June 10, 2025

AI in school 2

(cont'd from yesterday's post)

Hannah would assign her students to write a paragraph, or five sentences, or an essay. Many were not able to do it, or complained that it was too hard, or questioned the value of doing the work . . and of course, many completed assignments were written by ChatGPT or another large language model (there is technology to detect that).

Some students saw no value in learning how to write a resume or a cover letter if they could simply make AI do it.

Are these kids learning how to write (photo)? Will their future employers be surprised by the low level of skill these students show when they need to think and write on the job?

The problem is not only that turning in a writing assignment the student did not do is dishonest. It is dishonest, but there's more. Writing well requires a degree of clear thinking, often the making of a reasoned argument. If this student ever has to recommend a decision to other people, at work or in the community, she will have to think through the options and make a case for the best one. She needs to build that communication skill.

 

Monday, June 9, 2025

AI in school

Teachers have been fleeing the profession in higher numbers for a while now, frequently citing poor behavior. Since ChatGPT came out in the fall of 2022, its use, along with the widespread use of "large language models" (LLMs) like it, has made teaching even harder, in the opinion of some.

Here's one of them, a high school teacher for three years. She thinks technology should be kept from students until college.

Tuesday, May 20, 2025

Hopeful AI 2

(cont'd from yesterday's post)

So this author says AI will soon be smarter than we humans are. That is still debated among the experts--but if it's true, then he's right that we will certainly have to lean on and develop the best parts of our human nature.

Smartness has never been the only good thing about humans. It's good to know information and understand it, but for people created in the image of God, there's more.

  • Having true friends and being one, having genuine relationships with other people, will never really be replaced by AI (though it can be faked).
  • Creative thinking will never be replaced by AI, creative in the sense of creating new organizations, businesses, families, ideas (though some white-collar jobs will be replaced).
  • Taking responsibility to build your own character virtues will never be replaced. Your experience of life will still be directly affected by your honesty, generosity, discipline, compassion, etc.

Materialists who believe nothing exists but physical reality may have to re-think their assumptions. Transcendent qualities like love and honor arise from outside the material world. 

These things arise from the immaterial spirit of a person (see Friday's post).

 

Monday, May 19, 2025

Hopeful AI 1

So many experts in technology sound alarm bells about the danger of artificial intelligence (AI) to the future of humanity. Without a doubt, there are dangerous possibilities.

Other opinions, though, are out there. Naturally, CEOs of companies leading the way in AI are optimistic and enthusiastic. That would include Sam Altman and Larry Ellison.

Altman's OpenAI has a vision for a beneficial role of AI in our future. They see AI elevating all humanity, functioning for our benefit. An AI user and an AI developer say, "This technology can usher in an age of flourishing the likes of which we have never seen."

 

But they also tell us to prepare for some disorientation, because "AI will change what it is to be human." They are certain that AI will exceed human intelligence by 2030 . . a mere five years away. That means we won't have intelligence supremacy anymore.

What should we do to prepare? More of what we are best at doing. We have value way beyond mere intelligence (see tomorrow's post).

 from The Free Press

(My image is Grok-generated)

(cont'd tomorrow)

Friday, February 14, 2025

Stargate

(cont'd from yesterday's post)

Excitement in the tech industry over the new administration may be due to a huge wave of investment and development coming for AI (artificial intelligence). The president announced it from the Oval Office in the company of some tech titans.

AI infrastructure (hardware and software) will require a colossal buildout of data centers . . which in turn will require the energy industry to grow in order to supply the electricity. The president appointed David Sacks as his new AI and Crypto Czar (photo).

 

OpenAI will take operational responsibility, collaborating with Oracle and Nvidia. SoftBank will take financial responsibility. Stargate is the new company's name, and here is OpenAI's statement:

"All of us look forward to continuing to build and develop AI—and in particular AGI—for the benefit of all of humanity. We believe that this new step is critical on the path, and will enable creative people to figure out how to use AI to elevate humanity."

Friday, August 2, 2024

Meta settles

“Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted,” according to the office of the attorney general of Texas. 

It's against Texas law to capture biometric identifiers without the person's consent. The attorney general filed suit in February of 2022, and it's just been settled. 

Meta is going to pay $1.4 billion to the state over five years, a deal negotiated through the lawyers nearly two months ago. Attorney General Paxton calls it a "victory for Texan privacy rights."

Google's owner, Alphabet, is being sued by Texas as well for "illegally collecting biometric data from millions of Texans."

Tuesday, May 28, 2024

Deepfake 3

(cont'd from yesterday's post)

Video technology that fakes a real human being exists now and will only improve its ability to deceive, as per yesterday's post. Today's post was going to be about the ways an oppressive government could employ it, but that is darker material than I wish to place on this blog. If you want to know how it could and probably will be used for propaganda, fear, and tyranny, you can find out here.

Check out this re-post from 2019:

Bias framing

One way to influence other people's opinions is to withhold information that would counter your argument. If the picture you paint for others is framed to your advantage, they can be fooled.

Everyone does it - that is, everyone frames the argument the way they see it.

But if you are the audience, make sure you ask some questions. Is anything important being left out of the way they frame it?

Monday, May 27, 2024

Deepfake 2

Follow up to this post

It's already hard to know whether reported news gives us an accurate picture of reality. Certain media are reliable, we think, but we wonder if we're right about that. 

We know media does follow government direction to censor dissent (example: release of the Twitter files) and does feed us propaganda mixed with factual information. So, what is accurate and what is not?

"When state propaganda is easily generated, millions will fully embrace their own hopeless inability to know the truth." It's easy to be cynical and give up trying to know. At least one millennial in my family did give up. 

Unfortunately, advancing video generation technology is almost certain to amplify the power of fake news to deceive us and control us. "AI human impersonation video technology" will get scary.

from Mind Matters

(cont'd tomorrow)

Wednesday, March 6, 2024

Never happened 3

(cont'd from yesterday's post)

Google co-founder Sergey Brin said (he's still on Google's board but no longer running it): “We definitely messed up on the image generation . . . it definitely, for good reasons, upset a lot of people.”

He added: “We haven’t fully understood why it leans left in many cases and that’s not our intention. . . weird things that are out there that definitely feel far left."

Yes, it sure does feel far left. He claims that he doesn't understand why. Maybe he's just blinded by his ideology. Because it's totally clear to the rest of us--Gemini presents left-wing ideas because its creators/engineers are left-wing. 

Racial diversity is a filter built into its image generation. When users ask for an image of a historical figure who was actually white, that fact is treated as less important than the woke preferences of the Gemini designers. So, for instance, it presents non-white Nazi soldiers, which we all know never happened.

The personal views of those designers take priority over factual, accurate truth. Clearly, truth-telling is not the highest priority. So we wonder: what else that Gemini shows us is not really true?

That's it. That's why people are upset. 

Tuesday, March 5, 2024

Never happened 2

(cont'd from yesterday's post)

Sundar Pichai is the CEO of Google and also of its parent company, Alphabet. In 2022 he was paid $226 million. Google is said to be worth $1.7 trillion. It employs 150,000 people, including about 30,000 engineers. A giant.

You're familiar with its search engine, its email (Gmail), and of course YouTube. Gemini was supposed to be its splashy entrant into the generative artificial intelligence market, an answer to ChatGPT by this technology giant.

Instead, Gemini seems to have cost the company billions since its recent launch. Google stopped it from generating images on February 22. As of last Thursday, Alphabet's market cap (number of shares x its share price) had tumbled by $96 billion.

from Yahoo Finance

(cont'd tomorrow)

Monday, March 4, 2024

Never happened 1

Artificial intelligence has advanced greatly since OpenAI's ChatGPT, which was itself a breakthrough in 2022.

No business wants to be left in the dust, so staff work diligently to find every possible way to use the new technology. More generative large language models (LLMs like ChatGPT) are being developed by tech companies. Gemini by Google is one of them.

Shortly after its launch, Gemini triggered alarm. When asked by users to generate pictures of historical figures, it showed them . . but not as they were. People of minority races were substituted for white figures, dressed in the style of their century.

People are learning, if they didn't know it before, that AI does not necessarily report true and factual information. 

Gemini, like all other artificial intelligence, only follows the instructions (algorithms) of its creators. It does not "think things through" or look for truth. "AI can only learn from what it’s been given" by its developers--who have their own biases.

from "Google Gemini Presents a Pattern That Never Happened"

(cont'd tomorrow)