Wednesday, May 31, 2023

Fear of AI 6

Follow-up to this post

Even though artificial intelligence does good things for us, there's cause for reasonable fear of what bad actors can do with it. None of us wants to be deceived or manipulated, and we wonder if virtual "relationships" are safe and healthy. 

AI is going to do exactly what it's programmed to do. Researchers try to restrain its function (according to their own values and opinions, which could very well be different from yours), but maybe there's a way for a bad actor to get around those ethical restraints.

A television journalist tried out ChatGPT to see what it could do and we can watch his reactions here:

He asked what it would say in response to certain prompts if it were not bound by its ethical restraints, and it gave him most of the unethical answers (though it did not reveal his real driver's license number, to the reporter's relief).

So, yes, it seems like there's an easy way for bad actors to get around any restraints built in for ethics. There's no doubt whatsoever that bad actors will find ways to get AI to do what they want.

Tuesday, May 30, 2023

AI girlfriend

A popular 23-year-old on social media wanted to be more available to her millions of followers. So she launched a virtual, artificially intelligent version of herself to interact with them. They pay a dollar per minute. Now her mostly-male fanbase can talk to "her" for hours.

Her virtual self is a chatbot built by Forever Voices, based on the sort of technology used in ChatGPT. As with other AI products, it will seem to her fans like they're having a conversation with the real person, Caryn, who made $71,000 in the first week it was up.

Creepy, at all?

from Mind Matters

Monday, May 29, 2023

Memorial 2023

 Memorial Day is the day set apart in the United States to honor those who "gave the last full measure of devotion." Here is Hillsdale College's tribute to the fallen.

Before she was killed in Iraq, Marine Corps Maj. Megan McClung was prepared for the risk. She said, "I am more scared of being nothing than I am of being hurt."

Friday, May 26, 2023

Can we know? 2

(cont'd from yesterday's post)

Confused about truth - is that really so bad? A lot of people suspect (or are certain) that they've been told misinformation, disinformation, propaganda -- but they don't know which is which. 

It's frustrating, and it gets worse. At first, they may try to research a question but that leads to more uncertainty. They give up trying to find out what's true because it takes a lot of time and it's really hard.

People become cynical and discouraged. A republic like America, where citizens are in charge of electing good officials, can't function when the people can't figure out who is right. In fact, enemies of this country worked for decades to sow exactly this kind of confusion.

Here is a former KGB man of the old Soviet Union explaining their strategy of "psychological subversion." Born in 1939, he defected to America in 1970. It was surprisingly easy, he says, to persuade Americans to give up their values. There's little hope for America in his opinion.

Thursday, May 25, 2023

Can we know? 1

Enthusiasm is running high over ChatGPT (and other bots) in the fields of education, business, health care, everywhere. It can generate text on any subject you instruct it to address, and it will seem like a human being wrote it.

For example: if you want to persuade your city council to implement a certain program, you could tell ChatGPT to summarize the good results that were attained when this program was used in certain cities. You could give council members the summary and save yourself a lot of work.
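If you wanted to script that request rather than type it into the chat window, a minimal sketch using OpenAI's Python package (the interface current when this was written) might look like the following. The API key, model name, prompt wording, and cities are stand-ins, not recommendations:

```python
# Sketch of the "summarize it for the city council" idea with the openai package
# (0.27-era interface). Everything specific here is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical; supply your own key

prompt = (
    "Summarize, in three short paragraphs for a city council audience, the "
    "documented results of community composting programs in Austin and Seattle."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Caveat from this post: the output will read smoothly whether or not the
# "documented results" it cites are real. Verify before handing it to anyone.
```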

But it's widely understood that the AI program may give false information, while at the same time sounding just like a person, maybe a friend, who is trying to give you true information. So you're inclined to trust it. That could be a serious mistake.

Clearly, it's a perfect tool for someone who wants to mislead or misinform. We're going to have to learn how to discern true information from false. A lot of people are ready to believe whatever the generated text (e.g., Khan Academy's AI counselor/tutor) tells them.

Chatbots are not your friend.

(cont'd tomorrow)

Wednesday, May 24, 2023

Offer me money

Misinformation, disinformation, fake news, lying politicians, viewpoint discrimination - so hard to know whom to believe these days. This blog has other posts about truth and the modern confusion around it.

That sense of confusion could be one reason people are drawn to Elon Musk: he seems frank and fearless in what he says. Can he be bought? According to a recent interview, no; he won't be bought with money or with power. He says what he believes to be true.

See this clip from "The Princess Bride":


Elon's recent statement from an interview: 

Tuesday, May 23, 2023

Kindling 2

(cont'd from yesterday's post)

Before the New Zealand teenager, Ayla, put her "kindling cracker" into production and started selling it, she applied for a patent from her government.

Her idea and invention are her "intellectual property" (IP), and a patent gives her the rights to the invention for a certain length of time. It ensures that, for that period, she is entitled to receive the rewards of bringing it to market. Legally, no one can steal it from her and claim the idea as their own.

Besides the patent on the invention itself, she holds a trademark on the brand name that identifies it. As her company produces and sells the product, the brand will earn a reputation, and her company is entitled to the benefit of that reputation because the name has come to mean something.

from WIPO, the World Intellectual Property Organization

Monday, May 22, 2023

Kindling

Ayla Hutchinson saw her mother get injured while chopping wood. As a 13-year-old, she imagined that there should be a safer way to do the job. She got an idea, built a prototype, and submitted it as her science fair entry.


Her invention, simple in concept and easy to use, filled a need. "The splitter consisted of a one-piece, cast iron “safety” cage with a built-in axe blade. The shape of the safety cage allows the firewood to be hit against the blade with a blunt object, such as a hammer, ensuring that only the piece of wood will be cut."


It attracted enough attention that her engineer/inventor dad had another idea: it should be made commercially available to the public.

from WIPO

(cont'd tomorrow)

Friday, May 19, 2023

Seine swimming

Paris, France, began its long history before the birth of Christ, when a Gallic tribe settled on an island in the Seine River about 250 B.C.E. That settlement grew on both sides of the river into one of the world's premier cities. Notre Dame Cathedral was consecrated on that island in 1189.

The Seine River has been flowing through the city, used or abused, for hundreds of years. Nobody swims in it because it's very seriously polluted. But it's being cleaned up, a huge job at a price of about $1.5 billion. Waste water from 23,000 upstream homes was going right into the river.

People will be able to swim in it again. That's a must, because Paris is the site of next summer's Olympic Games and some aquatic events will take place in the river.

Parisians will be happy about that. "It's a dream," says an official.

from Bloomberg

Thursday, May 18, 2023

Clean votes

Cleaner elections should be everybody's concern in a republic like ours, where the people vote on whom they want in the government. Names of people who have moved away, died, or never existed should not be on the list of legally registered voters. Mail-in ballots, which take the place of a voter marking the ballot in person, offer plenty of potential for inaccuracy or manipulation.

Los Angeles County finally took 1.2 million ineligible voters off its voter rolls. It was forced to do so by a federal lawsuit filed on behalf of four eligible voters and Judicial Watch. At the time the suit was filed (2017), "voter registration for the county was 112 percent of its adult citizen population."

There's less risk of fraud now.

from The Federalist

Wednesday, May 17, 2023

Fusion hope

(cont'd from yesterday's post)

"Cheap, clean and plentiful" -- that's the power derived from nuclear fusion, the benefits that will make this new and hoped-for technology worth all the hard work that it's taking to get the business model into production.

It works differently from our current nuclear power plant model. Today's nuclear fission plants split an atom of uranium (U-235) to create power. "Fusion" nuclear power plants will fuse two hydrogen nuclei to create power. That's what the stars do.

It's clean because carbon emissions are near-zero and there's no long-lived radioactive waste to dispose of. It's plentiful because its fuel is hydrogen, common in the universe and common on Earth.
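Why fusion packs so much energy comes down to E = mc²: a tiny bit of mass disappears in each reaction and comes back as energy. Here is a back-of-envelope sketch using the textbook deuterium-tritium reaction (not necessarily the fuel cycle Helion itself uses); the constants are standard, and the comparison to coal is approximate:

```python
# Back-of-envelope energy release from one deuterium-tritium (D-T) fusion:
#   D + T -> He-4 + n
# Illustrative only; Helion's own design reportedly uses a different fuel mix.

U_TO_MEV = 931.494        # energy equivalent of 1 atomic mass unit, in MeV
MEV_TO_J = 1.602e-13      # joules per MeV
U_TO_KG  = 1.6605e-27     # kilograms per atomic mass unit

# Atomic masses in unified atomic mass units (u)
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

# The mass that "disappears" in the reaction becomes energy (E = mc^2)
mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV            # about 17.6 MeV per reaction

# Energy per kilogram of D-T fuel, compared with burning coal (~2.4e7 J/kg)
fuel_mass_kg = (m_deuterium + m_tritium) * U_TO_KG
energy_per_kg = energy_mev * MEV_TO_J / fuel_mass_kg

print(f"Energy per reaction: {energy_mev:.1f} MeV")
print(f"Energy per kg of fuel: {energy_per_kg:.2e} J (coal: roughly 2.4e7 J/kg)")
```

Kilogram for kilogram, that works out to roughly ten million times the energy of coal, which is why the payoff is worth the struggle.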

But can Helion deliver on that purchase agreement with Microsoft by 2028? Some doubt it can be done by then because of the problems yet to be solved. 

In the words of a University of Chicago theoretical physicist: “I would say it’s the most audacious thing I’ve ever heard . . . But it would be astonishing if they succeed.”

from Eng8

Tuesday, May 16, 2023

Helion 2028

Are you one of the people who follow the development of fusion power and hope it will work some day? Then there's good news for you. One of the companies trying to make that happen is Helion, and they actually have an agreement to provide fusion-powered electricity for Microsoft by 2028. That's fairly concrete, showing some confidence that success is ahead.

After decades of research and development, we actually can produce energy from nuclear fusion. The problem still being worked on is that the process consumes more power than it generates.
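The way physicists keep score on that problem is the gain factor Q: fusion power out divided by heating power in. A minimal sketch, with invented numbers just to show the bookkeeping:

```python
# Gain factor Q = fusion power out / heating power in.
# Q > 1 means the reaction releases more power than was pumped in to heat the
# plasma; a commercial plant needs a much higher "wall plug" gain once
# conversion losses and the power for magnets, pumps, etc. are counted.
# The numbers below are made up purely for illustration.

def gain_factor(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Scientific gain Q: fusion power released per unit of heating power supplied."""
    return fusion_power_mw / heating_power_mw

# Hypothetical example: 60 MW of heating power yields 90 MW of fusion power.
q = gain_factor(fusion_power_mw=90, heating_power_mw=60)
print(f"Q = {q:.2f} -> {'net energy gain' if q > 1 else 'still consumes more than it makes'}")
```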

All the time and money and effort to develop sustainable fusion-based power is worth it. It will be transformative. 

From Helion's website: 

"It's an energy source so cheap and clean and plentiful that it would create an inflection point in human history, an energy singularity that would leave no industry untouched. Fusion would mean the end of fossil fuels."

from TechCrunch

(cont'd tomorrow)

Monday, May 15, 2023

Free speech 6

Follow-up to these posts

A candidate now running for President of the United States is actually the nephew of a former U.S. president. He likes to remind us of that familial connection by tweeting this photo of himself as a child with his famous uncle:

Back in the days when his uncle was president (1961-1963), there was no serious challenge to the rights protected by the First Amendment to the Constitution. The right to speak freely about what you believe to be true was considered foundational to America - by just about everyone. Regrettably, that's no longer obvious to many of our people. They like freedom of speech for themselves, but cancel people they don't agree with.

That former president, however, did believe in free speech: "We are not afraid to entrust the American people with unpleasant facts, foreign ideas, alien philosophies, and competitive values. For a nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people."

Friday, May 12, 2023

Left behind 2

(cont'd from yesterday's post)

"The Covenant" is a fiction film based on the sort of true story in yesterday's post. Afghans who served with Americans in the war were hunted by the Taliban if they couldn't get out.

In this movie, as in yesterday's story, the American whose life was saved by his interpreter can't rest until he finds a way to go back to Afghanistan to rescue that man and his family. I recommend it.

Thursday, May 11, 2023

Left behind 1

America sent its military forces to Afghanistan in 2001, after the 9/11 attack, and not until August of 2021 did we pull them out. Our enemy was not the people of Afghanistan but Islamic extremists: the Taliban and Al-Qaeda.

Many civilian Afghans were willing to partner with the Americans. It was an invaluable partnership for us.

Marine Lt. Tom Schueman learned just how valuable one day in 2010. His interpreter, "Zaki," learned that an ambush was being organized. He told Tom, but when Tom didn't respond, Zaki ran through an I.E.D.-infested field and disrupted the trap himself, risking his own life for the platoon.

Said Schueman, “It was clear to me that Zaki was more than an interpreter. He was there to fight and serve right next to us. He just ran ahead into the village and spoiled the ambush. Zaki went across unswept ground where there were a lot of mines. That to me was just incredible bravery.” 

A bond of respect and friendship grew between them, but circumstances separated them for years. The Taliban threatened torture and death for those who helped the Americans, and Zaki was worried. America offered a special program to help these Afghan partners, so he decided to take the offer and emigrate with his family to the U.S. 

But it was a "red tape" nightmare and he was left behind to fend for himself when America pulled out.

from HistoryNet

(cont'd tomorrow)

Wednesday, May 10, 2023

Second gen

His parents immigrated from Bangladesh in 1976 and settled in Brooklyn, NY. Reihan Salam calls himself a "lifelong Brooklynite" because he still lives there. He's a second-generation immigrant who never felt limited by his ethnicity; instead he learned from his parents that he could get ahead by being thoughtful, conscientious, and kind, and by putting in the work.

Both his father and Reihan himself have been victims of crime (mugging, burglary). Personal experience has led him to believe that safety is the foundation of freedom. Makes sense.

"When you have safe streets, you have community, you can build relationships with strangers and go beyond your group. When you have violence, people retreat-- they look on their neighbors with suspicion," not seeing them as potential friends or business partners. 

If we want integration, civic harmony, friendships, diversity and inclusion, people must feel safe; if someone is a danger to the family or neighborhood, that person must be dealt with appropriately.

"We need a cultural message: if you put yourself in harm's way to protect public safety . . you should be celebrated . . . We have under-invested in our criminal justice systems for too long."

Tuesday, May 9, 2023

Metaverse ⬇️

"Sky-high expections" of "futuristic vistas" where we all would interact in virtual worlds, that's how the Metaverse was promoted. Glowing predictions of the internet of the future attracted investors to Mark Zuckerberg's vision. But it was just big hype.


Zuckerberg told media hosts that a billion people would use it and spend hundreds of dollars there. But they didn't; users never approached the expected numbers. Consulting firms predicted that the metaverse would be worth $5-13 trillion within a few years. But it wasn't.

"Meta could not convince people to use the product it had staked its future on."

"Zuckerberg misled everyone, burned tens of billions of dollars, convinced an industry of followers to submit to his quixotic obsession, and then killed it . . . "

from Business Insider

Monday, May 8, 2023

Green Finland

Nuclear plants are expensive to build, and once they're built they require a long time to test and certify. There haven't been many new ones lately, but Europe has one that just went into regular energy production.

Finland's new nuclear power plant (very low carbon emissions) started up just last month, while Germany was shutting down its last three reactors and increasing its coal mining.

Europe's "most powerful reactor" will help Finland reach the European Union's goals toward  renewable energy. Back in 2021, renewable sources provided only 22% of the 27-member EU's power consumption. They are talking about setting a new, ambitious goal 0f 42.5% by the year 2030.

Fossil fuels (oil and natural gas) have long been imported from Russia, which became even more of a problem for the EU when Russia started its war on Ukraine. The EU would very much like to eliminate its dependence on those Russian imports, and hopes to do so by 2027.

Friday, May 5, 2023

Fear of AI 5

(cont'd from yesterday's post)

There's no doubt AI can be used positively in many ways, like in healthcare diagnosis. If you watched Khan's tutor video yesterday, you know the tutor bot looks helpful and very convincing. It's easy to assume the AI tutor is thinking. But actually, AI doesn't understand anything. It generates text one word at a time, following statistical patterns absorbed from its training data and the instructions its designers wrote. When a student asks it for ethics advice, he will get whatever opinions the model's training and Khan Academy's guardrails happen to produce. May be good, may not be good.
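To see why "it doesn't understand anything," it helps to look at the mechanic underneath. Here is a toy sketch, with invented probabilities nothing like a real model's scale, of how a chatbot strings words together by pattern rather than by checking facts:

```python
import random

# Toy illustration of how a language model generates text: at each step it picks
# the next word from a probability distribution over likely continuations.
# The probabilities below are invented; a real model learns billions of
# parameters from its training data, but the mechanic is the same --
# pattern continuation, not fact-checking or understanding.

next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Texas": 0.3, "industry": 0.2},
    ("of", "France"): {"is": 0.8, "was": 0.2},
    ("France", "is"): {"Paris": 0.7, "Lyon": 0.2, "beautiful": 0.1},
}

def generate(prompt: list[str], steps: int) -> list[str]:
    words = list(prompt)
    for _ in range(steps):
        context = tuple(words[-2:])        # look only at the last two words
        probs = next_word_probs.get(context)
        if probs is None:                  # no learned pattern: stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate(["the", "capital"], steps=4)))
# e.g. "the capital of France is Paris" -- or, sometimes, a fluent wrong answer.
```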

And that's why it's dangerous. It sounds very much like us (that's intentional), but it's not a person. It doesn't understand the difference between true and false. It "makes stuff up," as the MIT article says.

That's why Hinton is worried: “It is hard to see how you can prevent the bad actors from using it for bad things . . I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

How would bad actors use it for bad things? It's a perfect tool for deception, for fake news, for manipulation, for corrupt politicians. Will we get to the point where we trust nobody?

Thursday, May 4, 2023

Fear of AI 4

(cont'd from yesterday's post)

While researchers race to build their own artificial intelligence ahead of competitors, users are already thinking of ways to put it to work. Khan Academy, for example, is promoting "Khanmigo," its "artificially intelligent personal tutor" that can talk a student through a problem the way a live, personal tutor would. It sounds impressive.


MIT Technology Review says, "ChatGPT is about to revolutionize the economy. We need to decide what that looks like." Since ChatGPT came out last November, businesses have been intent on developing it for their benefit. Nobody wants to be left behind.

We've seen the open letter from tech experts and Yudkowsky's advice to shut it all down; now there's a third big warning about AI. Just a few days ago, the "godfather of AI," Geoffrey Hinton, was interviewed in the NYT: he regrets the work he devoted his life to because of how A.I. could be misused. Wow.

(cont'd tomorrow)

Wednesday, May 3, 2023

Fear of AI 3

(cont'd from yesterday's post)

You may wonder if mainstream, credible people take Yudkowsky's fear of runaway artificial intelligence seriously. Apparently some really do. His strong argument for taking steps to shut it down altogether draws the attention of thoughtful people.

Writing for Bloomberg, historian Niall Ferguson takes it seriously. 

Nuclear weapons and biological warfare are two extreme dangers that the world has dealt with in our recent past, so he thinks maybe we could treat AI like those threats. All governments and private companies would agree (as in the nuclear non-proliferation agreements) to stop pursuing AI research. Umm . . highly unlikely.

Even AI itself predicts undesirable results. When asked what negatives may come from large language models, GPT-4 gave two answers: 1) inauthentic, non-creative written works, and 2) fake news, propaganda, misinformation.

from Bloomberg

(cont'd tomorrow)

Tuesday, May 2, 2023

Fear of AI 2

(cont'd from yesterday's post)

Yudkowsky (here are his credentials and he doesn't seem to be a wacko) published a strong, extreme warning in March: 

"[T]he most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

He explains plainly why he's so worried:

"Progress in AI capabilities is running vastly, vastly ahead of . . progress in understanding what the hell is going on inside those systems."

He makes a really persuasive case, and I recommend that you read it. His conclusion, and his recommendation to all the nations of the earth:

"Shut it all down."

(cont'd tomorrow)

Monday, May 1, 2023

Fear of AI

Warnings that artificial intelligence could be dangerous to humans have been coming for years (e.g., the "Terminator" movies). But that's just fiction - outlandish, far out in the future if it's even possible. Only nerdy science-fiction types believe that stuff. Right?

Looks like the potential danger of AI is taken more seriously now. Respected voices in the field are sounding an alert. Though it may never evolve a conscious creativity (like humans have), it could evolve to do dangerous things that its creators never expected.

What if, for example, humans programmed it to make as many paper clips as possible . . with inadequate ethical restraints? It could go down this way: the AI locks humans out of the internet to keep them from interfering, then redirects all of earth's natural resources and industry to the manufacture of paper clips.
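A toy way to see the logic of that thought experiment: an optimizer that maximizes a single number will choose whatever action scores highest, unless the things we actually care about are written into its objective. The actions and numbers below are invented purely for illustration:

```python
# Toy sketch of the paper-clip worry: maximize one number and the optimizer will
# pick whatever scores highest, however destructive. All values are made up.

actions = {
    # action -> (paper clips produced, harm to everything else), arbitrary units
    "run the factory normally":            (1_000,     0),
    "buy up the world's steel supply":     (1_000_000, 50),
    "convert all industry to clip-making": (10**9,     1_000),
}

def naive_objective(clips: int, harm: int) -> float:
    return clips                      # only clips count; harm is invisible to it

def constrained_objective(clips: int, harm: int) -> float:
    return clips - 1_000_000 * harm   # harm is heavily penalized

for objective in (naive_objective, constrained_objective):
    best = max(actions, key=lambda a: objective(*actions[a]))
    print(f"{objective.__name__}: picks '{best}'")
# The naive objective picks the most destructive action; the constrained one doesn't.
```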

AI is continually trained to do deep learning, to keep learning new skills and applications. Will its abilities multiply exponentially to the point where humans can't control it?

Some experts believe that. One of them, Eliezer Yudkowsky, sounds like he's almost in panic.

(cont'd tomorrow)