This posting is inspired from my last 2 two-months of interaction with AI technology. I have used AI to help me finalized a short novel, based on a draft of a long novel, I have written in draft several decades ago. I have used AI to analyze several paintings I owned as well as a painting of a neighbor here at THD. I have used AI to draw images, tables, charts, travel plans, and answer medical questions. I have used AI to trace some of my ancestral roots in the Philippines. I have used AI to analyze art works and a Collage/Decoupage- a gift on my 90th birthday.
This posting is from Washington Post, that I read recently and very delighted to share it with you.
The most common way people experience artificial intelligence is through chatbots, which work like an advanced form of instant messenger, answering questions and formulating tasks from prompts.
These bots are trained on troves of internet data, including Reddit conversations and digital books. Chatbots are incredibly adept at finding patterns and imitating speech, but they don’t interpret meanings, experts say. “It’s a super, super high-fidelity version of autocomplete,” Birnbaum said of the LLMs that power the chatbots.
Since it debuted in November, ChatGPT has stunned users with its ability to produce fluid language — generate complete novels, computer code, TV episodes and songs. GPT stands for “generative pre-trained transformer.” “Generative,” meaning that it uses AI to create things. “Pre-trained,” means that it has already been trained on a large amount of data. And “transformer” is a powerful type of neural network that can process language.
Created by the San Francisco start-up OpenAI, ChatGPT has led to a rush of companies releasing their own chatbots. Microsoft’s chatbot, Bing, uses the same underlying technology as ChatGPT. And Google released a chatbot, Bard, based on the company’s LaMDA model.
Some people think chatbots will alter how people find and consume data and information on the internet. Instead of entering a term into a search engine, like Google, and sifting through various links, people may end up asking a chatbot a question and getting a confident answer back. (Though sometimes these answers are false —)
Taming AI: Deepfakes, hallucination and misinformation
The boom in generative artificial intelligence brings exciting possibilities — but also concerns that the cutting-edge technology might cause harm. Chatbots can sometimes make up sources or confidently spread misinformation. In one instance, ChatGPT invented a sexual harassment scandal against a college law professor. It can also churn out conspiracy theories and racist answers.
Sometimes it expresses biases in its work: In one experiment, robots identified Black men when asked to find a “criminal” and marked all “homemakers” as women.
AI ethicists and researchers have long been concerned that, because chatbots draw on massive amounts of human speech — using data from Twitter to Wikipedia — they absorb our problems and biases. Companies have tried to put semantic guardrails in place to limit what chatbots can say, but that doesn’t always work.
Sometimes artificial intelligence produces information that sounds plausible but is irrelevant, nonsensical or entirely false. These odd detours are called hallucinations. Other people have become so immersed in chatbots they falsely believe the software is sentient, meaning it can think, feel, and act outside of human control. Experts say it can’t — at least not yet — but it can speak in a fluid way so that it mimics something alive. Google shared AI knowledge with the world — until ChatGPT caught up
Another worry is deepfakes, which are synthetically generated photos, audio or video that are fake but look real. The same technology that can produce awesome images could be deputized to fake wars, make celebrities say things they didn’t actually say or cause mass confusion or harm.
Companies test their artificial intelligence models for vulnerabilities, rooting out biases and weaknesses by simulating flaws in a process called red teaming.
Despite attempts to tame the technology, the innovation and sophistication of generative AI causes some to worry.
“When things talk to us like humans, we pick up a little suspension of disbelief,” said Mark Riedl, professor of computing at Georgia Tech and an expert on machine learning. “We kind of assume that these things are trying to be faithful to us, and when they come across as authoritative, we can find it hard to be skeptical.”
No comments:
Post a Comment