👩🏻‍💻 Science / Technology
OpenAI worries people may become emotionally reliant on its new ChatGPT voice mode
New York (CNN) — OpenAI is worried that people might start to rely on ChatGPT too much for companionship, potentially leading to “dependence,” because of its new human-sounding voice mode.
That revelation came in a report Thursday from OpenAI on the safety review it conducted of the tool — which began rolling out to paid users last week — and the large language AI model it runs on.
ChatGPT’s advanced voice mode sounds remarkably lifelike. It responds in real time, can adjust to being interrupted, and makes the kinds of noises humans make during conversation, like laughing or “hmms.” It can also judge a speaker’s emotional state based on their tone of voice.
Within minutes of OpenAI announcing the feature at an event earlier this year, it was being compared to the AI digital assistant in the 2013 film “Her,” with whom the protagonist falls in love, only to be left heartbroken when the AI admits “she” also has relationships with hundreds of other users.
Now, OpenAI is apparently concerned that this fictional story is a little too close to becoming reality, after it says it observed users talking to ChatGPT’s voice mode in language “expressing shared bonds” with the tool.
Eventually, “users might form social relationships with the AI, reducing their need for human interaction — potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report states. It adds that hearing information from a bot that sounds like a human could lead users to trust the tool more than they should, given AI’s propensity to get things wrong.
The report underscores a big-picture risk surrounding artificial intelligence: tech companies are racing to quickly roll out to the public AI tools that they say could upend the way we live, work, socialize and find information. But they’re doing so before anyone really understands what those implications are. As with many tech advancements, companies often have one idea of how their tools can and should be used, but users come up with a whole host of other potential applications, often with unintended consequences.
Some people are already forming what they describe as romantic relationships with AI chatbots, prompting concern from relationship experts.
“It’s a lot of responsibility on companies to really navigate this in an ethical and responsible way, and it’s all in an experimentation phase right now,” Liesel Sharabi, an Arizona State University professor who studies technology and human communication, told CNN in an interview in June. “I do worry about people who are forming really deep connections with a technology that might not exist in the long run and that is constantly evolving.”
OpenAI said that human users’ interactions with ChatGPT’s voice mode could also, over time, influence what’s considered normal in social interactions.
“Our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions,” the company said in the report.
For now, OpenAI says it’s committed to building AI “safely,” and plans to continue studying the potential for “emotional reliance” by users on its tools.
Vocabulary >
revelation: a (surprising) fact made known; disclosure
Within minutes of: shortly after ~
apparently: as it appears (whatever the reality); seemingly
propensity: tendency
underscores: emphasizes, highlights
roll out: to launch, release
upend: to overturn
implications: effects, consequences
As with: just as with; likewise
deferential: polite, yielding
committed to: devoted to ~
Discussion >
OpenAI has expressed concerns that 'the new human-sounding voice mode' of ChatGPT might lead users to develop unhealthy dependencies on the AI for companionship, potentially diminishing their need for human interaction. Pros of this feature include offering a valuable resource for individuals experiencing loneliness and providing highly engaging, interactive conversations. However, cons include the risk of users forming excessive emotional bonds with the AI, which could negatively affect their real-world relationships and social skills. Considering the potential for users to form deep connections with AI, how should technology companies approach the design and regulation of AI tools to prevent possible negative impacts on users' mental health and social interactions?
https://cnn.ybmnet.co.kr/cnn_discussion/3215