This week, OpenAI's co-head of the "superalignment" team (which oversees the company's safety efforts), Jan Leike, resigned. In a thread on X (formerly Twitter), the safety leader explained why he left OpenAI, including that he had disagreed with the company's leadership about its "core priorities" for "quite some time," so long that it reached a "breaking point."
The next day, OpenAI's CEO Sam Altman and president and co-founder Greg Brockman responded to Leike's claims that the company isn't focusing on safety.
Among other points, Leike had said that OpenAI's "safety culture and processes have taken a backseat to shiny products" in recent years, and that his team struggled to obtain the resources to get their safety work done.
"We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]," Leike wrote. "We must prioritize preparing for them as best we can."
Altman first responded on Friday by reposting Leike's thread, writing that Leike is right that OpenAI has "a lot more to do" and that it's "committed to doing it." He promised a longer post was coming.
On Saturday, Brockman posted a shared response from both himself and Altman on X.
After expressing gratitude for Leike's work, Brockman and Altman said they've received questions following the resignation. They shared three points, the first being that OpenAI has raised awareness about AGI "so that the world can better prepare for it."
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," they wrote.
The second point is that they're building foundations for the safe deployment of these technologies, citing the work employees have done to "bring [Chat]GPT-4 to the world in a safe way" as an example. The two claim that since then — GPT-4 was released in March 2023 — the company has "continuously improved model behavior and abuse monitoring in response to lessons learned from deployment."
The third point? "The future is going to be harder than the past," they wrote. OpenAI needs to keep elevating its safety work as it releases new models, Brockman and Altman explained, citing the company's Preparedness Framework as a way to help do that. According to its page on OpenAI's site, the framework anticipates "catastrophic risks" that could arise and seeks to mitigate them.
Brockman and Altman then go on to discuss a future in which OpenAI's models are more integrated into the world and more people interact with them. They see this as beneficial and believe it's possible to do safely — "but it's going to take an enormous amount of foundational work." Because of this, the company may delay release timelines so models "reach [its] safety bar."
"We know we can't imagine every possible future scenario," they said. "So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."
The leaders said OpenAI will keep researching and working with governments and stakeholders on safety.
"There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward," they concluded. "We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions."
Leike's resignation and his words carry extra weight given that OpenAI's chief scientist, Ilya Sutskever, resigned this week as well. "#WhatDidIlyaSee" became a trending topic on X, fueling speculation over what top leaders at OpenAI are privy to. Judging by the negative reaction to today's statement from Brockman and Altman, it didn't dispel any of that speculation.
As of now, the company is charging ahead with its next release: GPT-4o, a model with voice-assistant capabilities.