ChatGPT is only two months old, but we've spent the time since it debuted debating how powerful it really is — and how we should regulate it.
The artificial intelligence chatbot is being used by a significant number of people to help them with research, message people on dating apps, write code, brainstorm ideas for work, and more.
Just because it can be helpful doesn't mean it can't also be harmful: Students can use it to write essays for them, and bad actors can use it to create malware. Even without malicious intent from users, it can generate misleading information, reflect biases, generate offensive content, store sensitive information, and — some people fear — degrade everyone's critical thinking skills due to over-reliance. Then there's the ever-present (if a bit unfounded) fear that RoBoTs ArE tAkInG oVeR.
And ChatGPT can do all of that without much — if any — oversight from the U.S. government.
It's not that ChatGPT, or AI chatbots in general, are inherently bad, Nathan E. Sanders, a data scientist affiliated with the Berkman Klein Center at Harvard University, told Mashable. "In the democracy space, there are a lot of great, supportive applications for them that would help our society," Sanders said. It isn't that AI or ChatGPT shouldn't be used, but that we need to ensure it's being used responsibly. "Ideally, we want to be protecting vulnerable communities. We want to be protecting the interests of minority groups in that process so that the richest, most powerful interests are not the ones who dominate."
Regulating something like ChatGPT is important because this kind of AI can show indifference toward individual rights like privacy, and can bolster systemic biases with regard to race, gender, ethnicity, age, and more. We also don't know yet where risk and liability may reside when using the tool.
"We can harness and regulate AI to create a more utopian society or risk having an unchecked, unregulated AI push us toward a more dystopian future," Democratic California Rep. Ted Lieu wrote in a New York Timesop-ed last week. He also introduced a resolution to Congress written entirely by ChatGPT that directs the House of Representatives to support regulating AI. He used the prompt: "You are Congressman Ted Lieu. Write a comprehensive congressional resolution generally expressing support for Congress to focus on AI."
All of this adds up to a pretty unclear future for regulations on AI chatbots like ChatGPT. Some places are already placing regulations on the tool. Massachusetts State Sen. Barry Finegold penned a bill that would require companies that use AI chatbots, like ChatGPT, to conduct risk assessments, implement security measures, and disclose to the government how their algorithms work. The bill would also require these tools to put a watermark on their work in order to prevent plagiarism.
"This is such a powerful tool that there have to be regulations," Finegold told Axios.
There are already some regulations on AI in general. The White House released an "AI Bill of Rights" that essentially shows how protections that are already law — like civil rights, civil liberties, and privacy — apply to AI. The EEOC is taking on AI-based hiring tools over the potential that they could discriminate against protected classes. Illinois requires that employers who rely on AI during the hiring process allow the government to check whether the tool has a racial bias. Many states, including Vermont, Alabama, and Illinois, have commissions that work to ensure AI is being used ethically. Colorado passed a bill that prohibits insurers from using AI that collects data that unfairly discriminates based on protected classes. And, of course, the EU is already ahead of the U.S. on AI regulation: It passed the Artificial Intelligence Regulation Act last December. None of these regulations are specific to ChatGPT or other AI chatbots.
While there are some state-level regulations on AI, there isn't anything specific to chatbots like ChatGPT, either at the state or national level. The National Institute of Standards and Technology, part of the Department of Commerce, released an AI framework that's supposed to give companies guidance on using, designing, or deploying AI systems, but it's just that: a voluntary framework. There is no punishment for not sticking to it. Looking forward, the Federal Trade Commission appears to be creating new rules for companies that develop and deploy AI systems.
"Will the federal government somehow issue regulations or pass laws to oversee this stuff? I think that is highly, highly, highly unlikely," Dan Schwartz, an intellectual property partner with Nixon Peabody, told Mashable. "It is not likely you will see any federal regulation happening soon." In 2023, Schwartz predicts that the government will be looking into regulating the ownership of what ChatGPT produces. If you ask the tool to create code for you, for instance, do you own that code, or does OpenAI?
Regulation in the academic space, by contrast, is likely to be private regulation. Noam Chomsky likens ChatGPT's contributions to education to "high-tech plagiarism," and when you plagiarize in school, you risk getting kicked out. That is how private regulation might work here, too.
We may run into a pretty big problem while attempting to regulate ChatGPT on the national level: AI systems can combat the very legislative regulatory system that would put them in check.
Sanders, the data scientist, explained in a piece for the New York Times that artificial intelligence like ChatGPT is "replacing humans in the democratic processes — not through voting, but through lobbying." That's because ChatGPT could automatically write comments and submit them in regulatory processes, write letters to submit to local newspapers, comment on news articles, and post millions of social media messages every day.
Sanders explains to Mashable a concept called "the Red Queen's Race," in which someone — originally Lewis Carroll's Alice — exerts extreme effort only to make no forward progress. If you give an AI both defensive and offensive capabilities, according to Sanders, you might get locked into a back-and-forth similar to a Red Queen's Race, and it could escalate out of control.
Sanders told Mashable the U.S. could potentially run into a problem: AI lobbyists trying to control the very legislation that is attempting to govern them. "It seems to me that's likely to be a losing battle for the human legislators," he said.
"My observation would be that the serious legislation that's been successfully passed for regulating machine learning in general has been painfully slow and insufficient to keep track with the progress in the field," Sanders said. "And I think it's easy to imagine that continuing into the future."
We have to be careful with how we regulate this, Sanders says, because we don't want to stifle innovation. You could, say, put in more roadblocks for people to submit comments to their legislators, like more captchas. But that could risk making it too difficult for regular people to engage in the democratic system.
"What I think is the most useful response is to try and encourage more democratic participation, try and encourage more humans to participate in the legislative process," Sanders said. "As AI presents challenges for scale and ubiquity, getting more people involved in the process and creating structures that allow legislators to hear from and be more responsive to real people, is a valid solution for combating that kind of threat of scale."
ChatGPT is in its infancy, and there are already plenty of ethical issues to take into account with its use. Yet, it isn't impossible to imagine a future in which sophisticated AI chatbots make our lives easier and our work better without risking the spread of misinformation and the downfall of the democratic system. It just might take a while for our government to put any meaningful regulation into action. After all, we've seen this play out before.
Topics: Artificial Intelligence, ChatGPT, OpenAI