ChatGPT's security risks come to light: user privacy may be exposed
What not to share with ChatGPT
Sometimes you have to keep things to yourself
(Image credit: South House Studio via Shutterstock)
However you may feel about ChatGPT, there’s no denying that the chatbot is here to stay. We’ve had this incredibly powerful tool thrust upon us, and the question we must ask ourselves has changed from ‘what can you do with ChatGPT?’ to ‘what should you do with it?’
Most people have only a vague awareness of the possible dangers that come with using chatbots like ChatGPT, and of the data or privacy breaches users are susceptible to. In all honesty, ChatGPT could become a security nightmare, and we've already seen a few small-scale examples of this in the short time since the product was made public.
ChatGPT experienced an outage earlier in the year which left paid subscribers and free users feeling lost in conversations and unable to log into or use the bot. That was soon followed by a post from OpenAI where we learned that a bug had allowed users to see chat titles from other users’ histories.
What are the risks, and are they worth it?
While that was a little unnerving – and quickly fixed when ChatGPT came back – OpenAI also admitted that the same bug “may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window”.
This is only a small example of the kinds of data-security threats we could be facing, but the point still stands that ChatGPT's incredible capabilities raise an essential question: at what point are you oversharing with the AI?
OpenAI’s CEO Sam Altman has acknowledged the risks of relying on ChatGPT and warns that “it’s a mistake to be relying on it for anything important right now”.
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness.
You should be approaching ChatGPT in the same way you would other platforms like Facebook or Twitter. If you wouldn’t want the general public to read what you have to say or what you’re feeding into ChatGPT, then don’t surrender that information to the bot – or any chatbot, for that matter.
The friendly and innocent demeanor of chatbots like Google Bard and Bing AI may be appealing, but don’t be fooled! Unless you specifically opt out, all your information is being used to train the chatbot or being looked at by other humans working at OpenAI, so keep that in mind next time you start chatting.
Note: This article is adapted from the TechRadar website and is shared for learning purposes only. If there is any infringement, please contact the editor to request removal.