Sam Altman (@sama)
OpenAI가 @dylanscand을 Head of Preparedness로 영입했다는 발표입니다. 곧 매우 강력한 모델들과 함께 속도가 빠르게 변할 것이고, 이에 맞춘 대비와 안전장치 마련이 필요하다는 취지의 조직적 준비성 강화 소식입니다.
Sam Altman (@sama)
OpenAI가 @dylanscand을 Head of Preparedness로 영입했다는 발표입니다. 곧 매우 강력한 모델들과 함께 속도가 빠르게 변할 것이고, 이에 맞춘 대비와 안전장치 마련이 필요하다는 취지의 조직적 준비성 강화 소식입니다.
Sam Altman (@sama)
OpenAI가 @dylanscand을 Head of Preparedness로 영입했다는 발표입니다. 곧 매우 강력한 모델들과 함께 속도가 빠르게 변할 것이고, 이에 맞춘 대비와 안전장치 마련이 필요하다는 취지의 조직적 준비성 강화 소식입니다.
#ChatGPT is a more successful penny-farthing; a video essay:
https://youtu.be/4TsOD7x6rSQ
#ChatGPT #AI #llm #GenerativeAI #AIChatbot #AIEthics #AISafety #OpenAI #Anthropic #SCOT #sociology #Bijker #BigTech #Constructionism #socialconstruct #VideoEssay
Simon Willison (@simonw)
OpenClaw과 관련된 보안 문제는 악성 콘텐츠 노출과 도구 실행 능력을 결합한 다른 LLM 시스템들과 동일하다고 지적합니다. 프롬프트 인젝션과 이른바 'lethal trifecta' 같은 공격 위험이 있으며, 툴 실행 기능을 가진 모델 전반에서 유사한 보안·안전 리스크가 존재한다는 내용입니다.
Simon Willison (@simonw)
OpenClaw과 관련된 보안 문제는 악성 콘텐츠 노출과 도구 실행 능력을 결합한 다른 LLM 시스템들과 동일하다고 지적합니다. 프롬프트 인젝션과 이른바 'lethal trifecta' 같은 공격 위험이 있으며, 툴 실행 기능을 가진 모델 전반에서 유사한 보안·안전 리스크가 존재한다는 내용입니다.
"Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves.
In total, Margolis and Thacker discovered that the data Bondu left unprotected—accessible to anyone who logged in to the company's public-facing web console with their Google username—included children's names, birth dates, family member names, “objectives” for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff.
“It felt pretty intrusive and really weird to know these things," Thacker says of the children's private chats and documented preferences that he saw. “Being able to see all these conversations was a massive violation of children's privacy.""
#AI #GenerativeAI #AISafety #CyberSecurity #Bondu #AIToy #Privacy #DataProtection
"Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves.
In total, Margolis and Thacker discovered that the data Bondu left unprotected—accessible to anyone who logged in to the company's public-facing web console with their Google username—included children's names, birth dates, family member names, “objectives” for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation. Bondu confirmed in conversations with the researchers that more than 50,000 chat transcripts were accessible through the exposed web portal, essentially all conversations the toys had engaged in other than those that had been manually deleted by parents or staff.
“It felt pretty intrusive and really weird to know these things," Thacker says of the children's private chats and documented preferences that he saw. “Being able to see all these conversations was a massive violation of children's privacy.""
#AI #GenerativeAI #AISafety #CyberSecurity #Bondu #AIToy #Privacy #DataProtection