lotide

As ChatGPT Linked to Mental Health Breakdowns, Mattel Announces Plans to Incorporate It Into Children's Toys

84 likes

Submitted 1 day ago by cm0002@lemmy.world to technology@lemmy.zip

https://futurism.com/mattel-announces-openai


Comments

  • AbsolutelyNotAVelociraptor@sh.itjust.works 1 day ago

    Whatever could go wrong?

    A kitchen toy that tells the kid to prepare a pizza using glue mixed with the mozzarella?

    • lvxferre@mander.xyz 1 day ago

      Have you been eating enough rocks? Geologists recommend one per day, you know~

  • Grimy@lemmy.world 1 day ago

    Grown, educated adults sometimes anthropomorphize LLMs; I doubt even a fifth of children will be able to understand that their talking Barbie isn’t an actual living entity.

    • d00ery@lemmy.world 1 day ago

      And children get very attached to toys … it seems like a powerful combination. I’m sure the companies won’t exploit it 😂🤣🥲

      • Suoko@feddit.it 1 day ago

        It will be used with educational goals only, to get polite, smart kids able to do math, foreign languages, any musical instrument they like, and so on.

        Sure. I’m 100% sure it’ll be like that. 100%!

    • the_q@lemmy.zip 1 day ago

      Wait till you hear about God!

  • ogmios@sh.itjust.works 1 day ago

    I was certain that headline was from The Onion.

    • cyrano@lemmy.dbzer0.com 1 day ago

      Me too… me too.

  • BubblyRomeo@lemmy.world 1 day ago

    This should be on nottheonion!!

    • cm0002@lemmy.world 1 day ago

      Good idea! Lmao

  • hperrin@lemmy.ca 1 day ago

    Imagine not only does your search engine tell you to eat rocks and glue, but your Barbie does too!

  • irotsoma@lemmy.blahaj.zone 1 day ago

    It could totally be used effectively if and only if they do the work to train the LLM on only very specific content. But since they think the LLM shouldn’t require people to train it, and seem to believe that more content is better no matter what, this will never happen.

    But of course the other issue is that either way, the LLM will still be biased by the content it was trained on. If it’s trained with religious content included, or some other set of content that some group considers “wholesome” or “kid friendly”, it might still end up saying some pretty messed up stuff. If religious texts are in the mix, for example, it might tell very young girls that they are property owned by men (their fathers or husbands) and must give their bodies freely to them. Maybe not directly, but it would be implied in much of the advice it gives, since that’s a deep-seated belief in most current monotheistic religions and implied in many of their texts, even if it’s no longer openly practiced or legal in mainstream Western societies.
