Free_Opinions
@Free_Opinions@feddit.uk
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Yeah, I agree with all of this. What I’m pushing back against is the absolute, dismissive tone some people take whenever the potential dangers of AGI are brought up. Once someone is at least willing to accept the likely reality that we’ll have AGI at some point, then we can move on to debating the timescale.
If an asteroid impact were predicted 100 years from now, at what point should we start taking steps to prevent it? Framing it this way makes it feel more urgent—at least to me.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
You can’t know that.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Sure, but that’s still just a speedbump. In a few hundred or thousand years, civilization would rebound and we’d continue from where we left off.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
This doesn’t just apply to AGI; the same could be said about any technology. If it can be created and there’s value in creating it, then it’s just a matter of time until someone invents it, unless we go extinct before that.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Firstly, I’ve been talking about improvements in AI technology broadly, not any specific subfield. Secondly, you can’t know that. While I doubt LLMs will directly lead to AGI, I wouldn’t claim this with absolute certainty - there’s always a chance they do, or at the very least, that they help us discover what the next step should be.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will lead to us getting there eventually.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just knowing what doesn’t work is a step in the right direction.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Incremental improvements by definition mean that you’re moving towards something. It might take a long time, but my comment made no claims about the timescale. There are only two plausible scenarios I can think of in which we don’t reach AGI, and they’re mentioned in my comment.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
The difference here is that you’re never going to reach New Zealand that way, but incremental improvements in AI will eventually get you to AGI*
*Unless intelligence is substrate dependent and cannot be replicated in silico, or we destroy ourselves before we get there
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Would we know it if we saw it?
That seems beside the point when the question is about whether we’re getting closer to it or not.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
But measured objectively, no? Is there no progress happening at all, or are we moving backwards? Because it’s either one of those two, or we’re moving towards it.
- Comment on Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI 3 weeks ago:
Are we not heading towards AGI then?
- Comment on BMW’s new iDrive turns the whole windshield into a heads-up display 1 month ago:
Make cars dumb again
- Comment on Are you better value for money than AI? 1 month ago:
LLMs are not synonymous with AI. We have no clue what this technology will be capable of in a few years.
- Comment on Are you better value for money than AI? 1 month ago:
It’ll take a while before AI can fix your toilet, paint a wall or build a shed. Being replaced with AI isn’t something I need to worry about.
- Comment on Study reveals AI chatbots can detect race, but racial bias reduces response empathy 2 months ago:
It’s physically impossible for an LLM to hold prejudice.
- Comment on Tesla facing another Autopilot fatality lawsuit 2 months ago:
Have you considered that it might just be trendy to dunk on Tesla? These issues are in no way unique to their vehicles, and the over-representation in the news skews our perception of how dangerous they actually are.
There are 40k traffic fatalities in the U.S. alone every year. That’s 110 per day. Nobody gives a shit when a Kia Ceed plows into pedestrians, but when it’s a Tesla, it’s instant clicks.