Multiplexer
@Multiplexer@discuss.tchncs.de
- Comment on TSMC is Set To Raise Prices of Cutting-Edge Chips By Up To 10%, As It Tries to Maintain Profit Margins With 'Hefty' US Tariffs 4 weeks ago:
10% higher prices for the US, right?
…RIGHT??
- Comment on Chrome increases its overwhelming market share, now over 70% 4 weeks ago:
Google <-> Browser
[Add “They are the same” Meme here]
- Comment on Looks like nuclear fusion is picking up steam 5 weeks ago:
Well, since almost all our industrial-scale electrical energy sources boil down (this pun definitely intended) to rapidly heating up huge amounts of water, the pun seems obvious for a nuclear fusion power plant. But maybe it isn’t and it is just a coincidence. Who knows…
- Comment on [deleted] 5 weeks ago:
You are probably quite right, which is a good thing, but the authors take that into account themselves:
“Our team’s median timelines range from 2028 to 2032. AI progress may slow down in the 2030s if we don’t have AGI by then.”
They are citing an essay on this topic, which elaborates on the things you just mentioned:
lesswrong.com/…/slowdown-after-2028-compute-rlvr-…
I will open a champagne bottle if there is no breakthrough in the next few years, because then the pace will slow down significantly.
But still not stop and that is the thing.
I myself might not be around any more if AGI arrives in 2077 instead of 2027, but my children will be, so I am taking the possibility seriously.
And pre-2030 is also not completely out of the question. Everyone has been quite surprised at how well LLMs work.
There might be similar surprises in store for the other missing components, like world models and continuous learning, which is a somewhat scary prospect.
And alignment is already a major concern even now; let’s just say “Mecha-Hitler”, crazy fake videos and bot armies pushing someone questionable’s agenda…
So it seems like a good idea to press for control and regulation, even if the more extreme scenarios are likely to happen decades into the future, if at all…
- Comment on [deleted] 5 weeks ago:
I think the point is not that it is really going to happen at that pace, but to show that it very well might happen within our lifetime. Also, the authors have adjusted the earliest plausible point of a hard-to-stop runaway scenario to 2028, afaik.
Kind of like the atomic doomsday clock, which has been oscillating between a quarter to twelve and a minute before twelve over the last decades, depending on active nukes and current politics. It helps to illustrate an abstract but nonetheless real risk with maximum possible impact (annihilation of mankind; not fond of the idea…)
Even if it looks like AI has hit some walls for now (which I am glad about) and is overhyped, it might not stay that way. So although AGI seems unlikely at the moment, taking the possibility into account, and perhaps slowing down to make sure we are not recklessly risking our own destruction, is still a good idea, which is exactly the authors’ point.
Kind of like how scanning the sky with telescopes and doing DART-style asteroid research missions is still a good idea, even though the probability of an extinction-level meteorite event is low.