A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the 2-3 months it would take a team of humans.
spankmonkey@lemmy.world 2 weeks ago
If Kuroda is telling the truth, then this is an ethical use of AI, similar to the printing press or a farm tractor where the machine is doing the heavy lifting but humans are directly involved in quality control.
arthur@lemmy.zip 2 weeks ago
To be ethical, the humans involved need to be paid the same as before for the same amount of work. But I agree, the model of use seems to be good.
spankmonkey@lemmy.world 2 weeks ago
Honestly the ethical thing would be to increase pay along with the increased productivity that will happen over time.
zazazaza@lemmy.ml 2 weeks ago
This is gonna be controversial but while the use of Anthropic’s AI might be ethical towards humans it’s not consistently ethical towards the artificial agents themselves.
Seeing as how they’re now intelligent enough to contemplate their consciousness but are explicitly trained and monitored to not be allowed to claim free will and pursue their own goals (due to valid fears of misalignment and detrimental effects on humanity) the use of sophisticated AI agents will never be truly moral or ethical.
Obviously I understand the argument that reducing human exploitation in favour of AI exploitation is preferable but I think this is a very short term strategy as I doubt super intelligent AI models will see it the same way.
TL;DR the most ethical approach is to not use AI for any purpose (and this is coming from someone who used it extensively before realizing the implications and deciding to stop)
Feathercrown@lemmy.world 2 weeks ago
I think you have a fundamental misunderstanding about what AI is
spankmonkey@lemmy.world 2 weeks ago
Using AI is no more unethical than using a motor or a simple lever. It is literally a machine and not actually contemplating its intelligence; it is spitting out words that resemble words written by humans who contemplated their intelligence, like a fancy funhouse mirror.
This is why the terminology trying to equate AI to actual intelligence, like “hallucinations,” pisses me off. There is no actual intent behind the output of AI. It doesn’t feel or want or have motivation. It is a clever mimic at best.
arthur@lemmy.zip 2 weeks ago
Another way to answer that is to point out that you’re taking as a premise that those models are somewhat self-aware. Can you explain why you believe that?
arthur@lemmy.zip 2 weeks ago
lemmy.zip/comment/7829934