Senal
@Senal@programming.dev
- Comment on System76 on Age Verification Laws 1 week ago:
By the sound of it, the disagreement is mostly in how direct an impact AB1043 will have on government plans for data collection and authoritarianism.
That’s not really the original disagreement I was referencing, nor is it a position I’ve taken; we agree that the local-only bill isn’t the big bad.
You twice referenced the slippery slope fallacy when replying to comments clearly describing future actions; I was pointing out that it doesn’t meet that criterion, because there is a reasonable assumption that the described escalation will occur.
Your original responses, to which I was referring:
This is a slippery slope fallacy. Just because the option is provided to self-identify age, doesn’t mean that it will be replaced with more complex and direct data collection (which I am against, if it wasn’t clear) later
You’re again relying on slippery slope fallacy to say that because I’m okay with this one specific form of age gating, I’m okay with every other one, which I have repeatedly made clear is not true.
The first one is the main issue I was pointing out; the second one isn’t how the fallacy is applied at all.
As no one is taking the position that AB1043 is the actual danger, most of what you are arguing doesn’t really apply.
Similarly with the Overton window, where it has been standard practice for over a decade to have a “are you at least 18?” popup, and for every single service to ask you your age, if not more. We absolutely need more data protections for systems such as this (ideally an outright ban on saving this information) but this doesn’t seem to make it worse.
Emphasis mine.
Hard disagree; moving the responsibility for this from individual websites to the OS is a big jump in scope.
The same kind of jump as making it the ISP’s responsibility if they serve illegal content from individual websites (as has been suggested).
Aside from that, it centralises the surface area for future changes and enforcement.
Basically, from my understanding, this isn’t a step towards data collection or authoritarianism, and provides no significant benefit to either of those causes - it’s effectively a technical standard.
This is the disagreement: I (and obviously many others) are pointing at the long and comprehensive list of similar initiatives, both recent and historic, that were stepping stones to further encroachment, and saying “oh look, another small step in the continued and provable encroachment upon privacy”, while you seem to be advocating for the benefit of the doubt.
Like, if this age-verification flag was proposed by the Linux Foundation, and agreed to by others, would the backlash be this big?
If the Linux Foundation had the same history of shenanigans, then yes.
Similarly, I don’t see any contradiction between wanting a ban on storage/sharing of user data, and the implementation of a flag like this - even if we are able to ban all storage of user data, this law would be unaffected. That’s what I’m trying to figure out - how do people think that this leads towards those end goals? How would blocking it improve anything?
Ignore the technical implementation of this one step, nobody is saying this is the endgame big bad.
Think of it as a prevention measure: a single ant in the kitchen isn’t a problem in and of itself, but it’s almost certainly an indication of a larger potential future problem.
You are arguing it’s not a problem because the ant only has 5 legs; everyone else is saying the leg count doesn’t matter, it’s still an ant.
Is it just a difference in opinion about the significance of the Overton window?
See above
Is there a technical aspect I’m missing?
Not necessarily; it’s just that you are arguing a single technical issue in a conversation about perceived intentionality.
Is there some legal advantage this provides to surveillance that I’ve missed?
See above
Right now, it seems like everyone is arguing against a strawman, implying that I support the idea of government/corporate surveillance and censorship, that I don’t expect that they’ll continue to be evil, or they’re simply saying it’s bad because it’s cosmetically similar to laws that do impede on freedoms. Given how unanimous the backlash is, I must be missing something?
That you are using a point nobody disagrees with to imply correctness in a context where said point doesn’t really apply makes it seem like you are coming at this in bad faith.
When bad faith is assumed, people look for underlying reasons.
- Comment on System76 on Age Verification Laws 1 week ago:
Ah, I think I see where the difference in opinion is: claiming this event leads directly to (as in, the very next step is) AI/ID verification could be considered an unreasonable jump, I suppose.
In my case, I was interpreting the argument as: this event will almost certainly lead to further encroachment events into privacy, one of which would probably be the AI/ID verification.
To me this is a reasonable assumption because it’s what has happened in pretty much all of the recent instances of similar events occurring, and it is therefore not a slippery slope fallacy.
TL;DR
On further examination, the technical things you mention seem to be correct if you assume that this bill alone is the vector for privacy encroachment, but they don’t pan out at all if it is assumed that other steps will follow, which, given precedent, is highly likely to happen.
On the technical implementation:
The reason it’s a slippery slope fallacy is the assumption that this law is a direct attempt to implement those systems, in spite of the fact that AB1043 implements a system that would be redundant with AI or ID based methods,
As an aside, I’m not sure anyone is claiming that this bill is a direct attempt at a hard AI/ID verification system; rather, they are claiming that this is another step in a series of encroachments that will lead to escalating requirements and enforcement, AI/ID verification being an obvious step in that series.
From a technical standpoint you are correct, it outright states that photo ID upload isn’t required, yet.
Opinion: A cynic might see this as an indication that the politicians understand that political and public appetite for full photo ID requirements is less than optimal, so this is just a small step in shifting the Overton window on this subject.
technically doesn’t offer any good way to transition into an AI or ID based system (since it all has to be done locally),
That is only correct in a very narrow set of circumstances, that local requirement isn’t set in stone at all.
All that needs to happen to go from this to full ID checks is to mandate they use a “trusted” service for verification. It wouldn’t need to be an always online thing either, think of how the bullshit online verification systems that already exist work, i.e. you need to go online every x days or your system/service/app will stop working.
Opinion: I fully expect any “trusted” service they designate to be something that serves the governmental and corporate desire for as much data as they can get away with. This isn’t even a stretch; just look at the service Discord was trying to implement, the one with deep ties to Palantir.
and legally, imposes additional data protection laws that are likely to interfere with AI-based age verification.
This isn’t wrong so much as it seems naive; we are talking about bills that change laws. Any law introduced can be revoked, superseded, or have “exceptions” carved out, such as the current favourite “think of the children” thin veneer they are using.
It wouldn’t take much to move from “all data is protected” to “all data is protected, unless we need it to protect the children”
That’s not even taking into account that the laws are only as good as the system upholding them; the current US system is sketchy AF, and other countries have similar issues with uneven application of laws.
Not to say we should throw our hands up, say “what’s the point?” and just do nothing, but pretending that these laws aren’t susceptible to the same issues affecting everything else doesn’t help anyone either.
The problem with AI and ID age verification isn’t the age verification. It’s the data collection, limits on personal freedom, and to some, the inconvenience.
Agreed.
So far as I can tell, AB1043 doesn’t have a significant impact on data collection (it does add another metric that could be used for fingerprinting, but also adds stricter regulation on data collection when this flag is used) or personal freedoms - especially not when compared to what is already the existing standard of asking the user for their age and/or if they’re over 18.
Mostly agreed.
The points I’d raise are that the whole idea of age verification is an encroachment upon personal freedoms for some, so there’s an aspect of subjectivity to that.
In addition, relying on data collection regulations at this point is almost dangerously naive; corporations and governments alike have shown that they will basically ignore them outright or make up some exception. This isn’t conjecture, it’s easily searchable: think Flock, Ring cameras, Stingray, PRISM, anything Palantir is involved in, Cambridge Analytica, broad warrantless data requests, etc.
There is absolutely no reason to give the benefit of the doubt to parties that have repeatedly proven to be doing sketchy shit.
- Comment on System76 on Age Verification Laws 1 week ago:
The fallacy is the expectation that following escalating events would arise from the event in question.
It’s only a fallacy if it’s unreasonable to expect the subsequent steps to occur or in this case, be attempted.
Does that mean it’s a guarantee? Of course not; just that the fallacy doesn’t apply.
The intention or plan for escalating steps doesn’t have to be laid out perfectly to draw the parallels between this and previous similar events that were then subsequently used as foundations for greater reach.
Your reasoning around the technical implementation of such escalation isn’t applicable here (in the conversation about whether or not the fallacy applies).
If you want to argue that they won’t escalate, or that it’s not possible, go right ahead, but raising a fallacy argument when it doesn’t apply isn’t a good start.
If you want, I can address your arguments around implementation directly, as a separate conversation?
- Comment on System76 on Age Verification Laws 1 week ago:
If you’re going to reference the slippery slope fallacy so much, you should probably read where and when it actually applies.
From the wikipedia entry:
When the initial step is not demonstrably likely to result in the claimed effects, this is called the slippery slope fallacy.
You yourself just acknowledged that the worst-case is already happening, so the assumption that the worst case will continue to happen is reasonable.
Unless you wish to argue that:
The worst-case scenario is already happening
followed by you saying
Okay, but
isn’t an acknowledgement?
- Comment on Don't fall into the anti-AI hype 2 months ago:
Or perhaps:
- They have a large corpus of context files to help with all aspects of how the output is generated
- They’re using a model with specialised fine tuning for the task attempted
- They have a series of MCP servers with access to relevant tooling available
- They have many, many hours of prior experience with the setup and usage of such tools
- They used multiple tools manually and pulled the bits they needed
- They just said “Make me a thing” and it just worked like magic
They mention reinforcement learning, pre-training, and other general LLM concepts, but none of these are related back to the tasks they are talking about.
The point is, there was no explanation of how any of this was achieved, which can lead to confusion about what was actually achieved.
“The LLM wrote some docs” vs. “the LLM rewrote the library from end to end” are very different things.
It’s very much a “Don’t give up on X, look at what can be achieved” but without any actual details on what is required to achieve those results.
- Comment on Coinbase CEO explains why he fired engineers who didn’t try AI immediately 6 months ago:
Perhaps for the style or complexity of the code you (and I) are seeing on a regular basis this is true.
I find, for low logical complexity code, it’s less about the difficulty of reading it and more about the speed.
I can read significantly quicker than I can type, and if the code isn’t something I need additional time to reason about, then spotting issues with existing code can be quicker than writing the same code out.
Boilerplate code is a good example of this.
Though, as I said, I’ve found the point at which that loses its reliable usefulness is relatively low on the complexity scale.
The specific issue I have with people pushing LLMs as a panacea for boilerplate code is that the output isn’t declarative and is prone to reasonable-looking hallucinations, given enough space.
Even boilerplate in large enough amounts can be subject to the eccentricities of LLM imagination.
- Comment on Coinbase CEO explains why he fired engineers who didn’t try AI immediately 6 months ago:
It’s a revolutionary tool in its infancy, and it’s already very useful on certain tasks.
That’s a bold and premature statement IMO; how many AI winters have there been before this one?
I’m not even saying you’re wrong, but to assert it with such confidence sounds like crypto bro-nanigans.
I would argue that it’s evolutionary rather than revolutionary, but that’s subjective, I suppose.
I’ll never understand the LLM hate on lemmy.
Speaking for myself, I don’t hate LLMs; I dislike the confidence with which they are being pushed and the lack of acknowledgement of their limitations.
You get statements like
“And most devs I know use it everyday”
Without context, and that feeds into the general propaganda feel of the sentiment, because people who don’t actually use them or don’t understand the implied context hear “LLMs can do all the things, all the professional devs are using it all day”.
I understand it’s not on you to police people’s impressions, but trying to add actual context to those statements isn’t hate, it’s prudence.
Then again, that’s subjective too.
- Comment on Coinbase CEO explains why he fired engineers who didn’t try AI immediately 6 months ago:
It’s much faster to check code for correctness than it is to write it in the first place.
In certain circumstances, sure, but at any level of complexity, not so much.
At some point it becomes less about code correctness and more about logical correctness, which requires contextual domain understanding.
Want to churn out directory changing python scripts, go nuts.
Want to add business logic that isn’t a single discrete change to an existing system, less likely.
For small things it works OK; it’s less useful the more complex the task.
- Comment on Coinbase CEO explains why he fired engineers who didn’t try AI immediately 6 months ago:
That’s a deep cut and I am here for it
- Comment on Google reacts angrily to report it will have to sell Chrome 1 year ago:
Depends on what issue they are trying to fix.
I was talking about the technical monopoly with regard to rendering engines and web standards; Chromium is a problem, but it doesn’t seem like that’s what they are trying to address here.
From that article it seems like they might be trying to separate Chrome in the hope that that will enable the new owners to “decouple” it from Google Search.
If that’s the case, it’s a dumb move if it’s the only move they make; all that would happen is Google would just build the new owners a Scrooge McDuck swimming pool to make Google the default search. Same thing they do with Firefox.
It even says that in the article.
It would be interesting to see how they’d deal with the decoupling of the built-in Google proprietary panopticon bullshit.
They’d struggle to shift that over to Chromium without upsetting… well… everyone.
- Comment on Google reacts angrily to report it will have to sell Chrome 1 year ago:
TL;DR:
They have an effective monopoly and have repeatedly shown they will use it to serve their needs.
One concrete way is the level of control that Google has over the inner workings of the rendering engine, giving it significant control over web standards.
A real-life example of this is the controversy around the JPEG-XL format: Google decides to drop support for it, and doing so (eventually) removes support from every single browser based on the rendering engine in Chromium.
Now, other browsers (Firefox, for example) have to decide if it’s worth it to add and maintain support for a format that will only work in their rendering engine.
Sounds like a win, right? Now Firefox has a feature that Chrome doesn’t.
Now, developers/businesses have a choice.
- A: Add/Maintain/Test features that use the JPEG-XL format exclusively, this feature is only available to the Y% of people not using a chromium based browser.
- B: Use some other format that is supported in Chrome (and other browsers).
- C: Do A with B as a fail-over, adding additional cost to development/maintenance and testing.
In almost all circumstances, B is the fiscally responsible option, which means that Google has effective control over web standards and their implementation.
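As an illustration of option C, the usual mechanism is the standard HTML `<picture>` element, which lets JXL-capable browsers pick the better format while everyone else silently falls back (the filenames here are made up for the example):

```html
<!-- Browsers that can decode JPEG-XL use the first matching <source>;
     Chromium-based browsers skip it and load the <img> fallback. -->
<picture>
  <source srcset="photo.jxl" type="image/jxl">
  <img src="photo.jpg" alt="example photo">
</picture>
```

Even with this graceful fallback, you are still encoding, storing, and testing two asset pipelines, which is exactly the extra cost that pushes most teams to option B.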
A non-rendering-engine example is ad blockers: Google decides there are underlying security issues with how some integrations with the web browser work, and this “just so happens” to break how almost all decent ad blocking is done at the browser level.
They go ahead and create an updated version of the specification that describes how this interaction works, implement it upstream, and suddenly all Chromium-based browsers can’t use the most effective ad blockers.
Technically, the downstream browsers could do some shenanigans to keep the ability to block ads effectively, but the technical and monetary barriers to such an endeavour are so high it is absolutely not worth it.
There is more technical nuance to this story; the security issues in V2 are real, but the need to break ad blockers in the process of fixing those issues is debatable.
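For anyone wanting the concrete version of that change (this is the Manifest V2 → V3 transition; the domain and rule below are placeholders, and extension code like this only runs inside a browser): under V2 an extension could run arbitrary code per request and cancel it via the blocking `webRequest` API, while V3 replaces that with `declarativeNetRequest`, where the browser matches a static, count-limited rule list and no extension code runs per request. A rough sketch of both:

```javascript
// Manifest V2: extension code sees every request and can cancel it.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => ({ cancel: true }),        // arbitrary blocking logic could run here
  { urls: ["*://ads.example.com/*"] },    // placeholder ad domain
  ["blocking"]                            // the capability Manifest V3 removes
);

// Manifest V3: a static declarative rule the browser evaluates itself.
chrome.declarativeNetRequest.updateDynamicRules({
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: { urlFilter: "||ads.example.com^", resourceTypes: ["script"] }
  }]
});
```

The rule-count caps and the loss of arbitrary per-request logic are what make filter-list-heavy blockers far less effective under V3.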