You're probably not the first person to ask "but what about parents taking pictures of their naked kids in a pool?" I'm sure they plan on having some way of dealing with that, if only to lighten their own workload. That doesn't mean it will be perfect, but no matter how jaded we all are, nobody wants to start falsely accusing innocent people.
I don't see how it will be different this time. The past decade has seen a bunch of failed projects that used algorithms to do the heavy lifting, and justice and redress for those affected still have not been served in every case.
The Dutch Toeslagenaffaire¹ was a massive failure of using discriminating algorithms to judge whether taxpayers were defrauding the state (most of those marked as fraudsters were innocent; many lost jobs and housing, and hundreds even had their children placed into foster care (!) due to the cascading effects of being on this list).
The British Post Office scandal² is another famous case that will be familiar to those from the UK.
To call the Dutch Toeslagenaffaire a failure of discriminating algorithms, when the main criterion was to manually flag innocent people based on whether they were born outside the Netherlands, were married to somebody born outside the Netherlands, or were born in the Netherlands with Dutch nationality but had grandparents born outside the Netherlands... is a very strange characterization.
I think you may have misread my comment. The algorithms didn't fail; they did exactly what they were programmed to do. The failure lay in the use of these algorithms; i.e., such algorithms should never have been developed and subsequently used to put people on a blacklist.
Anything automated that has the potential to ruin people's lives should first be checked by humans when flagged. That's why this CSAM thing, if used, will either cause crazy amounts of human work, or, after first 'convicting' a bunch of people and generating a lot of bad press, will be toned down to only go after already-suspected individuals, except now their data can be scanned without a warrant.
> nobody wants to start falsely accusing innocent people
I mean, this is just blatantly false. History has shown us this is false. Even if you think the current government is made up entirely of kind-hearted, magnanimous, good people, what makes you think that will be the case for the next 10, 50, 100 years?
A story: I have a Google Alert set up for my name. Someone in a different state with the same name was recently arrested. I've gotten 5-6 alerts every day for the last week from various local (to him) newspaper and television station websites announcing his arrest. Name, address, age, alleged offenses, etc. How many will I get if he's found innocent two years from now? My guess is, at most, 1 or 2.

Why would we want AI, even with human verification (which is just them checking that the hashes match, anyway, so not real "verification"), to trigger the cascade of events that would occur from someone getting arrested for CSAM based on this technology? It would ruin someone's life.

Now if I'm being honest, if someone actually has CSAM, I couldn't care less if they end up homeless and their life is actually ruined. But taking a step back, and acknowledging that someone innocent will get caught by this, as it's just a matter of time at that point, it's not worth it. If we had a society where you could just say in a job interview "oh yeah, that was a false positive, I can prove it" and that's that, then there's a bit more of an argument to be made. But that's not our society. Once your name pops up with this kind of thing, that's who you are for the rest of your life, guilty or not.
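To spell out what that hash "verification" amounts to, here's a toy sketch. The real perceptual-hash systems (PhotoDNA and the like) and their match thresholds are proprietary, so every hash value and the tolerance below are made up purely for illustration:

    # Toy sketch only: real perceptual hashes and thresholds are proprietary,
    # so these values are invented for illustration.
    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two 64-bit hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical database of hashes of known illegal images.
    KNOWN_HASHES = {0x9F3A12C488E051B7, 0x01DEADBEEF001234}

    # Made-up tolerance: perceptual hashes match "near" images, not exact copies.
    MATCH_THRESHOLD = 10

    def is_flagged(image_hash: int) -> bool:
        # The "human verification" step boils down to confirming this comparison,
        # not to confirming that the user's photo is actually abusive.
        return any(hamming_distance(image_hash, known) <= MATCH_THRESHOLD
                   for known in KNOWN_HASHES)

    # An innocent photo whose hash happens to land near a database entry is
    # flagged exactly like a true positive (here, 3 bits away from a known hash).
    print(is_flagged(0x9F3A12C488E051B0))  # True

The reviewer can confirm that two numbers are close; they cannot confirm that a crime happened, and that's the problem.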
> Someone in a different state with the same name was recently arrested. I've gotten 5-6 alerts every day for the last week from various local (to him) newspaper and television station websites announcing his arrest. Name, address, age, alleged offenses, etc. How many will I get if he's found innocent two years from now?
That sounds like a problem unique to the weird justice system in the US (and the UK to an extent, unfortunately), where the law allows publishing the name and other details of an arrested person. In most (if not all?) EU countries that is strictly forbidden: until the trial is done, any arrestee can only be reported on by their first name and with their face hidden. It avoids ruining innocent lives the way American media do.
For starters, you seem to be giving anecdotes from the US. Privacy laws in the EU differ from country to country, but in some places at least, the names and faces of arrested suspects aren't published. And afaik these gross miscarriages of justice that seem to be ubiquitous in the US are nowhere near as common in the EU.
> But taking a step back, and acknowledging that someone innocent will get caught by this, as it's just a matter of time at that point, it's not worth it
Couldn't you extrapolate that to law enforcement in general?
It's not clear to me if you're saying that AI specifically will falsely flag people, and/or if you're worried about being mistaken for someone else with the same name who gets arrested for something. Either way, both scenarios exist without any AI being involved.
Personally, I was falsely flagged because someone driving a car with a number plate that belonged to me was caught over a dozen times driving at ridiculous speeds, and it took me the better part of a year to convince the justice system it wasn't me. All because they didn't know how to update some database records with the other guy's name, and despite the fact that no human ever claimed I was driving that car (the other guy said it was him, not me; the car leasing company said it wasn't me; plenty of paperwork was provided). Massive pain in the ass (and nowhere near as serious as CSAM), and it required lawyers and court appearances, but I'm not advocating to abolish speed checks.
It always amazes me how anyone can be technologically literate and still think world governments should be given the benefit of the doubt when it comes to things like this. I agree, OP, most people in government are probably not inherently evil or malicious, but why should anyone trust that these initiatives won't be poorly implemented and cause massive collateral damage to innocent lives?
It is not just the people in government today that you need to worry about. It is also anyone in the future who may abuse these initiatives.
This is the biggest danger. These technologies are a boon to the potential future populist dictators of Europe. It starts with Orban. Who knows where it might end?
> It always amazes me how anyone can be technologically literate and still think world governments should be given the benefit of the doubt when it comes to things like this
Yes, I do think that. I'm very familiar with how the EU institutions work and I don't like it, so I'm no fanboy and certainly not trying to defend these kinds of proposals.
I realize I'm preaching to a very cynical crowd here but I just don't see EU countries locking up innocent people accused of being pedophiles en masse by some garbage AI (different story entirely in the US or China).
Justice here massively leans toward giving people the benefit of the doubt, imposing light sentences etc. much to everyone's frustration.
> I realize I'm preaching to a very cynical crowd here but I just don't see EU countries locking up innocent people accused of being pedophiles en masse by some garbage AI
555 wrongful fraud convictions because of the Horizon system, between 1996 and 2014, with a 2015 claim of no wrongdoing and no system problems [0], in the UK.
Rubina Nami, jailed for a year. Seema Misra jailed for longer, whilst innocent and pregnant.
Noel Thomas, jailed for twelve weeks at the age of 60, the judge refusing to consider that a flaw in the computer system was even possible, because of a report that concluded:
> If the Horizon system was flawed, I would expect to see issues raised by all 14,000 branches in the UK and not only a handful.
They locked people up over a garbage computer system, over something much less sensitive than protecting children. Recently.
>nobody wants to start falsely accusing innocent people.
If anything, the 21st century has proven quite a few people are more than willing to do so when they deem missed true positives to be the greater evil. Or even just do it for their own benefit.
Seeing how things are going in China, Russia, and North Korea, we should all learn to fight for our rights to freedom and privacy and not turn them over to the government. Western democracy is currently losing the culture war and people aren't taking notice. US democracy was almost subverted by a phony-baloney real estate shyster. I'm not sure why people think governments and rights can't change overnight if we're not diligent about pushing back against surveillance and other gross governmental overreach.
Can you give some EU specific examples that can be considered representative and not huge outliers?
Shit happens everywhere, but where I am in Europe, if anything there's a lack of law enforcement, in the sense that prison terms always seem to be considered too low, people with sentences under X months don't actually go to jail because of overcrowding, the police are constantly frustrated that people they pick up off the street are immediately released again by judges, etc.
The comment that started this thread already gave you an example: the Netherlands, the Toeslagenaffaire.
You can't go around saying "yeah, we really care about innocent people" when you let an algorithm wreak havoc producing false positives and then take ages to fix it while the families are still dealing with the aftermath. That's the opposite: caring about true positives despite the false positives produced along the way.
I'd also ask you to take a hard look at a few ideologies. If you believe "the EU is so nice it can do no wrong", you'll probably be cured of that soon, once you realize extremists are spread all around the world and have no qualms about exercising their sense of justice even if it hurts innocent people.
And it happened in the Netherlands, which isn't exactly the cleanest of countries (just check its ties to drug cartel money [1]), so it's the Dutch who should be worried, not EU citizens.
It happens all the time. I don't even know anymore how many wrongful tax bills I have received over the past 30 years; you just show the documents that prove you're in the right (if you are) and that's about it.
We pay accountants for a reason.
Lawyers are still free in my country if you can't afford one, and anyway it's very hard to go to prison for tax evasion, at least in all the European countries I know, and even less likely for unduly received child benefits.
[1] Money laundering is a growing problem in the Netherlands, with estimates suggesting that around €16 billion in illegal funds is laundered there every year — money derived from a range of criminal activities, including drug trafficking, sexual exploitation and extortion
You're being pedantic and apologist for very little reason. There are still dozens of families who have yet to be reimbursed for this mistake, let alone compensated for the damage caused in the aftermath. Instead, we have a bunch of people still arguing about what to do about the situation, all of whom are paid with tax money that could have been used to solve the issue already.
It's not about them being evil. It's about them feeling justified in using the algorithm the way it was used, which then led to pretty dire consequences for many families, followed by a lack of action and a generally lazy attitude toward the entire situation, along with a general lack of foresight when they implemented it.
This is a very clear example of what happens when you optimize for true positives at the cost of false positives. You can argue the costs and benefits, but you can't argue "people don't want to harm innocents" when they choose to do so in a scenario where it very clearly does harm innocents.
"Caring about not harming innocents" gets dropped the moment it feels inconvenient. That's something people need to be aware of instead of living in their idealist dream world.
Again, you're being pedantic in an attempt to dismiss people.
People could interpret the comment in question as "name examples in Europe", as if the commenter were trying to imply "European countries aren't as crazy and totalitarian as the US / the rest of the world". Several commenters gave examples close to home.
But somehow the EU, as in European Union, will be the exception, and we can't be skeptical of that. Get real.
> Again, you're being pedantic in an attempt to dismiss people.
No, I'm being correct.
Problems in the Netherlands that are not caused by EU regulations are not EU problems.
Deal with that.
> But somehow the EU, as in European Union, will be the exception, and we can't be skeptical of that.
False premise.
I never said the EU is gonna be an exception, just that this particular problem happens everywhere; tax fraud is probably the most common type of fraud worldwide.
So implying that it's proof of some potential problem inside a new EU regulation that has nothing to do with tax fraud is naive at best. But let's get real: you're not being naive, you're simply being biased.
>nobody wants to start falsely accusing innocent people.
Do you have a source for that? First, by triggering the filter, there's an assumption of guilt. The filter wouldn't trigger if there wasn't something bad there, right? Second, someone, somewhere is being incentivized in their job to increase the number of "detections" so they can increase their arrest numbers. Why would they want to "lighten their workload"? They're the cops. They just ask for more resources and get them.
It's a waste of time for everyone. Prisons are overcrowded, court cases take years to get going, criminals caught red-handed are released before the cops even finish the paperwork, and there are plenty of rock-star defense lawyers.
> They're the cops. They just ask for more resources and get them.
> Meanwhile France last year just dropped the age of consent to 15 and if you check Wikipedia you'll see most EU countries have it set around the same age.
France's age of consent has been 15 since 1945 (1982, if you weren't straight). What they did last year was add a Romeo & Juliet clause that allows for relations below that age if you're within five years of your partner and not committing something that would otherwise be considered assault. So a 15-year-old and a 12-year-old won't end up in a rape case for being partners.
They tightened overly loose definitions, basically. Not dropped; raised. (A lot to do with this [0] particular case, which demonstrated just how damn loose the laws were.)
15 vs 12 is a huge gap in relative age, development, and maturity. But it sounds worse from your description, like a 15-year-old can have sex with a 10-year-old and it's considered consensual??
It is _now_ that sex between a 15-year-old and a 10-year-old _may_ be considered consensual if no other factors come into play, like authority. However, the court would have to examine it on a case-by-case basis.
Previously, it was _assumed_ that consent had occurred, such as in the case I pointed out, where the court struggled to say there was no willing engagement between a 28-year-old and an 11-year-old.
Historically speaking, France has some of the loosest laws around consent in the world.
"Family goes through seven months of hell falsely accused of child porn charges after Spanish police misread US-style date in tip-off from American group"
And if this passes, we will have plenty of examples from the EU too. Until now it has not been as easy to crack down on people, because we take privacy very seriously, but it will happen as soon as the state gets the power to do it.
See for example how we treat people who have a few grams of weed on them. We protect innocent people, you say? Bullshit, we crack down on gardening shops if they're even just slightly connected to weed culture!