
Except when Zuck decides it's profitable to violate the rules.

The man is without any redeeming qualities.


Meta cancelled its contract with the outsourcing company it hired to classify smart-glasses content, after employees at that company blew the whistle about serious privacy issues with the material they were paid to classify.

"Fun" bonus fact: This isn't the first time Sama (the outsourcing company) has had these problems.

OpenAI had them classify CSAM, so Sama fired them as a client back in 2022. https://time.com/6247678/openai-chatgpt-kenya-workers/

We're four years on, three since that report broke. Not a single thing has improved about how tech companies operate.


How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.

It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).


> How else do you want companies to remove and prevent CSAM?

Different situation.

Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.

And "just don't have Facebook" isn't a solution, because every publication of any sort has to deal with this problem; any newspaper accepting mail has it, albeit at a much smaller scale. People have been nailing obscene things to bulletin boards for all of recorded history.

---

In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.

---

This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)

But OpenAI is not such a firm. It's a general AI company.


> traumatizing contract workers for $2-3 an hour)

Is there an hourly rate at which this should be acceptable?


There's no dollar amount, but proper support during and after employment is a minimum, and a large paycheque will both offset some of the human cost and make it easier to push people to quit the job, so that they aren't doing it for too long.

The current support systems for police in this subject are already insufficient. Facebook's treatment of their moderation staff is abhorrent. The point of including the pay figure is to further illustrate just how damning this subcontracting practice is.


There is labor that is necessary for our societies to function but is a direct threat to the people doing the work. Someone has to do it, and it should be seen as a great service to society and rewarded accordingly. In a just world, we would pay significantly extra for the threats to health that come from work; in the one we currently live in, we use the threat of worse harm instead.

We have coal miners destroying their bodies and lungs, cobalt mining slavery, child labour and de facto slavery in cocoa farming, sex workers, CPS investigators, first responders, and doctors with high rates of suicide…

Not only is there an acceptable market rate for trauma, it’s sometimes competitive and requires licensing.


There is one difference between first responders/doctors and the other classes (and the moderators under discussion here)

First responders/doctors/CPS investigators see the worst but they also have days where they make a difference. Save a life or multiple lives. I'm sure it's a huge part of what makes the job bearable, and to some meaningful.

I'm not discounting your point about high rates of suicide either. If anything, when you take away any good days, you're left, as a content moderator, with just seeing the worst of the world day in, day out, with nothing to make it meaningful. I'd suggest that's something we as a society should not tolerate as being an acceptable trade for the ability to share cat photos.


>First responders/doctors/CPS investigators see the worst but they also have days where they make a difference. Save a life or multiple lives. I'm sure it's a huge part of what makes the job bearable, and to some meaningful.

You think miners don't make a difference or save lives?


> You think miners don't make a difference or save lives?

Do you think miners mining is saving lives in the same way that doctors saving lives is saving lives?

To continue the parent's point: do you think miners derive a deep or powerful satisfaction from their work that might offset some of the heavy physical and emotional cost it exacts?


Emergency Department^ doctors, what do they make? Give people who have to review the worst humanity has to offer that same pay. And while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.

^ i originally said "triage doctors" but i meant the resident ER doc.


Why take from other workers when it can be siphoned from upper management and shareholders?

you're right, it's a personal failing that i must snip at nurses whenever the word appears in my head. Apologies.

ER triage is usually done by a nurse, at least in England.

Rookie police officers in my country are paid 2500 euro per month and they have to deal with the underbelly of society.

They have access to better counselling and are ostensibly trained for the job. But there are still suicides.


OpenAI runs ChatGPT where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?

What you're missing is that they're "separate" parts of the business.

The core Facebook product is users' posts. It's not possible to separate the two, nor can one downscale Facebook in a way that stops the problem; as noted above, Facebook has this problem because it's one we've had since the medieval days of the town bulletin board.

With OpenAI, the way ChatGPT was built and user submissions are separate things. The GPT models could have been trained without this mess. OpenAI could be more selective in what data it scrapes.

While OpenAI cannot stop users sending god knows what in their prompt text and images, OpenAI can choose to not interact with that data beyond the minimum legal retention, by e.g. not using it for training the next generation of models. This would massively downscale the problem.

AI output is another such problem, where A) maybe this would be less of a problem if they hadn't recklessly included a bunch of CSAM in the training data by accident, and B) LLMs just aren't the kind of fundamental human right that "having a public opinion" is. It would be fine if they were less good, invented years later, or even not invented at all.

The main counterargument to the latter has been the "But China is inventing evil AI" spiel, which is fairly weak. If China builds an orphaned baby crushing machine, we do not need to build an orphaned baby crushing machine of our own. (And the reality is that China is only chasing AI so aggressively because the west does. They're reasonable people, it would have been entirely possible for both the west and China to make a mutual "no orphan crushing" agreement and just accept slower rollout of technology. This is exactly what has been done with human genetic engineering, and China did in fact enforce these norms.)


> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

You've just thrown the garbage over your fence. Instead of OpenAI contracting Sama to classify CSAM, the "Curators" have to.

At the end of the day, someone needs to classify it. If you say the platforms need to, and they miss some, and it ends up in OAI training data, OAI is going to be the entity paying the price.


Not really different. They would need to report CSAM if it is ever uploaded by a user.

Any website that allows user to upload videos needs some sort of service that can identify and report CSAM.


> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.

This is of course incredibly illegal, but megacorps (by valuation) and oligarchy members are above the law, so who cares. I assume there could be a regulatory framework that makes this legal for an extremely specific purpose, but there is zero chance that OpenAI was part of it or abiding by it in 2022, absolutely none.


CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.

The correct way to organize social media is in a federated way. Each server holds on average only a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media would be suppressed a hundredfold, because banning people is way easier on small servers.

Not many moderators will have to look at CSAM, because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.


Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies. The abusers just have to find the weakest link, and that weakest link will have fewer resources than multi trillion dollar companies. You would also likely not hear many news stories about it, because they won't have the expertise to even detect it.

That's a tradeoff you can choose to make, but you need to enter into it with open eyes.


> Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies.

It doesn't matter how many are shared but how many are viewed. On a small server, community policing works just fine, bad actors are easier and faster to block, and to top it off, the smaller reach of each server makes it unprofitable to target multiple servers, fish for their weak points, etc. The dirty job becomes unprofitable, which is what matters most.

With the help of AI, small players can do a better job at removing CSAM.


> With the help of AI, small players can do a better job at removing CSAM.

Chicken/egg. How do you expect that AI to be able to detect CSAM without appropriate training, which requires appropriately classified training data?


This isn't an either-or. X isn't the only place CSAM is; there are gazillions of other sources. It's probably the easiest place to find it, though.

>That's a tradeoff you can choose to make, but you need to enter into it with open eyes.

No it's not. It's certainly not my choice. No one asked me if it's okay for Facebook to distribute CSAM because you insist it would be worse if it didn't.


I don't really care if you classify it as a choice or not. One set of actions results in more CSAM than others. Just because you don't like the implication of there being tradeoffs doesn't mean there aren't tradeoffs.

You classified it as a choice, not me.

> or not

So you don't care that you're wrong? Not a surprise coming from someone handwaving away the mess Meta made.

In what regard is it incorrect that a single, larger entity that is at least notionally committed to avoiding the existence of any specific type of content on their platform is more likely to successfully avoid the existence of that type of content on their platform than smaller entities with less resources?

Now consider that some of those smaller entities might not be even notionally interested in avoiding the existence of that specific type of content on their platform, and are small enough for regulators to be unaware of its existence?


What your opponent is saying is: "there are mutually exclusive A and B", A being widespread CSAM and B being somebody needing to look at CSAM to remove it.

Can you elaborate on what exactly is wrong there? Do you see the third alternative C and it's not the "whole choice"? Or are you saying A or B do not exist and therefore there's no choice? Please name C, or tell us why A or B don't exist (or aren't acceptable), or explain your view that doesn't fit into these options.


Some people are not okay with actively facilitating harm to people, even if inaction results in harm to other people. See: the trolley problem. This is totally okay, but the point made above is that

>That's a tradeoff you can choose to make

is not correct: It is a tradeoff that one specific person can choose to make, but not one that I or we can choose to make, because we don't control facebook. Mark Zuckerberg controls facebook. He alone can choose to make that tradeoff, or not, on behalf of society.


He can't do it alone because Facebook is under US jurisdiction.

Yes, facebook is under US jurisdiction.

Yes, he alone can make the choice. Not you or I.


> Server moderators should be legally responsible for content on their server.

And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.


>And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.

This is already the case. There is a lot of lawful, useful, medical or educational content that is actively censored on social media because it includes words or pictures of organs, while the same platforms actively encourage and develop algorithms to push underage girls (and possibly boys) posting pictures of themselves in sexual poses, attire and contexts.

Big tech and social media networks love and push CSAM, they just hide the genitals but the content really is the same.


> a lot of lawful, useful, medical or educational content

Like what? It’s all there on Wikipedia, and for all of Wiki’s faults, I have trouble imagining what kind of useful, educational, medical information you will find on social media that is better than that.


You don't necessarily reach the same population, and some people, believe it or not, are afraid to read, unable to, or have difficulty doing so, yet are online.

Afraid to read Wikipedia? Unable to? What hellhole are you describing?

20% of adults in the USA read below a 5th-grade level.

https://www.thenationalliteracyinstitute.com/2024-2025-liter...


By that argument, physical life doesn't function either. People get banned or removed from all sorts of informal and formal groups all the time for completely illegitimate reasons. That's just human politics, embedded so deeply in our psychology that it will never go away. They simply move to different groups, and similarly, online they can move to a different federated server.

But that's not possible in today's oligopoly of social media. An invisible algorithm will ban you, and there is no way back, and few alternatives. Big social media is way worse from a sanitizing perspective than some federated social media.


I have no deep problem with exclusion; as you say, that’s human nature and unfixable. Making mods personally legally liable for everything that appears on their board is just insane. How many minutes are acceptable for them to see and review content? Or does everything have to be pre-approved?

I know a local blog that pre-approves every comment. He lets a lot of stuff through, because he lets people be dumbasses. If he were personally liable, the conversation would get a lot quieter.


Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.

No. It's a legitimately difficult problem, because not all naked pictures of kids are illegal. The false-positive problem is bad for business, but also generally bad even if the big social media company were benevolent.

Moderators need to actually understand the context of the picture/video, which requires knowledge of culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.

But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.


The amount of CSAM online is completely out of control. There's already nation-level and sometimes international cooperation to catch any known images with perceptual hashing (think: the opposite of cryptographic hashing) as well as other automated and manual tools.
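The perceptual hashing mentioned above can be illustrated with a toy average hash. This is a rough sketch only, not any real system's algorithm (PhotoDNA-style matchers are far more robust); the point is just that, unlike a cryptographic hash, near-duplicate images produce nearby hashes, so known material survives re-encoding:

```python
# Toy average-hash, to illustrate why perceptual hashes match
# near-duplicates while cryptographic hashes do not.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit
    string: '1' where a pixel exceeds the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(h1, h2):
    """Count of differing bits; a small distance means the two
    images are perceptually similar."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [[10, 200], [200, 10]]
tweaked   = [[12, 198], [201, 11]]    # slightly re-encoded copy
unrelated = [[200, 200], [10, 10]]

h0, h1, h2 = map(average_hash, (original, tweaked, unrelated))
print(hamming(h0, h1))  # 0 -- the near-duplicate still matches
print(hamming(h0, h2))  # 2 -- a different image does not
```

Real deployments resize images to a small fixed grid first and compare against databases of hashes of known material, so the images themselves never need to be redistributed to do the matching.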

My impression is it would take Manhattan Project levels of effort and funds to come close to "solving" this problem, especially without someone getting on a watchlist for having a telehealth-first primary care provider insurance plan and asking for advice on their toddler's chickenpox.

Human review? Meta has small armies worth of content moderators already that tend to burn out with psychological problems and have a suicide rate where you're probably better off going to fight in a real war. (This includes workers hired by Sama in Kenya, to link back to the OP.)

I will reluctantly grant Meta that they're up against a really hard problem here.


>I will reluctantly grant Meta that they're up against a really hard problem here.

It is a problem of their own making.


They created the concept of CSAM?

No, being so large that it's such a problem for them.

Seems like your blame is quite misplaced.

It certainly is not.

Yeah, I agree with you. Of course, it's not Meta's fault that the CSAM exists, but the problem of filtering it at Meta's scale, while often called extremely difficult, is solvable; it just fundamentally requires changing how the platform works, and would likely require a lot more money to be spent.

Exactly, no one put a gun to Meta's head and ordered them to make Facebook.

So what would satisfy you?

Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.

> Server moderators should be legally responsible for content on their server.

So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?

> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.

Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?

They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.

This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.


Right, and you have other users on the fediverse who notice that server leaking and, if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.

It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that i'd block if i ran another fediverse server.

Fosstodon fediverse server doesn't have this issue, for example.

I replied this way because the way you wrote it, it sounds like an indictment of a system that's designed to avoid advertisers getting user profiles, over all else.

The problem is the people who participate in this (the illegal and immoral), and not "the network."


> well, that ought make enforcement easier.

Because of course the people congregating to do illegal stuff online are going to do it in your jurisdiction where prosecution is guaranteed


Yep. If you cannot both safely and legally provide the thing you are selling you are no longer a legitimate company you are a criminal enterprise profiting off of exploitation.

If car manufacturers cannot bring car related deaths to zero, they too should no longer be legitimate companies.

A better comparison would be that if a car company can’t meet preexisting crash/safety standards, they need to shut down.

These are pretty clear laws established by a democratic government with a pretty good record for rule of law.


Sure, then they can go demand such standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have a car fatality rate of zero.

The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.


Do car companies sell cars without air bags, or seat belts? What about cars that haven't been crash tested? What happens to them if they don't do this do you think?

Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?


We're talking about CSAM right? Which all platforms remove proactively, build models to remove and essentially always respond to when informed.

Demanding some perfect immediate magic response there is the equivalent of asking car manufacturers to prevent all deaths.


Do they remove it and respond really though?

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Here it's said that it's the users' fault. I disagree, completely. Staying on topic, many of these companies have laid off the employees who tried to prevent things like this:

https://www.cnbc.com/2025/10/22/meta-layoffs-ai.html

https://www.zdnet.com/article/us-ai-safety-institute-will-be...

https://www.lesswrong.com/posts/dqd54wpEfjKJsJBk6/xai-s-grok...

The list of not even trying anymore goes on and on. Mechahitler was also fun


When Ford didn't give a damn with the Pinto (and GM with the Corvair), much like tech companies don't now, they deservedly got this same level of contempt and demand for oversight. A dude named Ralph Nader went on a huge crusade about it, and they got a ton more oversight, safety requirements, etc. put on them.

So yes, yes, let's do like we did with cars.


I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either; maybe he got out when mudslinging became de facto in elections.

I am not sold on the federated thing to solve CSAM or similar issues.

Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Then they have to solve it on their own and are forced to realize their business model is shit.


>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.

No it isn't. Small servers often don't have paid security or moderation, are run anonymously, and have no profit motive that can even be used to incentivize them against hosting illegal content.

That's visible when it comes to porn. There are a million bootleg porn sites on the internet that host illegal content. The only site that was ever forced to curate its content was Pornhub, because it's sufficiently large, operates in a jurisdiction that has laws, and can be held accountable. From a content-moderation standpoint, going after a million web forums is an absolute pain compared to going after Facebook.

Which is the first argument any decentralization advocate always brings up (and they're correct to do so), censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.


What stops Humbert Humbert from joining hundreds of small servers?

You now have 100x the total human effort for mods to review and ban him.


The one thing I will throw out here is that I think the government simply does not care, either. It mostly gets dealt with at the law-enforcement level only when there's mass public outrage, or when someone is a political target.

Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.

One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.


That seems reasonable though. The FBI isn’t interested in busting one perv in a closet, they want the ones making the stuff.

The FBI is interested in busting perverts in closets. That's often how they work their way up the "supply chain" when it comes to CSAM. Consumers lead them to distributors, who lead them to producers.

A fair point. But it still seems reasonable that only about 1% of suspect posts lead to a formal inquiry. Doesn’t mean they aren’t taking the report into account. You have to figure that they already have leads on most of them.

Do we have to figure that?

Do we really have to give the benefit of the doubt to the agency that was literally running one of the largest CSAM distribution outlets in the world for years as a honeypot?


If you want to argue that the FBI is a fundamentally flawed agency that on balance is a net negative, I won't fight you that hard. But during the civil rights struggle, they were the only force that could be trusted at all.

Of course, that was 60 years ago.


Wasn't the FBI doing some pretty questionable stuff with regards to MLK during said civil rights movement?

Eg https://en.wikipedia.org/wiki/COINTELPRO

I guess "other forces were worse" can certainly be true, but then how low are we holding the bar?


As I said just above, I am not a fan. They are not completely evil, though.

Yes, that was 60 years ago. No one involved at that time is still there - and in fact, most of them have passed. I don't know why you think there's a shred of relevance there.

If I didn’t mention it, someone would complain that I was ignoring the one time they were on the side of the angels.

> Banning people is way easier on small servers

Big “citation needed” here. My bet is that Meta have far better moderation systems than any other social media company on the planet.


when i ran a fediverse server for myself and 3 people, but allowed public signups if someone came by; it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because i didn't want their content on my service.

That's more what I got from that pull-quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last I heard). They're moderated so effectively they often get DDoSed from Russian IPs for banning users from that country over scam posts.


These workers prepare data for AI. I don't think the need for them will go away anytime soon.

Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while having such special human requirements.


This is the Swedish newspaper report quoted in the submitted article: https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-...

There are more reasons why these jobs are located in developing countries, it's not only the price of labour. Imagine for a second, these annotations would have to be done in the US. The public outrage would probably be audible across the Atlantic. This is another form of imperialism.


I agree that there's no good way to do this other than, like, no user-generated content ever, or just banning everyone for their baby pics and the like, so nobody can post them.

Granted the latter is kinda happening distantly on YouTube where you can’t talk about “ suicide “ so everyone self censors…


I don't understand why their size is an excuse for them to not remove and prevent CSAM.

Couldn't you just use multiple classifiers? Like an "is a minor" classifier coupled with an "is sexual content" classifier?
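A minimal sketch of that composition, where both signals must be confident before anything is flagged. The classifier stubs, field names, and thresholds below are hypothetical stand-ins for trained models, not a real moderation pipeline:

```python
# Hypothetical two-classifier composition. Each stub returns a
# probability; in practice these would be trained models.

def is_minor_score(item):
    # Stand-in for an age classifier's output probability.
    return item.get("minor_prob", 0.0)

def is_sexual_score(item):
    # Stand-in for an NSFW classifier's output probability.
    return item.get("sexual_prob", 0.0)

def flag_for_review(item, minor_thresh=0.8, sexual_thresh=0.8):
    """Flag only when BOTH classifiers are confident; borderline
    cases would go to human review rather than automatic action."""
    return (is_minor_score(item) >= minor_thresh
            and is_sexual_score(item) >= sexual_thresh)

print(flag_for_review({"minor_prob": 0.95, "sexual_prob": 0.91}))  # True
print(flag_for_review({"minor_prob": 0.95, "sexual_prob": 0.05}))  # False
```

The hard part, as the replies note, is not composing the scores but training and validating the classifiers themselves, since either one alone produces many false positives (family photos, medical content).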

How would you test that that works?

There are databases of known child porn available for this kind of work.

> Sama (the outsourcing company)

If script writers gave the company this name in a fictionalization it would be rejected as too on the nose.


Isn't it more that tech companies are just higher profile and more integral to the political and social landscape than older companies; but, reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism?

They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.

Yes and no.

Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.

I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.

I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.

I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.

Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.


Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)

But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.

Sam Altman's not a fascist; he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM reflects poorly on Altman, as do the accusations from his sister, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and responding poorly.

In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of this turn to fascism; Zuckerberg's always been like this, and the actions taken against him were clearly not enough to keep the company culture from following his precedent.


> Companies of the 20th century certainly weren't more ethical.

Disagree, not on average. There was a non-trivially higher % of decisions made based on "what's good for the customer" or "what's good for the product" or "I would be ashamed to do this" and a lower % of decisions made based on "what maximizes profit in the next quarter". I think that is more ethical. To take it to an extreme, using slave labor because it's good for the customer is more ethical than using slave labor to maximize profit in the next quarter.


Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.

There's a name for these people, glassholes

[flagged]


> It is not up to you to deprive anyone their right to use them.

I don't see anyone saying that people don't have the right to use them. I see people saying that they have the right to avoid being anywhere near the people who use them and to disapprove of those people. Which is just as much of a right as the right to wear spy glasses.


I'm glad to see opinion seems to be swaying back in this direction. It was only a few months ago that the general sentiment seemed to be "times are different than the glasshole days, it's fine now."

It is unfortunate that a large number of users here are not hackers, not even in an idealistic philosophical sense, and will betray the public good for their own short-term gain.

>I don't think that's fair. Smartglasses have legitimate purposes.

I think that's true in principle, but in practice there are going to be two kinds of smart glasses users: extraordinarily annoying kids or young adults acting annoying in public so they can post videos to social media, and then normal people who have no clear sense of how much they're violating the privacy of those around them, and just like cool tech.

Very, very few users are going to be an interesting or valid use case -- eg: someone who is using them to assist with a disability, or for research, or something.

Even most dash cams don't stream to Meta -- they just record the last _n_ hours and you need to know to save off the video if you're in a crash / incident. In other words, most of the time no privacy is violated, and the only potential privacy violation occurs during an incident.

Even police body cams, which I wholeheartedly support, have some pretty strong downsides: currently, if you're at the end of your rope, having the worst day of your life, and in your dishevelment turn a speeding ticket into a BATLEO, you're famous forever for being a lunatic. Maybe the rest of the time you're a good person, and you can learn from this and move on. Except now you have a permanent albatross around your neck. This is a secondary penalty that the justice system did not intend, and has no answer for.


I saw there is at least one company working on offline smart glasses for disabled users. I don’t have such a problem with this, and I wonder if the industry as a whole could be nudged in this direction. Offline glasses seem more ok to me.

It makes a lot of sense for actual accessibility devices to be offline-capable. You don’t want to lose your “sight” when you step into a metal building or elevator.


Another bad thing about the privacy invasion glasses is that they’ve added a stigma to these potentially-useful offline ones.

On vacation, smart glasses are great at translating signs, historical plaques, and even ancient inscriptions on walls. (That last one surprised me.)

For parents, smart glasses are awesome: no need to pull out a phone to take a picture. No need to view the world through a phone screen.

They are also useful as being regular BT headphones as well. Podcasts while walking w/o tiny earbuds to lose.


Gross to all three. Slight convenience at the cost of full surveillance. No deal.

> full surveillance

You realize smart glasses have a battery that allows for all of 15 to 20 minutes of recording, right?

Hell just turning on wake word detection for asking it questions murders the battery life and it is one of the first things people turn off.

The phone in your pocket reports your position to multiple ad agencies throughout the day. Stores track individuals' movements throughout their buildings and see what aisles people linger at.

15 minutes of video recording via glasses (versus on a smart phone, or go pro, or drone) is not some huge mass surveillance issue.


That's all pretty cool, but unfortunately the trade-offs do not justify it.

> Very, very few users are going to be an interesting or valid use case

You then list a mere two categories.

Would your argument have been similar in 2008 if told that in ten years, everyone in the economic first world would be carrying multiple cameras including a dedicated "selfie" camera at all times?


You say that like it's assumed that ubiquitous smart phones were obviously a good thing, when it sure seems like there's an increasing number of people questioning that assumption.

This is a specious argument because selfie cameras are not pointed at people and on at all times, recording whoever somebody is looking at.

None of the cameras you're mentioning are pointed at people all the time.

When you are wearing Meta glasses, they are.


I'm not sure I understand the point about a dedicated "selfie" camera; however, I think we're conflating "percentage of users" with "varieties of use cases." I think there could be quite a cornucopia of potential use cases, but I think per capita most people will not actually be making use of these. As other commenters have pointed out, I'd be a lot more tolerant if the data were not constantly piped to Meta.

The point about a dedicated selfie camera was that in 2008, few would have considered taking selfies to be a major use case that would drive >90% of teens and adults to have a camera which has no other reasonable purpose. In the age of FaceTime calls, it would seem absurd to question why it's needed, but nothing like that was mainstream in 2008, which would lead to the same argument of "there are very few legitimate reasons to want such a camera (and it will enable creepshots)".

My wider point is that there are already many obvious use cases, and as adoption of cameras which are always on or plausibly always on rises, there will be a lot more, including augmented reality, translation, context hinting, AI agent awareness for assistants and personal security, and at least dozens of others, some of which I am sure no one has started building for, yet.

Meta is probably not the winner in this space (or, I hope not, at least, so we agree there!). However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.


Thanks for clarifying, I appreciate it. I'm so burnt by the potential downsides (and by the last ~19 years of smartphones) that I don't think we can see eye to eye, but I really appreciate you taking the time to expand on your point so I could understand your perspective.

> However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.

If that's what we were talking about, I'd be much less bothered. But it's not. What we're talking about is people recording others and feeding that data to a third party.


would your argument be similar if I told you that everyone in the economic first world is influenced by signals beamed down from space?

no? You think what I wrote is just a scary way to frame GPS? Maybe that's because you're part of the conspiracy!


"secretly"

I can't deprive someone of their right to use them, but I can refuse to interact with someone who's wearing them. This seems like a fair natural consequence. Feel free to wear them, but I won't speak to you when you do.

So happy to live in Germany. I couldn’t care less if your gadget can be useful in some cases. I don’t want it close to me

dash cams are local and pointing at the road, not everywhere.

body cams are local and mostly used by law enforcement to guarantee they are not abusing their power.

glassholes are connected to the cloud. you may have the right to record in public space, I have the right to remain anonymous in the crowd and not be constantly targeted by an advertisement company.

Even if 1% of the corner cases are legit uses (blind people having the glasses describe the world around them is fantastic), 99% of the people using them are assholes that deserve to be put in the ground and the glasses smashed.


Yes, and those blind people are easily recognised; I'm sure there will be a lot more understanding of them using such products.

What are the reasonable and legitimate uses of smart glasses with cameras that can record without the subject being aware?

I am blind, and I could imagine several usecases which would make my life a lot easier by using glasses like this. But because of their reputation I will most likely never use them, and especially not in public. I'm already afraid enough people will think I'm recording them when I use my phone to get info about what's around me, definitely don't need to get punched in the face for wearing meta on my face.

Edit: Not that I would want Meta to get all that data anyway. But even if glasses exist which are more privacy conscious, I think Meta and Google Glass thoroughly ruined the reputation of any kind of wearable like this.


I can imagine there are many use-cases for blind people, but I also think having some kind of visual indicator that "these glasses are recording" would be good, and I don't know what tools you use in public at the moment, but if you use, for example, a white cane, it might help people to understand "this person is using a camera for assistance". But yes, the fact that glasses manufacturers have already demonstrated they want to take every frame of data they can does sour their reputation

They have such an indication already, an LED light on the other side of the frame.

Of course you have to be able to spot that, and trust that it really doesn't record when it's off (note that it may simply be covered by the user).


I seem to recall that when the Snapchat glasses were a thing, they had a very bright and obvious ring of LEDs around the camera itself, bright enough to shine through a sticker placed over them. Sure, there are still ways to defeat that, but it makes it a bit harder.

Also I just googled for what the light actually looks like when it's recording, and it's not even really that visible...


I'm sorry you are dealing with the social repercussions of assistive technology. I really wish companies weren't so gross and that they did not endanger some of the advantages of advances like this by being gross

A parent wanting to record a fleeting moment with their child without the potential distraction of pulling out a phone or other camera.

This alone doesn't outweigh all of the negative uses, but I would argue that it's reasonable and legitimate.


I have 2 kids in single digit ages (1 under 5). I bought meta gen 2 last month and I cannot describe how many sweet moments I have captured. My kid loves to sing while playing with dolls and stops as soon as I flip my phone out to record.

I hope you can appreciate that you're capturing this data for Meta and their contractors, and that they have the capability of doing whatever they want with it. My spouse and I ask everyone taking pictures of our kid to never post them to social media, because Meta et al. create a shadow profile using those pictures, and they can share those photos with contractors and with other people, and we don't want a company like that to have my son's data without his 18-year-old self's consent.

I get this argument and largely agree with it in regards to these Meta glasses. It's why I don't currently use them.

But I'd like to have some smart glasses that do respect my privacy and offer this kind of functionality. Honestly, most of the things smart glasses do today are stuff I'd really like. Having my glasses just be the bone conduction headphones I often wear anyways? Check. Easy access to taking photos and short videos of life experiences? Love it. Integrated into the thing I'm often wearing on my head anyways? Perfect.


If the "subject" is human, those seem rather few. Surgeries come to mind, though smart glasses would be more a convenience there. Maybe some psychiatric patients, where a doctor wants to review snippets of his interactions with lower-level staff or his family members? Law enforcement trying to record interactions between informants and targeted criminals - though the latter might wise up pretty quick. Security staff at some very-high-security facilities.

I already noted it in the answer. If a person feels at risk, or even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public, just as they could with a phone.

Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.

There are dozens if not hundreds of cameras pointed at the street that record people every time they go out in public in any urban setting.


If someone is recording you on video with a smartphone, you are generally aware of it, because it has to be pointed at you. Sure, you have a right to record people in public, there is no reasonable expectation of privacy in a public place, but I would quite like to know if you are recording me. I'm also not terribly worried about people recording me having sex or being naked in public without my knowledge...

> they have a right to record something/everything and someone/everyone around them in public

Subject to local law. It's an offence to make indecent images of children, for example.

However, it is absolutely not the case that Meta has a right to that data, as a data controller under GDPR.

> feels at risk

This is a red flag phrase: it's a justification that people whip out for all sorts of unjustified things up to and including murder.


> Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.

At least this says something about the intention. Someone who films with a hidden phone implicitly shows that they intentionally hid this from the people being filmed.

Filming with glasses is hidden by design. It gives plausible deniability to the person filming, so they can film covertly but pretend they weren't hiding anything.

In most cases this doesn't make a difference but there are some cases where the premeditation can make it worse for the person doing the "abusive" filming.


>> even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public

Big assumption here that the place you're on vacation doesn't have different laws. You may have absolutely no right to record "everything and everyone" around you.


> Or are you new to how phones work?

Ease off the gas


I think the only legitimate use is for spies? And by commoditizing them it makes spies slightly less obvious?

Oh blind people too. That one makes sense.


If you walk up to me and shove a camera in my face I'll get very loud and very angry with you very quickly. That's kind of paradoxical, if you intended the camera to make you feel safer. I don't think I'm in the minority.

I could see an argument being made for smart glasses that keep everything local.

But smart glasses that send everything to The Cloud? Burn them all. Especially if they're from fricken' Meta.


> Smartglasses have reasonable and legitimate uses. People also use bodycams that record continuously, such as for legal reasons. People have a right to record in public, such as if they feel at risk. Are you going to go after car cameras next?

None of those default to sharing your recording with anyone else, let alone with no practical way to opt out.


>It is not up to you to deprive anyone their right to use them.

Why is it a right?

>Are you going to go after car cameras next?

No. A car cannot follow me into a building very easily. It cannot turn as quickly as a human head.

>Any American who has any opposition to public recording is violating the First Amendment and doesn't even deserve to be an American.

lmao


I have no idea why you are downvoted.

I do not want my employees recording their day job and selling it, or the creepy dude next to me in the bathroom filming my goods or the log jam flying out of my butt so Meta can try to sell me Pepto.

I also don't want that one time I did something minor illegal, like jaywalking, getting auto-fed into Palantir so they can ship me to the latest internment camp.

Or someone stealing my biometrics by just walking past me.


Downvoted because I was flippant about the American comment (because it was _insane_)

> People have a right to record in public

I do not want to live in such a dystopian country. No this right shouldn't exist and I'm glad it doesn't in my country.

> If none of this makes sense to you, wait till standalone cameras become much smaller to where they become a smartbutton -- what will you do then?

Why are you against killing? Wait till you don't need to hit them but can accelerate metal pieces at them -- what will you do then?

> Any American who has any opposition to public recording is fighting the First Amendment and doesn't even deserve to be an American.

Anyone who is against X deserves not to be protected by law. "First they came for the communists..."


That seems backwards to me. In your country, if you were to record someone committing a crime against you in public, you’re the one who will go to jail?

Is the law applied equally, so that businesses, police officers, and government agencies are also not allowed to record in public?


> No this right shouldn't exist and I'm glad it doesn't in my country.

Smartphones are illegal in your country? I am skeptical.

The right to record is the right to remember.


Recording people without consent is illegal.

> I wouldn't even recommend being in their presence.

Great! Now do people with smart TVs and people with smart phones


I'll grant you smartphones, but smart TVs usually don't have cameras/microphones. The problem with smart glasses is that they constantly capture video and upload it to $VENDOR like in this case.

Don’t we already hate the invasive ad tech industry?

Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?


Are GoPros acceptable?

I went to the beach, jet skiing. One of the guys had Meta glasses.

I liked the footage.


The problem is there's places where you'd get noticed and probably removed for filming with a gopro, or even a smartphone. My local "wellness center" and pools have you deposit your smartphone before you exit the changing area into the showers.

The danger with creep glasses is that many people don't know what they are, they can be used with the LED disabled so they're perfect for filming people without their knowledge, and "these are prescription glasses" has a good chance of working. In a place with a "no recording devices" policy, "could you put that gopro away" has wide social acceptance/support, "take those glasses off" less so.


I get the problem.

I don't think ostracizing users of Meta Glasses (or Google Glass before that) is the answer.

But I get the problems of hidden cameras.


>I liked the footage.

So did the Meta's LLM training model as well as the contractor across the globe reviewing your footage.


People with GoPros are more likely to send you the footage, resulting in entertainment value.

[flagged]


You're aware of the privacy implications but think people talking about avoiding people who use them are proposing dumb arguments? I don't follow your logic.

I want to get the Oakley Meta ones so I can record bike rides easier, should I not be tolerated?

Wear a GoPro on your helmet like the rest so you can be shunned.

If you insist on the glasses, wear a fake GoPro.


A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.

[flagged]



How is a GoPro better than wearing these glasses while cycling?

People recognize GoPro cameras for what they are. They are easily understood as a camera. Glasshole devices are not as easily recognizable, and people honestly may not realize they are being recorded, especially when the glasshole does not inform everyone they are being recorded.

Now, for your "while cycling" qualifier, why does it matter? Again, if you stop to talk to people while recording and it is not obvious you are recording, you're a glasshole. Personally, I have no experience with camera quality from the devices, but I do know what a GoPro can do. My gut instinct is that the GoPro will be superior footage.


I have a GoPro but it's a bit of a hassle to set up. I tried a chest mount and the angle wasn't great; I think the eye-level view would look better. It's also more convenient to record on glasses, which I'll have to wear anyway.

Yes, I could record while talking to people, but I don't see the point of that; I want to record descents and pretty views.

My main point is someone owning smart glasses doesn't mean they automatically suck and should be ostracized.


It's a big obvious doofus camera vs a tiny spy camera, pretty simple!

No. Fuck off

Also make sure to avoid people with smartphones and places with video surveillance.

Don't let perfect be the enemy of good.

There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.


I think smartphones are a lost cause. Even at the gym, there are guys in the locker room taking pics of themselves in the mirror. Meanwhile I'm walking ass-naked out of the shower. There is just no sensitivity to appropriate time and place anymore.

> Meanwhile I'm walking ass-naked out of the shower

Is this a Western/American thing about having no shame regarding one's body in public places in the presence of other people, be it male or female?

I can never imagine this happening in my country.


Man I don't even want to know how many photos I'm in.

And the people who act annoyed because you are disturbing their film set as if they are James Cameron are the funniest.


Is this an honest argument? Surely you can think of how glasses might be ... in a different league than the two items you mention?

Unless you are using these during sex, I consider a microphone to be 10x more privacy-intruding than a camera.

Security cameras afaik usually don't record audio, but all phones can. And they don't even need to be pointed in any specific direction.


Many security cameras have the ability to record audio. Depending on where you are, it might be illegal to use it. All the cams I have purchased have it. That would include ReoLink and a recommended model from the Frigate site.

Because a person wearing glasses usually can move and video surveillance cameras usually can't? If that's not it then spell it out for me, please. Also, why would I be deceptive in this discussion? I feel like I missed some ideological conflict.

Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.

On a separate note, (and this is a genuine question) are you by any chance aware the term Non-consensual intimate imagery / NCII?

I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.


Someone pulling up a smartphone on me would feel hostile because it's violating a social contract. Maybe I'd feel betrayed and attacked if it turned out someone was recording me using glasses, but I don't know, I don't care about dashcams and this is not that much different. I imagine it feels bad and scary for women when someone takes creepshots of them, and this tech does open opportunities for that. Maybe that would be enough for me to hate glasshats if I had a bit of empathy. But isn't the genie already out of the bottle with 'deep nude' models available for everyone forever?

No, I don't think I've heard of NCII before, and "Trust and Safety" sounds like some corporate PR whitewashing term to me.


1> Genie out of the bottle: Yes and no. Nudification is a growing problem, non consensual intimate imagery is a current problem. AI related tools for image gen still require some amount of skill, and that is reducing its blast radius.

2> NCII: Years ago, I was scoping reddit to identify content that was harmful from an Indian perspective. By far the largest category was NCII. This could range from morphed images, to intimate images reshared, to images from their socials reshared in thirst communities. This included images of underage children.

Removing NCII is rough. First the victim has to be willing to come forward and get over the shame. Then they have to deal with a near impossible system and get someone to help. The more conservative the nation, the less likely the support networks will be forgiving or helpful. Finally, once the data is out there, it’s going to be reflected across multiple sites which are in international jurisdictions.

This is one of the situations where, I fear, your life is simply hosed.

Korea is another country which has a severe problem with NCII, and I believe they even instituted laws against deepfaked porn.

>PR whitewashing: Heh. Well, that's the division that deals with online safety, fraud, content moderation, policy and the rest. I believe eBay was the first firm to use that term when they were handling fraud.


Have you heard the term non consensual intimate fantasies? I've heard it's an even bigger problem.

Well, you would fortunately be wrong. Fantasies are commonplace and well studied in society, psychology and even in the law.

The issue is when you go from fantasy to actually enacting it, which is usually when you earn the epithet of “Creep”.

Also, why make a throwaway for this line? I take it you haven’t heard of NCII?


They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.

Can tell you that my urge to take photos/record drastically dips around other people. Particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for max indiscriminate data collection/processing "gargoyles". Personally I prefer glassholes.

https://www.tabletmag.com/sections/news/articles/the-borg-of...


I admit it's hard to care for what you people can't even articulate

Why don't you state what it is you think isn't clear?

That way we can figure out what's confusing or unclear, and then see if you find it has any moral significance.


Not meaningfully. Anyone holding a smartphone might be recording you. You’d better avoid them if you don’t want to be recorded.

Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.

Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.

The general populace is much more likely to notice the former recording rather than the latter.


I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.

Don’t forget that audio recording is a thing. The camera doesn’t have to be pointed at you to violate your privacy. Plus I bet you walk past 90% (or more) of all cameras without ever noticing them. You only notice someone’s glasses because they are novel, not because they are more likely to record you.

This line makes a valid point. People record strangers all the time. In an obvious way or trying to be sneaky.

Just because you don’t notice it doesn’t mean it doesn’t happen.

However, this is still a different thing than smart glasses which can further be segmented into who designed the smart glasses.


Someone has to hold a smartphone and point it at you.

If somebody was pointing a camera at me all the time? I would definitely avoid them.

People do that on my subway all the time.

It's the camera of their smartphone.

Not sure if it's ON though.


They point the camera of their smartphone directly at you?

At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.

When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing


Usually they are pointed at the ground when they're reading off them.

Mark Zuckerberg and disrespect for user privacy.

Name a more iconic duo.


Whistleblower protection is key for any working society. Only dictatorships and oligarchies protect criminals while shaming whistleblowers.

I do not care which country the outsourcing company is in. When criminals go global, protection for whistleblowers should go global too.


> the content they were paid to classify

  A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.

  Meta says it's because Sama did not meet its standards, a criticism Sama rejects ...

Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.

Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.

I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.

EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable. But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy. Maybe there are laws that could help, but what should be in the laws exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this and who should decide? Personally I don't know.


What makes you think the outsourcing firm didn't raise these concerns in emails or meetings? You think these people wanted to lose their jobs and income? That's irrational.

Why reflexively defend a massive tech corporation caught repeatedly violating the law?


> Why reflexively defend a massive tech corporation caught repeatedly violating the law?

Because it is the natural extension of the quote attributed to John Steinbeck:

> Socialism never took root in America because the poor see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires


There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.

Beautifully and succinctly put.

You would help conceal a crime against the people just because it's good business??

Congratulations, you have a bright future in politics and/or tech CEOing.


More like a bright future as someone's fall guy. Anyone ignorant enough to think a tech giant like Facebook would give a crap about any of those concerns is too politically inept to make it anywhere.

Proactively address the issues? Are you kidding me? This is not an issue that just happened to slip by; it is 100% by design. You're fooling no one.

What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.

Huh. There you go again, thinking everyone else is an idiot. Capture of users' video data by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glasses, ever.

Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.

I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.

If only we had the technology to record video without sending it to Meta's servers.

Main character syndrome? Lots of people seem to act like they are in a 24/7 live stream with 50 million followers.

The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.

The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? There's no evidence of any.


Even if so, it doesn't matter, because 4-8 years later it'll be reversed again. And because it takes longer to rebuild than to dismantle, it will never be the same.

This is the cycle now. 180 degree turns in policy every 4 or 8 years. There's no long term planning.

China and Russia must be enjoying this.


When I worked at a company that was using Palantir's software about 15 years ago, the average Palantir employee I encountered was in their early 20s.

It was almost certainly everyone's first job.

It's not too hard to think of ways to get a bunch of young folks to do your bidding without them questioning the motives behind, or the moral challenges of, the job.


Zuck and his minions are also responsible for https://en.wikipedia.org/wiki/Rohingya_genocide

Your examples pale in comparison.


It's a documentary


It really isn't.

"Poor, dumb people outbreed rich, smart people and make the whole world dumb" is not real. And the mechanism by which our world harms people is not because everybody involved is an idiot. Executives of corporations that are destroying the environment aren't just doing it because they don't know better. Leaders within the Trump admin and the GOP more broadly are often extremely well educated at top universities. Ignorance does not drive our politics. Resentment does.


I agree it isn't a documentary.

However, modern politics of the right absolutely prey upon, and encourage, ignorance. Ridicule of intelligentsia and advanced education (often by Ivy League graduates!) is a key part of the strategy.

That smart people are cultivating an ignorant voting bloc doesn't negate the fact that ignorance is fundamental to the plan.


Sure, the GOP ridicules advanced education.

But Trump went to Wharton and Vance went to Yale. Educated people leveraging anti-intellectualism for political gain is not even remotely the same thing as what happens in Idiocracy.


Physical therapists will be replaced by AI?

Kindergarten teachers will be replaced by AI?

Politicians will be replaced by AI? (they could, they will just never allow it to happen)


Yes, but if you know the general direction of where it's going that reduces the search area quite a bit.

In this case, for example, the French Government publicly announced where it's going.


"Our next-generation AI uses multi-sensor fusion and live sentiment analysis to track military assets to meter-scale accuracy anywhere in the world"

"Upon closer inspection, the neural network is just scraping public information from the French Ministry of Defense"


> Last year this podcast said that nobody wants to solve this because solving it is going to eliminate (IIRC) hundreds of thousands of jobs. Which is a point to consider.

Yet we're ok with spending trillions on AI to eliminate jobs everywhere, including healthcare.

I don't think that's the reason.

Personally, I'm of the opinion that the reason it isn't being solved is that the people whose job it would be to solve it get to keep their jobs thanks to donations from pharma and insurance companies.


Well right, people lobby not to change anything because they have giant companies that make them money. They need all those people in jobs to help them deny claims, identify fraud, waste, etc.


If Intuit and other tax preparers can protect their tax preparation rents at the expense of all income earners, then it is not difficult to believe that the medical industry is also able to protect its own rents.


Feeling very conflicted right now.

On one hand, it'd be absolutely hilarious if they succeeded with this argument. VPN vendors would not find it as hilarious, I bet.

And on the other, the hypocrisy is mind-boggling. I guess you can't blame the lawyers for going after every angle, but this is quite creative.

But really I do just want to find out if money continues to buy justice.

I sincerely hope Facebook loses and is found to have knowingly infringed the copyright of every book in the lawsuit. At $150K per violation, I'd almost feel bad for the poor shareholders. Zuck would probably take full responsibility and fire tens of thousands of workers.
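For scale, here is a back-of-envelope sketch of how that $150K figure compounds. The $150,000 is the US statutory maximum per willfully infringed work; the book counts below are made-up placeholders for illustration, not figures from the lawsuit.

```python
# US statutory maximum for willful copyright infringement, per work.
STATUTORY_MAX_PER_WORK = 150_000

def max_statutory_damages(num_works: int) -> int:
    """Worst-case damages if every work is found willfully infringed."""
    return num_works * STATUTORY_MAX_PER_WORK

# Hypothetical book counts, purely illustrative:
for n in (1_000, 100_000):
    print(f"{n:>7,} books -> ${max_statutory_damages(n):,}")
# 1,000 books would already be $150,000,000;
# 100,000 books would be $15,000,000,000.
```

Even a fraction of the statutory maximum across a large training corpus would be an existential number, which is presumably why Meta is arguing so creatively.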


It's a win-win situation. Either pirates win or Meta loses.


Ordinary piracy would still be illegal since it's not for AI training.

