No need for this kind of attitude here. If you can extract more value out of a customer using more data points, wouldn't you? Or will you simply leave money on the table for someone else to take?
I agree, if you can take advantage of a situation in some way, it's your duty as an American to do it. Why should the race always be to the swift, or the jumble to the quick-witted? Should they be allowed to win merely because of the gifts God gave them? Well I say, cheating is the gift man gives himself.
Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.
I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
Whether they should or shouldn't, you have to expect that your company has root on your work device or at least some sort of corporate admin profile that gives them access to everything on the device and all attached peripherals. This has been pretty standard at IT / tech companies for as long as I've been in the workforce. I personally wouldn't do anything personal on a work computer, from sending personal E-mails all the way up to storing nudes on it. Why do that when a separate personal computer is cheap and solves the problem entirely?
EDIT: I remember, an example of this actually came up a while ago on HN. An Apple employee had to return a device unwiped, due to legal discovery, but the device had intimate pictures on it[1]. Oops! Don't do that, people.
Ask your IT department what they're tracking and they'll tell you. And yet I assume you still continue to go to work and do not actively seek out non-surveilling companies. By "everybody," maybe I should clarify that I mean "the majority" instead.
That's fine but realize you are not representative of the average tech worker or indeed any white collar worker such as those we are talking about in this post.
You're right, maybe they should put cameras in there too. But there's a reason we don't, yet every worker still explicitly or implicitly knows not to use their work computer for personal tasks, as people can and do get fired for doing so.
This is a ridiculous statement. Everyone I know at my company uses work laptops for personal stuff. It's not in the land of freedom though, so great leaders like yourself can't fire people at will.
TBH at this point I don't believe you are a real person.
I stopped doing any personal stuff on a work laptop a long time ago, 10+ years ago. There is absolutely nothing on my work laptop that is not work related. Working from home helps, though; I always have my laptop next to me. Same with the phone: under no circumstances will I do anything work related on my personal phone (and yes, I do have a company-provided phone with MDM etc.).
Consider: do they ever go on explicit websites on that computer? No? Because they know that's surveilled, while a personal computer used for the same purpose is not. As I said, people do know the difference and might do light personal things like googling something unrelated to work, but wouldn't do e.g. banking on a work computer. If they do, well, it'll be their fault if they ever get fired for doing so.
The fact that you believe people who don't share your opinion on mixing work and personal stuff are somehow not "real" is part of the problem.
The semi-official policy of my employer in Denmark is you can watch porn on a work computer, so long as you're paying for it. (This reduces the risk of malware etc.)
I say semi-official because someone asked the question at a Q&A training thing with IT, and that was the IT manager's response.
> Limited private use of these tools is often permitted, generating a level of expectation by employees for privacy: employers should not routinely read employees' emails or check what they are looking at on the internet.
Most companies just don't have a reason to look through the computer they're letting you use to do your job. Don't give them a reason.
Maximizing shareholder value by observing you doing your job, in the pursuit of replacing you with a very small shell script, is a great reason that they've just discovered.
Get your own laptop, pay for your own cellphone, use your own internet service, etc. If you create anything of value on their property or with their property or during times they're paying you in any capacity, expect them to use it for profit.
Exactly, no one is stopping one from using their personal devices for any personal purpose, and the fact that somehow people are defending wanting to do personal things on a work laptop is utterly baffling to me. Like another commenter said, I always grew up with the notion, legal and social, that a company laptop is absolutely not your property and companies can and will look through it. Use your own devices for your own tasks.
But the legal notion from where you grew up might not apply worldwide right? People aren't saying you are wrong, they are just saying things are different in other places.
Where I grew up you do have legal right and social expectation not to be under surveillance at work. You even have an expectation of privacy in public spaces - I know this is not the case in other countries, but I accept/know that and it would be senseless to imply this is expected everywhere.
> I mean I have my own laptop and phone, why would I use a work device for that stuff?
Because you're traveling for work, and carrying two separate laptops eats into your limited baggage size/weight. Things are marginally better now that everything uses the same standard charger, but not much.
I make it a point to use the office bathrooms only to excrete food I ate from the work cafeteria. Personal food I ate at home I excrete in my personal bathroom.
It might surprise you, but culturally, not all companies are this way. I know some are, but some are very different.
100% of the people at my company use their computer for personal tasks, and this is permissible under our policies. Our company is fully BYOD and owns zero computers, and zero cell phones.
That sounds like a truly dystopian take to me, but suppose you're right and nobody should ever use their work computer for anything personal.
Per TFA, this thing is literally taking screenshots of what is on the employee's screen. At work my screen sometimes has things such as: performance data on other employees, my own PII from HR systems, PII from customers, password managers, etc. It's also logging keystrokes. How many times do you type a password a day?
Collecting that kind of information on purpose is truly wild. Imagine the security safeguards you would need just to prevent it from leaking. Wait what, they're explicitly collecting it to train LLMs with it? God help us all.
Your screenshots go to your managers, not just anyone in the company. At Meta there are very strict safeguards for preventing employees from e.g. stalking their exes, so I'd assume the same security is applied even to PII-filled images.
I'm pretty surprised you're getting so much flak for this. This is the least controversial opinion I've seen on HN. I've been working for ~30 years, and at every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work and personal devices were for personal stuff. It wouldn't even occur to me to cross the streams. Carrying a second phone for personal stuff is a trivial burden.
I'm also very surprised, so much so that one of my comments got flagged for it. It seems to be a few dissenters, while others have concurred; I too have always been under the impression that work hardware is for work only. And then some people are talking about how it's authoritarian or anti-human, like, it's not that deep.
> every job I've had, if you actually looked at the IT policies, they were all very clear that work devices were for work, personal devices were for personal stuff
There's quite a difference between that and zero privacy, and there's also quite a difference between "IT policy says" or "the law permits" and "this is how things ought to be".
That said, between necessary endpoint security and the potential to get caught up in corporate legal disputes I feel like maintaining a strict separation is advisable. But that doesn't mean I support unnecessarily invasive surveillance or think it's a good thing.
You already do and your consent is part of your employment. Check your employee handbook, search for things like "data privacy" and understand how https://www.copyright.gov/circs/circ30.pdf applies in the modern world, especially around AI. TL;DR companies can do whatever they want with your work / observe you and you have no real meaningful recourse.
/facepalm If we're going to debate norms and ethics, sending one liners into cyberspace won't get far. There are better ways. Invest in your conversational skills and listening skills, please. Otherwise you are a moth and HN is a streetlamp.
> Tbh that's to be expected, the work machine is the company's property and there shouldn't be any expectation of privacy.
> I work at a tech firm in India
First I wondered how you could have such low expectations of privacy, then you answered my question. What you need in India is more unionization and a fight against corruption. It is becoming worse here in Europe, but in India you do not have the protections that we have. Without that you will have no rights.
You will have to fight to get rights at your job, in the same way that Europeans are going to have to fight to keep them.
I am a European in Europe and I expect the same. Why would I assume otherwise? The company laptop is full of spyware, starting from the OS. I have no reason to consider it "mine", and no desire to do so. If I want to do anything private (including things that my company would not like) I can do so from my private devices.
Europe is a big place, but in my area of Europe it is very illegal to monitor employees this way. If you were to be fired for something that illegal surveillance turned up, I would consider it a good thing - with the settlement money you could take a couple years of vacation.
> with the settlement money you could take a couple years of vacation.
In many EU countries, even if privacy protection is strong on paper, the settlement will be so low compared to the US that you won't be able to afford any vacation.
I've never worked a software development job where I didn't have a company-provided machine that I installed Linux on. I installed the OS, I have root on the machine, I wiped it and returned it empty when I was leaving the job.
Lucky you, I guess. In all the companies I worked for, I had a company-provided Windows laptop where the OS was managed by IT. The degree of freedom (e.g. what software I could install, what websites were blocked) varied.
Strong disagree (especially under US law). Consider what this means for union organizing in the context of this 2022 NLRB memo.
> Under settled Board law, numerous practices employers may engage in using new surveillance and management technologies are already unlawful. In cases involving employer observation of open protected concerted activity and public union activity like picketing or handbilling, the Board has recognized that “pictorial recordkeeping tends to create fear among employees of future reprisals.”[10] The Board accordingly balances an employer’s justification for surveillance “against the tendency of that conduct to interfere with employees’ right to engage in concerted activity.”[11] In that context, “the Board has long held that absent proper justification, the photographing of employees engaged in protected concerted activities violates the Act because it has a tendency to intimidate.”[12]
> A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
I know you’re in India, but in the US, could this not be considered a violation of the “right of publicity”? Your persona and working style are among the core values you bring to market; building a simulacrum of that is not something I expect to be covered by the “your output is the company’s IP” clause in an existing contract.
I will give a company the right to try to reproduce my output. But my very likeness and modus operandi? No.
Here's how refusing to let them do whatever they think would maximize shareholder value with your output, or with any data they collect from your company computer, would actually go down: the company would do something you didn't like, you'd try to complain about it, and HR would listen and document everything. In the best possible case, they'd let you personally opt out. More likely, since you're probably very easy to replace in their minds, they'd refer you to the data privacy clauses in the acceptable-use section of the employee handbook, maybe reference the notice sent to everyone about how they're doing this, then fire you for performance reasons a few months later. You'd be given an NDA and a very average severance. Then you could try to hire a lawyer (who would take at least a third of any pre-tax settlement) and fight them, in which case they'd settle for more or less the same as the severance package (and keep in mind both that and any court settlement are taxable income, so you're not getting a windfall in any case), or you'd just sign the NDA and take the severance, with no admission of wrongdoing on their part and no legal recourse.
Large companies employ entire orgs of lawyers who specialize in these matters, and it is literally their job to protect the company, not the employees, from lawsuits like this. Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside in fighting it? Of course. Welcome to America, land of the free for corporations: legally people, just ones with infinite lives who cannot be arrested or imprisoned, yet can make legal decisions and cannot be subpoenaed. See e.g. https://www.theverge.com/policy/886348/meta-glasses-ice-doxx... for how the C-suite thinks about this type of thing.
> Is it fully legal and in the clear? Probably not. Will they still 100% get away with it and leave employees with no realistic options or upside attempting to fight it? Of course.
I am aware of "how the C-Suite thinks about this type of thing", but this is also a good example to surface here of what to redline in future employment contracts. Yes, that will likely shut you out of a lot of places, but the opposite is beyond learned helplessness: it is capitulation to a future that will not end well for the tech worker.
Wait, so the engineers doing novel work are ousted; you fire the engineer who had the skill set to produce the work in the first place? Surely this creates a Stasi-like neighbor-snitching environment with a chilling effect, where the better you do, the faster you become a target for replacement by engineers incentivized to win points by replacing you. Even being very charitable, where the scenario is that the code the employee works on is so entrenched in domain knowledge that they've become a huge bus factor, an LLM is going to make that kind of code worse. I'm struggling to imagine the subset of people this replaces that is not a long-term detriment to everyone working there. Those people became "key personnel" for a reason, no?
Not to disagree with you here, because I wholly support this position. But I can see the problem from both angles. The problem, it seems to me (and I'm not sure which came first), is that employees started being reckless at work, probably because employers stopped caring about the treatment of their workers, which ramped up the vicious cycle to where we are now.
I can see an argument for companies not trusting their employees, because most employees harbor borderline corrupt thinking in their workplace and have terrible work ethics. Of course, all of this is brought on by corporate culture, so it's their fault in the first place, but I'm not exactly sure what started where.
If "most" employees are corrupt and have terrible ethics, why is the company hiring them in the first place? I don't think I've ever worked anywhere I thought that a majority of my coworkers fit this description. This sounds pretty much identical to what the parent commenter said: it's a hiring problem. Either the company is bad at hiring people who don't have these traits, or they're actively selecting for them.
Just speculating, but the intention wasn't reducing key personnel risk. It was so that your employer could fire them and replace them with an agent running off of their associated skills.md.
Well, no, there should be an expectation of privacy; an employer shouldn't just be able to have a palantír for their employees.
>I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.
Okay, now this sounds like satire. But I suppose that's the way the world is going.
At the risk of sounding like an LLM, a laptop is not just "something you get at work", it's literally your work tool. If you were hired at Shit Producers Inc as a defecator, you'd damn bet they would surveil the bathroom stalls there.
The proposed industry solution is to use agents to review PRs, as not to slow down the velocity of delivery...
My current workplace is going through a major "realignment" exercise to replace as many testers with agents as humanly possible, which proved to be a challenge when the existing process is not well documented.
The fact that anyone in leadership would ever think this is even remotely possible - given my experience in the general state of requirements / contracts / integrations / support - makes me bleed from my earholes just a little bit.
It's starting to just feel a little like an excuse to call everyone on deck for "a few weeks trying 9-9-6". But even then the lack of traction isn't between the eyeballs and the deployment. You'll still be spinning wheels in that slippery stuff between what a customer is thinking and what the iron they bought is doing.
So you essentially trust the output of the model from beginning to end? Curious to know what type of application you're building where you can safely do that.
Edit: to clarify, I know these models have gotten significantly better. The output is pretty incredible sometimes, but trusting it end to end like that just seems super risky still.
I did an experiment today, where I had a new Claude agent review the work of a former Claude agent - both Opus 4.6 - on a large refactor on a 16k LOC project. I had it address all issues it found, then I cleared context, and repeated. Rinse and repeat. It took 4 iterations before it approached nitpicking. The fact that each agent found new, legitimate problems that the last one had missed was concerning to me. Why can’t it find all of them at once?
It is more like a wiggly search engine. You give it a (wiggly) query and a (wiggly) corpus, and it returns a (wiggly) output.
If you are looking for a wiggly sort of thing 'MAKE Y WITH NO BUGS' or 'THE BUGS IN Y', it can be kinda useful. But thinking of it as a person because it vaguely communicates like a person will get you into problems because it's not.
You can try to paper over it with some agent harness or whatever, but you are really making a slightly more complex wiggly query that handles some of the deficiency space of the more basic wiggly query: "MAKE Y WITH NO ISSUES -> FIND ISSUES -> FIX ISSUE Z IN Y -> ...".
OK well what is an issue? _You_ are a person (presumably) and can judge whether something is a bug or a nitpick or _something you care about_ or not. Ultimately, this is the grounding that the LLM lacks and you do not. You have an idea about what you care about. What you care about has to be part of the wiggly query, or the wiggly search engine will not return the wiggly output you are looking for.
You cannot phrase a wiggly query referencing unavailable information (well, you can, but it's pointless). The following query is not possible to phrase in a way an LLM can satisfy (and this is the exact answer to your question):
- "Make what I want."
What you want is too complicated, and too hard, and too unknown. Getting what you are looking for reduces to: query for an approximation of what I want, repeating until I decide it no longer surfaces what I want. This depends on an accurate conception of what you want, so only you can do it.
If you remove yourself from the critical path, the output will not be what you want. Expressing what you want precisely enough to ground a wiggly search would just be something like code, and obviates the need for wiggly searching in the first place.
What an incredibly diverse and inclusive UI design. I often find that Indian mythologies tend to be overshadowed, but with the advent of AI generated art and media there's been a resurgence of Indian-centric stories.
The internet being flooded with AI slop masquerading as devotional artwork has been among the most depressing things about GenAI. It has no meaning or intention or devotion behind it; it’s just engagement farming. Nothing of value is added by having Devi with extra fingers on each hand and completely blurred messes for all the items in her hands, or pictures of Rama shooting a bansuri out of his bow. It’s just tripe. We could have told the stories with an overlay of open source artwork from Raja Ravi Varma or Gita Press or old Tanjore paintings or Chola bronzes or whatever, if we couldn’t afford to hire an artist who knows what items Vishnu is supposed to be holding in each hand.
It’s not a problem just for us Hindus either. I see so much terrible Jesus/angel “artwork” everywhere. It makes me start to wonder if maybe the Wahhabis were onto something with their complete taboo around depictions of God or the prophets.
> Nothing of value is added by having Devi with extra fingers on each hand and completely blurred messes for all the items in her hands
South Asian religions are in an especially bad position because so many works related to them have never been digitized (and quite frankly, in some cases what's available on the internet is of extremely low quality) [1]. I'd be pretty concerned if someone were to rely entirely on these models, since the probability of hallucinations (or at the very least, erasure of regional/ideological diversity) probably skyrockets because the information was never actually there in the training data to begin with.
Some references have been digitized but “Hinduism” is a broad collection of religious traditions with many different stories and folk practices and depictions of various deities and tales. Many of the depictions are considered “valid” only in the specific context of a particular temple or for a specific community and it becomes completely nonsensical once you start randomly jumbling up elements of all the Gods from across all of India over all of time.
I've been using GitHub for over 2 decades now, so it's not that I don't still love them. I just worry that GitHub will become just another arm of Microsoft's AI strategy. It FEELS like the platform is being reshaped around monetizing AI rather than serving developers, but that's just my opinion.
I mean no offence to anyone, but whenever new tech progresses rapidly it usually catches most people unaware, and they tend to ridicule it or dismiss the concepts it's built on.
ai is actually useful tho. idk about this level of abstraction but the more basic delegation to one little guy in the terminal gives me a lot of extra time
That would be the dream. It has been stated by many AI leaders that AI is the key to UBI; democratising that capability will prevent specific monopolies from having a stranglehold on our future.