I had a puzzle game where all of the solutions it would show were playbacks of my keypresses as I solved it myself. As the puzzles got more difficult, it got harder and harder to record a solution without pausing to think about what to do next.
I don't mean this in an "I know better" way, just genuine curiosity: why couldn't you record a solution with pauses and then strip them from the replay file?
I find the notion odd that this is even a problem to be solved.
It suggests a level of control way below what I would ordinarily consider required for game development.
I have made maybe around 50 games, and I think the level of control over time has only ever gone up: starting at "move one step when I say", to "move a non-integer amount when I say", to (when networking comes into play) "return to time X and then move forward Y amount".
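For the replay problem specifically: once inputs are recorded against game ticks instead of wall-clock time, stripping out the thinking pauses is a small post-processing pass. A rough sketch in Python (hypothetical, not from any particular engine; the event format and the 30-tick cap are made up):

    # Replay events keyed to game ticks; stripping pauses just re-times them.
    from dataclasses import dataclass

    @dataclass
    class InputEvent:
        tick: int   # game tick at which the key was pressed
        key: str    # key identifier (made-up format)

    def strip_pauses(events, max_gap=30):
        """Re-time events so no gap between consecutive inputs exceeds max_gap ticks."""
        out, prev_orig, prev_new = [], 0, 0
        for ev in events:
            gap = min(ev.tick - prev_orig, max_gap)   # cap the thinking pause
            prev_orig, prev_new = ev.tick, prev_new + gap
            out.append(InputEvent(tick=prev_new, key=ev.key))
        return out

    # A 600-tick pause before the last move collapses to 30 ticks.
    recorded = [InputEvent(10, "left"), InputEvent(25, "up"), InputEvent(625, "right")]
    print(strip_pauses(recorded))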
Not an expert in the field, but it seems to me the key points are:
Generating any wavelength. (this article)
Accurately measuring wavelength. (otherwise there's no information benefit to arbitrary wavelength generation)
Wavelength-insensitive holographic gates (ones that work at that frequency and don't change it). I don't know what properties such devices currently have.
Assuming all of those, your ability to compute scales with your ability to distinguish wavelengths.
You could theoretically calculate much more in a way you could never detect, but then you get into some really interesting tree falling in a forest issues.
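To put a rough number on that (every value below is an illustrative assumption, not a measurement):

    # If your usable band and measurement resolution give N distinguishable
    # wavelength bins, each symbol can carry log2(N) bits. Numbers are made up.
    import math

    band_nm = 100.0        # assumed usable bandwidth
    resolution_nm = 0.01   # assumed smallest distinguishable wavelength difference

    bins = int(band_nm / resolution_nm)   # 10,000 distinguishable wavelengths
    print(f"{bins} bins -> {math.log2(bins):.1f} bits per symbol")  # ~13.3 bits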
It would be interesting to see what regulations and ethics rules this comes under. Frequently these sorts of rights can be signed away in the US, but there are also academic bodies with their own rules that might have a say.
Sort of. I'm hesitant to say surveillance, since it's less 'running spyware' or something similar and more 'tracking student commit history'; where it gets weird is this section of the paper:
> In our system, the Makefile or Project file that compiles the project contains Git commit and push commands to automatically commit changes into the student repository. Using this system, changes are tracked every time the project is compiled. When a student modifies a source file as a part of the program-build-test-debug cycle, the Makefile commits and pushes the recent changes into source control. This creates a fine-grained sequence of commits that tell the story of how the program was developed.
They basically force-commit to your repo whenever you build your code, so they are able to 'track' your development?
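The excerpt doesn't reproduce the actual Makefile, but the mechanism it describes amounts to something like this build wrapper (a hypothetical sketch; the build command and commit message are assumptions):

    # Build, then snapshot the tree into git, whether or not the build succeeded.
    import datetime
    import subprocess

    def build_and_track(build_cmd=("make",)):
        subprocess.run(list(build_cmd), check=False)       # normal student build
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        subprocess.run(["git", "add", "-A"], check=False)  # stage everything
        subprocess.run(["git", "commit", "-m", f"auto: build at {stamp}"], check=False)
        subprocess.run(["git", "push"], check=False)       # push to the course remote

    if __name__ == "__main__":
        build_and_track()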
>They basically force-commit to your repo whenever you build your code, so they are able to 'track' your development
I think there is a difference between accessing a record that someone has chosen to make, and causing a record to be made.
I think that should be the distinction between search and surveillance. I think both need regulation, but surveillance should require a higher standard of regulation.
Yeah, this makes sense to me; it would be different if they were just analyzing student commit history, but 'hiding & executing code' is conceptually dangerous and facilitates their surveillance, even if they might say "it's just git commit & git push".
The shovels and labour used to make those things were not depreciated.
The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon the technologies that the previous generation developed.
Calling the GPUs the shovels is bonkers because a) shovels are cheap, GPUs are not. And b) when you build a bridge the bridge doesn’t need shovels to be passable. Without GPUs, the datacenter is useless, the model is useless, etc.
If anything, the GPUs are the steel that the bridge is made of. Each beam can be replaced, but if too many fail the bridge is impassable. A bridge with a 6-year lifespan for each beam is insane.
You’re taking the metaphor way too literally. The people who made the most profit weren’t literally selling shovels, they were the ones providing logistics and support services to the gold miners, like hauling tons of equipment over tens of miles of mountain or providing the sales channel for the gold. They siphoned off most of the profit from the ventures that depended on them (like LLMs depend on GPUs) because the miners had no other choice, to the point where even the most productive mines often weren’t profitable at all.
A less literal example is the conquistadors: their shovels were ships, horses, gunpowder, and steel. You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures. I.e. the cost of a ship capable of a cross Atlantic voyage going from 100k pieces of eight to over a million in the span of only a few years (predating the treasure fleet inflation!)
Gold rushes create demand shocks, and anyone who is a supplier to that demand makes bank, regardless of whether it's GPUs or "shovels".
> You can look at Spanish records from the Council of the Indies archive and any time treasures were discovered, the price of each skyrocketed to the point where only the wealthiest hidalgos and their patrons could afford to go on such adventures.
Today this is real estate. And it's something people keep forgetting when arguing that ${whatever breakthrough or just more competition} will make ${some good or service} cheaper for consumers: prices of other things elsewhere will rise to compensate and consume any average surplus. Money left on the table doesn't stay there for long.
GPUs don't really have six year lifespans, though. The hardware itself lasts far longer than that, even hardware that's been used for cryptomining in terrible makeshift setups is absolutely fine for reuse.
Each of these GPUs pulls up to a kilowatt of power. The average commercial power rate is 13.4 ¢/kWh. That means running a single H100 full tilt 24/7 costs on the order of $1,100 in power per card per year.
In three years, the current generation of GPUs will be 50% or more faster. In six years you're talking more than 100% faster, for the same energy cost.
If you're running a GPU data center on six year old GPUs, your cost to operate per sellable unit of work is double the cost of a competitor.
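Back-of-the-envelope version of the above (at a full kilowatt the power figure comes out slightly higher, around $1,170/year; the 2x six-year speedup is the assumption from this comment, not a benchmark):

    # Power cost per card, and power cost per unit of work, old vs new generation.
    RATE_USD_PER_KWH = 0.134   # average commercial rate cited above
    HOURS_PER_YEAR = 24 * 365
    CARD_KW = 1.0              # assumed draw at full tilt

    annual_power_cost = CARD_KW * HOURS_PER_YEAR * RATE_USD_PER_KWH   # ~$1,174
    speedup_new_gen = 2.0      # assumed six-year generational speedup

    print(f"annual power per card: ${annual_power_cost:,.0f}")
    print(f"power cost per unit of work, old vs new: "
          f"${annual_power_cost:,.0f} vs ${annual_power_cost / speedup_new_gen:,.0f}")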
One thing I am not entirely sure about is whether there will be huge efficiency gains. Just looking at TDP, that is, the power consumption of, say, the 3090 and the 5090, the increase is substantial; compare that to performance and the performance lift stops looking that great...
A 3x increase in compute for a 1.5x increase in TDP is pretty good considering the underlying process has barely changed. In any case, consumer GPUs aren't a good metric, as they operate under different economic constraints.
H100 to GB200 saw a 50x increase in efficiency, for example.
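The implied perf-per-watt step, using just the figures claimed above:

    # Claimed generational figures from this comment, not benchmarks.
    compute_gain = 3.0
    tdp_gain = 1.5
    print(f"perf/W improvement: {compute_gain / tdp_gain:.1f}x")  # 2.0x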
Fair, I was hand waving to make a point. “If it generates more than $1100 + (resale price * WACC) + opportunity cost from physical space/etc” would have been more accurate.
But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
If my data center sells a pflop at $5 because of our electricity use and the data center a state over with newer GPUs sells it at $2.50/pflop, it doesn't matter how much economic benefit it generates, my customers are all going to the data center a state over.
In the context of a datacenter running AI workloads, it's cheaper to replace GPUs after a few years with faster, more energy-efficient ones, because power cost is a major factor.
"Inference consumes 60–90% of total AI lifecycle costs." So shovel is not the right analogy, more like GPU = coal burning engine. And yes, coal was a big railroad expense, more so than financing construction debt.
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
Karpathy - and others - consider the pre-training knowledge as much a liability as an asset. If we could just retain the emergent reasoning and language capability without the hazy recollections the models would likely be stronger.
I would prefer to have a plumber with some kind of reference that doesn't just make shit up 10% of the time -- plumbing mistakes are insanely costly (I once owned a house that was destroyed by a plumbing mistake made by a previous owner).
You have to go in with your eyes open with SBCs. If you have a specific task for it and you can see that it either already supports it or all the required software is there and just needs to be gathered, then they can be great gadgets.
Often they can go their entire lifespan without some hardware feature being usable because of lack of software.
The blunt truth is that someone has to make that software, and you can't expect someone to make it for you. They may make it for you, and that's great, but really if you want a feature supported, it either has to already be supported, or you have to make the support.
It will be interesting to see if AI gets to the point that more people are capable of developing their own resources. It's a hard task and a lot of devices means the hackers are spread thin. It would be nice to see more people able to meaningfully contribute.
Judging by the relative scarcity of instances like this being reported, I would guess that they are successful enough to be an ongoing source of intelligence.
Apart from the subject matter (which also points to Russia), if it were not Russia doing this successfully, Russia would be motivated to do it at a much larger and more obvious scale.
Fake news is obvious and pervasive, because it is trying to be obvious and pervasive. The goal is not to make people believe the falsehoods (although they occasionally have luck there) but to make people doubt the truth.
If impersonating reporters were not working for them for intelligence gathering, or they knew someone else was doing it, I think they would apply some of their misinformation resources to massive, wide-scale, obviously bad impersonation of reporters. It would create an atmosphere of suspicion that would dry up sources everywhere.
In RPN notation you just put the input on the stack, right? The encodings seem like they could get pretty big, and encodings certainly wouldn't be unique, but you should be able to encode pretty much any constant you could think of.
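Roughly what I mean, as a toy sketch (the evaluator and the example encodings are made up):

    # Minimal RPN evaluator: the "input" is whatever numbers you push on the stack.
    def eval_rpn(tokens):
        ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
               "*": lambda a, b: a * b, "/": lambda a, b: a / b}
        stack = []
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
        return stack.pop()

    # Two of the many non-unique encodings of the same constant:
    print(eval_rpn(["6", "7", "*"]))    # 42.0
    print(eval_rpn(["84", "2", "/"]))   # 42.0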
How can you hope for anything better if you consider it an us versus them situation? When they say "We don't want to increase inequality" and the response is "We don't believe you", where do you go from there?
It seems like a lot of people want a revolution so that they can rotate who will be able to take advantage of the vulnerable.
What are the suggestions for something better? I don't see a lot.
I'd like to see more suggestions of how things could work.
For example:
The Government could legislate that any increase in profits that is attributable to the use of AI is taxed at 75%. It's still an advantage for a company to do it, but most of the gains go to the people. Most often, aggressive taxation like this is criticised on the basis that it will stifle growth, but this is an area where pretty much everyone is saying it's moving too quickly, so that's just yet another positive effect.
> When they say "We don't want to increase inequality" and the response is "We don't believe you", where do you go from there?
The response is "we don't believe you" because their actions show that they are hellbent on accelerating inequality using AI and they have offered absolutely no concrete plan or halfway convincing explanation of how, if their own predictions of AI's future capabilities are correct, we're supposed to go from here and now to a future that isn't extremely dark for the vast majority of humans on Earth (to the extent that said humans continue to exist).
The work they have done in this direction so far is not serious, so it's not taken seriously. They obviously care much more about enriching themselves than slowing or reversing current trends.
If they want to be taken seriously, maybe they should start acting like they're serious about anything besides their own wealth and power. And I do mean acting: they need to show us through their actions that they are serious.
Seriously. They can say they want to share their gains all they want, but I don't see them spending any lobbying money on things like universal income (and if Altman can afford to lobby for age verification laws, he can certainly afford to lobby for things that actually benefit society). The reality is they don't lobby for anything that would take wealth away from them, and any redistribution of wealth (such as a 75% tax rate) would by definition take wealth away from them.
You can, but then what? Do you judge what they say as if their perspective is the same as yours, and then conclude from that context that what they suggest could only come from an evil person? That seems to be what a lot of people do. What if they actually think what they are suggesting is the best thing for the world? How can you tell what is in their minds?
Alternately you could criticise their arguments instead of the people, and suggest an alternative.
I'm also not entirely certain that influencing public policy is something that is inherently bad. I know if I were deaf, I would like to have some influence on public policy about deafness issues.
The idea that we cannot possibly use people's actions to judge them is ridiculous. Musk thinks that the world would be a better place if the races were separated and if all charitable giving was ended. I think that's monstrous.
The problem is that people have a million stories to explain the observed actions, most of those stories are bullshit, and people repeating them know fuck all about the decision-space in which these actions were chosen and taken.
This is an accidentally good example: we don't know what motivated him, and your ridiculous reason is unsound because it would also be a bad thing to do if he were clearing a wasps' nest on someone else's property in the middle of the night.
I suspect that they are not a bad person but someone radicalised by the media they consume.
Firebombing someone's house is a bad thing to do. It doesn't mean they are necessarily a bad person. Anger and confusion can make good people do bad things.
I don't care if Altman is secretly a good person. I care very deeply that he is taking actions to harm the world in grievous ways and is not doing any visible thing to mitigate the extreme damage he will do.
"Altman is secretly a good guy" doesn't pay people's mortgages.
I doubt it nets positive or even cancels out the damage, but if we're taking a fuller picture, then we also shouldn't assume Altman / other AI company CEOs are "taking actions to harm the world in grievous ways" for shits and giggles, or for a large payday. Despite what skimming HN would make one believe, AI tools are actually useful in science, technology, and all kinds of productive work.
So the silver lining is this: they're not risking burning the world down for porn or bitcoin, but for a general improvement in everything across the board, which happens to have the unfortunate side effect of destroying the value of labor.
I don't think that Altman is a Dr. Evil level villain who just wants to hurt people. I instead think that he does not care about the damage he causes on his path to personal wealth and glory and I think that this is precisely as terrifying. I'm sure that the machines made of my corpse would be used for productive purposes too.
Altman probably won't torture my cats to death. What a guy.
>How can you hope for anything better if you consider it an us versus them situation?
Because it IS an us vs them situation.
They're awfully good at turning it into an us vs us situation, whether it's blaming our parents (boomers), blaming immigrants, blaming Muslims, or (their favorite) blaming the unstoppable forward march of technological progress (e.g. AI).
The media organizations they own are constantly telling these stories because it protects them.
>The Government could legislate that any increase in profits that is attributable to the use of AI is taxed
Nothing a billionaire loves more than misdirection and a good scapegoat. This is why Bill Gates made the exact suggestion you just did.
When THEY are the problem they love a bit of misdirection, especially when the "problem" is a genie that can't be put back in its bottle.
They're terrified that we might latch on to the solutions that actually work (i.e. tax them to within an inch of their life) and drive a populist politician to power which might actually enact them.
That's because my statement wasn't intended to be scientific proof of anything; it was an explanation of the function of the propaganda that got recycled through you, and the intent behind it.
The billionaires could start to earn trust by lobbying for laws and programs that help the poor and displaced. Put money into retraining programs to help people who lose their jobs. So far they seem to be doing the opposite: CEOs are publicly declaring 'fuck you, got mine' and leaving it at that.
Nick Hanauer has lobbied for higher minimum wages.
Michael Bloomberg has lobbied for healthcare.
Pierre Omidyar has spent about a billion on economic advancement non-profits.
Gates Foundation - Bunch of stuff.
Warren Buffet - Too much to count
George Soros - For all the antisemitism, the kernel of truth in the lie is that he spends a lot of money trying to make the world better.
Chuck Feeney gave away $8B; I'm sure some of it went to lobbying for better policies.
A large number advocate for a universal basic income.
More advocate for things that they clearly think are good for the world, even if you, personally, do not.
Jack Dorsey, Reid Hoffman, hell even Elon Musk (he may be wrong about everything, but he's openly advocating for what he believes is good)
Sam Altman has done WorldCoin and is heavily invested in nuclear fusion. You can criticise the effectiveness or even the desirability of the projects, but they are definitely efforts that, if they worked as claimed, would be beneficial.
Many billionaires spend money on non-profits to push for change; often they do not put their name on it because it makes them a target for attack, or simply because, when they openly advocate for something, the lack of trust causes people to assume whatever they suggest has the opposite intention.
I'm not arguing that they are doing the right thing. I'm arguing that for the most part they are advocating for and investing in what they believe to be the right thing. Why treat them as the enemy, when a dialog might cause them to reach common ground about what the right thing is?
>Why treat them as the enemy, when a dialog might cause them to reach common ground about what the right thing is?
People like Elon literally are the enemy. He used his wealth to literally change our government in his favor. The idea that we need to go and have polite discussions to maybe change his mind, while he gets to stomp all over us (his DOGE efforts literally resulted in people dying), is absurd. If a dialog with them was going to work, it would have happened a long time ago, but the more we learn about these people, the more obvious it is that they believe themselves to be smarter and better than the rest of us. They aren't going to listen to others, and pretending that they will seems like deflecting and giving up in advance. Our best hope is that people can get enough power to regulate billionaires out of existence before a revolution does it instead.
Please consider your biases. Musk could not have “changed” the government if the DNC didn’t hand it to Trump on a platter. Republicans took over because serious people had had enough with the DNC’s full-throated embrace of two things: race-based selection (with the unpopular Harris’s undemocratic coronation as the flagship example), and the relentless focus on trans ideology (to the point anyone not endorsing the fullest embrace of that idea has been declared equivalent to the worst racist). Without that, Democrats would have remained a powerful and relevant party and Musk would have gotten nothing he wanted.