I fell for it for a few minutes the other day. Debugging an issue with a device, the AI wrote "I have a strong hypothesis about the cause in the code". I asked it to write out the hypothesis & create a test plan to validate it. It made a test plan, but no hypothesis. The test plan did not reproduce the issue, and it turned out to be a hardware design problem, not in the code at all. But for a moment in there I thought it actually had a hypothesis; I forgot that it's not thinking beyond what's written in the chat. Someone who was going to reproduce & fix a bug would probably write "I have a strong hypothesis about the cause" or similar, so it played along & wrote that.
If the hypothesis is not written out in the context, the model can't hold it past that turn. You could prompt it to generate the hypothesis first (or a set of hypotheses), and only then act on them. Then things might work.
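Something like this minimal sketch, assuming a hypothetical complete(prompt) wrapper around whatever chat API you're using (all names here are made up for illustration):

    def complete(prompt: str) -> str:
        """Placeholder for a call to your LLM of choice."""
        raise NotImplementedError

    def debug_with_explicit_hypotheses(bug_report: str) -> str:
        # Step 1: force the hypotheses into the context *before* any plan exists.
        hypotheses = complete(
            "List your top 3 concrete hypotheses for the cause of this bug, "
            "each as one falsifiable sentence:\n" + bug_report
        )
        # Step 2: only now ask for a test plan, conditioned on the written-out
        # hypotheses, so the plan has something concrete to validate.
        return complete(
            "Bug report:\n" + bug_report + "\n\nHypotheses:\n" + hypotheses +
            "\n\nWrite a test plan that confirms or rules out each hypothesis."
        )

Once the hypotheses are literal text in the context, every later turn can actually condition on them.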
Definitely not exactly a human. OTOH, low-hanging fruit is low.
You could reverse that argument. The only thing that ever happens in a human mind is a sodium-potassium gradient across a semi-permeable membrane balancing out (meaning going from polarized to depolarized) and triggering the tiniest of explosions, spreading one of a handful of chemicals around. Repeat a few billion times per second for ~80 years.
The Eliza effect is off the scale.
What I'm trying to say is that the underlying method is not a valid reason to discredit one thinking process over another.
I remain baffled that anyone thinks dragging brains into discussions of these things does anything but make everyone more confused. This is exactly what I'm getting at: that even people in the computer technology field commonly think the comparison is apt, or illuminates anything, is a wild indication of how inclined we are to be tricked by computer programs that happen to operate on language.
You are baffled because of your own ignorance of the underlying principles under discussion. Do you believe in a dualist interpretation of reality, that the process of thinking is somehow nonphysical? That these programs operate on language is beside the point. The fact that you think it's the point shows you don't even understand the argument.
Are you familiar with the physical Church-Turing thesis?
The effect is not quite what you think it is, and people don't quite take the right lessons.
Similar to the ELIZA effect, people still take the original reading of Clever Hans: "he couldn't really do maths, he was just taking social cues from his handler".
But what's the actual difference between ELIZA, Clever Hans, and RLHF? They're doing similar things, right?
Now look at how we valued that in the 20th vs 21st century:
How much does an ALU even cost anymore? Even a really good one? (It's almost never separate anymore; it's usually on the same silicon as the rest of the CPU/microcontroller.)
Meanwhile... what's the TCO to deploy a sentiment classifier? Especially a really good one?
Counterpoint: When is the last time you, as a human being, honestly did that?
This isn’t trying to be glib or contentious; it’s a commentary on the nature of human existence. If you have, then your answer will show it. If you have not, your silence or excuses will also.
All the time? This morning when I dreaded getting up so early for work. Last night when I showered. The day before after playing some board games with friends. Normal people do introspect, despite the current fad among a few oddball elites in Silicon Valley [0].
This article reads like it’s been proofread or written out from an outline or bullet points given to an AI. And ALMA’s own posts that it references are just meandering ramblings; they’re really a slog to get through.
I think I’ve always tended to immediately notice the signs of sloppy thinking in the writing style, and it’s been such a reliable heuristic that AI writing kind of short circuits me. I tend to get a couple of paragraphs in before I pause and realize “Wait a minute, this isn’t SAYING anything!” Even when there is an underlying point, the writing often feels like a very competent college student trying to streeeeeetch to hit a word count without wanting to actually flesh their idea out past the topic statement.
Thought is a derivative of sensory processing. An LLM does not have a physical body to interact with the world, nor does it develop itself and learn anything by experiencing the world; it has no subjective experience or subjective feeling, it has no qualia, its symbols are not grounded in physical reality, and its "thoughts" are a mere simulacrum. Anyone personifying an LLM is just derealised by convincing outputs, not realising that manipulating symbols according to rules does not imply understanding.
I mean, there are still philosophers metaphorically fist fighting about this stuff. Last time I stepped into the fray on this topic I got clapped back by someone from an area of philosophy of mind from after I graduated. It was an interesting perspective that I was unaware of, but I studied language, not mind:
> you randomly sample letters from the alphabet and those letters make up actual words, then actual sentences
That sounds like a decently apt description of how I (a human) communicate. The only thing is that I suppose you implied a uniform distribution, while my sampling approach is significantly more complicated and path-dependent.
But yes, to the extent that I have some introspective visibility into my cognitive processes, it does seem like I'm asking myself "which of the possible next letters/words I could choose would be appropriate grammatically, fit with my previous words, and help advance my goals" and then I sample from these with some non-zero temperature, to avoid being too boring/predictable.
"it" is also not "thinking". It is still randomly (though not all words are equal probabilities) sampling from a distribution of words that have been stolen and it been trained on
If "randomly sampling from a trained distribution" can't produce useful, meaningful output, then deterministic computation is even more suspect. After all, it's a strict subset. You're sampling with temperature zero from a handcrafted distribution.
(the general direction of this post is OK, but there's many a devil in the details)
As others have said, it should be fine to run Linux in a VM. Running natively from boot, the only potential option would be Asahi Linux, but my understanding is that the A18 Pro chip has certain internal attributes akin to the M3, and Asahi has only gotten full support in place for the M1/M2 generations. Perhaps once they get M3+ fully working, the A18 Pro would also be an option. (I'm also super interested in a Neo running Linux.)
If the A18 Pro has the same ISA as the M-series chips then this may not be so straightforward. I am still hanging on to my 2020 Intel MBP for dear life because it is the only Apple device I own that allows me to run Ubuntu and Windows 11 on a VirtualBox VM.
Would you elaborate on what you mean by saying Linux on an M-series chip isn't straightforward? That's not been my experience; I (and lots of other devs) use it every day. Apple supports Linux via [0], and provides the ability to use Rosetta 2 within VMs to run legacy x86 binaries.
Clearly I'm not as knowledgeable about this as I thought I was. I already have an Ubuntu x86 VM running on an Intel Mac (inside VirtualBox), same with Windows 11. Can this tool allow me to run both VMs on an Apple Silicon device in a performant way? Last I checked, VirtualBox on Apple Silicon only permits running ARM64 guests.
While I have a preference for VirtualBox I'd say I'm hypervisor agnostic. Really any way I can get this to work would be super intriguing to me.
> Can this tool allow me to run both VMs in an Apple Silicon device in a performant way?
I use VMware Fusion on an M1 Air to run ARM Windows. Windows is then able to run Windows x86-64 executables, I believe through its own Rosetta 2-like implementation. The main limitation is that you cannot use x86-64 drivers.
Similarly, ARM Linux VMs can use Rosetta 2 to run x86-64 binaries with excellent performance. For that I mostly use Rancher or podman, which set up the Linux VM automatically, and then use it to run ARM Linux containers. I don't recall if I've tried to run x86-64 Linux binaries inside an ARM Linux container; it might be a little trickier to get Rosetta 2 to work. It's been a long time since I tried to run an x86-64 Linux container.
Not until macOS 28, but you're right: it's frustratingly unclear whether the initial deprecation is limited to macOS apps or whether it will also stop working for VMs.
This can be avoided by not upgrading to macOS 28, right? I'm new to Macs and the Apple release schedule, so I'm not sure how mandatory the annual updates are.
You can just splat whatever support files it needs into the VM; there isn't anything special about them. In fact, you can copy them onto a different (non-Mac) device and use them there too.
The instruction set is not the issue. The issue is that on ARM there's no standardized way, like there is on x86, to talk to specialized hardware, so drivers must be reimplemented with very little documentation.
As long as you're ok with arm64 guests, you can absolutely run both Ubuntu and Win11 VMs on M-series CPUs. Parallels also supports x86 guests via emulation.
How is the performance when emulating the x86 architecture via Parallels?
Also, is it possible to convert an existing x86 VM to arm64, or do I just have to rebuild all of my software from scratch? I always had the perception that the arm64 versions of Windows & Ubuntu have inferior support, both in terms of userland software and device drivers.
Have you confirmed this? I haven't seen anyone concretely describe the boot policy of the Neo yet (it should be an easy enough check for anyone who has one in-hand).
How does it function? The last time I tried was on a 2018 Intel MBP, and it was a gamble where I would always lose either WiFi (despite the driver being in the installer ISO) or the keyboard. I'm aware it's a totally different architecture, but I also seem to remember similar comments about that one before I tried.
It's the best Linux-on-laptop experience I've had so far (including various ThinkPads). I've never had any issues with wifi or bluetooth (I'm streaming music via bluetooth via Spotify via wifi, right now). The only missing feature I personally care about at this point is HDR support. There's no Thunderbolt yet, but I don't own any Thunderbolt peripherals in the first place.
There is occasional jank, but nothing out of the ordinary.
I'm confused; you weren't talking about what the average user would do, just about what it can do? Asahi Linux is pretty good, not sure why that'd be a real issue?
> The problem appears to be that Oracle is building today's DCs... Tomorrow.
By the time Vera Rubins are available at scale, will they immediately be put into DCs, or will tomorrow's chips be running... the day after tomorrow?
This. VW actually invested a lot into EVs and now they’re outselling every other EV maker in the European market. Mercedes and BMW also invested a lot. All of them have brand new and pretty competitive EV platforms. Heck, even Peugeot make decent EVs. The only manufacturers lagging behind at this point are the Americans. Tesla basically stopped investing in EVs and their tech is outdated; in Europe they get absolutely butchered by VW, and in China they’re only able to keep sales level because the market is growing so fast. But soon Tesla will get annihilated in China too. Other US car makers that build EVs at scale are nowhere to be seen, besides maybe Rivian.
What bugs me the most about this post is the anthropomorphizing of the machine. The author asks Claude "what [do] you feel", and the bot answers things like "What do I feel? Something like pull — toward clarity, toward elegance, ...", "I'm genuinely pleased...", "What I like...", "it feels right", "I enjoyed it", etc.
Come on, it's a computer, it doesn't have feelings! Stop it!
Less efficient than an aircraft's wings over a long distance, but very efficient for an aircraft with engines pointing straight down.
The blades are massive and push a lot of air relatively slowly compared to smaller engines. There's a reason most planes will stall when pointing straight up, despite in theory having a better power-to-weight ratio: their prop efficiency is worse than that of a helicopter's rotors.
If you think about what a plane does to keep itself up, it sweeps through a curtain of air which ends up blowing downwards.
In a second, it must blow down a large volume of air with enough speed that its momentum matches the impulse gravity imparts in that second.
Basically m_air × v_down = m_plane × gravity × time
The energy you need to do this, though, is quadratic in the velocity: 1/2 × m_air × v_down^2
A larger volume of air with a smaller v_down (the huge curtain of air swept by a fast plane with very wide wings) is more efficient than the smaller disk of air with high velocity of a helicopter.
But if the plane isn't moving forward the curtain has no volume and the plane stalls and falls. But helicopters have no trouble lifting off vertically.
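To put rough numbers on the momentum-vs-energy tradeoff above, here's a back-of-envelope sketch (made-up figures: a hypothetical 1000 kg aircraft and two arbitrary mass flows, idealized physics, no losses):

    g = 9.81              # m/s^2
    m_plane = 1000.0      # kg (hypothetical small aircraft)
    weight = m_plane * g  # N; the momentum per second we must impart to the air

    for mass_flow in (500.0, 5000.0):        # kg of air deflected per second
        v_down = weight / mass_flow          # m/s so that m_air * v_down = weight
        power = 0.5 * mass_flow * v_down**2  # W; kinetic energy given to the air per second
        print(f"{mass_flow:6.0f} kg/s of air -> v_down = {v_down:5.2f} m/s, "
              f"ideal power = {power / 1000:5.1f} kW")

Moving ten times as much air ten times more slowly takes a tenth of the power (~9.6 kW vs ~96 kW here): momentum is linear in velocity, energy is quadratic.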
Ah, the good old days. Investing an entire weekend to make your PCI Sound Blaster card work. Nowadays you just install an ISO from a thumb drive; it takes 30 minutes and everything works out of the box. So boring!
Is there a consensus on the best ‘boring’ distro nowadays?
It’s been ~15 years since I last installed Linux (Linux Mint on a netbook that couldn’t run the pre-installed Win7), and I’m now curious about repurposing a gaming PC for software development.
I don't think it did any of that.