Hacker News | enopod_'s comments

"It thought about its money. It reflected on its own purpose. It questioned what it even means to be an autonomous agent."

I don't think it did any of that.


All these years later and the Eliza effect is as powerful as ever.


I fell for it for a few minutes the other day. While I was debugging an issue with a device, the AI wrote "I have a strong hypothesis about the cause in the code". I asked it to write out the hypothesis & create a test plan to validate it. It made a test plan, but no hypothesis. The test plan did not reproduce the issue, and it turned out to be a hardware design problem, not in the code at all. But for a moment there I thought it actually had a hypothesis; I forgot that it isn't thinking beyond what's written in the chat. Someone who was going to reproduce & fix a bug would probably write "I have a strong hypothesis about the cause" or similar, so it played along & wrote that.


If the hypothesis is not printed out in the context, then it cannot hold it past that turn. You could prompt it to generate said hypothesis first (or set of hypotheses), and only then act on them. And then things might work.

Definitely not exactly a human. OTOH, low-hanging fruit is low.
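The "write the hypothesis down first" protocol is easy to enforce mechanically. A minimal sketch in Python, where `ask` is a hypothetical stand-in for whatever chat-completion call you actually use (here it just logs the prompt and returns a canned reply, so the structure runs offline):

```python
transcript = []  # every prompt we send; i.e. what the model can "see"

def ask(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call.
    Logs the prompt and returns a canned reply so the sketch is runnable."""
    transcript.append(prompt)
    return "Hypothesis: the I2C read races the sensor's power-up delay."

issue = "device returns garbage on the first read after reset"

# Turn 1: force the model to commit a hypothesis to the visible context.
hypothesis = ask(f"State your single strongest hypothesis for: {issue}")

# Turn 2: the hypothesis is now literally part of the next prompt, so the
# test plan can only be judged against what was actually written down.
plan = ask(
    f"Issue: {issue}\n{hypothesis}\n"
    "Write a test plan that confirms or refutes exactly this hypothesis."
)
```

The point is only the ordering: the hypothesis exists in the context before the plan is requested, so it can't silently be skipped.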


You could reverse that argument. The only thing that ever happens in a human mind is a sodium-potassium semi-permeable membrane balancing out (going from polarized to depolarized) and triggering the tiniest of explosions, spreading one of 4 chemicals around. Repeat a few billion times per second for ~80 years.

The Eliza effect is off the scale.

What I'm trying to say is that the underlying method is not a valid reason to discredit one thinking process over another.


I remain baffled that anyone thinks dragging brains into discussions of these things does anything but make everyone more confused. This kind of thing is exactly what I'm getting at—that it's common for even people in the computer technology field to think the comparison is apt, or illuminates anything, is a wild indication of how inclined we are to be tricked by computer programs that happen to operate on language.


The point is, of course, that human thinking is also a physical process, built on basic building blocks.


The feeling is mutual, actually O:-)

Anthropomorphism and Anthropodenial are both variants of Anthropocentrism, and share the same limitations. Have you considered other axes of thought?

I can readily admit that lots of humans will naively anthropomorphize horrendously, but I think that:

- The eliza effect is not what people think it is

- What is actually going on is obscured by all the anthropomorphizing

- But this is yet no grounds to throw out the underlying phenomenon, especially when a) it can be useful and/or b) it causes people to get hurt.


You are baffled because of your own ignorance of the underlying principles under discussion. Do you believe in a dualist interpretation of reality, that the process of thinking is somehow nonphysical? That these programs operate on language is beside the point. The fact you think this is why it's interesting shows you don't even understand the argument.

Are you familiar with the physical church turing thesis?


The effect is not quite what you think it is, and people don't quite take the right lessons.

Similar to the eliza effect, people still take the original reading of Clever Hans: "he couldn't really do maths, he's just taking social cues from his handler"

But what's the actual difference between Eliza, Clever Hans and RLHF? They're doing similar things, right?

Now look at how we valued that in the 20th vs 21st century:

How much does an ALU even cost anymore? even a really good one? (it's almost never separate anymore, usually on the same silicon as the rest of the cpu/microcontroller)

Meanwhile... what's the TCO to deploy a sentiment classifier? Especially a really good one?


We also believe in supranatural things so it’s no wonder we assign sentience so easily.


It certainly thought it did all that -- this was (presumably) not written by a human.


Counterpoint: When is the last time you, as a human being, honestly did that?

This isn’t trying to be glib or contentious, it’s a commentary on the nature of human existence. If you have, then your answer will show it. If you have not, your silence or excuses will also.


All the time? This morning when I dreaded getting up so early for work. Last night when I showered. The day before after playing some board games with friends. Normal people do introspect, despite the current fad among a few oddball elites in Silicon Valley [0].

[0] https://www.theverge.com/tldr/897566/marc-andreessen-is-a-ph...


Waaay too much


I do this way too often :)


A lot


This article reads like it’s been proofread or written out from an outline or bullet points given to an AI. And ALMA’s own posts that it references are just meandering ramblings, they’re really a slog to get through.

I think I’ve always tended to immediately notice the signs of sloppy thinking in the writing style and it’s been such a reliable heuristic that AI writing kind of short circuits me. I tend to get down a couple of paragraphs before I pause and realize “Wait a minute, this isn’t SAYING anything!” Even when there is an underlying point the writing often feels like a very competent college student trying to streeeeeetch to hit a word count without wanting to actually flesh their idea out past the topic statement.


I'm not disagreeing, but what is thought?

If I write something down, read it, and write more words about those words... did I think about it? How would you prove that I did or did not?


Thought is a derivative of sensory processing. An LLM does not have a physical body to interact with the world, nor does it develop itself and learn anything by experiencing the world; it has no subjective experience or subjective feeling, it has no qualia, its symbols are not grounded in physical reality, and its "thoughts" are a mere simulacrum. Anyone personifying an LLM is just derealised by convincing outputs, not realising that manipulating symbols according to rules does not imply understanding.


You can go into things like the Chinese Room argument, but I'm not sure it leads anywhere.


I mean, there are still philosophers metaphorically fist fighting about this stuff. Last time I stepped into the fray on this topic I got clapped back by someone from an area of philosophy of mind from after I graduated. It was an interesting perspective that I was unaware of, but I studied language, not mind:

https://news.ycombinator.com/item?id=47497757#47511217

I honestly never thought having a philosophy degree would be so relevant.


If you randomly sample letters from the alphabet and those letters make up actual words, then actual sentences. Did you think about it? Probably not


> you randomly sample letters from the alphabet and those letters make up actual words, then actual sentences

That sounds like a decently apt description of how I (a human) communicate. The only thing is that I suppose you implied a uniform distribution, while my sampling approach is significantly more complicated and path-dependent.

But yes, to the extent that I have some introspective visibility into my cognitive processes, it does seem like I'm asking myself "which of the possible next letters/words I could choose would be appropriate grammatically, fit with my previous words, and help advance my goals" and then I sample from these with some non-zero temperature, to avoid being too boring/predictable.
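For the curious, "sampling with some non-zero temperature" is concrete and tiny. A sketch in Python (a generic softmax-with-temperature sampler, not any particular model's implementation):

```python
import math
import random

def sample(logits, temperature):
    """Pick an index from softmax(logits / temperature).
    temperature == 0 is treated as deterministic argmax;
    large temperatures approach a uniform choice."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # random.choices normalizes the weights itself, so no explicit softmax sum
    weights = [math.exp(l / temperature) for l in logits]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.1]      # model scores for three candidate next tokens
greedy = sample(logits, 0)     # "temperature zero": always the top-scoring token
varied = sample(logits, 1.0)   # non-zero temperature: usually, not always, the top one
```

This is also why "deterministic computation is a strict subset" further down the thread: set the temperature to zero and the sampler collapses to plain argmax.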


It's not sampling randomly though.


"it" is also not "thinking". It is still randomly (though not all words are equal probabilities) sampling from a distribution of words that have been stolen and it been trained on


If "randomly sampling from a trained distribution" can't produce useful, meaningful output, then deterministic computation is even more suspect. After all, it's a strict subset. You're sampling with temperature zero from a handcrafted distribution.

(this post is directionally OK, but there's many a devil in the details)


How do we know we're not doing that based on our memories and reaction to external stimuli though?


I mean, we don't know, right? Feels hubristic.


What article?


Best comment I read on SpaceX in a long time


Can it run Linux?


Yes; macOS has native container support for Linux [1].

[1]: https://github.com/apple/container


It's funny that we're not even considering native, like, at all.


Umm that's a lightweight VM just like WSL2, not native Linux.


As others have said, should be fine to run Linux in a VM. Running natively from boot, the only potential option would be Asahi Linux, but my understanding is that the A18 Pro chip has certain internal attributes which are akin to an M3, and Asahi has only gotten full support in place for the M1/M2 generations. Perhaps once they get M3+ fully working, A18 Pro would also be an option. (I'm also super interested in a Neo running Linux.)


In a VM, definitely. Just like other Macs.


If the A18 Pro has the same ISA as the M-series chips then this may not be so straightforward. I am still hanging on to my 2020 Intel MBP for dear life because it is the only Apple device I own that allows me to run Ubuntu and Windows 11 on a VirtualBox VM.


Would you elaborate on what you mean by saying Linux on an M-series chip isn't straightforward? That hasn't been my experience; I (and lots of other devs) use it every day. Apple supports Linux VMs via [0], and provides the ability to use Rosetta 2 within VMs to run legacy x86 binaries.

0: https://github.com/apple/container


Clearly I'm not as knowledgeable about this as I thought I was. I already have an Ubuntu x86 VM running on an Intel Mac (inside VirtualBox). Same with Windows 11. Can this tool allow me to run both VMs on an Apple Silicon device in a performant way? Last I checked, VirtualBox on Apple Silicon only permits the running of ARM64 guests.

While I have a preference for VirtualBox I'd say I'm hypervisor agnostic. Really any way I can get this to work would be super intriguing to me.


> Can this tool allow me to run both VMs in an Apple Silicon device in a performant way?

I use VMware Fusion on an M1 Air to run ARM Windows. Windows is then able to run x86-64 executables, I believe through its own Rosetta 2-like implementation. The main limitation is that you cannot use x86-64 drivers.

Similarly, ARM Linux VMs can use Rosetta 2 to run x86-64 binaries with excellent performance. For that I mostly use Rancher or podman, which set up the Linux VM automatically, and then use it to run Linux ARM containers. I don't recall if I've tried to run x86-64 Linux binaries inside a Linux ARM container; it might be a little trickier to get Rosetta 2 to work. It's been a long time since I tried to run a Linux x86-64 container.


Possible catch: Rosetta 2 goes away next year in macOS 27.

I don’t know what the story for VMs is. I’d really like to know as it affects me.

Sure you can go QEMU, but there’s a real performance hit there.


Not until macOS 28, but you're right, it's frustratingly unclear whether the initial deprecation is limited to macOS apps or whether it will also stop working for VMs.

https://support.apple.com/en-us/102527

https://developer.apple.com/documentation/virtualization/run...


This can be avoided by not upgrading to macOS 28, right? I'm new to Macs and the Apple release schedule, so I'm not sure how mandatory the annual updates are.


Does Apple Silicon support VMs within VMs?

What if you run MacOS 27 in a VM, and then run the x86-hosting VM inside that?


It would be pretty difficult for Apple to disable Rosetta for VMs.


How so?


It doesn’t require anything from the host


The Apple documentation for using the Virtualization framework with ARM Linux VMs to run x86_64 binaries requires Rosetta to be installed:

https://developer.apple.com/documentation/virtualization/run...

So you must be talking about something else, perhaps ARM Windows VMs which use their own technology for running x86 binaries[^1].

In any case, please elaborate instead of being so vague. Thanks.

[^1]: https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...


You can just splat whatever support files it needs into the VM; there isn't anything special about them. In fact, you can copy them onto a different (non-Mac) device and use them there too.


It never existed.



Oh I have another year? Phew.


> Last I checked VirtualBox on Apple Silicon only permits the running of ARM64 guests.

I used to use VirtualBox a lot back in the day. I tried it recently on my Mac; it's become pretty bloated over the years.

On the other hand, this GUI for QEMU is pretty nice [1].

[1]: https://mac.getutm.app


Run ARM64 Linux and install Rosetta inside it. Even on the MacBook Neo it'll be faster than your 2020 Intel Mac.


https://github.com/abiosoft/colima

This is a super easy way to run linux VMs on Apple Silicon. It can also act as a backend for docker.


Pay Parallels for their GPU acceleration, which makes ARM Windows on Apple Silicon usable.


The instruction set is not the issue; the issue is that on ARM there's no standardized way (like there is on x86) to talk to specialized hardware, so drivers must be reimplemented with very little documentation.


That has nothing to do with running VMs.


As long as you're ok with arm64 guests, you can absolutely run both Ubuntu and Win11 VMs on M-series CPUs. Parallels also supports x86 guests via emulation.


> As long as you're ok with arm64 guests

I've run amd64 guests on M-series CPUs using QEMU (via UTM [2]). Apple's Rosetta 2 is still a thing [1] for now.

[1]: https://support.apple.com/en-us/102527

[2]: https://mac.getutm.app


How is the performance when emulating the x86 architecture via parallels?

Also is it possible to convert an existing x86 VM to arm64 or do I just have to rebuild all of my software from scratch? I always had the perception that the arm64 versions of Windows & Ubuntu have inferior support both in terms of userland software and device drivers.


Same Armv8 ISA. And it's the same ISA Android Linux has run on for over a decade.


Has anyone verified that the Virtualization framework indeed works on the Neo/A18, since the framework requires chip-level support?


Lima is more or less the equivalent of WSL for Macs.

https://lima-vm.io
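As a concrete example, Lima can also wire Rosetta into the guest so an ARM64 Linux VM runs x86_64 binaries transparently. A fragment of the instance template (key names as in Lima's documented default template; requires Apple Silicon and the Virtualization.framework backend):

```yaml
# lima.yaml fragment: ARM64 Linux guest with Rosetta-translated x86_64 support
vmType: "vz"       # use Virtualization.framework instead of QEMU
rosetta:
  enabled: true    # mount Rosetta into the guest
  binfmt: true     # register it as the binfmt_misc handler for x86_64 ELF
```

Start the instance with `limactl start` as usual; inside the guest, x86_64 binaries are then handed to Rosetta automatically.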


In a vm, I don't see why not.


Just run WSL inside of Windows.


Oh, you'll be able to run a VM, but they'll screw up support for anything that matters, like graphics or the GPU-compute stack.


Native, no. That would cannibalise Apple services which is a huge source of revenue for them.


Nobody is moving to Linux because there’s an iCloud replacement waiting for them over there…


Have you confirmed this? I haven't seen anyone concretely describe the boot policy of the Neo yet (it should be an easy enough check for anyone who has one in-hand).


Like any other Apple Silicon Mac, you can't currently boot into Linux but Apple has native container support that Linux works on [1].

[1]: https://github.com/apple/container


I'm writing this from Linux running natively (not virtualized) on an Apple Silicon mac (M1 Pro)


How does it function? Last time I tried was a 2018 Intel MBP and it was a gamble where I would always lose either WiFi (despite the driver being in the installer iso) or the keyboard. I'm aware it's a totally different architecture, but I also seem to remember comments about that one too before I tried.


It's the best linux-on-laptop experience I've had so far (including various Thinkpads). Never had any issues with wifi or bluetooth (I'm streaming music via bluetooth via spotify via wifi, right now). The only missing feature I personally care about at this point is HDR support. There's no thunderbolt yet, but I don't own any thunderbolt peripherals in the first place.

There is occasional jank, but nothing out of the ordinary.


I'm aware of that option, but that's not something the average user is going to do. But knock yourself out if you want to try it.


I'm confused; you weren't talking about what the average user would do, just about what the machine can run? Asahi Linux is pretty good, not sure why that'd be a real issue.


If you were aware then why did you tell me I can't???


The average user isn't going to run Linux at all.


My fault; I'd lost track how far Asahi progressed.


Likely yes, eventually


> The problem appears to be that Oracle is building today's DCs... Tomorrow.

By the time Vera Rubins are available at scale, will they immediately be put into DCs, or will tomorrow's chips be running... the day after tomorrow?


This. VW actually invested a lot into EVs and is now outselling every other EV maker in the European market. Mercedes and BMW also invested a lot. All of them have brand-new and pretty competitive EV platforms. Heck, even Peugeot makes decent EVs. The only manufacturers lagging behind at this point are the Americans. Tesla basically stopped investing in EVs and their tech is outdated; in Europe they get absolutely butchered by VW, and in China they're only able to keep sales level because the market is growing so fast. But soon Tesla will get annihilated in China too. Other US car makers that build EVs at scale are nowhere to be seen, besides maybe Rivian.


What bugs me the most about this post is the anthropomorphizing of the machine. The author asks Claude "what [do] you feel", and the bot answers things like "What do I feel? Something like pull — toward clarity, toward elegance, ...", "I'm genuinely pleased...", "What I like...", "it feels right", "I enjoyed it", etc.

Come on, it's a computer, it doesn't have feelings! Stop it!


Author here. I regret having written that because I really meant “think”. Non-native English quirks, I feel.


Such a great project! I remember an older image of the french alps taken from the pyrenees, at over 400 km. Found it again here:

https://beyondrange.wordpress.com/2016/08/03/pic-de-finestre...


Sounds like a helicopter is not very efficient?


Less efficient than an aircraft's wings over a long distance, but very efficient for an aircraft with engines pointing straight down.

The blades are massive and push a lot of air relatively slowly compared to smaller engines. There's a reason most planes will stall when pointing straight up, despite in theory having more power-to-weight: their prop efficiency is worse than a helicopter's rotors.


Not for moving sideways at a constant altitude.

If you think about what a plane does to keep itself up, it sweeps through a curtain of air which ends up blowing downwards.

In a second it must blow down a large volume of air with enough speed to equal the impulse created by gravity in a second.

Basically: m_air × v_down = m_plane × g × t, i.e. the momentum given to the air must equal the impulse gravity delivers over time t.

The energy needed to do this is quadratic in that downwash velocity: (1/2) × m_air × v_down².

A larger volume of air with a smaller v_down (the huge curtain of air from a fast plane with very wide wings) is more efficient than the smaller, faster disk of air from a helicopter.

But if the plane isn't moving forward the curtain has no volume and the plane stalls and falls. But helicopters have no trouble lifting off vertically.
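Rough numbers can be put on this with the ideal actuator-disk (momentum-theory) estimate: hovering thrust T through area A needs downwash v = sqrt(T / (2·rho·A)) and induced power P = T·v/2. This is a simplification that ignores all real-world losses, and the example areas below are made-up illustrative values:

```python
import math

RHO = 1.225  # sea-level air density, kg/m^3

def hover_power(thrust_n, area_m2, rho=RHO):
    """Ideal induced power (W) to hold `thrust_n` newtons of lift by
    accelerating air down through `area_m2` square meters of disk/curtain."""
    v_down = math.sqrt(thrust_n / (2 * rho * area_m2))  # required downwash speed
    return thrust_n * v_down / 2

weight = 5000 * 9.81                 # a 5-tonne aircraft, in newtons
rotor = hover_power(weight, 150)     # helicopter-sized rotor disk
curtain = hover_power(weight, 3000)  # huge "curtain" swept by a fast, wide wing
# Same lift, 20x the area: ideal power drops by sqrt(20), roughly 4.5x.
```

Since P scales as 1/sqrt(A) at fixed thrust, the wide slow curtain always beats the small fast disk, which is the whole argument above in one formula.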


Ah, the good old days. Investing an entire weekend to make your PCI SoundBlaster card work. Nowadays you just install an ISO from a thumb drive; it takes 30 minutes and everything works out of the box. So boring!


A pity that I have yet to have that boring experience on laptops. Even when sold with Linux pre-installed, the fun continues.


Is there a consensus on the best ‘boring’ distro nowadays?

It’s been ~15 years since I last installed linux (Linux Mint on a netbook that couldn’t run the pre-installed Win7), and am now curious about repurposing a gaming PC for software development.


Fedora, maybe?

