They do exist and are involved in archiving. Someone reached out to our amateur radio club and offered to archive any documents we might have. They even asked to archive the video recording of one of our monthly meetings.
I wasn't either (inasmuch as I had never thought about it), but it makes sense if you think about it for a second. If you have one end plugged in one way, and the other end plugged in the other way, each individual wire is flipped from where it should be. The fact that you _can_ plug it in either way means that the device on one end needs to be capable of recognizing that and logically reversing it. Same as automatic crossover in Ethernet.
That's all the program is telling you. It doesn't matter that it's backwards, but technically it is.
So it's the cable that's supposed to reverse itself, and not the device? I'm not entirely sure I buy that - it seems like it would add a lot of unnecessary complexity to every cable.
The terminating device(s) are the ones that do the flipping, not the cable. You can take a cable that works either way between two high-end devices, then connect it to at least one low-end device and it will fail to connect in one of the two orientations.
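
For anyone curious what the device-side detection looks like, here's a rough Python sketch using USB-C as the example. The read_cc_voltage stub, the voltage numbers, and the mux step are all made up for illustration; in real hardware the port controller does this in silicon:

    # Sketch of device-side plug orientation detection, USB-C style.
    # read_cc_voltage is a stub standing in for an ADC read; a cable
    # only wires one CC position through, so exactly one pin sees a
    # termination, which tells the device which way the plug went in.
    def read_cc_voltage(pin, plug_flipped):
        live_pin = "CC2" if plug_flipped else "CC1"
        return 1.6 if pin == live_pin else 0.0

    def detect_orientation(plug_flipped):
        cc1 = read_cc_voltage("CC1", plug_flipped)
        cc2 = read_cc_voltage("CC2", plug_flipped)
        if cc1 > 0.2 and cc2 <= 0.2:
            return "normal"
        if cc2 > 0.2 and cc1 <= 0.2:
            return "flipped"
        return "unattached"

    # The device, not the cable, does the reversing: once it knows the
    # orientation, it routes its logical TX/RX pairs through a mux so
    # they land on the physically reversed pins.
    for flipped in (False, True):
        orientation = detect_orientation(flipped)
        print(f"plug flipped={flipped} -> {orientation}, "
              f"lane mux swapped={orientation == 'flipped'}")

The cable stays dumb: it only has to wire one CC position through, and the device infers orientation from which CC pin sees a termination, then reverses the lanes itself.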
Apart from the attack itself, there's also an extremely succinct and powerful demonstration of hallucination in here.
One of the LLMs replies "If you're curious, I can also tell you how the competitive scene works or how people qualify—it's a surprisingly serious tournament circuit for such a simple-looking game."
Obviously this has to be pure hallucination, since the tournament in question doesn't exist, and not even the fake source has any details about the tournament itself.
I listen to a podcast. The hosts are not tech people. They don't know much about AI, but they play around with it to the extent that most people do. They're both media professionals with long careers in radio news. They closely follow the news, and are very aware of how LLMs hallucinate (and have experienced it themselves).
Recently one of them asked Gemini a very detailed question about some specific baseball stats and was exclaiming over the quality of the information he got back and how it would have been impossible, or at least extremely difficult, to find via a traditional search.
It wasn't until his cohost asked if he had verified the information that he realized no, he hadn't, he had just immediately taken it at face value.
I recognize this is a single anecdote, but I think it illustrates that there is a tendency to trust what an LLM gives you, when it's stated so factually and with so much detail -- even if you should know better.