Hacker News | dolmen's comments

I fondly remember a raster talk at FOSDEM about 20 years ago: playing videos inside a terminal. Amazing!

Wow, I think I remember that talk, too. And I remember thinking, "why would anyone want to run a video inside a terminal?!" I still don't want to do that, but it was cool that enabling that feature only required a few lines of code, since EFL(?) already supported it, was already linked in, and the code to start it was minimal.

I found this one from 2012:

https://video.fosdem.org/2012/maintracks/k.1.105/EFL.webm or https://youtu.be/HfcFbHQWqu8?list=PL31210579EDD785E7

In an interview from that time he says the previous time he was at FOSDEM was 10 or 11 years earlier. There seem to be no recordings from that time.


I would question the framework design: the method is called "UpdateUser", so it should be executed in a transaction; the transaction should therefore be a parameter of the service method, with the transaction lifecycle handled by the framework.

  func (s *Service) UpdateUser(ctx context.Context, tx models.Repo, userID string) error {
      user, err := tx.GetUser(ctx, userID)
      if err != nil {
          return err
      }
      user.Name = "Updated"
      return tx.SaveUser(ctx, user)
  }

In that instance you are right, but there are often cases where you need to make multiple queries and updates spanning several tables in a single transaction; then you do need a generic transaction wrapper.


Anyway, when I first saw the VeraCrypt thing this morning my initial reaction was “I wonder if Iran uses VeraCrypt”

On a GitHub project, agents must simply be treated as untrusted external contributors.

Or ask the agent to write a Dockerfile (to abstract the build environment) that builds CUPS and all your stuff around it directly to WASM, instead of targeting x86 and then emulating x86 with WASM.

Is there a Docker-to-WASM pipeline, and how does it do anything differently from emulating x86?


There is an 8-month-old open ticket, with an official answer, here: https://gitlab.opencode.de/bmi/eudi-wallet/wallet-developmen...

Yes, hence my saying "duplicate" above.

It would be much more interesting/efficient if the LLM had tokens for machine instructions, so that extracting instructions would happen at the tokenizing phase rather than by calling objdump.

But I guess I'm not the first one to have that idea. Any references to research papers would be welcome.


As an experiment, I just took a random section of a few hundred bytes (as a hexdump) from the /bin/ls executable and pasted it into ChatGPT.

I don't know if it's correct, but it speculated that it's part of a command line processor: https://chatgpt.com/share/69d19e4f-ff2c-83e8-bc55-3f7f5207c3...

Now imagine how much more it could have derived if I had given it the full executable, with all the strings, pointers to those strings and whatnot.

I've done some minor reverse engineering of old test-equipment binaries in the past, and LLMs are incredible at figuring out what the code is doing, far better than the usual workflow of decompiling the code with Ghidra.


I have no doubt that LLMs can be as good at analyzing binaries as at analyzing source code.

An avalanche of 0-days in proprietary code is coming.

