Individuals using Photoshop to make obvious fictions for entertainment is different from funded entities producing clips at scale and passing them off as real.
That's one of the main reasons not to install an app. Very few apps limit their notifications to genuinely transactional events; as soon as they have the capability, they start spamming away.
The dream is presumably that the inference software writes and executes such a script rather than relying on text generation alone, analogous to how a human might cross off pairs of parentheses to check that example.
ChatGPT already does this, albeit in limited circumstances, through the use of its sandbox environment. Asking GPT in thinking mode to, for example, count the number of “l”s in a long text may see it run a Python script to do so.
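To illustrate the point: the sandboxed script the model emits for a task like this can be a few lines of Python that count deterministically, instead of the model "reading" the text token by token. This is a hypothetical sketch of such a script, not the actual code ChatGPT generates; the sample text is made up.

```python
# Hypothetical example of a counting script a model might run in its
# sandbox: count occurrences of the letter "l" exactly, rather than
# estimating from tokenized text.
text = "a long text with many letters, including l's all over the place"
count = sum(1 for ch in text if ch == "l")
print(f"number of 'l's: {count}")
```

The deterministic count sidesteps the well-known failure mode where tokenization hides individual characters from the model.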
There’s a massive issue with extrapolating to more complex tasks, however: either you run the risk of prompt injection by granting your agent access to the internet, or, more commonly, you hit an exponential degradation in coherence over long contexts.
I bought one of their machines to play around with, fully expecting that I might never be able to use the NPU for models. But I'm still angry to read this anyway.
AMD/Xilinx's software support for the NPU is fully open; it's only FFLM's models that are proprietary. See https://github.com/amd/iron , https://github.com/Xilinx/mlir-aie , and https://github.com/amd/RyzenAI-SW/ . It would be nice to explore whether one could simply develop kernels for these NPUs using Vulkan Compute and drive them that way; that would provide the closest unification with the existing cross-platform support for GPUs.