The Cyber Resilience Act [1] is well-intentioned and doesn't outright forbid user access to firmware, but most vendors will take the easy road and block user-modifiable software entirely (if they didn't already), so that their completely closed-source, obfuscated and vulnerable version is the only one allowed on their devices.
Well... if you look behind anything that plugs into a wall socket you will see that it has (among many other things) a CE mark. Even things in the USofA have a CE mark.
If your new product cannot get its CE mark for whatever reason, you will not have the approvals to sell in the USA either.
What the CRA will do is this: if you do not have a CRA-compliant product, you will not get the CE mark. Which means you will (with very high probability) not have the other marks needed to sell outside Europe.
Maybe then you can just sell to your close family members who like you, but good luck if you get caught and it can be proven that your shitty device caused a fire ...
We don't place any value on the CE mark in the States.
A lot of consumer electronics need to be FCC compliant, which involves a process of proving that the device doesn't emit too much of the wrong EMI/RFI in the wrong places.
And safety-wise, we tend to use ETL, UL, and CSA for testing. These are third-party Nationally Recognized Testing Labs, and their own marks are used on devices they approve. But they're only really concerned with the safety of a product. In very broad strokes: if the device is proven unlikely enough to burn a house down or cause electrical shock to humans, then it gets approved.
CE is a whole different thing. No government body in the USA requires or respects a CE mark on consumer goods; that mark doesn't hold any legal weight here.
Whether good or bad, CE is just not how we roll on this side of the pond.
(Of course, none of that means that laws in the EU don't affect product availability and features here. Globalization be that way sometimes.)
I'd like to reiterate that a CE mark means nothing to us here.
If my house burns down and a widget with only a CE mark is blamed as the source, my insurance company will consider that to be the equivalent of it having no marking at all.
If a company wants to sell a product globally including the USA, then CE isn't enough to satisfy the safety boffins.
The world is a big place, and the US isn't alone in this way: Lots of other countries also don't care about an isolated CE mark, like Canada and Mexico here in North America.
Some other large, important markets like Japan and Brazil are this way, too.
That's not what I'm saying. I'm not saying that a device sold anywhere in the world can't have a CE mark -- that's not it, at all. I'm also not saying that a person or company can't seek to get a CE mark for their product from wherever they are in the world (they certainly can do that).
There's a lot that I'm not saying.
What I am saying is that there are places in the world where the CE mark (and the presence or absence of it) means nothing, and that Canada is one such place.
Y'all have your own safety marks up there.
CSA is a big one -- you've had that organization up there and doing great work for over a century. cUL is another very common, accepted mark in Canada.
That's not what they're saying. They're saying that in the US, a device can have the CE mark, but that's not indicative of it passing US safety standards.
Also, I'd be surprised if all those Chinese devices have actually earned that CE mark.
>If your new product cannot have its CE mark for whatever reason, you will not have the approbations to sell in the USA either.
I worked for a US manufacturer that only sold directly in the US, and we never bothered getting CE certification on anything, just FCC. Lots of Europeans imported our products, but we left EU compliance up to them.
The size of the EU market didn't justify the costs of regulatory compliance.
> You would have to be a Hotz tier hacker if you wanted to do anything close to this only last year
This isn't true at all. Yes, LLMs have made it dramatically easier to analyse, debug and circumvent protections, both for people who didn't have the skill to do this and for people who know how but just can't be bothered, because it's often a grind. This specific device turned out to be barely protected against anything: no encrypted firmware, no signature checking, and built-in SSH access. This would be extremely doable for any medium-skilled person with good motivation and effort, even without an LLM.
You're referring to George Hotz, who is known for releasing the first PS3 hypervisor exploit. The PS3 was / is fully secured against attackers; the mere existence of a hypervisor layer is proof of that. Producing an exploit required voltage glitching on physical hardware using an FPGA [1]. Perhaps an LLM can assist with mounting such an attack, but as there's no complete feedback loop, it would still require a lot of human effort.
The hacking aspect has been hit and miss for me. Just today I was trying to verify a fix for a CVE and even giving the agent the CVE description + details on how to exploit it and the code that fixed it, it couldn't write the exploit code correctly.
Not to say it's not super useful, as we can see in the article
CVEs and all, but I just can't wait for the firmware of cheaper modern cameras from Sony, Nikon and Panasonic to get hacked and modified to add features from more expensive models.
They're all firmware-restricted to justify buying the more expensive models, in one way or another.
>... but as there's no complete feedback loop, it still would require a lot of human effort.
Not for long. Picture this: a robot receives instructions on what to physically solder in order to complete the desired modification task.
However, before it can send an image back to the vision-aware LLM guiding it, the PCB lights on fire along with the robot because said LLM confidently gave the wrong instructions.
Then, the robotic fire brigade shows up and mostly walks into walls unable to navigate anywhere useful.
I'm already having lots of success letting the agent loose on the arduino or rpi and figuring out all the annoying i2c bits and having me try different pinout and wiring combos until it works. Even with a human in the loop agents are useful right now for electronics. On one occasion I did give it a camera feed so it could check for itself if the LEDs were doing as expected.
Minor correction: in 27C3's "Console Hacking 2010" talk, Geohot's hypervisor work is mentioned at 4:25 or so, described as "really unreliable" and "eh whatever" due to requiring hardware modification and only granting rudimentary hypervisor access.
These were the same people who then went on to explain how they reverse-engineered the encryption keys of the PS3 to enable "fakesigned" code to be installed.
Didn't the PS3 have a hardcoded nonce in its ECDSA implementation that allowed full key recovery? I would agree that I doubt LLMs let people easily mount side-channel attacks on consumer electronics, though.
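For the curious, the key-recovery math for a reused ECDSA nonce is simple enough to sketch. This only uses the signing equation s = k⁻¹(h + r·d) mod n, so plain modular arithmetic suffices; all the concrete values below are made up for illustration (r would really be the x-coordinate of k·G):

```python
# Sketch: ECDSA private-key recovery when the same nonce k signs two messages.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
d = 0x1234567890ABCDEF   # "secret" signing key (made up)
k = 0xCAFEBABE           # fixed nonce -- the PS3's mistake
r = 0x42424242           # stand-in for the x-coordinate of k*G (identical in both sigs)

def sign(h):
    # ECDSA signing equation: s = k^-1 * (h + r*d) mod n
    return (pow(k, -1, n) * (h + r * d)) % n

h1, h2 = 0x1111, 0x2222  # hashes of two different messages
s1, s2 = sign(h1), sign(h2)

# With a shared k: s1 - s2 = k^-1 * (h1 - h2), so k falls out, then d.
k_rec = ((h1 - h2) * pow(s1 - s2, -1, n)) % n
d_rec = ((s1 * k_rec - h1) * pow(r, -1, n)) % n
assert (k_rec, d_rec) == (k, d)   # full private key recovered
```

That's the whole attack: two signatures sharing a nonce leak the key with a few modular inversions, no side channel needed.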
Yes indeed, that chain of exploits was all software and not hardware. Developed after the Hotz exploit and Sony subsequently shuttering OtherOS.
It didn't directly give access to anything, however. IIRC they heavily relied on other complex exploits they developed themselves, as well as on earlier exploits they could access by rolling back the firmware, which indeed abused the ECDSA implementation. At least, that turned out to be the path of least resistance. Without the earlier exploits, less would have been known about the system to work with.
Their presentation [1] [2] is still a very interesting watch.
Creating the PR, doing the explanation you just did, and closing it yourself might be a good option. Then at least your code lives somewhere that someone else can reuse if desired. Ideally combined with a linked issue that you do keep open.
The report you're referring to by the European Commission [1] shows that the mass surveillance of Chat Control 1.0 is probably not very proportionate. They even note themselves that "The available data are insufficient to provide a definitive answer to this question".
However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate that a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
Just to be clear: the report itself is full of uncertainty, convenient half-truths and false causality. For example, they rely completely on the Big Tech platforms themselves to count false positives, i.e. when a moderation decision was reversed. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, while it is in direct conflict with fundamental rights and required to be proportionate to its goals.
The section about "children identified" is also a complete mess where most countries can't even report the most basic data, and it isn't clear if mass surveillance contributed anything to new cases at all. But somehow they still conclude "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems extremely baseless.
So just a recap of what happened between the European Commission and the European Parliament and why the regulation has expired (it's a long story, I'm probably missing many nuances):
- In 2021 the European Parliament voted in favor of a temporary regulation that allowed companies to voluntarily scan private communications. Let's call it Chat Control 1.0. They chose to enact this because US companies were already scanning private messages in violation of the ePrivacy Directive, which had come into force the previous year. Instead of enforcing this directive, they chose to (temporarily) legalize the scanning of private messages while preparing more permanent legislation.
- In 2024 Chat Control 1.0 was extended for another 2 years. An amendment was adopted that explicitly noted that after this time "[the regulation] shall lapse permanently".
- From 2022 to 2025 the European Commission (together with member states) has proposed mandatory scanning, later updated with a proposal for client-side scanning (defeating end to end encryption), AI classification of image and text content, age verification and a lot of other invasive measures. This is what is known as Chat Control 2.0. The European Parliament has again and again voted against this proposal.
- In 2025/2026 the European Commission finally (temporarily) backed down from Chat Control 2.0 and instead proposed to extend Chat Control 1.0 for another 2 years, but has completely failed to negotiate with parliament to adopt a text that explicitly puts fundamental rights up front, something that a majority of the European Parliament had asked for since 2021.
- In response to this, the Civil Liberties Committee of the European Parliament tabled amendments [1] that explicitly limit the regulation to its subject matter and prevent it from being used to weaken end-to-end encryption. Many of these amendments were adopted.
- Consequently, many conservative members of the European Parliament voted down the entire extension of the regulation. They apparently felt that it was better to let the regulation expire so that they gain more negotiation power to adopt a version of the regulation that has fewer safeguards or contains measures like those in Chat Control 2.0.
I think your recap is missing a pretty large step at the very beginning, which is that, AFAIR, the EU Parliament put together this temporary regulation to retroactively allow the scanning that was already being done, outside of the law, by those US companies on EU citizens' messages; and the temporary regulation was put in place until a proper framework could be agreed upon.
Yes indeed, thanks for the correction. It has been a complex story, and I already forgot that chapter. I edited it into my post (also modified a wrong date of the first derogation), although I'm probably missing more nuances.
Basically the EU had voluntary scanning, but that wasn't enough for "child safety" idiots who wanted to spy on everyone, all the time. They got greedy and tried to go full authoritarian by targeting encrypted messaging. The resulting backlash has resulted in these wannabe authoritarians having nothing, which is pretty funny.
It also seems conceptually wrong to refer to a process of ordering and cleaning up notebook facts as 'dreaming'. If I collect and clean up my notes of the day, that's a very conscious task. Actually dreaming seems more analogous to a training or fine-tuning step where you modify the model weights.
(while hallucinating the events of the day in a very weird way; it would be fun to 'wake up' the agent in the middle of such a session and commit the 'dream' to a notebook again)
I use Big-AGI [1] as a self-hosted open source LLM workspace, and it's quite telling that when adding API keys for Anthropic, it presents a note reading "Experiencing Issues? Check Anthropic status", which it doesn't for any other model provider.
> OpenCloud is the "open-source" fork but they are already in legal trouble with OwnCloud due to industrial espionage claims.
Can you expand on this or source this? I'm quite interested in OpenCloud, and haven't heard anything about this. I searched for a few keywords (espionage, legal, lawsuit), which only lands your comment on top.
They seem to steer clear of openly discussing and comparing products to avoid further action. Apparently some of the former members of OwnCloud have switched to Heinlein (the maker of OpenCloud) and Kiteworks isn't happy about this.
I briefly scanned the paper. The above summary is garbage.
For a biologist, a summary might be like this: PCR fragments are generated with short reverse-complementary sequences added to the end of one fragment that match those at the beginning of the next to-be-joined fragment.
These will anneal to create a cross-shaped DNA molecule. The short arms of the cross being the complementary sequences. Like so:
        ∥
========∥========
        ∥
The short arms can then be processed off to leave behind the now-longer fragment. The process can be repeated using a different reverse-complementary sequence between each pair of fragments; these are the "page numbers" referred to.
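A toy sketch of the "page number" joining scheme in code, with invented sequences (real overlap design also has to consider melting temperature, secondary structure, etc.):

```python
# Toy model: fragments only join when the short arm at the end of one is the
# reverse complement of the start of the next, fixing the assembly order.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMP)[::-1]

def join(a, b, overlap=4):
    """Join two fragments whose short arms can anneal: the last `overlap`
    bases of `a` must be the reverse complement of the first `overlap`
    bases of `b`. The annealed arms are then 'processed off'."""
    tail, head = a[-overlap:], b[:overlap]
    assert tail == revcomp(head), "page numbers don't match"
    return a[:-overlap] + b[overlap:]

page1 = "ATGGCA" + "AAGG"              # fragment body + short arm "AAGG"
page2 = revcomp("AAGG") + "TTCGA"      # matching arm + next fragment body
print(join(page1, page2))              # -> ATGGCATTCGA
```

Because each junction uses a different arm sequence, mixing all the "pages" at once still yields a single defined order.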
So do the complementary sequences naturally bind to their neighbors? So you just mix the "pages" in a soup for a while until they all find their friends. And then the custom enzyme (or whatever it is) just slices off the three-way junctions?
"...You are licensed to use the source code in Admin Tools and Configuration Files (server/templates/, server/i18n/,
server/public/, webapp/ and all subdirectories thereof) under the Apache License v2.0...."
[1] https://en.wikipedia.org/wiki/Cyber_Resilience_Act