Yep, that is annoying. There are USB-C magnetic charge adapters. One will prevent shit from getting into the slot and makes charging MagSafe-style easy. And of course you can easily take it out temporarily to use a standard USB-C charging cable.
Time to migrate off Atlassian, and ban it for any use in the company. You cannot just help yourself to customer data like that. The data is not yours, never was, and never will be. Pay for a service that blatantly rips off our company IP? Nope.
Thanks for showing your colors so clearly Atlassian. Good riddance.
A decent number of software developers and gamers do spend 3000 USD on a PC. That kind of hardware is going to get more and more capable over time wrt genAI models.
Of course there will always be a gap to frontier closed hosted models. It is not an either/or proposition.
Nice specs! Looking forward to seeing how this and the other projects on Waferspace go. Being able to produce 1k chips at a reasonable price will hopefully do wonders for open hardware / open silicon.
Yep, Aegis's Terra 1 is designed to be "good enough" for the first generation. I do plan on expanding the Terra family of FPGAs if there's enough interest. I do want to work my way up to 100k LUTs.
What are your thoughts on including a RISC-V hardcore along with the gates? For almost all projects I can imagine using an FPGA for, I would want a microcontroller as well. This might however be slightly colored by me being a software/firmware-first type of electronics engineer. I'm thinking especially of the smaller gate counts, like under 10k - because there a soft CPU takes up very precious resources.
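To make the resource argument concrete, here is a back-of-the-envelope sketch. The ~2,000-LUT figure for a minimal RV32 soft core is my assumption (real numbers vary a lot with the configuration); the FPGA sizes are the ones mentioned in the thread.

```python
# Rough LUT-budget sketch for the soft-CPU-vs-hardcore question.
SOFT_CORE_LUTS = 2_000  # assumed size of a minimal RV32 soft core

for fpga_luts in (10_000, 100_000):
    share = SOFT_CORE_LUTS / fpga_luts
    print(f"{fpga_luts:>7} LUTs: soft core eats {share:.0%} of the fabric")
```

At the small end the soft core eats a fifth of the fabric, which is why a hardcore next to the gates is attractive there; at 100k LUTs it barely registers.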
1k chips for $4000 or $7000 at 180nm is (a lot) more expensive than 180nm at MOSIS or Europractice. I would not call it reasonable, especially because the EDA software tools and PDK used are inferior.
I went through the list of prices at Europractice. Waferspace is 7000 USD for 1k chips of 20mm2. That is a per-mm2 price of 350 USD. I could not find any offering at Europractice that matches that?
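The arithmetic behind that figure, using the numbers from the comment above (7,000 USD for a 1k-chip batch of 20 mm2 dies):

```python
# Batch price divided by die area gives the per-mm2 price for the batch.
def batch_price_per_mm2(batch_price_usd: float, die_area_mm2: float) -> float:
    return batch_price_usd / die_area_mm2

print(batch_price_per_mm2(7000, 20))  # -> 350.0 USD per mm2 for the batch
```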
Chip fabs do not publish prices. First of all, the cost of making a wafer is not a single line item. What node, on which machines will it be made, what process, what PDK, are you breaking any of the PDK limits, what testing has your design gone through, how will the wafer be diced, are there tests before the wafer gets diced or only after, what packages will the chips be in. Insurance types and fees, locations, batch sizes. All these steps can be performed in different fabs by different companies and subcontractors; between them your wafer might have to be shipped under clean-room conditions, sometimes flown around the world.
A wafer batch price is a very complex multi-party negotiation under NDAs; none of them has ever been made public. Show me any credible price quote from the last 55 years (for a few million chips). You can't.
On these multi-party shuttle projects this gets simplified into a price list where they quote you a high ballpark number that covers your test chips' cost by a wide margin. The actual cost is never disclosed, certainly not on price lists.
A mask set maker and a chip fab create half of your product; they own that intellectual product and they won't even tell you what it cost them. They merge their product with yours, and now they co-own your product. There are only a few competing companies worldwide (and getting fewer every year) and they compete on all this non-disclosed stuff. Prices above all.
Never believe what you read on the internet, especially in the chip war industry.
You are the one who claimed the prices of those shuttle services were lower than that of WaferSpace: 7k USD for 1k chips of 20mm2 at 180nm. Is that not the case?
There are over a hundred [1] shuttle services (group purchasing of test chips on a multi-project wafer) in the world. Several are even free: academics and universities offer them to students or PhDs, some are state sponsored in China and Europe, some start at $100, some are very specialized, others are a 'sample' from big chip fabs, some are offered on the cheap to get you hooked, tied into a chip fab's PDK under several NDAs.
There are a few EDA companies, all with ancient software tools kept up to date with the changing parameters and algorithms. You use the tools the insurance companies tell you to, or the mandatory tools of your chip fab suppliers. They run a lot of software tools on your design files that you never get to see.
If you want to make better chips, like the low power Apple Silicon for example, you create your own EDA software tools to make the innovation. Creating a new transistor like the CFET [1] means writing new physics simulation tools, for example.
The outdated 1990s-era and buggy OpenLane software, for example, limits what kind of RAM transistors you can make or the complexity of your design.
My team makes asynchronous chips, free-space optics photonics, ultra-dense 2-transistor SRAM, niobium SQF chips, wafer-scale integrations. All require bespoke software simulation tools, netlist rewriting tools, cross-reticle stepper exposure software (a software change in a $400 million machine), etc. etc.
Making hardware with near-atomic-size structures is mostly a software job. Hardware is just software crystallized early, as Alan Kay quips.
Glad to see this. At least there is one player in Europe doing full vertical integration around LLM/AI - from datacenter to LLM models and applications (Mistral Vibe).
On the data center part Europe seems to be doing OK, and also OK on applications. It would be nice to see more players focusing on the LLM model building - though it legitimately seems like a very tough (maybe even bad) business to be in.
Have been playing with Qwen3.5 35B. Runs nicely on a RTX5060Ti, though I would have liked a bit higher throughput (a 5080/5090 would do). It is seemingly close-but-not-quite-there for code generation / agentic coding. So I am actually quite hopeful that in a few years' time, using local LLM models will be quite feasible.
An AMD Ryzen AI Max Pro 396 will get 50t/s with Qwen3.5 35B.
In addition, these local models are very, very, very sensitive to the template used. Make sure it is correct. I was using the wrong template and it would answer, but it felt like it had a brain worm.
The parameters must also be set to what is recommended, otherwise they go off the rails.
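For the template point: the Qwen family uses a ChatML-style format, and a minimal hand-rolled sketch of it looks like the code below. In practice you should let the runtime apply the model's own bundled chat template (e.g. `tokenizer.apply_chat_template` in Hugging Face transformers) so the prompt can't silently drift from what the model was trained on.

```python
# Hand-rolled sketch of a ChatML-style prompt, as used by Qwen models.
# Real deployments should use the model's bundled template instead.
def format_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

Get one special token wrong and the model still answers, just badly - which matches the "brain worm" symptom.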
I get great results now after messing with it for a while. I prefer the 35B model because I enjoy how fast tokens appear at 50t/s, but at around 20-25t/s with the 122B model, it is also completely usable. And that one is very smart.
Very much looking forward to playing with the BIO functionality on the Baochips that I have ordered. Thanks for the nice write-up!
It is fascinating to see how widely applicable the "just throw a RISC-V core or 4 in there" design pattern is. The wide range of standardized CPU designs, the number of mature open source implementations, the lack of royalty fees, and the ready-to-run programming toolchains really take this to a new level. And CPUs are small in die area anyway compared to SRAM! It was cool to see on the RP2350 how they just threw in another two RISC-V cores next to the ARMs.
For the reasons above, I think this trend will continue. For example, in my specialization of edge machine learning, we are seeing MEMS sensors that integrate user-programmable DSP+ML+CPU right there on the sensor chip.
Highly recommend Statistical Rethinking for anyone looking for a practical/applied/intuitive approach to Bayesian statistics. For example the 2023 lecture series:
https://youtu.be/FdnMWdICdRs?is=KycmwPL-cn8clOK5