There were 387 co-pros, just like the 287s (and 8087s). You could actually use a 287 to provide floating-point instructions to a 386, albeit more slowly than a 387.
Very little, if any, “home” or small-business software would make use of a floating-point unit though (maybe some spreadsheet apps did?). The most common use for them was CAD/CAM, and those doing scientific modelling without a budget that would allow for less consumer-grade kit.
For a time, systems with a 386SX were significantly cheaper than those with a 386DX because the 16-bit data bus meant cheaper motherboards could be used.
If you were running 16-bit software they were little slower than a 386DX at the same clock, and significantly faster than a 286. That was partly down to higher clocks (286s usually topped out at 12MHz, though there were some 16MHz options, while the slowest 386s ran at 16MHz and some reached 40MHz), but also in part, when not blocked by instruction-ordering issues, down to the (albeit small by modern standards) instruction pipeline which the 286 lacked.
32-bit software was a lot slower than on a DX because 32-bit data reads and writes took two trips over the 16-bit data bus, but you could at least run the code as it was a full 386 core otherwise (full enhanced protected mode, page based virtual memory, v8086 mode, etc).
The SX also only used 24 bits of the address bus, limiting it to 16MB of RAM compared to the original's 4GB range, though this was not a big issue for most at the time.
I've got an AMD-branded 286 chip, from my first owned-by-me PC, Blu-Tacked to the case of my home desktop PC, powered by a Ryzen something-or-other from a few years ago (with a 1060/6GB card from a few years before that, because I wasn't gaming enough to justify a new graphics card along with the other updates at the time).
Not complaining about the particular presenter here, this is an interesting video with some decent content, I don't find the presentation style overly irritating, and it is documenting a lot of work that has obviously been done experimenting in order to get the end result (rather than just summarising someone else's work). Such a goofy elongated style, that is infuriating if you are looking for quick hard information, is practically required in order to drive wider interest in the channel.
But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently because that is the way to monetise it, or sometimes just to game the searching & recommendation systems so it gets out to potentially interested people at all, and then we are encouraged to use a computationally expensive process to summarise that to distil the information back out.
MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want to (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).
The video definitely wouldn't be over 50m if she were targeting views. 11m-15m is where you catch a lot of people repeating and bloviating 3m of content to hit that sweet spot of the algorithm. It's sad you can't appreciate when someone puts passion into a project.
This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah she just uses auto tune obviously. Great speech? Nah obviously LLM helped. Besides I don't have time to read it anyway. All I want is the summary.
Yes, I do want the summary because my time is (also) valuable. There is a reason why book covers have synopses, to figure out whether it's worth reading the book in the first place.
I don't consider AI to threaten "damage to society" the way you seem to, but I did find it interesting to think about how ridiculously well-produced the video was, and what that might signify in the future.
I kept squinting and scrutinizing it, looking for signs that it was rendered by a video model. Loss of coherence in long shots with continuity flaws between them, unrealistic renderings of obscure objects and hardware, inconsistent textures for skin and clothing, that sort of thing... nope, it was all real, just the result of a lot of hard work and attention to detail.
Trouble is, this degree of perfection is itself unrealistic and distracting in a Goodhart's Law sense. Musicians complain when a drum track is too-perfectly quantized, or when vocals and instruments always stay in tune to within a fraction of a hertz, and I do have to wonder if that's a hazard here. I guess that's where you're coming from? If you wanted to train an AI model to create this type of content, this is exactly what you would want to use as source material. And at that point, success means all that effort is duplicated (or rather simulated) effortlessly.
So will that discourage the next generation of LaurieWireds from even trying? Or are we going to see content creators deliberately back away from perfect production values, in order to appear more authentic?
Similarly, I always leave some space unallocated on LVM volume groups. It means that I can temporarily expand a volume easily if needed.
It also serves to leave some space unused to help out the wear-levelling on the SSDs underlying the RAID array that is the PV¹ for LVM. I'm not 100% sure this is needed any more² but I've not looked into that sufficiently, so until I do I'll keep the habit.
--------
[1] if there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually skip a bit on each one because LVM will naturally fill one before using the next. Just allocate a small LV specially on each and don't use it. You can remove one/all of them and add the extents to the full LV if/when needed. Giving it a useful name also reminds you why that bit of space is carved out.
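A sketch of that per-PV placeholder trick. The VG/PV names and sizes here are invented examples, and `DRYRUN=echo` makes the script print the commands instead of running them (clear it, as root, to act for real):

```shell
#!/bin/sh
# Hypothetical sketch: pin a small "spare" LV to each PV so LVM's
# fill-first allocation leaves slack on every drive. Names/sizes invented.

reserve_spare() {   # reserve_spare <vg> <pv> <size>
    # A descriptive LV name records why this space is carved out.
    ${DRYRUN:-} lvcreate -L "$3" -n "spare_$(basename "$2")" "$1" "$2"
}

DRYRUN=echo         # dry run: only print the lvcreate commands
reserve_spare vg0 /dev/sda2 8G
reserve_spare vg0 /dev/sdb2 8G
```

When the space is needed, `lvremove` the relevant spare LV and the freed extents become available on exactly that PV.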
I do that, plus a script to auto-resize within sane limits. So for most servers the partitions will automatically fit the usage while still leaving some spare space.
Usually something like "expand if there is less than 5% left, with monitoring triggering when there is 4% free space left", so there is still a warning when the automatic resize is at its limit.
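That expand-below-threshold check might look something like this. This is a sketch with invented VG/LV names, mountpoints, and thresholds (not anyone's actual script), and `DRYRUN=echo` makes it print the resize command rather than perform it:

```shell
#!/bin/sh
# Sketch of an auto-resize check, to be run from cron or a timer.
# All names, mountpoints, and thresholds are invented examples.

should_extend() {   # should_extend <free_pct> <min_free_pct>
    [ "$1" -lt "$2" ]
}

maybe_extend() {    # maybe_extend <lv> <mountpoint> <min_free_pct> <step>
    used=$(df -P "$2" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if should_extend $((100 - used)) "$3"; then
        # -r grows the filesystem too, so the whole resize happens online
        ${DRYRUN:-} lvextend -r -L "+$4" "$1"
    fi
}

DRYRUN=echo                           # print instead of resizing
maybe_extend /dev/vg0/data / 5 10G    # extend by 10G when <5% free
```

Pairing this with monitoring that fires at 4%, as described above, means you still hear about it when the automatic resize runs out of headroom.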
carving space per PV like that is pointless
> It also serves to leave some space unused to help out the wear-levelling on the SSDs on which the RAID array that is the PV¹ for LVM. I'm, not 100% sure this is needed any more² but I've not looked into that sufficiently so until I do I'll keep the habit.
YMMV, but most distros set up a cron job/systemd timer that runs fstrim monthly. So it shouldn't be needed, as any free space will be returned to the SSD.
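On a systemd distro you can check whether that periodic trim is scheduled, or trim once by hand. Shown here in dry-run form (`DRYRUN=echo` prints the commands; clear it and run as root to actually do it):

```shell
#!/bin/sh
# Sketch: verify periodic TRIM is scheduled, or run a one-off trim.
check_trim() {
    ${DRYRUN:-} systemctl list-timers fstrim.timer   # is a periodic trim scheduled?
    ${DRYRUN:-} fstrim -v /                          # one-off manual trim of /
}
DRYRUN=echo
check_trim
```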
> [1] if there are multiple PVs, from different drives/arrays, in the VG, then you might need to manually skip a bit on each one because LVM will naturally fill one before using the next. Just allocate a small LV specially on each and don't use it. You can remove one/all of them and add the extents to the fill LV if/when needed. Giving it a useful name also reminds you why that bit of space is carved out.
Other options are telling LVM the LV is striped (so it uses space from both drives equally), or manually allocating from the drive with more free space when expanding/adding an LV.
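Both alternatives in LVM command form. The VG/LV/PV names are invented, and `DRYRUN=echo` prints rather than runs:

```shell
#!/bin/sh
# Sketch of the two alternatives: a striped LV, or steering new extents
# to a named PV. VG/LV/PV names are invented examples.
pv_alternatives() {
    ${DRYRUN:-} lvcreate -i 2 -L 100G -n media vg0    # stripe across 2 PVs
    ${DRYRUN:-} lvextend -L +20G vg0/data /dev/sdb2   # take new extents from sdb2 only
}
DRYRUN=echo
pv_alternatives
```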
Not needed. All your unused/unfilled space is that space for wear-levelling. It wasn't needed even back then, besides some corner cases. And most importantly, 10% of a drive in ~2010 was 6-12GB; nowadays it's 50-100GB at least.
But even ignoring the wear-levelling issue, the spare space still fulfils a need in providing the ballast space which is the main thing we are talking about here. Of course there are other ways to manage that issue¹ but a bit of spare space in the volume group is the one I go for.
In fact since enlarging live ext* filesystems has been very reliable² for quite some time and is quick, I tend to leave a lot of space initially and grow volumes as needed. There used to be a potential problem with that in fragmenting filesystems over the breadth of a traditional drive's head seek meaning slower performance, but the amount of difference is barely detectable in almost all cases³ and with solid state drives this is even more a non-issue.
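The online grow itself is a two-liner (invented names again, `DRYRUN=echo` for safety): resize2fs has supported growing a mounted ext3/ext4 filesystem for a long time.

```shell
#!/bin/sh
# Sketch: enlarge an LV, then grow the mounted ext4 filesystem into it.
grow_online() {     # grow_online <lv> <extra>
    ${DRYRUN:-} lvextend -L "+$2" "$1"
    ${DRYRUN:-} resize2fs "$1"    # online grow; no unmount needed
}
DRYRUN=echo
grow_online /dev/vg0/home 20G
```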
> And most importantly 10% […] nowadays it's 50-100GB at least.
It doesn't have to be 10%. And the space isn't lost: it can be quickly brought into service when needed, that is the point, and if there is more than one volume in the group then I'm not allocating space separately to every filesystem as would be needed with the files approach. It is all relative. My /home at home isn't nearly 50GB in total⁴, nor is / anywhere I'm responsible for even if /var/log and friends are kept in the same filesystem, but if I'm close to as little as 50GB free on a volume hosting media files then I consider it very full, and I either need to cull some content or think about enlarging the volume, or the whole array if there isn't much slack space available, very soon.
--------
[1] The root-only-reserved blocks on ext* filesystems, though that doesn't help if a root process has overrun, or the files approach already mentioned above.
[2] Reducing them is still a process I'd handle with care, it can be resource intensive, has to move a lot more around so there is more that could go wrong, and I've just not done it enough to be as comfortable with the process as I am with enlarging.
[3] You'd have to work hard to spread things far and randomly enough to make a significant difference.
[4] though it might be if I wasn't storing 3d print files on the media array instead of in /home
Empty space is good for wear-leveling but enforcing a few percent extra helps.
> And most importantly 10% of the drive in ~2010 were 6-12GB, nowadays it's 50-100GB at least.
Back then you were paying about $2 per gigabyte. Right now SSDs are 1/15th as expensive. If we use the prices from last year they're 1/30th, and if we also factor in inflation it's around 1/50th.
So while I would say to use a lower percentage as space increases, 50-100GB is no problem at all.
Only if you fill the drive up to 95-99% and do this often. Otherwise it's just a cargo-cult.
> So while I would say to use a lower percentage as space increases
If your drive is over-provisioned (eg 960GB instead of 1024GB) then it's not needed. If not, and you fill your drive to the full and just want to be sure, then you need the size of the biggest write you would do plus some leeway; eg if you often write 20GB video files for whatever reason then 30-40GB would be more than enough. Leaving 100GB of a 1TB drive is like buying sneakers but not wearing them because they would wear.
> If your drive is over-provisioned (eg 960GB instead of 1024GB) then it's not needed.
I disagree. That much space isn't a ton when it comes to absorbing the wear of background writes. And normal use ends up with garbage sectors sprinkled around inflating your data size, which makes write amplification get really bad as you approach 100% utilization and have to GC more and more. 6% extra is in the range where more will meaningfully help.
> Leaving 100GB of 1TB drive is like buying a sneakers but not wearing them because they would wear.
50GB is like $4 of space the last time most people bought an SSD. Babying the drive with $4 is very far from refusing to use it at all. The same for 100GB on a 4TB drive.
Nah, we used some consumer SSDs for write-heavy but not all that precious data, and time to live was basically directly dependent on the space left free on the device.
Of course, it doesn't matter for desktop use as the spare on the drive is enough, but still, if you have 24/7 write-heavy loads, making sure it's all trimmed will noticeably extend lifetime.
Yes, this is the reason why a 0.3 DWPD drive is rated an order of magnitude (10×) below a 3 DWPD one. I know the horror stories of using Samsung EVOs for SQL loads, especially < 512GB.
But yes, without the actual use-case it's just speculation.
NB QVO drives I mentioned a year ago in the comments are still running, but I do make sure they are never used more than 80%
Yep. That is why doing both can be beneficial. Alerts are more proactive if acted upon, but often too easy to ignore meaning ballast is more fail-safe in that respect.
I didn't read Idiocracy as eugenics/anti-eugenics. It wasn't saying that stupid people breeding made the population stupid, it was saying that the less educated breeding resulted in the more educated being pushed to the periphery and eventually fading out.
The people of the film's future were not stupid, just massively uninformed and misinformed. They were able to grasp the problem and solution in the end.
Unless I'm misremembering, and it did make direct reference to intelligence rather than education and access to it. It is a good few years since I last watched it. There is the title, of course, but "educationally-disadvantaged-ocracy" would not have been catchy enough!
> Unless I'm misremembering, and it did make direct reference to intelligence rather than education and access to it.
You are misremembering; they had a scene of an intelligence test that had adults matching shapes (stars -> stars, squares -> squares) and getting it wrong.
I'm afraid you are misremembering. The movie is explicitly eugenicist. The people of the future are explicitly biologically stupid. The opening transcript is unambiguous:
[Man Narrating] As the 21st century began… human evolution was at a turning point.
Natural selection, the process by which the strongest, the smartest… the fastest reproduced in greater numbers than the rest… a process which had once favored the noblest traits of man… now began to favor different traits.
[Reporter] The Joey Buttafuoco case-
Most science fiction of the day predicted a future that was more civilized… and more intelligent.
But as time went on, things seemed to be heading in the opposite direction.
A dumbing down.
How did this happen?
Evolution does not necessarily reward intelligence.
With no natural predators to thin the herd… it began to simply reward those who reproduced the most… and left the intelligent to become an endangered species.
What is "explicitly eugenicist" in observing that the unprecedented way mankind has dominated its environment has changed the selection pressures we are subject to?
My quest to survive to adulthood and pass on my genes looked nothing like the gauntlet a Homo erectus specimen would have run.
Yep. The studio didn't know what the hell to do with it.
I'm guessing that we (those of us who have seen it despite the lack of promotion) are lucky that they didn't just can it completely, or demand it get cut to ribbons and reformed as something else.
I think directors in that era could avoid this by not doing extra takes for scenes that would never make it anyway. Mike Judge did not have the budget for that in any case.
Nowadays they just change the scenes in post anyway, leading to some of the worst and most atrocious continuity errors.
Yes, but platforms need to make a case for why we should feel inclined to make the effort.
Is the ROI from the potential audience going to be worth it?
Or does it help some people enough that I might want to do it despite no real ROI? (i.e. I make the effort to make things compatible with common assistive tech, even where there is little or no end benefit for me, so it is sunk time in that respect; does this new platform qualify for my time & attention enough in that way?)
Or is it simply cool enough for me to be interested in playing with it myself?
If none of the above, is there any other reason we should care?
This is especially true if building for the platform means building something new, not just making tweaks to ensure your existing output is compatible.
> First, referencing "Nazi" has an age old tradition of immediately meaning you lose the debate.
True. Though to be frank, before typing my longer response I did consider just telling you the same about the “but forget everything else and think of the children” line of reasoning.