MCGA was basically a cut-down VGA integrated on the motherboard of the low-end 8086/ISA PS/2s, and both chipsets were introduced at the same time. Next to no one wanted a not-quite-VGA that lacked high-res colour or EGA modes, so almost nobody bothered cloning it; they cloned the higher-end option instead. The only known clone is in a couple of Epson models, and like the PS/2s it's a motherboard-integrated chip, not an expansion board.
Because it was the only "standard" 256-color mode back in the day, supported by both MCGA and VGA. MCGA cards had just enough RAM to support this mode (64KB), while VGA cards were a strict superset and had a full 256KB of RAM.
This is where something like "const bool isAdmin = true, sendWelcomeEmail = false" helps. Now the literal values aren't in the function call arguments anymore; their meaning is. You just need to look elsewhere (probably the line right above the call) to find their values.
It's that RICH header that you need to exclude. I just tested my copy of MSVC 2019, and `/emittoolversioninfo:no` will exclude the RICH header from the binary. Supposedly also works in MSVC 2022.
The build timestamps in the PE header and export table are a problem as well.
Adult Swim Games was its own publisher and Robot Unicorn Attack was their breakout, but they kept shipping past Flash with stuff like Duck Game and Headlander. Worth its own exhibit, honestly.
The Basic was SO BAD that I had to learn Z80 assembly to make anything good. Really.
No sane Basic should leak stack memory just because you exited an "If-Then" block without reaching the corresponding "End". Yes, that's a thing. If you use "If-Then" and the code never reaches the "End" because you used "Goto" to leave the block, a few bytes are leaked every time that happens, and eventually the program stops with "ERR: Memory". To avoid the leak you needed to use a bare "If" followed by a "Goto" on the immediately following line. Exiting or stopping the program (including via that error) gives all the leaked memory back.
Then you have the lack of actual subroutines or functions. All you can do is call into a separate program, and return things by putting them in specific variables. But the Basic doesn't even have "Gosub".
That wasn't "leaking stack memory" except in a very literal sense: the BASIC language keeps a stack of the control structures you're inside, so that when you hit an "End" or "Else" statement it knows where to go next. This "stack" of control structures isn't lexically scoped; it's dynamic, based on what control flow commands you've hit. So yes, if you use "Goto" to set up a situation where you're hitting "Then" over and over without ever hitting a corresponding "Else" or "End", the control flow stack will just keep getting deeper and deeper. That's not a "leak" per se: all those "Then" structures are still there, waiting for their "End"s, and will do the natural thing if you give their "End"s to them, even if that's somewhere a reader used to lexically scoped languages wouldn't expect. Sometimes you can do cool things with this.
(I should add that the first image on that page shows one neat effect of non-lexicality: you can put an "Else" statement as the body of an "If", so that it's skipped when the "If"'s condition is false.)
But isn't that just how the world works? As a kid I got ERR:MEMORY a lot while trying to create games. It wasn't until I started reading a C book, which said "We will not use goto in this book. It is a dangerous function that can lead to memory leaks. For example if you jump out of a function, said function will stay in memory because it never finishes." That was the light-bulb moment for my TI-Basic problems.
For me the bad part is that the official TI-83 manual has a code example for the GETKEY function that uses GOTO to jump out of a loop.
No, it's not how the world works. The warning about "goto" in C is about memory leaks due to misusing malloc/free. The issue with TI-Basic is about the interpreter using a stack for if/else/end blocks.
In normal programming languages, If-Then-Else is made up of a conditional branch that gets you into either the "If" part or the "Else" part, plus a jump at the end of the "If" part that skips past the "Else" part to the "End If". There is no stack used for that.
In PGB mode 2 the CPU is still able to run (within a limited address range) and can use register FF75 "PGBIO" for limited input and output on some cartridge pins (and/or the link port for IO).
Reading arbitrary process memory can be done as a standard user. No admin needed. Any Win32 program can do it. You just can't access the memory from processes that are admin-level.
This is not true. The canonical way to prevent access is via PAGE_NOACCESS[1]. Obviously, running as admin or in kernel mode breaks the whole thing since you can re-call `VirtualProtect` on that page and open it up.
This is accurate as far as page protection goes. The problem is the largest threat model.
If Process A and Process B are running in the same user context on a desktop OS, PAGE_NOACCESS is not a strong boundary by itself. Process B may be able to obtain PROCESS_VM_OPERATION/PROCESS_VM_READ, change the page protection with VirtualProtectEx, inject code that calls VirtualProtect inside Process A, load a DLL, attach as a debugger, duplicate useful handles, or tamper with the executable. That's the problem with same-user process isolation, it is a hugely leaky abstraction. There is no magical "just set this bit" fix.
On a desktop OS, once an evil process runs under the same user context, you are relying on process DACLs, integrity levels, code-signing, anti-injection hardening, and file-system protections. You can plug one path and still have several others.
This comment feels like it's written by AI. Anyway, PAGE_GUARD helps you get around VirtualProtectEx, which is a very common way of detecting userspace cheats.
I'm not the other commenter (and I believe you that it's not AI), but I'd guess it's mostly the first line: a short affirmation followed by "The problem is ...." feels like the sort of formula the LLMs love to use. (Not trying to imply that there's anything inherently wrong with it, of course.)
While we're at it, I'm under the impression that the recent LLMs have also co-opted "genuinely", which I'll never forgive them for—first they stole my em-dashes, and now they're stealing my adverbs too?!
I do see how your comment is similar to AI writing (a couple other comments explain) but it did NOT set off my AI trigger. I think it's just clear writing.
Without context, sentences like this mean nothing. So it's borderline a non sequitur. A threat model can be literally anything. Me giving my PC to someone at Best Buy, letting my grandma write assembly, or throwing my PC out the window can be a "large threat model." Nonsense sentence.
> If Process A and Process B are running in the same user context on a desktop OS, PAGE_NOACCESS is not a strong boundary by itself. Process B may be able to obtain PROCESS_VM_OPERATION/PROCESS_VM_READ, change the page protection with VirtualProtectEx, inject code that calls VirtualProtect inside Process A, load a DLL, attach as a debugger, duplicate useful handles, or tamper with the executable.
To the uninitiated this seems right, but really there's so much glossing over that it feels written by a non-expert who just read the first chapter of a "hacking for dummies" book. I've written anti-cheats and have even done some hardware stuff, so I say this with some degree of experience: writing a userspace hack/cheat is pretty hard without a zero-day. Most stuff won't easily get PROCESS_VM_OPERATION permissions; those are also (afaik) logged by the kernel, so you can easily see if some weird "DefinitelyNotACheat.exe" executable or "NotABadLibrary.dll" requested them, which makes it a pretty janky way of getting access to memory you shouldn't.
> That's the problem with same-user process isolation, it is a hugely leaky abstraction. There is no magical "just set this bit" fix.
Again, this is a non sequitur. No one said (or at least I didn't) that there's a "magical" bit. You're not even arguing against a strawman, it's almost like we're having two different conversations.
> On a desktop OS, once an evil process runs under the same user context, you are relying on process DACLs, integrity levels, code-signing, anti-injection hardening, and file-system protections. You can plug one path and still have several others.
Also seems right, and it kinda' is, but code signing is notoriously easy to circumvent, "anti-injection hardening" can mean like three million different things, etc. I dunno, just sounds like someone that's never done this stuff before. Like, not bringing up Detours[1] when talking about "anti-injection" just seems like weirdly avoiding the ONE canonical way of doing this, which just about every single hacking/cracking book covers. Idk, weird omission.
Also, no one in their right mind would attach a debugger, as that's trivial to detect[2]. I guess it could be a decent proof of concept, but no serious hacker would ever go that route. (Also, if I remember correctly, you also need to ship some special DLLs that have the actual debugging helpers—and same with Detours, so might as well do that).
Just wanted to give my justification for the accusation. Maybe I'm wrong and maybe that's why I'm getting the downvotes, so my bad.
I think you are viewing this with your anti cheat experience where detection is key. Can a regular process protect against another regular process reading its memory through PROCESS_VM_READ or can it at best only detect that it happened?
They also act as access alarms[1]. Why even comment if you didn't bother to read the docs?
> The PAGE_GUARD protection modifier establishes guard pages. Guard pages act as one-shot access alarms. For more information, see Creating Guard Pages.
Absolutely wrong. Are we writing the same code here? Page guards are for all userspace access. (In fact, I think kernel space might also trigger them, but can be circumvented. PS: I'm being polite :) Kernel space 100% triggers them, but can be cleverly circumvented by fucking with logs.)
Thankfully our recent experiences with OpenClaw have given us all a lot of faith that users are extremely diligent in what processes they allow access to what information.
A correct ROM packaged using the emulation community's standard file format will exactly match a pirated copy. There is no story here. After the article came out, Nintendo even created their own incompatible ROM packaging format just to spite that article.