Hacker News

Why bother with the acronyms? Just look at the problem and figure out a beautiful solution. It's a lot tougher than just picking a design philosophy, but the result justifies the mental effort.


That was a bit of my reaction too. But then I thought:

Object oriented programming solves a great many problems with the construction of large systems.

However, when you're writing real-time or interactive systems, there's no escaping the fact that you must understand how CPUs, memory, and caches work.

If your game turns out to be successful and you need to fit its frame updates in 16 milliseconds (60 frames per second), then you'll need to optimally map your algorithms to the hardware.

However, most startups and most games fail. So why not optimize for whatever it takes to prove a product and scale an engineering team? As long as you understand the optimal capacity of the hardware, is initially writing your system with OOP so bad? I don't think so.

On the other hand, these types of discussions are a great way to teach people about the realities of modern hardware.


Why did you conflate startups with games? A game ships once (unless it's online); a startup ships endlessly.

The article is a bit confusing, but the way I took it when it first ran, and now, is that there aren't just performance benefits to thinking in "data flows" vs. "objects"; there are source-readability benefits too. If you can define a bespoke data structure that manages state in exactly the way you want, that's far better than a cluster of objects that mostly do the job but need a little massaging at key points. It's better on the hardware, simpler to read, and less likely to cause bugs. A 5% improvement in low-level state management multiplies many times over, because the management pattern is likely to be replicated across hundreds or thousands of slightly different game features that all rely on that data model.


I'm not talking about shipping, but instead about development risk. I've seen too many teams start projects with lots of low-risk but high-cost "engine" work like the example given in the article, when they don't even know if the game will succeed in the market.

... crap, I confused the linked article with a very similar article which I read today: http://research.scee.net/files/presentations/gcapaustralia09...

Anyway, my point stands. When starting a project, you should understand the eventual end state (high-performance algorithms making effective use of cache and memory) but don't think you need to implement it all up front.

If a data flow or procedural approach is clearer and easier to maintain, then by all means. But don't discount OOP as an intermediate state simply because you'll eventually have to translate the code to fit better on the hardware.

That's all. :)


The trouble with this ubiquitous argument is that it's a cost-benefit argument that simply ignores one of the major costs: that of changing the design later. Yet that cost can easily be as high as, or higher than, the cost of building the original system.

Treating this sort of design change as an optimization problem (i.e., we'll measure and fix the bottlenecks later) is a category error. There are many OO systems that simply can't be refactored to solve the problems the OP is talking about.

Does this turn out to matter? Sometimes yes, sometimes no. Is there any way to measure it in advance? I doubt it. But that means there's no real cost-benefit argument here at all, only gut-feeling judgment and confirmation bias.


You should definitely weigh the cost of doing the extra work now versus doing it later. In profitable, stable ventures, time now and time later have similar costs. However, in new projects, time now is dramatically more expensive than time later.

Can you give an example of an OO system that can't be refactored to a data-driven system later? I ask because I've made very similar changes to Cal3D, converting overly object-oriented code into memory-efficient data transformations, and thanks to unit tests it wasn't hard at all.


Can you give an example of an OO system that can't be refactored to a data-driven system later?

The systems I was thinking of are ones I've worked on or consulted on. Mostly, they were just big and hard to change; the OO aspects didn't help, largely because of their tendency toward object-graph spaghetti.

There was an inaccuracy in what I said (mainly for brevity). It's not true that you can't refactor such systems to solve their design problems. Technically, you can refactor anything into anything. What I mean by "can't be refactored" is "can't be refactored at a cost less than writing a whole new program". Even then, that's too strong, since you can't prove that. So strictly speaking I should have said "There are many OO systems where nobody who works on them can think of a way to refactor them to solve the problems the OP is talking about in a way that is easier than just rewriting the program." :)

I agree that test coverage makes this easier, although it also adds a maintenance burden.



