I tend to use the "model" and "view" concepts a lot when discussing architecture, but in my experience it's almost always a mistake to reference any specific MV* pattern for explanatory purposes - it rarely makes the discussion clearer.
The issue is that there isn't actually a consensus about what constitutes the definitional features of these patterns, especially when it comes to how the concepts involved actually translate into code. For any sufficiently notable discussion of an MV* pattern, you're going to find an argument in the comments about whether the author actually understands the pattern or is talking about something else, and typically the commenters will be talking past one another.
Note that I'm NOT claiming that there's anything wrong with your favorite explanation of MV* - it may be perfectly well defined and concrete and useful once you understand it. The issue is a feature of the community: lots of other people have a different (and possibly worse) understanding of what MV* means, so when you start talking about it and your understandings don't align, confusion arises. Getting those understandings back in alignment is more trouble than the acronyms are worth.
I've seen enough conversations about concrete development issues suddenly turn into disagreements about the meaning of words to realize that nothing useful is conveyed by mentioning MV* and assuming anyone knows what you're talking about - it's better to spell out exactly what you mean in reference to the code you're actually talking about, even if you have to use more words.
I think there is an essential core of an idea in "MVC" which persists across all of its interpretations/implementations.
The core idea is: you want to keep a firm separation between UI code (view) and object-oriented business logic (model).
Other objects will need to reuse your model as your application grows; UI code will clutter that reuse. For example, your business logic should return an array of Customers, not an array of UI labels. The business logic shouldn't directly refer to UI code at all.
In addition, you'll probably want to reuse some of your UI code in multiple contexts. A fancy resizable scrollable table shouldn't directly refer to Customers, because you might also want to use it to display a list of Purchases.
But if the business logic shouldn't directly refer to UI code, and UI code shouldn't directly refer to business logic, then you'll need some code to bridge the gap. That's the controller.
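A minimal sketch of that separation (the names `Customer`, `renderTable`, and `showOverdrawn` are illustrative, not from any particular framework):

```typescript
// Model: business logic, no UI references.
interface Customer { name: string; balance: number }

class CustomerModel {
  private customers: Customer[] = [];
  add(c: Customer) { this.customers.push(c); }
  // Returns domain objects (Customers), never UI labels.
  overdrawn(): Customer[] { return this.customers.filter(c => c.balance < 0); }
}

// View: generic UI code, no Customer references.
function renderTable(rows: string[][]): string {
  return rows.map(r => r.join(" | ")).join("\n");
}

// Controller: the bridge. The only piece that knows about both sides.
function showOverdrawn(model: CustomerModel): string {
  const rows = model.overdrawn().map(c => [c.name, String(c.balance)]);
  return renderTable(rows);
}
```

Note that `renderTable` could just as easily display Purchases, and `CustomerModel` never imports any UI code - only the controller couples the two.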
But this idea still leaves a great deal open to interpretation. How much work should the controller do, vs. pushing that work to the model or the view code? What's "business logic" as opposed to "view logic"? Should the controller code be a single object? Is your view logic complicated enough (and reusable enough) to justify separating it out into another object (a "view model")? If you never write any reusable view code, do you even have/need MVC? All of these questions are unresolved by "MVC."
But I doubt that this core idea is what the original MVC idea had in mind.
I believe the original idea was that the M, V, and C are separated but coupled: all views bound to a model update at the same time when a controller triggers a change in that model.
On the web this concept fits awkwardly, which I think is why there are so many different MVC concepts.
We should simply stop referring to MVC as a "pattern" and call it what it is instead: a high-level, abstract, general idea about how to structure an application. There isn't a right or wrong interpretation or implementation, just different applications of the same idea.
I tend to think of MV* as a misnomer, because people generally seem to understand now that "Model" constitutes the business logic and any associated business data (be it a database or document).
But every single environment is going to draw a different circle around what a View is - what is provided, what you are expected to build yourself, and how to interface the two. For instance, the V/C split presumes it is even appropriate to decouple input handling from output.
The most important concept is that your model should be separate from all that. If you want to show the model in some new clever way, that should be via some View-ish functionalities manipulating data gained from the model. If you want some cool new behavior or functionality though, you should put that business logic entirely within the model. Then, you can properly control the boundary between the two sides - if all you do is add new methods or expose new data without changing behavior, you know that everything outside the model will still work without change. Start changing existing behavior, and your UI pieces will likely need to change to accommodate that.
In my experience, the most common way people get MVC wrong is equating "model" with "data model", and "controller" with "business logic". Since the controller is also responsible for interacting with a specific view, you haven't decoupled your software; you've just spread logic into three buckets.
IMHO, the real goal is establishing a somewhat stable boundary between the business logic and state and your UX, both to allow logic to be reused in other presentations and to allow both to evolve without constantly breaking code.
MVC vs MVP or MVA is a difficult distinction IMHO, because the UX portion is really split between the view and controller, which may also include interactions with a more sophisticated UI system (for example, with its own input event handling and data binding). For a lot of these systems, what constitutes a "View" or a "Controller" really depends on the infrastructure provided to the developer.
Template-based web frameworks can actually be fairly close to MVC - the controller handling the input request, translating that to commands against the model, and then taking data from the model and handing that off to the view to present to the user.
A component-based system has components which constitute the UI presentation, but also handle user input and may have data binding. At this point you are far closer to "Model View Adapter", which as the article points out is what you'll see in most componentized frameworks.
Again IMHO, but the difference between MVA and "Model View ViewModel" (MVVM) is really about implementation strategies for the adapter layer. That there are different strategies is one of the reasons that attempts by frameworks to simplify this adapter layer result in different terminology (for instance, ViewControllers and Delegates and DataSources, vs ViewModels).
> If you design a system, it is not helpful to consider options like MVC vs MVVM vs MVP. Instead, identify potential problems and if necessary solve them.
I think that's a pretty good take on it. There's an annoying tendency for most conversations including the notion of MVC to devolve into nitpicking about which variant is actually being discussed and whether some architecture is 'really' MVC or something else that looks like it.
Its main value is in an abstract principle that it references: when modeling something, it's useful to separate a thing's essence from representations of that essence (the Model in MVC would be the essence and Views representations). This is an idea that shows up all over the place; it's almost certainly related to human cognitive structure in some way. (The Controller component is more arbitrary; you can see it pointed out in the article also that it was more related to implementation than the more general Model/View concepts.)
One way I've found useful for thinking about the principle is in terms of causality within a system. When you're separating out the model/essence from the view/representation portions of a system, the most important distinction is: the essence carries all the information necessary to determine (a configuration of) the representation. In this case, you can set up a mapping from the essence to the representation once; after that, you can stop thinking about the view/representation because changes to the model/essence already cover everything.
It's a very general kind of information compression that helps us manage system complexity by creating causal hierarchies.
(An example of another kind of 'causal hierarchy' I was thinking about recently: the decomposition of changes in position in Newtonian mechanics ultimately into forces (rather than working with e.g. velocity directly). If you know the forces (and mass), they imply everything you need to know about acceleration, from which you may deduce velocity, and then position. Once you have that mapping from force->acceleration->velocity->position, your mind is freed up to think only about forces.)
> When you're separating out the model/essence from the view/representation portions of a system, the most important distinction is: the essence carries all the information necessary to determine (a configuration of) the representation. In this case, you can set up a mapping from the essence to the representation once; after that, you can stop thinking about the view/representation because changes to the model/essence already cover everything.
How does this work when you have to deal with animations, or state that you don't want to commit until the user completes a gesture or activity, or complex transitions between screens? It's not necessarily a good idea to shove all that stuff directly into the model.
I think if you treat the Controller as a mere implementation detail, your interfaces will lack liveliness. Simulation time and running user input are just as essential to the view as any changes to the model. That's (IMO) what the Controller is there for: to encapsulate that sort of temporal complexity.
TL;DR: I'm experimenting with that problem now. Currently I'm trying out this idea of 'transient state' described more below.
-------
I've actually been working on something that deals a lot with that problem. It's a game with all the rendering done in a single pixel shader.
All the gameplay logic is in a class called GameStateTransformer. The idea was to treat the game like a dynamical system where the state evolves as a function of 1) time 2) a sequence of 'events'. The core of it is a loop that runs 60 times per second, evolving state via time-dependent logic and by grabbing/processing any events in the queue at that time.
The rendering (i.e. view) is connected to the state (i.e. model) by executing a function `mapStateToUniforms()`[0] every frame. It always works the same way regardless of whether there is an animation going on or whatever; animation logic just updates parts of the state that will get mapped to GLSL uniforms (mostly it's things like `deathAnimRatio` taking on values between 0 and 1.)
The situation with events taking place over time was certainly the most complex architectural aspect I had to deal with. There were two main things I used:
1) 'transient state'; I would set up a bit of temporary/transient state by making a call like `runTransientState('player.dying', {deathPosition: blah}, 1.5)` —first arg is a path into the state object, second is a 'data' object, third is the amount of time it should remain in the state. After that call my state object would contain something like `{..., player: {..., dying: {position: blah}}}` So other code could check `if (state.player.dying)` and access related sub-data (e.g. `state.player.dying.position`); and that part of the state would be deleted after 1.5 seconds.
2) Emitting events at key points during animations/transitions (generally when they finish). Also the 'transient state' system always emits an "[transientStatePath]_finished" event when a bit of transient state's time runs out (e.g. "player.dying_finished").
There was another subsystem I put in place related to this which I haven't had to use much so far though: 'contingent evolvers', which are objects that define a `condition(state)` function and an `evolve(state, deltaTime)` function, and are automatically only executed while their condition is met.
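A rough sketch of how a 'transient state' helper like the one described above might look (the original implementation isn't shown; the function names and behavior follow the description, so treat this as an assumption-laden reconstruction):

```typescript
type State = { [key: string]: any };

// Sets state at a dot-separated path, schedules its removal, and emits
// "<path>_finished" when the time runs out - a sketch of the
// runTransientState(...) call described above, not the original code.
function makeTransientState(state: State, emit: (event: string) => void) {
  return function runTransientState(path: string, data: object, seconds: number) {
    const keys = path.split(".");
    const last = keys.pop()!;
    let node = state;
    // Walk (and create) intermediate objects along the path.
    for (const k of keys) node = node[k] ??= {};
    node[last] = data;
    setTimeout(() => {
      delete node[last];
      emit(`${path}_finished`);
    }, seconds * 1000);
  };
}
```

With this, `run("player.dying", { position: [3, 4] }, 1.5)` makes `state.player.dying` truthy (with its data) for 1.5 seconds, after which "player.dying_finished" is emitted.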
Not positive how well this would scale to larger applications, but it has worked well for the game so far.
[0] For those unfamiliar with GLSL, 'uniforms' are variables passed into your shader from the application code. Since all of my rendering is in a pixel shader, the code there is a function of the current values of the uniforms plus the x,y coordinate of each pixel.
Something not mentioned much when discussing MV* is that each part often runs in its own process, and each has distinct constraints and performance characteristics: the View depends on graphics rendering, the Model often depends on IO, and the Controller typically depends on the user.
In contrast to the author, I consider MVC to be a very well defined concept. However it is a concept and the implementation details are left to each specific application. In fact in any GUI there are multiple incarnations of the MVC paradigm each designed to their specific responsibility and happily ignorant of the others.
I've always found MVC or any flavor of it to be a flawed paradigm. Instead, a separation of APIs, and clients that consume them, is a more pleasant way to work with things that need a view layer. "Controllers" are undefined things. We get frameworks like Rails that encourage spaghetti code placement (and make spaghetti a first class citizen), because the fundamental concept of a controller doesn't make sense.
MVC actually works really well for an HTTP framework like Rails, because the separation of input (the controller) and output (the view) fits well when your interaction model is based on request/response.
Rails goes a bit far in pushing for your model to be built around your database-backed data model (but does properly push you to include data integrity and business logic there as well).
Since there is no object classification in Rails called Spaghetti, I can't really speak to it being a first class citizen - nor can I guess which of Rails' shortcomings you are referring to.
Even with a client/API separation, your API still has input, output, and business logic - the three components of MVC. An API can benefit from having MVC once it has multiple views; for instance, if you later find you must support different clients utilizing multiple concurrent versions.
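Sketched without any particular framework, that separation inside an API might look like this (all names are illustrative, and the "versions" stand in for the multiple concurrent clients mentioned above):

```typescript
interface Account { id: number; balance: number }

// Model: business logic, shared across all API versions.
function applyDeposit(a: Account, amount: number): Account {
  return { ...a, balance: a.balance + amount };
}

// Views: one model, multiple concurrent representations.
const renderV1 = (a: Account) =>
  JSON.stringify({ id: a.id, balance: a.balance });
const renderV2 = (a: Account) =>
  JSON.stringify({ account_id: a.id, balance_cents: a.balance * 100 });

// Controller: parses input, calls the model, picks a view by version.
function handleDeposit(raw: string, version: 1 | 2, account: Account): string {
  const { amount } = JSON.parse(raw);
  const updated = applyDeposit(account, amount);
  return version === 1 ? renderV1(updated) : renderV2(updated);
}
```

Supporting a new client version touches only the view layer; `applyDeposit` never changes.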
In that vein, I consider a framework like Sinatra to be controller-centric - with both the model and view layers being optional and bring-your-own. I've certainly used it as MVC.
I think the OP is probably complaining about people tending to put all of the business logic into the controller. This essentially leads to the controller functions being modular/procedural in nature rather than object oriented. And I think that's what they mean by "spaghetti" - you have either a big honking function, or a whole bunch of functions with no clear place to put them. This is made worse by the strategy of "service objects", which are usually not objects, but rather highly coupled modules of functionality that expose all of their properties in a way that is global to the entire module.
However, IMHO the OP is incorrect that the problem is with MVC. The problem is actually with the active record pattern (or, again IMHO, anti-pattern). In active record, your "model object" has a one to one relationship with the (usually relational) table where you are storing the data. The problem is that what makes sense from an OOD perspective does not make sense from a relational perspective. This is why there are entire books written about the problems of object-relational mapping.
From a relational perspective, we want to store things in a way that makes the relational calculus cleaner. We want to be able to efficiently store and query the data. From the OOD perspective, we want to increase cohesion for the functionality by grouping common state together. These two goals are usually at odds.
Just like we have a view layer in MVC with respect to the UI, so too do we need a view layer with respect to other kinds of external representations. Probably people don't remember the old, old frameworks where you designed forms graphically for the UI and the system automatically generated C++ code for the "model objects" that could be used to populate those forms. Most people don't know about these because they were an incredibly bad idea that thankfully died out. No sane developer would build their model objects to be one to one relationships with their UI view objects. The same should be true for other representations like the DB layer, or indeed communication layers (serialising model objects so you can ship out the data for a different application is incredibly common, but also incredibly bad). You want view objects to translate the data to and from the model objects.
Once you've done that, you can design your model objects so that they can sanely calculate your business logic. Your controller objects become the things they were meant to be in the first place: objects that forward incoming requests to the model layer and put the result into the view layer.
Rails (at least if you are using ActiveRecord) is just broken. It's not a representation of how MVC should work. It works well enough for some cases, but gradually the stink gets worse and worse until you start asking yourself if there was some advantage to using Rails in the first place.
My understanding of the Model-View-Adapter pattern is that the adapter is the dependent of both the view and the model, which are disjoint objects. The diagram shown here has the arrow inverted between the adapter and the view, and the author mistakenly assumes it is isomorphic to the MVVM pattern. This might seem like nitpicking, but it's an important distinction. In the MVA pattern, the adapter is the only thing that knows about both the model and the view, ensuring that the two are cleanly separated.
The MVA pattern is underused. People have a tendency to make their views depend directly on the model, or vice versa, and often don't keep a clean separation.
I'd say the MVA (or MVP, or Apple MVC) pattern is overused.
There is usually neither a point to separating the view and the model like that, nor is it usually possible. In practice, you get lots and lots (and lots and lots) of glue code that is (a) boring, (b) repetitive, (c) error-prone and (d) non-reusable.
The code is non-reusable because it is specific to both the view and the model. And it doesn't usually help in keeping the view model-agnostic, because after all the view is there for displaying the data that's in the model. Additional layers can't change that fact.
> "because after all the view is there for displaying the data that's in the model."
I wouldn't disagree. But I also believe there's another way to look at the view, and that is as the interface between the application and the user. That UI has or needs to have (design) patterns, a la Atomic Design for example.
For example, a list is a list is a list. In a well designed UI/UX that will follow/maintain a pattern. At that point, there's no real relationship per se to particulars in the model. The view/partial is "free standing" and can be repurposed as required. In the mind of the view, there is no model.
Maybe I'm splitting hairs and/or have gone off on a tangent? My apologies.
I would conceptually put reusable view components ("a list representation") completely outside the scope of all MV* thinking. The "V" part is a composition of high or low level view components, not the components themselves (at implementation time - at runtime the view is the sum of active components). When you write universally reusable view components, you are not implementing the V of an MV*, you are working on tools to implement a V. Just like writing a database engine isn't defining a domain model.
I would think that to be useful, a list will be a list of something. And that something will almost certainly be domain/model specific. Even if you expose just a list of strings, those strings will have some sort of semantic actions associated with them.
Yes. But it's not (or shouldn't be) a one to one between view and model. At least that particular view should be independent.
Note: It was simply a basic example. The crux being to also look at the view from the frontend - user and frontend dev, and not only from the model forward.
Fwiw, I've recently taken over a project where I found essentially the same partial four times. Not fatal, but more to maintain, as well as confusing when you're trying to unscramble things.
* MVC is not only about user interaction/UI; that is just its most common application. I have used MVC inside a compiler, and inside a data-crunching pipeline. It fits perfectly, as long as one abstracts away from what "user" or "business" means.
* MVC is not a single thing, it's a fractal idea. The View may have its own internal MVC (e.g. in a UI: a widget, such as a scrollbar), the Model has its own (e.g. somewhere around the ORM), ... and this may go further down - or up.
* it's a general concept saying that three islands/aspects talking to each other are needed for a complete system; two are too few, four might be too many.
The biggest problem is that the Web doesn't fit GUIs well. MVC and its variants are an ugly fudge, largely to make the stateless Web protocols "seem" stateful. One has to split the work up into bigger teams to do something that was easy, trivial, one-man-band work in things like PowerBuilder, Delphi, Oracle Forms, etc. We devolved from the 1990s, turning bicycle science into rocket science. Our Oracle Forms team is 4x more productive than our MVC team. It's embarrassing. Sure, the web apps are arguably "prettier", but at a cost. (If not for Java security going bad, our org would have kept using Forms. Oracle should have kept the "EXE" based client.)
I'm not dogmatic about MVC (or any similar acronym), but I think MVC serves as a good reminder to (follow the more agreed upon maxim to) separate your concerns. Your application logic, data, and presentation layers need not be intertwined.
Like everything it’s a tradeoff. A signup page that just collects email addresses does not need to care about architecture.
Best practices always carry the hidden assumption that you're going to deal with the same situations as 99/100 cases. But clearly that's not always true, so you need to keep in mind the reasoning behind them.
I think part of the Controller should be in the Model, because I've often found that business logic is tightly coupled to the data.
I've more often used the Model-View pattern, where the business logic was inside the Model.
What I have decoupled from the above is State controllers though; State controllers have to do with the application itself, they have nothing to do with the data or the presentation of them.
I like Clean Architecture. There is a particularly annoying, but powerful piece: the Presenter. While it makes for some awkward code, it does separate concerns.
The controller has a use case, which is the business logic, injected into it. The controller also has the Presenter injected into it. The controller invokes like so: UseCase.execute(input, Presenter).
The use case does its thing, then calls presenter.present(ResultData). The presenter can be responsible for creating the actual response on the wire. It is also responsible for getting data from the database for lists and other UI features. Finally, the Presenter invokes whatever machinations are required to update the client. In the case of HTML, the presenter knows which template to invoke. In JSON, it just serializes the output.
What's awkward about it is calling .present on failure, which in Go comes up all the time:
if err != nil {
    presenter.present(FailureStruct)
    return
}
This is repeated all over the place. That return is important. Since the result is not returned from the UseCase, getting this wrong could lead to failure.
In my code, the Unit of Work is managed by two entities: the controller and the presenter. The controller gets everything warmed up. The presenter knows when all the data calls complete. If the presenter is for an HTML page that needs to get data, the presenter is essentially a micro controller that couples the use case to the view.
All of this is tucked away nicely. The domain gets to drive a lot. Everything is testable since the presenter is
type CreateUserPresenter interface {
    present(Output)
}
Interfaces for everyone. The use case has interfaces for the domain repositories it needs. The controller gets the use case and presenter injected (or, in my case, obtains them from a factory).
Things are actually decoupled. No more transaction scripts spread across multiple thinly veiled layers.
This truly separates the controller from the process of invoking the use case and view. A better question is why the use case has the presenter pushed in as part of the invocation. You could easily have a use case take an input and nothing else. Why have the controller get an instance of the presenter, pass that to the use case, then have the use case call the presenter? The answer is that it makes explicit the fact that there will be a response. It also allows the controller to either directly or (more likely) indirectly get the UI component ready.
In my code's case, the presenter is obtained through a factory method. The factory takes the response object for the request, the context, and that's it. It creates the REST response handler that holds onto the response struct, in Go. This allows the system to safely switch the thread underneath the actual request/response of the handler. Since the use case gets the presenter as an argument, it naturally carries the presenter when the thread changes, without having to store it in the Context.
All of this fits nicely with Go. Go handlers return void. So in the handler, my controller, the use case kicks off, and the handler is done. The presenter sets the response code, body, and anything else.
The absolute most important thing to understand about models as they were originally intended, is that they are not descriptions of a data model per se.
This is not to say that data modeling is not important, or that your model won’t have a similar shape as your data model; it will. But that is not the original point.
In MVC as it was originally understood, a model is a list of subscribers. You can add yourself to that list, you can remove yourself from that list, and whenever a “value” changes, if you are on that list, you get a notification that the value has changed. You have a couple of different designs of these, depending on whether you want to send a current-state message as a notification on subscription, or just give everybody read access to the current state. The latter commits you to initializing your models in ways that indicate the absence or staleness of data (someone loads your app, you need to send out a network request to get a new thing) but also allows you to use variable scope to augment how much multiplexing and state tracking you need.
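A minimal version of such a subscriber-list model (a sketch of the idea as described above, not any specific library's API):

```typescript
type Listener<T> = (value: T) => void;

class Model<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  // Everyone gets read access to the current state.
  get(): T { return this.value; }

  // Add yourself to the list; the returned function removes you again.
  subscribe(fn: Listener<T>): () => void {
    this.listeners.push(fn);
    return () => { this.listeners = this.listeners.filter(l => l !== fn); };
  }

  // Whenever the value changes, everyone on the list is notified.
  set(next: T): void {
    this.value = next;
    this.listeners.forEach(fn => fn(next));
  }
}
```

This is the "give everybody read access to the current state" variant; the alternative mentioned above would instead replay the current state to each new subscriber inside `subscribe`.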
Models differ from data models in carrying view-relevant data. So for example you switch from the type
const urlList = new Model<UrlRow[]>([])
to the type
type Fetching<x> = { state: "init" }
| { state: "loaded", data: x }
| { state: "fetching", staleData: x }
| { state: "fetch error", staleData: null | x }
const urlList = new Model<Fetching<UrlRow[]>>({
state: "init"
})
Notice that these Fetching indicator statuses are a part of the data model for the UI, not the underlying data model that exists at the database level.
If this begins to feel a lot like React and Redux, that is because there is a shared lineage there. React originally made its splash as “the V in MVC” but its setState/state system, while it doesn't contain a subscriber list, effectively does something equivalent by insisting on destroying anything that has the old values and then superficially identifying things which appeared to remain the same from moment to moment, with the basic model then being a “subscription tree.”
Redux of course takes this from being a tree to being something more generic by making the store global... I think that models should not be app-global the way that Redux likes (it does not want to solve the problem of multiplexing models, which is understandable but it turns out to be a much simpler problem than you'd think) and that the pattern of reducers is more verbose than one generally needs, but I love the time travel browser features that Redux gives me. Somehow Redux has encouraged abominations like redux-thunk which complect separate concerns into the same dispatch function unnecessarily. But the fundamental workings involve that same basic structure: a subscriber list.
Could you clarify why you feel that redux-thunk is an "abomination"? What specific concerns do you have?
I wrote a post a while back that answered several concerns I'd seen expressed about using thunks [0], and my "Redux Fundamentals" workshop slides [1] have a section discussing why middleware like thunks are the right way to handle logic [2].
Hi! I would love to read your articles and discuss this further but right now your blog is not viewable by Chrome, Firefox, or Edge. Firefox gives the most descriptive error string as SSL_ERROR_RX_RECORD_TOO_LONG.
Multiplexing two models together to my mind just constructs a new model whose values are tuples of the existing models and which subscribes to both of those models in order to notify its own subscribers whenever either side of those tuples change. If you do this, you can have a bunch of local stores and still say "this component's value changes whenever either of those values change." The key is that "normal" models accept a set(value) message to set their value to something, and you might play around with "dict" models which accept insert(key, value), deleteAt(key) messages, but a multiplex model would not be easily able to abstract over all the different possible messages to send upstream and so the easiest approach is just to make multiplexes nonresponsive -- you can't "dispatch" to them, in Redux terms.
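A sketch of that construction, assuming a minimal read-only interface over the subscriber-list models described earlier (the names `Readable` and `multiplex` are mine, not from any library):

```typescript
interface Readable<T> {
  get(): T;
  subscribe(fn: (value: T) => void): void;
}

// A read-only multiplex: its value is the tuple of both inputs, and its
// subscribers fire whenever either input changes. There is deliberately
// no set() - you can't "dispatch" to a multiplex.
function multiplex<A, B>(a: Readable<A>, b: Readable<B>): Readable<[A, B]> {
  const listeners: Array<(value: [A, B]) => void> = [];
  const notify = () => {
    const value: [A, B] = [a.get(), b.get()];
    listeners.forEach(fn => fn(value));
  };
  // Subscribe to both sides so a change on either notifies our subscribers.
  a.subscribe(notify);
  b.subscribe(notify);
  return {
    get: () => [a.get(), b.get()],
    subscribe: fn => { listeners.push(fn); },
  };
}
```

A component can then subscribe to `multiplex(storeA, storeB)` and re-render whenever either local store changes.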
redux-thunk is nice in that it helps keep people from making the mistake of sending otherwise-no-op actions to the store which then, inside a reducer as side-effects, do a bunch of async I/O to compute events that eventually make it back to the store. I would broadly agree with that.
My basic beef with redux-thunk is that it's unnecessary and complicates what would otherwise be a type signature that has no reference to I/O, which I regard as a good thing. Developers ought to know that, to quote one of the Austin Powers movies, "you had the mojo all along." It's a sort of talisman that you are using for purely psychological reasons to reassure developers and to coax them to doing updates outside of the reducers, but it's OK because "it's in `dispatch()` so it must be a Redux thing so we'll make it work." But such a talisman is unnecessary.
Erm.... that's bizarre. Should be a normal Let's Encrypt cert as far as I know. Are you accessing it through some kind of corporate proxy that's blocking it or something? Does that happen on any other machines?
Anyway. Reading the rest of your comment...
I'll be honest and say that you pretty much lost me in that discussion, as in, I genuinely am confused what you're trying to say. I'll try to give some thoughts here, but I don't know if this is going to answer things because I don't know what point you're actually trying to make.
The point of `redux-thunk` is to allow you to write logic that needs access to `dispatch` and `getState` when it runs, but without binding it to specific references of `dispatch` and `getState` ahead of time.
If you wanted to, you _could_ just directly `import {store} from "./myStore"`, and directly call `store.dispatch(someAction)`. But, A) that doesn't scale well in an actual app, and B) it ties you to that one store instance, making it harder to reuse any of the logic or test it.
In a typical React-Redux app, the actual specific store instance gets injected into the component tree by rendering `<Provider store={store}>` around the root component. As my slides point out, you _could_ still directly grab a reference to `props.dispatch` and do async work in the component, but that's also not generally a good pattern. By moving the async logic into its own function, and ultimately injecting `(dispatch, getState)` into that function, it's more portable and reusable.
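To make that concrete, here's the usual shape of such a function (a sketch; `fetchUser`, the URL, and the action types are made-up examples, not from my slides):

```javascript
// A "thunk action creator": returns a function instead of a plain action.
// redux-thunk calls that function with (dispatch, getState) when dispatched,
// so the async logic is not tied to any particular store instance.
function fetchUser(userId) {
  return async function (dispatch, getState) {
    dispatch({ type: "user/fetchStarted", payload: userId });
    try {
      const response = await fetch(`/api/users/${userId}`);
      const user = await response.json();
      dispatch({ type: "user/fetchSucceeded", payload: user });
    } catch (err) {
      dispatch({ type: "user/fetchFailed", error: err.message });
    }
  };
}

// Component code just does: dispatch(fetchUser(42));
```

Because the function only receives `dispatch` and `getState` when it runs, the same logic can be reused against any store, or tested by passing in fakes.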
Also, have you seen the actual implementation of `redux-thunk`? It's short enough that I'll paste it in here just for emphasis:
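(Quoting the published v2 source roughly from memory; modulo formatting, this is the whole library:)

```javascript
// The entire redux-thunk middleware: if the "action" is actually a
// function, call it with (dispatch, getState) instead of passing it
// on to the reducers; otherwise let it through untouched.
function createThunkMiddleware(extraArgument) {
  return ({ dispatch, getState }) => (next) => (action) => {
    if (typeof action === "function") {
      return action(dispatch, getState, extraArgument);
    }
    return next(action);
  };
}

const thunk = createThunkMiddleware();
thunk.withExtraArgument = createThunkMiddleware;
```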
If you can try to clarify what you're saying about "type signatures" and binding the methods from the store, I'd appreciate it. (Actually, that bit about binding the store methods doesn't make any sense, because a Redux store isn't a class - it's a closure, so there's no `this`.)
If you're available to discuss this in a venue that may be better suited for it, please come ping me @acemarke in the Reactiflux chat channels on Discord (invite link: https://reactiflux.com ).
Hm. I will have to retry. You're right that this is my work laptop and sometimes Sophos does weird crap. Sorry for alarming you without checking downforeveryoneorjustme first.
I have read the source; indeed, reading the source of redux-thunk was what it took for me to conclude it was pointless. I, like everyone else, had assumed it was doing something more than `go = fn => fn()` does.
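Stripped of the middleware plumbing, that claim is just this:

```javascript
// Once a function-typed action reaches the middleware, all that happens
// is that it gets called. Everything else is routing.
const go = (fn) => fn();

// So, with dispatch and getState already in scope, these are equivalent:
//   dispatch((dispatch, getState) => doStuff(dispatch, getState));
//   go(() => doStuff(dispatch, getState));
// ...which is the same as just:
//   doStuff(dispatch, getState);
```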
The code that I wrote for you is logic that needs access to `dispatch` and `getState` when it runs, but it is not bound to specific references of `dispatch` and `getState`. It does not use the hack of importing a store from a global location, so it does not have problems (A) or (B).
You cannot avoid grabbing that reference to `props.dispatch` either way. The crux of the argument that redux-thunk is just syntactic sugar is that `dispatch` is already in scope wherever it is used, and can be passed as an argument or captured in a closure.
I somewhat agree with refactoring async logic into its own function when one wants to reuse it and make it portable. The question is just: should you pass `dispatch` and/or `getState` as arguments to that function? Or should you curry that dependency into a subfunction and pass that function as an argument to `dispatch`?
I opine that the latter is objectively worse than the former. You have `dispatch`: hand it directly to the function, and let people know that this function is not an actual action but an asynchronous process. In other words, we are talking about syntactic sugar that doesn't make anything sweeter.
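The contrast being argued, side by side (`loadThing` and `lookup` are made-up examples):

```javascript
// Stand-in for some real async lookup (an assumption for illustration).
const lookup = async (id) => ({ id });

// Style A (no redux-thunk): the async process takes dispatch directly,
// and its signature advertises that it is a process, not an action.
async function loadThingA(dispatch, id) {
  const thing = await lookup(id);
  dispatch({ type: "thing/loaded", payload: thing });
}
// call site: loadThingA(dispatch, id);

// Style B (redux-thunk): curry the dependency, then hand the curried
// function to dispatch, which routes it back to being called anyway.
function loadThingB(id) {
  return async (dispatch) => {
    const thing = await lookup(id);
    dispatch({ type: "thing/loaded", payload: thing });
  };
}
// call site: dispatch(loadThingB(id));
```

Both styles end up running identical logic with identical access to `dispatch`; the only difference is whether the call site pretends the process is an action.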
It's actually refreshing to be reminded that the Redux store is a closure; I had forgotten since I first read the Redux code several months ago. So then it's even easier: one never has to bind anything.
I will try to ping you on Discord later tonight; there is a specific reason that I am preferring asynchronous messaging systems at the moment.
I mean, it is quite possible that as originally understood in the late 1970s, the model did not have subscribers, but subscription was part of the system as early as Smalltalk-80.
> The dependency (addDependent:, removeDependent:, etc.) and change broadcast mechanisms (self changed and variations) made their first appearance in support of MVC (and in fact were rarely used outside of MVC). View classes were expected to register themselves as dependents of their models and respond to change messages, either by entirely redisplaying the model or perhaps by doing a more intelligent selective redisplay.
> Because only the model can track all changes to its state, the model must have some communication link to the view. To fill this need, a global mechanism in Object is provided to keep track of dependencies such as those between a model and its view. This mechanism uses an IdentityDictionary called DependentFields (a class variable of Object) which simply records all existing dependencies. The keys in this dictionary are all the objects that have registered dependencies; the value associated with each key is a list of the objects which depend upon the key. In addition to this general mechanism, the class Model provides a more efficient mechanism for managing dependents. When you create new classes that are intended to function as active models in an MVC triad, you should make them subclasses of Model. Models in this hierarchy retain their dependents in an instance variable (dependents) which holds either nil, a single dependent object, or an instance of DependentsCollection. Views rely on these dependence mechanisms to notify them of changes in the model. When a new view is given its model, it registers itself as a dependent of that model. When the view is released, it removes itself as a dependent.
Like, I'm not getting this out of nowhere; at one point I inspected the code in the Model object and that's how it works...
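A rough JavaScript rendering of that dependents mechanism, for anyone who doesn't read Smalltalk (a sketch of the idea described in the quotes above, not any real API):

```javascript
// Smalltalk-80's Model keeps its dependents in an instance variable and
// broadcasts change notifications to them; views register themselves as
// dependents of their model and remove themselves when released.
class Model {
  constructor() { this.dependents = []; }
  addDependent(d) { this.dependents.push(d); }
  removeDependent(d) { this.dependents = this.dependents.filter((x) => x !== d); }
  changed(aspect) { this.dependents.forEach((d) => d.update(this, aspect)); }
}

class View {
  constructor(model) {
    this.model = model;
    model.addDependent(this); // register as a dependent of the model
  }
  update(model, aspect) { this.lastUpdate = aspect; } // redisplay would go here
  release() { this.model.removeDependent(this); }
}
```

Which is to say: the model already carries its subscriber list, exactly as in the quoted description.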