Many thanks for your opinion.
I have spent three years on an Ember.js application where my team was responsible for the front-end framework. It was quite a huge project, with over 20 scrum teams and around 300 folks on the project at any one time. It has gone into production in Namibia and South Africa and is being rolled out to the rest of Africa as well, so I have some experience with this stuff.
On a side note, the company in question is moving over from a webMethods integration platform onto my FOSS service bus. Woot woot!
In any event, I really do like the module plan. I have quite a bit of experience with dependency injection containers in C# and have always missed this in Ember/JavaScript. However, I have come to the realisation that JS just doesn't work in any way the same as a typed language. One would think I'd have gotten to grips with this by now, since I have dabbled in JS since the late nineties, having had to deal with Netscape issues.
Since JS does not have true typed classes, a module has to be identified by some string. Things are also simpler if one thinks in terms of a single file representing a module. That isn't quite the case in a strongly typed language, where a file could contain multiple classes. Anyway, the fact is that a file simply contains code and has no bearing on software structure other than, perhaps, project organisation.
“Traditionally” a plain JS file would simply add something to the global “namespace”, i.e. the window object. The way my brain sees it, it makes sense to think of a dependency as a singleton file: whatever is contained in a file is represented by the variable used on the import, and that makes sense.
With the DI containers I have used, the container can be instructed to treat various implementations in different ways. Mostly an instance is resolved as a singleton: no matter how many times one asks for an implementation of, say, a specific interface, the same object is returned. This seems to be what the module implementation in JS provides.
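A minimal sketch of that analogy, using made-up file names (the ES module syntax is what steal consumes, as I understand it):

```js
// connection.js: whatever this file exports is created once and then shared,
// much like a singleton registration in a DI container.
export default {
  host: "localhost",
  open() {
    // ...open the connection
  }
};

// a.js and b.js both do this and receive the very same object:
import connection from "./connection";
connection.open();
```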
However, with a DI container a resolved instance can also be registered as transient, so that the container acts as a kind of factory, providing a new, distinct instance each time the interface is resolved. One can get the same mechanism in JS if one considers that a module can provide a factory, and the consumer then asks that factory for the new, distinct instance.
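Again a minimal sketch with made-up names, this time exporting a factory so each consumer gets its own instance:

```js
// logger.js: export a factory rather than an instance; this gives the
// "transient" behaviour of a DI container.
export default function createLogger(prefix) {
  return {
    info(message) {
      console.log(`[${prefix}] ${message}`);
    }
  };
}

// consumer.js: every call produces a new, distinct instance.
import createLogger from "./logger";
const logger = createLogger("orders");
logger.info("started");
```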
In something like C# the code is compiled and we have the executable bits. JS is never compiled in that way; it is just a bunch of files. To make things cleaner, and leaner, we combine and minify them. It seems as though steal-tools has some funky logic to determine how to package all the files into useful bundles.
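From what I have seen, the build is driven by something along these lines; the options shown are my assumption, so the steal-tools docs are the authority here:

```js
// build.js: a sketch of a programmatic steal-tools build.
var stealTools = require("steal-tools");

stealTools.build({
  config: __dirname + "/package.json!npm"  // read the steal configuration from package.json
}, {
  minify: true                             // combine and minify the modules into bundles under dist/
});
```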
However, there is this whole progressive loading business and the `<can-import>` tag, and I don't quite know how they relate to steal-tools.
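For what it is worth, my current (possibly wrong) understanding is that the template itself declares what it needs, and steal-tools reads those declarations when deciding how to split the bundles. A hypothetical stache snippet, with made-up module and tag names:

```html
<!-- the orders module is only requested when this part of the template renders -->
<can-import from="my-app/pages/orders/">
  {{#if isResolved}}
    <orders-page />
  {{/if}}
</can-import>
```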
From the time I asked this initial question I now get that the packaged `dist` folder can be hosted by any server capable of serving up static content, and that's the way I like it: all transpiled to a bunch of normal JS files.
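As a (hypothetical) illustration of how plain that hosting can be, something along these lines would do; the folder and port are made up:

```js
// serve.js: a minimal sketch of hosting the built output as static content.
// Point it at wherever the production index.html and the dist/ bundles end up.
const express = require("express");
const path = require("path");

const app = express();
app.use(express.static(path.join(__dirname, "public")));

app.listen(8080, () => {
  console.log("Serving the static build on http://localhost:8080");
});
```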
There appear to be some bits needed to do the cache-busting, but I'll get to that. In my own implementations I would have a `debug` build where all `script` and `link` tags would include the build date as a cache buster. The `release` build would be similar, except that in some cases an MD5 hash of the file would act as the cache buster.
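To make the idea concrete, here is a small Node sketch of the release-style variant; the paths are made up and the MD5 digest of the built file simply goes onto the query string:

```js
const crypto = require("crypto");
const fs = require("fs");

// Build a script tag whose URL changes whenever the file contents change,
// forcing browsers to fetch the new version instead of a cached one.
function cacheBustedScriptTag(filePath, publicPath) {
  const hash = crypto.createHash("md5")
    .update(fs.readFileSync(filePath))
    .digest("hex")
    .slice(0, 8);  // a short digest is enough to bust the cache
  return `<script src="${publicPath}?v=${hash}"></script>`;
}

console.log(cacheBustedScriptTag("./dist/bundles/app/app.js", "/dist/bundles/app/app.js"));
```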
I need to get to grips with how things are structured within the donejs/canjs/stealjs space, as currently I am just treading water.
I agree that having as much support as possible from the community (and the Bitovi folks) is quite handy, as being stuck on an issue means no movement. For a project such as mine, where I am working on my own time, that is OK, but when implementing for a client it isn't ideal.
But I like the architecture, so I'll keep chipping away.