Is a DoneJS application dependent on Node for hosting/deployment?

I have slowly been going through the in-depth guide and have been wondering how a DoneJS-developed app would be deployed.

Does it have to be hosted using Node.js, or can I build it to a bunch of static JavaScript, HTML, stache, and CSS files that I can host, say, on IIS?

It needs Node.js for the build and for server-side rendering. If you don’t need SSR, you can do a build on your local machine and deploy to IIS, Apache, or whatever else you want. If you do a build and then start a plain HTTP server, you can open production.html and see the app in production, with no Node running. Hope that helps!

It does help, thanks Matthew.

Is there some documentation explaining the deployment artifacts? I can probably execute the deployment and check the output as well. I am assuming that the relevant bits are bundled together and minified. For an Ember system I worked on previously we had various modules that could be independently packaged and deployed. Is there some mechanism in DoneJS where one can do the same?

Since I am primarily a C# developer I spend a lot of time in Visual Studio on my domain modelling and Web API. I currently have a site included in the project. It seems somewhat disjointed to work in both DoneJS and Visual Studio. Currently I simply pull in the CanJS bits I need. This is where can-validate fell over, since it seems to be in ES6 format. I would prefer not having any pre-compiling to get that into ES5 format, but I guess I could do so. Is there any tooling you suggest for performing a once-off conversion? Then I could get the latest version and transpile it to ES5.

As I mentioned: I prefer not working in the Node.js space. I assume this means I shouldn’t bother with DoneJS until I feel the urge to move across.

I know Node.js development is supported somewhat in VS, but I think that is more for Node.js-proper development. I used it a bit on some Node.js components and hosting (an API / static site through restify) and it was horrible enough for me to stop using it.

Well, I’ve given DoneJS a quick spin and it doesn’t quite seem to fit. I get why these things pop up. I have a sneaking suspicion ember-cli is the same kind of thing. I prefer plain JavaScript myself, where I can just include a file and start using it in a browser without any modification. I looked at Angular 2 and TypeScript immediately put me off. I guess everyone’s mileage will vary.

Or perhaps I just need to get with the program :slight_smile:

Steal will transpile ES6 modules in the browser, so you don’t have to worry about the fact that can-validate is written in ES6. You don’t have to run any Node during development at all if you don’t want to. You only need it to perform a build before going to production. Just run the build command, donejs build, and the dist/ folder is what you use for your production app.
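For the record, here is roughly what that looks like in the page (the paths and main module name are illustrative; check your own project layout, and note that the exact production file names depend on your steal-tools version and configuration):

```html
<!-- Development: steal.js loads and transpiles ES6 modules directly in the
     browser, so no Node process is needed while you work. -->
<script src="node_modules/steal/steal.js" main="my-app/main"></script>

<!-- Production (alternative page): after running donejs build, you serve the
     bundled loader from dist/ instead of the development loader. -->
<script src="dist/steal.production.js" main="my-app/main"></script>
```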

Excellent, thanks Matthew!

I’ll have a look… the transpiling is probably fine, but given that whole not-having-a-hundred-files thing, I guess one would need some mechanism for bundling these things after transpiling?

In my Ember project I wrote a C# bundler that made use of the /// <reference path="{path}" /> entries in the JavaScript files to combine files. Each Module.js would have all referenced files sucked in recursively and then minified into a ModulePackage.js file. These references would include the template files generated from their Handlebars files using our Handlebars compiler.

Thanks again for the feedback. I’ll take a look. Just so much to learn while trying to get a product developed :frowning:

Steal handles the bundling; all you have to worry about is importing the modules you depend on. You don’t need any of that reference-path kind of stuff. When you run donejs build, Steal will analyze your dependencies and create bundles optimized for load performance.
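As a toy illustration of what that dependency analysis does (this is a sketch of the general idea, not steal-tools’ actual algorithm, and the module names are made up): starting from the entry module, the bundler walks the import graph and collects everything reachable, so each module only ever declares its direct imports.

```javascript
// Toy dependency graph: each module lists only its direct imports.
const deps = {
  "app/main": ["app/home", "app/util"],
  "app/home": ["app/util"],
  "app/util": []
};

// Walk the graph depth-first, collecting every module reachable from the entry.
function collect(entry, seen = new Set()) {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  for (const dep of deps[entry]) collect(dep, seen);
  return seen;
}

console.log([...collect("app/main")]); // ["app/main", "app/home", "app/util"]
```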

Eben_roux, I think we are fellow travelers in the new-to-DoneJS world. I appreciated your comments on my other problem.

I want to comment on your stated preference for plain JS over “getting with the program”.

I am new to DoneJS, but I worked with its predecessor on a big project for a few years and still used it in simpler applications as recently as last year. I share your inclination to use plain JS (or whatever). My feeling has always been that the amount of time and stress I spend dealing with someone else’s code is not worth it. Even so, I decided to force myself to use JavaScriptMVC in 2010.

I found the investment in learning JMVC to be worth it. Once I got it, I was able to do amazing things extremely quickly. The amount of boilerplate code and plumbing it provided was amazing. Adding a major new function was streamlined to the point of only having to worry about the function itself, exactly that, and no other code.

I am starting a new project. It’s the first user-facing app I’ve written in a while, so I decided to check out new frameworks. As you can see from that other thread, I am having some initial pain. However, my experience with JMVC tells me that I will be infinitely more productive with DoneJS. I will also learn a whole lot about how to structure and execute a browser app.

The fact that the people who take care of DoneJS seem to be extremely nice, generous people adds the final benefit. Being part of a community of developers led by nice people results in a lot of questions being answered, and in help that is otherwise hard to come by.

Good luck. I hope you choose to get with the program. :wink:


Many thanks for your opinion.

I spent three years on an Ember.js application where my team was responsible for the front-end framework. It was quite a huge project, with over 20 scrum teams and around 300 folks on the project at any one time. It has gone into production in Namibia and South Africa and is being rolled out to the rest of Africa as well. So I have some experience with this stuff :slight_smile:

On a side note the company in question is moving over from a webMethods integration platform onto my FOSS service bus — woot! woot!

In any event, I really do like the module approach. I have quite a bit of experience with dependency injection containers in C# and have always missed this in Ember/JavaScript. However, I have come to the realisation that JS just doesn’t work in anything like the same way as a typed language. One would think I’d have gotten to grips with this by now, since I have dabbled in JS since the late nineties, having had to deal with Netscape issues.

Since JS does not have true typed classes, some string has to identify a module. Things are also simpler if one thinks in terms of a single file representing a module. This isn’t quite true in a strongly typed language, where a file could contain multiple classes. Anyway, the fact is that a file simply contains code and has no bearing on software structure other than, perhaps, project organisation.

“Traditionally” a plain JS file would simply add something to the global “namespace” / window. From what my brain is telling me, it makes sense to think of a dependency as a singleton file: whatever is contained in a file is represented by the variable used on the import… and that makes sense.

With the DI containers I have used, the container can be instructed to treat various implementations in different ways. Mostly an instance is resolved as a singleton, so no matter how many times one asks for an implementation of, say, a specific interface, the same object is returned. This seems to be what the module implementation in JS provides.

However, with DI containers a resolved instance can also be registered as transient, so that the container acts as a kind of factory, providing a new, distinct instance each time the interface is resolved. One can get the same mechanism in JS if a module exports a factory, and one then asks the factory for a new, distinct instance.
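A minimal sketch of that distinction in plain JS (the loader below is a stand-in I made up to illustrate module caching, not Steal’s real implementation): because a module’s export is evaluated once and cached, exporting an object gives singleton behaviour, while exporting a factory function gives transient behaviour.

```javascript
// Toy module loader: each module's factory runs once; the result is cached.
const cache = new Map();
function load(name, factory) {
  if (!cache.has(name)) cache.set(name, factory());
  return cache.get(name);
}

// Singleton-style: the module "exports" an object; every importer shares it.
const a = load("config", () => ({ theme: "dark" }));
const b = load("config", () => ({ theme: "dark" }));
console.log(a === b); // true: same object, like a DI singleton

// Transient-style: the module "exports" a factory; each call yields a new instance.
const makeWidget = load("widget", () => (id) => ({ id }));
const w1 = makeWidget(1);
const w2 = makeWidget(2);
console.log(w1 === w2); // false: distinct instances, like a DI transient
```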

In something like C#, the code is compiled and we have the executable bits. JS, however, is never compiled in the same way; it is just a bunch of files. To make things cleaner, and leaner, we combine and minify them. It seems as though steal-tools has some funky logic to determine how to go about packaging all the files into useful bundles.

However, there is this whole progressive loading business and the <can-import> tag and I don’t quite know how they relate to steal-tools.
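From what I can tell from the guides, <can-import> in a stache template is where the two meet: steal-tools sees the import when it analyses the template, can split that module into its own bundle, and the bundle is then loaded progressively when that part of the app renders. A hedged sketch (module and tag names are made up):

```html
{{#switch page}}
  {{#case "home"}}
    <can-import from="my-app/home/">
      <my-home></my-home>
    </can-import>
  {{/case}}
{{/switch}}
```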

Since asking the initial question I have come to understand that the packaged dist folder can be hosted by any server capable of serving static content, and that’s the way I like it — all transpiled to a bunch of normal JS files :slight_smile:

There appear to be some bits needed for cache-busting, but I’ll get to that. In my own implementations I would have a debug build where all script and link tags included the build date as a cache-buster. The release build was similar, and in some cases an MD5 hash of the file acted as the cache-buster.

I need to get to grips with how things are structured within the donejs/canjs/stealjs space, as currently I am just treading water :slight_smile:

I agree that having as much support as possible from the community (and the Bitovi folks) is quite handy, as being stuck on issues means no movement. For a project such as mine, where I am working in my own time, that is OK, but when implementing for a client it isn’t ideal.

But I like the architecture so I’ll keep chipping away :slight_smile:
