I wish I had the same enthusiasm as I did on my first blog post. As I started getting deeper into the semester, I’ve honestly been feeling different about blogging every week. It’s still a good idea…
In this article, I’d like to explore the experience of working with different three.js releases.
As I grow as a software engineer, I’m trying to be mindful of things beyond just how some code runs, and I find this an interesting topic. There are two main themes I’d like to cover: the Rxxx pattern and the “three.js API/interface”.
So I’m not entirely sure what three’s versioning system is called or if it even has a name, but I can try to describe it.
The process seems fairly simple. There is some version of three.js published, let’s call it R50. It possibly has some bugs, and it’s missing some features. We all know why bugs appear; a feature, on the other hand, could have stalled because its author got too busy to address some feedback. It therefore missed R50, but may make it into R51.
At any point in time, there will be two branches, a “release” branch and a “dev” branch. The release branch is the official latest version of three. At the time of writing, this version is R116.
The dev branch is this latest version, plus any fixes or new features that are currently being developed.
Three.js seems to have come a long way over the hundred or so releases in between, but it’s still just “three.js”.
What’s interesting to note here is that WebGL doesn’t seem to have been supported in R9, while R116 seems to support no other renderers, or is at least WebGL-centric.
I see three.js as sort of a standard for doing 3d graphics / WebGL on the web. I feel that it is very user friendly, and provides a nice and intuitive abstraction over otherwise very complex operations.
There are several things to tackle here; maybe it would be best to get a gauge of the problem that three solves at its core.
With three.js this would look something like:
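A minimal sketch, using nothing beyond the long-standing public interface (the scene, camera and cube here are illustrative, not the article’s original snippet):

```js
import * as THREE from 'three';

// scene graph root, a camera to look through, and a renderer to draw with
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// a single cube: geometry + material wrapped by a Mesh
const mesh = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(mesh);

renderer.render(scene, camera);
```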
While not doing the exact same thing, I think this serves as a good illustration. If we were to try to implement everything that these few lines of three.js code do in raw WebGL, the code could easily grow by an order of magnitude.
If you know WebGL, three.js will take care of a lot of bureaucracy for you. If you don’t know WebGL, three allows you to render graphics without even being aware of WebGL’s existence.
If you consult the docs, you will see that the renderer has a render() method. You will see that the method takes scenes and cameras, that scenes have an add() method, and so on.
I want to call this “the three.js interface” or “three.js”. I think that this is how the majority of users interact with three, and this has been fairly consistent for a number of versions.
I consider this to sort of be the standard for web graphics. Being so accessible, and so consistent, means that for years now, people have been writing the same basic code, and running three.js the same way. This is good.
However, I think this only satisfies the needs of people who are not aware of WebGL. Once you start using three.js to do the bureaucracy for you, i.e. “do WebGL with three.js” instead of “draw 3d stuff with three.js”, things sort of turn upside down.
Indeed, the getting started example from R55 looks very similar to R116, not much has seemingly changed. But ask anyone with experience who works with three.js to describe R55 today, and I bet a common attribute you’d hear is “ancient”.
What seems to be very consistent throughout the years is three’s scene graph. This comes down to a couple of methods:
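Roughly, and continuing the illustrative cube from above, the surface I have in mind looks like this:

```js
// linking and unlinking nodes in the graph
scene.add(mesh);
scene.remove(mesh);

const child = new THREE.Object3D();
mesh.add(child); // any Object3D can parent another Object3D

// transforming nodes
mesh.position.set(0, 1, 0);
mesh.rotation.y = Math.PI / 2;
mesh.scale.setScalar(2);
```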
Three didn’t reinvent the wheel here; I think almost all graphics applications have something like this, hence it being so resilient.
Another consistent thing is that we are rendering with a renderer through a camera:
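In code, and still continuing the sketch above, that boils down to a single call:

```js
// this has looked the same for as long as I can remember
renderer.render(scene, camera);
```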
The last one I want to mention is that some Object3D can wrap a Geometry and a Material.
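For example, a Mesh (which is an Object3D) is little more than that pairing:

```js
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),                      // the shape
  new THREE.MeshStandardMaterial({ color: 0xff0000 })  // how it is shaded
);
```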
There could be more but I think these are the main patterns worth noting. A graph is created (add/remove) and nodes are transformed (.position, .rotation etc).
The two thousand lines of code here are indeed a legacy, a memorial to all the methods that had to be renamed at some point:
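The entries in Three.Legacy.js roughly follow this shape (paraphrased from memory, not copied verbatim):

```js
// the old name still works, but it forwards to the new one and warns
Object3D.prototype.applyMatrix = function (matrix) {
  console.warn('THREE.Object3D: .applyMatrix() has been renamed to .applyMatrix4().');
  return this.applyMatrix4(matrix);
};
```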
Some signatures have changed as well, it’s not all just linguistics :)
Joking aside, and to be fair, there are a lot of signatures that changed here, but the basics of linking two nodes in a scene graph, and adding geometries and materials, remain.
In a way, this file is a log of all the “advanced” features of three that changed over time.
I think it’s important to distinguish between the types of users who are affected by these (in)consistencies.
I think the vast majority of three.js users are “beginners”, at least in terms of graphics. I expect most of them know some JavaScript, but I’ve seen people attempt it even with no programming experience whatsoever.
Then there’s a smaller number of power users. I think of an experienced creative technologist who is creating an experience for a car manufacturer, or some interactive wall for an event. This person probably did something like this with Direct3D or OpenGL before. Another profile I can think of are people building other tools and software with three.js in their stack. There’s a plethora of products out there like this.
I feel that the beginner’s profile is the same today as it was in, say, 2015.
The power users on the other hand can have an expectation for bleeding edge extensions and standards to be available. There could be an extension for WebGL that in 2015 had 10% coverage and was not interesting, but today has 90% and is interesting.
The beginner will just silently get a prettier picture rendered, when moving from one version to another. The power user may be aiming to pay some technical debt, or implement some new feature that was impossible before.
For what I consider the de facto standard for web graphics, it could be said that Three.js is behind the curve.
Many articles can be written on this topic, so I’ll try to boil it down to the crux that’s relevant to three.js.
As we moved from building “web sites” to building “web apps” the complexity of the environment we do this in, and the tools we use grew exponentially.
While three.js is slowly catching up, let’s take a look at the first step needed to run a three.js app, per the official docs:
Fortunately, with the relatively recent introduction of modules, importing dependencies looks a little bit cleaner, but it’s worth illustrating what it looked like not too long ago.
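It went roughly like this (the exact file names and plugins are illustrative, not lifted from the docs):

```html
<!-- the "ancient" way: global <script> tags, where order matters -->
<script src="build/three.js"></script>
<script src="js/libs/dat.gui.min.js"></script>
<script src="js/controls/OrbitControls.js"></script>
<script src="js/loaders/OBJLoader.js"></script>

<script>
  // first "loose" block: sees THREE, dat.gui and every plugin above
  var scene = new THREE.Scene();
</script>
<script>
  // second block: also sees all of those globals, whether it needs them or not
  console.log(typeof THREE.OrbitControls); // "function"
</script>
```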
This represents the “ancient” way of building a web site. The code for the website depends on some libraries. Three.js is the first import, followed by an unrelated UI library called dat.gui. The rest are “plugins” for three.js — without three.js they can’t be used. This is why it is important to place build/three.js first!
If we have two “loose” script blocks with our code, both of these blocks will see all of these dependencies. But perhaps our second block is smaller, more confined to some specific logic, and only needs to see THREE without the plugins (or doesn’t care about THREE at all).
Modules make importing dependencies in the HTML file a bit cleaner. This is also how some main.js file would look if we were using a build tool to bundle all of this.
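Something along these lines (the jsm path is how the modernized examples are laid out; treat the exact paths as illustrative):

```html
<script type="module">
  import * as THREE from '../build/three.module.js';
  import { OrbitControls } from './jsm/controls/OrbitControls.js';

  const scene = new THREE.Scene();
  // ...the rest of the example's logic lives in this one block
</script>
```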
Instead of a dozen <script> tags, we actually use JavaScript import/export syntax and import the dependencies directly in our code. We only have one <script> block, containing both our logic and the dependencies.
Let’s observe what is happening here.
We wrote the header of the file manually (<html>…); both examples have the same header — the basic syntax needed to declare a web page, and things like the title.
In the first example, the dependencies are hard coded in this header. There are many <script> HTML elements that point to some files.
The file ../build/three.module.js has to be present on the same server where the HTML file is! This is valid for both examples.
The second example though, doesn’t mention any of these dependencies in the HTML. It just says:
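Something to this effect (paraphrasing the example’s single import block):

```js
import * as THREE from '../build/three.module.js';
import { OrbitControls } from './jsm/controls/OrbitControls.js';
```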
Running the script then fetches the dependencies. This means that all of the examples now have a very similar HTML template. If we take out some extra HTML that may be specific to some examples, more or less the only thing that would be different between them is the page’s title. Note these source files still have to be available. So, anything that you use must be copied over to a server. Without a tool, you’d manually have to pluck the files that are used from the complete set that three has.
This brings us to the more modern approach of building a “web app” instead of a “web site”.
If all of these examples rely on the same template, we can actually declare a single template:
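A sketch of such a template (the <%= title %> placeholder is one hypothetical templating syntax, not something three.js prescribes):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title><%= title %></title>
  </head>
  <body>
    <script type="module">
      // example-specific imports and logic would go here
    </script>
  </body>
</html>
```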
What happens next is that we can treat this template as the source code. We would never directly put this template file on a server. But, using a tool like webpack, we could generate a hundred html files, each with a different title.
The next issue is this:
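Namely, the part of the template that cannot be shared — sketched here with illustrative imports:

```html
<script type="module">
  import * as THREE from '../build/three.module.js';
  import { OrbitControls } from './jsm/controls/OrbitControls.js';

  // ...hundreds of lines of example-specific code
</script>
```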
Not only would the dependencies differ for each file (minus three, every example uses it) but the source code is what makes the examples, not just the title :)
What if we replace this with something like:
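Perhaps something like this, where a bundler fills in a per-example bundle (the placeholder name is hypothetical):

```html
<script src="<%= bundle %>"></script>
```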
With this kind of an approach, we would only have to maintain a single HTML file for an arbitrary number of examples.
We now have a way to take ten different JavaScript files and turn them into one. This is a really nice thing, but productivity and confidence can further be increased using these modern tools.
Let’s say that out of those ten files, nine are not ours; they are libraries that someone else wrote (we just described three’s examples as such).
The code we write is obvious, it’s a file we are editing, but how do we obtain the dependencies?
The archaic way of doing this would be to take the source code of the library and copy it into your project.
If you wanted to use OrbitControls with your source code, you can see above how it would look with <script> tags. You have to host the file, and then you have to import it globally.
When bundling, we won’t be hosting the file directly, (although we could) and won’t be importing it globally. We will use it as source code, but we still need to obtain it.
So in order not to manually copy the contents of some file, and edit your copy whenever the version changes, we can use a package manager.
If our project depends on three, to install it with npm, we would run this command after having setup the project:
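```sh
npm install three
```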
For example, three will install all of its examples and you may only use one; another library may have many of its own dependencies, which will also be installed. If you look inside the node_modules folder, you will find a lot of stuff…
To update a package with npm, you would run something like:
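```sh
npm update three
# or, to move to whatever the newest published version is:
npm install three@latest
```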
This automates the task of manually copying the new version of three’s source code into your project.
When you share a project that is set up like this, you only need to include your source code and a meta file describing the dependencies used. With this meta file, another user can install the same dependencies as you.
In here lies a huge gotcha, but let’s go over one more concept.
When you write something in a language like C, the computer cannot run your code directly. It has to be compiled and turned into a language that the computer understands, and with this it becomes less comprehensible to humans.
JavaScript doesn’t work like this, and the code you write, is the code that will be executed. Of course this has to be translated to machine language further down, but the browser environment takes care of this.
All the browsers more or less understand the same JavaScript, but this ratio sort of fluctuates through time. Some experimental feature may become standard, yet some new experimental features can appear in certain browsers. Other features may have been available for a time, but had to be turned off because of some revealed exploit.
One way of going about this problem is roughly what three.js does:
Don’t use experimental features, favor the features that work across most or all the browsers.
This is a relatively safe approach: there is some standard, and it is being followed. Fancy new experimental stuff is just that, experimental.
However, this limits how productive or confident the consumer of the library can be. For example, using an asset loader without promises is very verbose in comparison. Extending a class is also verbose if done with “vanilla” JavaScript as opposed to a modern standard. Then again, something that’s not available in some browser can be implemented outside of it; this is known as a “polyfill”.
To overcome this, it’s possible to write an improved version of JavaScript, going as far as labeling it a different language (eg. TypeScript) but transforming it to the version of JavaScript that the browsers are most likely to understand. This transformed version is what ends up being served to the end user.
There are many benefits to this that are out of scope of this article, let’s just try to visualize how this may affect the workflow with three:
Let’s pick some random .js files from this list and call them dependencies of our project. Once a tool like webpack processes our entire project, it would yield simply:
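Something along the lines of (the output file name is illustrative):

```text
dist/main.js   ← our code plus every dependency, bundled into one file
```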
All these files can be written with modern JavaScript, and the .d.ts one is actually a TypeScript definition. But since a browser understands older JS and doesn’t understand TS, we combine all this source code and transform it into a single “vanilla” JS file.
A common hurdle with three.js that can be solved with this is obtaining shader strings. Instead of embedding them in the HTML and extracting them at runtime, or fetching them as a resource asynchronously, they can be stored as source .glsl code, with all the benefits of syntax highlighting and IDE support, and bundled together with the rest of the source code.
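A sketch of what that can look like with a bundler configured to treat .glsl files as strings (the loader setup and file names are assumptions on my part):

```js
import * as THREE from 'three';
// these imports only work because the bundler turns .glsl files into strings
import vertexShader from './shaders/distort.vert.glsl';
import fragmentShader from './shaders/distort.frag.glsl';

const material = new THREE.ShaderMaterial({ vertexShader, fragmentShader });
```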
It may not make sense to combine everything into a single file, but all of this still applies when breaking this apart. Thousands of potential source files, are bundled into a few output files.
I have a feeling that my experience working with three.js within this modern environment differs from other popular libraries. React is one such library that I work with a lot, and over the years I think I can only point out one version of interest, and that would be “the hook one” (I probably haven’t been using it long enough to hit deprecated methods).
It wasn’t even a breaking change; rather, there was a version that introduced a new interface that was completely opt-in. If one didn’t care about it, updating React would be an automated process without much effect. Hopefully the only effect would be that your web app runs faster after an update.
With three.js it’s a different story. R9 seems like it wasn’t a library for WebGL, R116 is a version of three that is WebGL centric. It’s still technically the same three.js.
There was no “three with webgl” or “three with PBR shaders” analogous to “react with hooks”. There is a continuous stream of upgrades. Judging from the change log, these can be quite heavy, or minor, but each gets the same weight as being the “latest” version.
Having worked with three.js for a number of years, I cannot point out a single version as being “the one with…”, with perhaps one exception.
I have a rough idea of the state where R7X versions were, roughly what happened around say R92, and some idea of the type of work that is happening in say the last five versions.
Thus, I think most people will start at the “latest” — whatever happens to be the most recent Rxxx release at the moment.
When joining a company that uses three.js in their stack, a more likely scenario is that you will encounter some version that is several, if not many iterations behind the latest.
I don’t know :)
This is one of the most interesting aspects of three.js for me. The need for upgrading always depended on the role I was working in combined with the state of three.js at arbitrary moments in time.
When I worked in an agency on many different prototypes, I did upgrade quite often, since there was a lot of development happening at the time that improved the quality of three’s lighting. If an interesting version of three aligned with the start of a project, I’d use the latest; otherwise, I’d stick to a version I was familiar with.
When I worked at startups, things were much more conservative. There would have to be a really good argument to why one would want to upgrade, and risk breaking the entire business.
The risk in such a scenario is huge. Because three is or was behind the curve, many of the tools invented to reduce such risk were incompatible. The safety of TypeScript disappears if three.js itself doesn’t have well defined types.
The pace and three’s versioning pattern IMHO make this a bit of an arbitrary process:
The severity of the bugs can differ vastly. It’s possible that some crucial feature is broken exactly in R113, but was caught and fixed in R114. But with this fix came some deprecation.
So by the time you run into the bug, and realize you have to upgrade to R114, you may encounter that Legacy file we mentioned earlier.
While this is a warning, it can get kinda annoying if you keep it around.
Maybe in development you are logging some other warnings that are more meaningful to you and you don’t want to pollute the console with this. But there is no way to opt out.
Even though three can still legitimately use both method names (the old one calls the new one), you cannot turn off this warning, which makes it sort of impossible to use the old method, and you have to go and change all of the calls in your code.
The very basics of three.js remained consistent in between these two versions. The code adding Meshes to Scenes does not break, but anything slightly more advanced (like rendering to targets) is prone to break.
So any time you do an upgrade, you risk having to do some extra maintenance on your own code. Even though there is backwards compatibility, I think it’s weird that it comes with this mandatory console warning. It’d be fine to log it once maybe, but why thousands of times?
If your application hits a bug that stems from a particular version of three, you will most likely have to upgrade just to have it fixed, but the upgrade potentially comes with other overhead; you cannot just upgrade to get a bug fix.
The long introduction was leading to this. When three.js ended up on npm, it sort of found itself boxed into the concept of semantic versioning.
These are the MAJOR, MINOR and PATCH versions, and a version number looks like this:
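```text
MAJOR.MINOR.PATCH
1.2.3
```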
The major version here would be 1, minor 2, and patch 3. There is a lot of detail but the high level description is pretty concise:
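Paraphrasing the semver spec (from memory, so the wording is approximate):

```text
MAJOR — incremented for incompatible API changes
MINOR — incremented when functionality is added in a backwards compatible way
PATCH — incremented for backwards compatible bug fixes
```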
Because all of the npm libraries have to use this pattern of versioning, three.js found its own Rxxx format expressed in a semantic way:
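So, at the time of writing, the npm release corresponding to R116 looks something like this:

```text
R116  →  0.116.x   (MAJOR 0, MINOR 116, PATCH x)
```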
I’m trying to wrap my head around what effect this has, and how it possibly differs from other big libraries.
My expectation is that, when incrementing the MINOR version (R116 → R117), there should be no breaking changes.
If my development process is affected — i.e. I use console logs for debugging, but now I can’t track them because of three’s mandatory warnings — I most likely need to update my code to not trigger the warnings, even though the old method works.
I’m not sure exactly how, but it seems that the PATCH version also gets incremented, though only rarely. I think this is due to the rapid cycle: new features are constantly merged alongside fixes, so there isn’t much opportunity to squeeze in many patch versions.
I think the patches get incremented mostly when critical bugs are encountered, since those fixes can be few in number and can be bundled together as a single release. But these are then made obsolete in a matter of days when the new R release comes out, thus bumping the MINOR version.
The MAJOR version has never been incremented, and is still at 0.
Had three been available on npm from release R1, I’d imagine the MAJOR version would have been bumped somewhere in the ten years to follow, as it grew from a non-WebGL library into a WebGL-centric one.
To conclude, three’s official versioning uses the release (R100) pattern, but since it is available on npm, and is quite likely to remain there for as long as npm exists, it is using semantic versioning in parallel.
For a beginner, three has remained consistent for years. The interface didn’t change, and has sort of become a standard on how to draw 3d things on the web.
For a power user though, the release cycle can be a source of headaches. While waiting for some feature to land in R200, many changes need to be accommodated if starting at R100.
I think the issue is that, as much as the interface is consistent for the beginner, it’s as inconsistent for the power user. This is really difficult to gauge though, since using the Legacy pattern, the interface is still there, but IMHO it’s dubious how useful it is.
Is changing the inner workings of the PBR lighting model a breaking change?
The material rendering this effect would still have metalness as its interface, but the result rendered on screen can be vastly different compared to a previous version.
Is this considered a bug then? That’s a rather philosophical question. I’ve seen issues reported over and over as bugs even though they were deemed a feature at one point.
I think the biggest offender in this whole story are three’s examples.
I think this is possibly the biggest area of technical debt that three.js has.
It’s not clear if they are part of three or not. They are certainly examples of what can be built using three.js, there is no denying that.
I want to make an analogy here with various react components that can be found out in the wild. There are probably thousands of components that are open sourced, that have people maintaining them, people using them, but they are not part of react. They are things built using react.
Three’s examples all live in the same repository, even though they were similarly contributed by various different people, and possibly maintained over time by others.
The complexity and utility fall in a wide range.
Some are very advanced and hard to comprehend, and possibly not very useful outside of a tech demo, because they won’t really run smoothly on phones, for example.
Others, though, can be considered extensions of three.js, OrbitControls being the first that comes to mind. A vast number of three.js apps use this class, so it’s more of a plugin than an example in my mind.
Why not just stick it in and make it part of the main library?
The vast number of apps may use it, but there may be some that don’t. And why would we? Sitting in the examples, it shows that it is a tool perfectly suited to being built out of three’s building blocks. There is no need for such a tool to be in the same layer as three; the example proves that it should be in the layer above.
I think examples are a way that three.js cheats the semantic versioning system.
Since they are not part of the main library (technically) there is no obligation to keep them under the confines of semantic versioning.
There is no Legacy file to inform you if an interface of the example changed, the example may change so drastically that it has no backwards compatibility.
I also found myself searching for versions of three that would cover the examples I needed with modules or TypeScript, as one version had just started to enhance the examples, while another provided more coverage.
For example, at one point I could already import core components such as Mesh, but not the OBJLoader, since it hadn’t been modernized yet.
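From memory, the situation looked roughly like this (the paths are the ones I recall, so treat them as approximate):

```js
// the core was already consumable as a module...
import { Scene, Mesh } from 'three';

// ...while a not-yet-modernized example still had to be loaded as a global script
// from the legacy examples/js folder instead of being imported from examples/jsm:
// <script src="examples/js/loaders/OBJLoader.js"></script>
```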
I’m trying to imagine a situation where three’s examples folder would just hold a page that links to various other GitHub repositories and the corresponding npm package names.
Maybe I don’t need to pull in all of the examples when I install three, but only the ones I’m actually using.
It would be really hard to keep these examples up to date and compatible with three if three kept the fast release cycle it has today.
Some basic examples would probably work, but the more advanced ones would have to be checked upon every release, and probably be modified to have a corresponding version.
Maybe OrbitControls works with R50-R100, but needs to be modified at R101. The team maintaining the package wouldn’t know; they would simply have to check every version.
At a minimum, you might be publishing a package that will cause console pollution for a vast number of developers. At worst, you’ll have published a package that doesn’t work.
Adhering to the tenets of semantic versioning seems like it would address this issue. If the maintainer of OrbitControls could see that no breaking changes happened within the last 50 releases, they could have some confidence that their package is still up to date, because three was just fixing bugs. Upon a breaking release, the package would have to be inspected; maybe it’s affected and maybe not.
The most fascinating aspect of this situation is that three prefers a linear approach to versioning — we’re moving in one dimension, along a line.
I’d expect that a library that is suited for doing 3d graphics would embrace a versioning pattern that is three dimensional :)
After all, we can easily convert a linear index into three dimensions. Imagine if we took some factor, let’s say 5 releases, and dedicated those to just fixing bugs.
These five releases would increment the PATCH version. Every 6th release would gather all the new features up to that point, and only merge those. Thus incrementing the MINOR version. I think this would be enough to give some stability and confidence for developers outside the core team to build reusable solutions for three.js.
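As a purely hypothetical sketch of the idea (the factor and the mapping are mine, not anything three.js actually does):

```js
// map a linear release index onto a semver-like triple:
// every `featureCycle`-th release bumps MINOR, the ones in between bump PATCH
function versionFromReleaseIndex(n, featureCycle = 6) {
  const minor = Math.floor(n / featureCycle);
  const patch = n % featureCycle;
  return `0.${minor}.${patch}`;
}

versionFromReleaseIndex(116); // "0.19.2" — illustrative only
```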
The MAJOR version is a tricky one, since the Legacy mechanism ensures that the interface at least doesn’t break. I’m sure going from non-WebGL to WebGL would warrant an increment here somewhere. Perhaps an overhaul of the internals of rendering? I’m not sure there absolutely has to be a breaking change; React hooks seem like a big feature, but they were introduced in a minor version. Truly deprecating some of the methods in Legacy would warrant incrementing it, at a minimum.
Three’s development is hard to follow. While the basics of using three don’t seem like they have changed in years, the advanced features are being added, removed and changed at a mind boggling pace. The result is that an advanced user of three usually has to get acquainted with the code in three’s core to some extent.
In order to bump three a few versions, one would have to read the change log for each version in between. This can get quite overwhelming if you are trying to track down some high priority bug that just got reported, and you’re looking for that one potential thing that may have silently changed under the hood of three.js. Not to mention that it probably takes a few hours to isolate the bug down to a change in the library in such a case :(
When using three’s examples as essential plugins, they sometimes tend to lag behind what’s going on in the library. If they lived in their own repos and were accessible as packages, it would be easier to track their state and make the decision to upgrade something with more confidence.
The situation with npm is weird. I believe three was compelled to be available on it because of where the industry is today, and through this it got shoehorned into the semantic versioning pattern. While it seems to be utilizing it, I think it’s a bit of a deception, and not much, if anything, can be understood from looking at the version number of three alone.
I wonder if three is drifting away from both of the main target audiences. The number of unanswered but basic questions on Stack Overflow related to starting out with three is staggering. It’s possible that modernizing three is raising the entry bar.
On the other hand, for the power users it may feel like using a time machine at certain moments. An entire stack of JS can be modern, with three being the lone offender feeling a few years behind.
I think much of this burden would go away if the examples were decoupled from three, but this seems to be extremely hard to do conceptually. For years, three has been building an image of what it’s capable of by leveraging the examples. I think the mental model needs to shift from three being this all-capable thing to being seen as a collection of building blocks. You don’t need many of those blocks to draw 3d graphics, and some of them are very simple, but you are always using building blocks.
I think of an analogy with React again. It has a lot of official documentation, but also a lot of tribal knowledge accumulated over the years, which is still valid. Because it is so stable, a whole ecosystem can be built around it. Tutorials don’t go stale as fast in this situation.
I’ve never had to read the source of React in order to work with it; I had to learn, and am still learning, a lot about graphics in order to work with three.js. While this is on one hand a beautiful thing, it can get a bit overwhelming when you need to do it day to day, under deadlines.
I think that something can be done with the linear index that three.js adheres to. Say every 5th release is R, and any other is B (for bugs) — but whatever it ends up being, it’s going to look an awful lot like semver. Just giving users of three.js a chance to get acquainted with one state of the library, while expecting some bug fixes to come in the time frame that follows, may help the overall situation. I think this could motivate people to write more tutorials and develop an ecosystem around the core of three.js.
What do you think?