This content originally appeared on DEV Community and was authored by Mateusz Burzyński
Contrary to what most developers think, tree shaking isn’t very complicated. The discussion around the nomenclature (dead code elimination vs. tree shaking) can introduce some confusion, but this issue, along with some others, is clarified throughout the article. As JavaScript library authors, we want to achieve the most lightweight code bundle possible. In this post, I’ll walk you through the most popular patterns that deoptimize your code as well as share my advice on how to tackle certain cases or test your library.
A bit of theory
Tree shaking is a fancy term for dead code elimination. There is no exact definition of it. We can treat it as a synonym for dead code elimination or try to put only certain algorithms under that umbrella term.
If we look at the definition listed on webpack's docs page, it seems to mention both approaches.
“Tree shaking is a term commonly used in the JavaScript context for dead-code elimination. It relies on the static structure of ES2015 module syntax, i.e. import and export.”
The first sentence implies it's a synonym while the second one mentions some specific language features that are used by this algorithm.
Nomenclature dispute
“Rather than excluding dead code (dead code elimination), we’re including live code (tree shaking)”, as Rich Harris distinguishes in his excellent post on the topic.
One practical difference between the two approaches is that so-called tree shaking usually refers to the work done by bundlers, whereas dead code elimination is performed by minifiers like Terser. As a result, producing production-ready output is often a two-step process. In fact, webpack actively avoids doing dead code elimination itself and offloads most of that work to Terser, dropping only the necessary bits. All of this is to make the work easier for Terser, which operates on files and has no knowledge of modules or the project structure. Rollup, on the other hand, does things the hard way and implements more heuristics in its core, which allows it to generate less code. It's still advised to run the resulting code through Terser, though, to achieve the best overall effect.
If you ask me, there is little point in arguing which definition is correct. It’s like battling over whether we should say function parameters or function arguments. There’s a difference in meaning, but people have been misusing the terms for so long that these terms became interchangeable in everyday use. Speaking of tree shaking, I understand Rich's point, but I also think that trying to distinguish separate approaches has introduced more confusion than clarification, and that ultimately, both techniques check the exact same things. That is why I'm going to use both terms interchangeably throughout this post.
Why even bother?
The frontend community often seems to be obsessed with the size of JavaScript bundles that we ship to our clients. There are some very good reasons behind this concern, and we definitely should pay attention to how we write code, how we structure our applications, and what dependencies we include.
The primary motivation is to send less code to the browser, which translates to faster download and execution, which in turn means that our sites can be displayed and become interactive sooner.
No magic
The currently popular tools like webpack, Rollup, Terser, and others don't implement a lot of overly complicated algorithms for tracking things through function/method boundaries, etc. Doing so in such a highly dynamic language as JavaScript would be extremely difficult. Tools like Google Closure Compiler are much more sophisticated, and they’re capable of performing more advanced analysis, but they’re rather unpopular and tend to be hard to configure.
Given that there isn’t much magic involved in what those tools do, some things simply cannot be optimized by them. The golden rule is that if you care about bundle size, you should prefer composable pieces over functions with tons of options or classes with a lot of methods. If your code embeds too much logic and your users only need 10% of it, they will still pay the cost of the whole 100% – with the currently popular tooling, there is just no way around it.
General view on how minifiers and bundlers work
Any given tool performing static code analysis operates on the Abstract Syntax Tree (AST) representation of your code. It's basically the source text of a program represented as objects that form a tree. The translation is pretty much one-to-one, and converting between the source text and the AST is semantically reversible – you can always parse your source code into an AST and later serialize it back to semantically equivalent text. Note that in JavaScript things like whitespace or comments carry no semantic meaning, and most tools don't preserve your formatting. What those tools have to do is figure out how your program behaves, without actually executing it. That involves a lot of bookkeeping and cross-referencing of information deduced from the AST. Based on that, a tool can drop certain nodes from the tree once it proves that doing so won't affect the overall logic of the program.
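For intuition, here is roughly what a single statement looks like as an ESTree-style AST – a simplified sketch that leaves out location data and the other fields real parsers attach:
// const answer = 42
const node = {
  type: 'VariableDeclaration',
  kind: 'const',
  declarations: [
    {
      type: 'VariableDeclarator',
      id: { type: 'Identifier', name: 'answer' },
      init: { type: 'Literal', value: 42 },
    },
  ],
}
A tool that can prove answer is never referenced anywhere may delete this entire VariableDeclaration node, and the statement simply disappears from the serialized output.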
Side effects
Certain language constructs lend themselves to static code analysis better than others. Consider this very basic program:
function add(a, b) {
return a + b
}
function multiply(a, b) {
return a * b
}
console.log(add(2, 2))
We can safely say that the whole multiply function isn’t used by this program and therefore doesn’t need to be included in the final code. A simple rule to remember is that a function can almost always be safely removed if it stays unused, because a mere declaration doesn’t execute any side effects.
Side effects are the most vital part to understand here. They are what actually affects the outer world: for example, a call to console.log is a side effect because it yields an observable outcome of the program. It wouldn’t be OK to remove such a call, as users usually expect to see its output. It’s hard to list all possible types of side effects a program might have, but to name a few:
- Assigning a property to a global object like window
- Mutating other objects
- Calling many built-in functions, like fetch
- Calling user-defined functions that contain side effects
Code that has no side effects is called pure.
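As a quick illustration, consider two hypothetical modules (the file names and logging are made up for this sketch):
// math.js – pure at module scope: importing this file does nothing observable,
// so an unused `square` export can be removed entirely
export function square(n) {
  return n * n
}

// analytics.js – impure at module scope: this call runs as soon as the module
// is imported, so tools have to keep it even if nothing from the file is used
console.log('analytics module loaded')
export function track(event) {
  console.log('tracked', event)
}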
Minifiers and bundlers always have to assume the worst and play it safe, since incorrectly removing even a single line of code can be very costly. It can drastically alter the program’s behavior and waste people’s time on debugging bizarre problems that manifest only in production. (Minifying the code during development is not a popular choice.)
Popular deoptimizing patterns and how to fix them
As mentioned at the beginning, this article is dedicated primarily to library authors. Application development usually focuses on functionality rather than optimization, and over-optimizing the aspects mentioned below in application code is generally not advised. Why? An application codebase should contain only code that’s actually in use, so the gains from applying eyebrow-raising techniques would be negligible. Keep your apps simple and understandable.
Note: any advice given in this article is only valid for the initialization path of your modules – for what gets executed right away when you import a particular module. Code within functions, classes, and so on is mostly not subject to this kind of analysis. To put it differently, such code is rarely left unused, and when it is, it’s easily discoverable by linting rules such as no-unused-vars and no-unreachable.
Property access
This might be surprising, but even reading a property cannot be dropped safely:
const test = someFunction()
test.bar
The problem is that the bar property might actually be a getter function, and functions can always have side effects. Given that we don’t know much about someFunction, as its implementation might be too complex to be analyzed, we should assume the worst-case scenario: this is a potential side effect and as such cannot be removed. The same rule applies when assigning to a property.
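A hypothetical implementation of someFunction makes the danger concrete:
function someFunction() {
  return {
    get bar() {
      // an observable side effect hiding behind an innocent-looking property read
      console.log('bar was read!')
      return 42
    },
  }
}

const test = someFunction()
test.bar // logs "bar was read!" – removing this line would change the program's behavior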
Function calls
Note that even if we were able to remove that property read operation, we'd still be left with the following:
someFunction()
This call cannot be dropped either, as executing the function potentially leads to side effects.
Let's consider a slightly different example that might resemble some real-world code:
export const test = someFunction()
Assume that, thanks to the tree shaking algorithm in a bundler, we already know that test isn’t used and thus can be dropped, which leaves us with:
const test = someFunction()
A simple variable declaration statement doesn't contain any side effects either, therefore it can be dropped as well:
someFunction()
In a lot of situations, however, the call itself cannot be dropped.
Pure annotations
Is there anything that can be done? It turns out that the solution is quite simple. We have to annotate the call with a special comment that the minifying tool will understand. Let's put it all together:
export const test = /* #__PURE__ */ someFunction()
This little thing tells our tools that if the result of the annotated call stays unused, then the call can be removed, which in turn can lead to the whole function declaration being dropped if nothing else refers to it.
In fact, parts of the runtime code generated by bundlers are also annotated with such comments, leaving the door open for that generated code to be dropped later.
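A common place to reach for the annotation is factory or wrapper calls at module scope. The helpers below (withTheme, memoizeIcon, createIcon, BaseButton) are hypothetical, but the pattern is typical for component libraries:
import { withTheme, memoizeIcon, createIcon, BaseButton } from './internals' // hypothetical helpers

// each export is produced by a call, so without the annotations the tools
// would have to keep every call "just in case" it has side effects
export const Button = /* #__PURE__ */ withTheme(BaseButton)
export const Icon = /* #__PURE__ */ memoizeIcon(createIcon('star'))
If an application imports only Button, the Icon line (and everything only it referenced) can be left out of the bundle.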
Pure annotations vs. property access
Does /* #__PURE__ */ work for getters and setters? Unfortunately not. There isn’t much that can be done about them without changing the code itself. The best thing you could do is to move them into functions. Depending on the situation, it might be possible to refactor the following code:
const heavy = getFoo().heavy
export function test() {
return heavy.compute()
}
To this:
export function test() {
let heavy = getFoo().heavy
return heavy.compute()
}
And if the same heavy instance is needed for all future calls, you can try the following:
let heavy
export function test() {
// lazy initialization
heavy = heavy || getFoo().heavy
return heavy.compute()
}
You could even try to leverage #__PURE__ with an IIFE, but it looks extremely weird and might raise eyebrows:
const heavy = /* #__PURE__ */ (() => getFoo().heavy)()
export function test() {
return heavy.compute()
}
Relevant side effects
Is it safe to annotate side-effectful functions like this? In the library context, it usually is. Even if a particular function has some side effects (a very common case, after all), they usually only matter when the result of the function ends up being used. If that’s not the case – if the call cannot be safely dropped without altering the overall program’s behavior – you should definitely not annotate it as pure.
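To make the distinction concrete, here is a sketch with hypothetical helpers – the first call must not be annotated, while the second one is a good candidate:
import { registerErrorReporter, createTheme } from './helpers' // hypothetical

// DON'T mark this as pure: the registration matters even if nobody ever
// reads unsubscribe, so dropping the call would silently disable reporting
const unsubscribe = registerErrorReporter()

// OK to mark as pure: building the object is the only effect, so if no one
// imports defaultTheme, nothing is lost by removing the call
export const defaultTheme = /* #__PURE__ */ createTheme({ color: 'tomato' })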
Builtins
What might also come as a surprise is that even some well-known builtin functions are oftentimes not recognized as "pure" automatically.
There are some good reasons for that:
- The processing tool cannot know in what environment your code will actually get executed, so, for example, Object.assign({}, { foo: 'bar' }) could very well just throw an error like "Uncaught TypeError: Object.assign is not a function".
- The JavaScript environment can be easily manipulated by some other code the processing tool isn’t aware of. Consider a rogue module that does the following:
Math.random = function () { throw new Error('Oops.') }
As you can see, it’s not always safe to assume even basic behavior.
Some tools, like Rollup, decide to be a little more liberal and choose pragmatism over guaranteed correctness. They might assume a non-altered environment, which in effect allows them to produce more optimal results for the most common scenarios.
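If you know the environments you support are sane, you can provide that hint yourself. A sketch, assuming the usual #__PURE__ support in Terser, Rollup, and webpack, and with made-up defaults/overrides values:
import { defaults, overrides } from './config' // hypothetical values

// many tools will keep this call even when `merged` is unused,
// because Object.assign itself is not assumed to be pure
export const merged = Object.assign({}, defaults, overrides)

// with the annotation, the call can be dropped once `merged2` stays unused
export const merged2 = /* #__PURE__ */ Object.assign({}, defaults, overrides)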
Transpiler-generated code
It’s rather easy to optimize your code once you sprinkle it with #__PURE__ annotations, given you’re not using any additional code-transpiling tools. However, we often pass our code through tools like Babel or TypeScript to produce the final code that will get executed, and that generated code cannot be easily controlled.
Unfortunately, some basic transformations might deoptimize your code in terms of its treeshakeability, so sometimes, inspecting the generated code can be helpful in finding those deoptimization patterns.
I’ll illustrate what I mean with a simple class that has a static field. (Static class fields were standardized in ES2022, but they were already widely used by developers well before that.)
class Foo {
static defaultProps = {}
}
Babel output:
class Foo {}
_defineProperty(Foo, "defaultProps", {});
TypeScript output:
class Foo {}
Foo.defaultProps = {};
Using the knowledge gained throughout this article, we can see that both outputs have been deoptimized in a way that might be hard for other tools to handle properly. Both outputs put the static field outside the class declaration and assign an expression to the property – either directly or through the defineProperty call (where the latter is more correct according to the specification). Usually, such a scenario isn’t handled by tools like Terser.
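One hedged workaround, analogous to the IIFE trick shown earlier, is to create and decorate the class inside a call that is explicitly marked as pure, so the whole thing can be dropped whenever Foo stays unused. This is a manual sketch, not something the transpilers above will emit for you:
const Foo = /* #__PURE__ */ (() => {
  class Foo {}
  // the static field becomes a plain assignment, but it now lives inside
  // a call that tools are allowed to drop when its result is unused
  Foo.defaultProps = {}
  return Foo
})()

export { Foo }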
sideEffects: false
It was quickly realized that tree shaking on its own yields only limited benefits for the majority of users. The results are highly dependent on the included code, since a lot of the code in the wild uses the above-mentioned deoptimizing patterns. In fact, those patterns aren’t inherently bad and most of the time shouldn’t be seen as problematic; it’s just normal code.
Making sure that code doesn’t use those deoptimizing patterns is currently mostly a manual job, so keeping a library tree-shakeable tends to be challenging in the long run. It’s rather easy to introduce harmless-looking, normal code that accidentally starts retaining too much.
Therefore, a new way to annotate the whole package (or just some specific files in a package) as side-effect-free has been introduced.
It's possible to put "sideEffects": false in the package.json of your package to tell bundlers that files in that package are pure, in a sense similar to what was described previously in the context of the #__PURE__ annotations.
However, I believe that what it does is vastly misunderstood. It doesn't actually work like a global #__PURE__ for function calls in that package, nor does it affect getters, setters, or anything else in the package. It's just a piece of information for the bundler that if nothing from a file in such a package ends up being used, then the whole file can be removed without looking into its content.
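For reference, the flag lives in package.json. webpack also accepts an array form that lists only the files that do have side effects, with every other file treated as side-effect-free (the paths below are made up):
{
  "name": "my-library",
  "sideEffects": false
}
Or, when a few files must always be kept:
{
  "name": "my-library",
  "sideEffects": ["./dist/polyfills.js", "*.css"]
}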
To illustrate the concept, we can imagine the following module:
// foo.js
console.log('foo initialized!')
export function foo() {
console.log('foo called!')
}
// bar.js
console.log('bar initialized!')
export function bar() {
console.log('bar called!')
}
// index.js
import { foo } from './foo'
import { bar } from './bar'
export function first() {
foo()
}
export function second() {
bar()
}
If we only import first from the module, then the bundler will know that it can omit the whole ./bar.js file (thanks to the "sideEffects": false flag). So, in the end, this would be logged:
foo initialized!
foo called!
This is quite an improvement, but at the same time, it’s not, in my humble opinion, a silver bullet. The main problem with this approach is that you need to be extra careful about how the code is organized internally (the file structure, etc.) in order to achieve the best results. It used to be common advice to "flat bundle" library code, but here the opposite holds – flat bundling is actively harmful to this flag.
This can also be easily deoptimized if we decide to use anything else from the ./bar.js file, because it will only be dropped if no export from that module ends up being used.
How to test this
Testing is hard, especially since different tools yield different results. There are some nice packages that can help you, but I've usually found them to be faulty in one way or another.
I usually try to manually inspect the bundles I get after running webpack & Rollup on a file like this:
import 'some-library'
The ideal result is an empty bundle – no code in it. This rarely happens, therefore a manual investigation is required. One can check what got into the bundle and investigate why it could have happened, knowing what things can deoptimize such tools.
With the presence of "sideEffects": false, my approach can easily produce false-positive results. As you may have noticed, the import above doesn’t use any export of some-library, so it’s a signal to the bundler that the whole library can be dropped. This doesn’t reflect how things are used in the real world, though.
In such a case, I try to test the library after removing this flag from its package.json to check what would happen without it and to see if there’s a way to improve the situation.
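Another way to reduce false positives is to make the test entry actually use something, so neither the bundler nor the minifier can discard the library wholesale. A sketch with a made-up export name – substitute a real export of the library you are testing:
// test-entry.js
import { first } from 'some-library'

// keep a live reference so the import cannot be dropped as unused
console.log(first)
Bundling this file with both webpack and Rollup (and running the result through Terser), then inspecting the outputs, usually shows quickly which parts of the library are being retained and why.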
Happy tree shaking!