The Discord servers I moderate have had a bit of a spam-bot problem lately: bots join, never interact, and only spam users' DMs. While I could wait until Discord bans them, that's not exactly the best solution, so why not build my own bot to handle users that never interact? So I started building a bot using NodeJS and discord.js. It's been going great so far, but then my ADHD got the better of me when I wanted to know: what actually is the fastest way to clone an Array?
Searching for an answer, I stumbled on this StackOverflow question. And as usual, almost all of the answers are regurgitated content with nothing to back them up. So, with nothing else to do on a weekend, I threw my entire knowledge at the problem, wrote a benchmarking "tool" and several implementations of the same operation, and I think I found the answer.
Coding a Benchmark
Please make sure to include the necessary credit in your derivative work:
Copyright 2024 Michael Fabian 'Xaymar' Dirks
I'm not a fan of services like JSPerf or JSBench: many of them rely on third-party libraries and permanently run extra code on the side, and when something breaks you never know why or when it'll be fixed, resulting in delays that really shouldn't be necessary. So instead I wrote my own generic benchmarking function using only standard JavaScript, compatible with NodeJS and common browsers:
// Copyright <2024> Michael Fabian 'Xaymar' Dirks <info-at-xaymar-dot-com>
// Licensed under 3-Clause BSD License: https://opensource.org/license/bsd-3-clause

// Handle Node.JS, where 'performance' is provided by 'node:perf_hooks'.
if (typeof window === 'undefined') {
	import('node:perf_hooks').then((perfHooks) => {
		global.performance = perfHooks.performance;
	});
}

export default {};

export function benchTime(cycles, timeLimit, fnSetup, ...fnProcess) {
	let finalResults = new Map();

	function measureCycle(timeLimit, fn, args) {
		let tmp = null;
		let end, start = performance.now();
		let iterations = 0;

		// Run until we exceed the time limit for one cycle.
		do {
			tmp = fn.apply(null, args);
			end = performance.now();
			++iterations;
		} while ((end - start) <= timeLimit);
		tmp = undefined;

		// Build a result object and return it.
		return {
			"iterations": iterations,
			"start": start,
			"end": end,
			"duration": end - start,
			"opsPerSec": (iterations / (end - start)) * 1000.0,
		};
	}

	console.log(`Measuring ${fnProcess.length} functions...`);
	let params = fnSetup();
	//console.log("Setup function returned:", params);

	// Perform this for every function passed.
	for (let fn of fnProcess) {
		let results = [];
		//console.groupCollapsed(`${fn.name}: Running for ${cycles} cycles...`);

		// Perform this N times.
		for (let cycle = 0; cycle < cycles; cycle++) {
			let result = {
				"iterations": Number.NaN,
				"start": Number.NaN,
				"end": Number.NaN,
				"duration": Number.NaN,
				"opsPerSec": Number.NaN,
			};
			try {
				result = measureCycle(timeLimit, fn, params);
				results.push(result);
			} catch (ex) {
				console.error(`${fn.name}:`, ex);
				break;
			}
			//console.log(`Cycle ${cycle}/${cycles}: ${result.iterations}, ${result.end - result.start}, ${result.opsPerSec} ops/s`);
		}

		// If we have more than 3 cycles, drop the fastest and slowest as outliers.
		// (Sort comparators must return a number, not a boolean.)
		if (results.length > 3) {
			//console.log("Dropping slowest and fastest result.");
			results = results
				.sort((a, b) => a.duration - b.duration)
				.slice(1, -1);
		}
		//console.groupEnd();

		// Merge all results for the final average.
		let iterations = 0;
		let totalTime = 0;
		let opsPerSecMin = +Infinity;
		let opsPerSecMax = -Infinity;
		let opsPerSec = 0;
		for (let result of results) {
			iterations += result.iterations;
			totalTime += result.duration;
			opsPerSec += result.opsPerSec;
			if (opsPerSecMin > result.opsPerSec) {
				opsPerSecMin = result.opsPerSec;
			}
			if (opsPerSecMax < result.opsPerSec) {
				opsPerSecMax = result.opsPerSec;
			}
		}
		let operations = opsPerSec / results.length; //iterations / totalTime;
		let operationVariance = opsPerSecMax - opsPerSecMin;
		console.log(`${fn.name}: ${(operations / 1000).toFixed(3)}±${(operationVariance / 1000).toFixed(3)} ops/ms, ${iterations} iterations over ${totalTime} ms.`);
		finalResults.set(fn, results);
	}
	//console.log("Done.");
	return finalResults;
}
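One subtlety worth spelling out: the setup function must return an *array of arguments*, which is then spread into every benchmarked function via `fn.apply(null, args)`. A minimal sketch of that contract (the function names here are mine, not part of the benchmark):

```javascript
// The setup function returns an array of arguments; the benchmark passes it
// to each measured function via fn.apply(null, args).
function fnSetup() {
    return [[1, 2, 3]]; // one argument: the array to clone
}
function clone(arr) {
    return arr.slice();
}

const args = fnSetup();
const copy = clone.apply(null, args); // equivalent to clone([1, 2, 3])
console.log(copy);             // [ 1, 2, 3 ]
console.log(copy === args[0]); // false - it's a new array
```

This is why the benchmark below returns `[arr]` from its setup function rather than `arr` itself.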
Next up was writing down every way I knew or could find to shallow-clone an Array, plus a few extra varieties to fill out the benchmark. I came up with 2 on my own and used already available code from documentation and other posts for the remaining 8. In total, that gives us 10 different ways to shallow-clone an array.
// Copyright <2024> Michael Fabian 'Xaymar' Dirks <info-at-xaymar-dot-com>
// Licensed under 3-Clause BSD License: https://opensource.org/license/bsd-3-clause
import { benchTime } from './benchTime.mjs';

export default function (size) {
	function spread(arr) { return [...arr]; }
	function spreadNew(arr) { return new Array(...arr); }
	function arraySlice(arr) { return arr.slice(); }
	function arraySlice0(arr) { return arr.slice(0); }
	function arrayConcat(arr) { return [].concat(arr); }
	function arrayMap(arr) { return arr.map(i => i); }
	function objectValues(arr) { return Object.values(arr); }
	function objectAssign(arr) { return Object.assign([], arr); }
	function json(arr) { return JSON.parse(JSON.stringify(arr)); }
	function loop(arr) { const a = []; for (let val of arr) { a.push(val); } return a; }

	benchTime(
		10, 1000,
		() => {
			let arr = new Array(size);
			for (let a = 0; a < arr.length; a++) { arr[a] = Math.random(); }
			return [arr];
		},
		spread,
		spreadNew,
		arraySlice,
		arraySlice0,
		arrayConcat,
		arrayMap,
		objectValues,
		objectAssign,
		json,
		loop
	);
}
Running a Markbench
Now that we have everything, we just need to run the test, and you can even do it in your browser's developer console. Just paste this in, hit Enter/Return, and it should start measuring:

const { default: asc } = await import("./arrayShallowClone.mjs");
for (let e of [256, 2048, 16384, 131072, 1048576]) { console.log(e); asc(e); }

For my own sanity, I limited myself to NodeJS V8, Firefox SpiderMonkey and Chromium V8 only.
256 Elements
All values are in ops/ms (higher is better), shown as average ± variance.

Test | NodeJS v20.12.2 | Firefox v127.0b2 | Edge 124.0.2478.97 |
---|---|---|---|
spread | 3744.706 ±837.720 | 427.328 ±160.628 | 194.148 ±21.069 |
spreadNew | 772.173 ±4.565 | 253.318 ±14.201 | 944.612 ±65.431 |
arraySlice | 8153.649 ±74.025 | 11135.403 ±122.001 | 4332.440 ±566.803 |
arraySlice0 | 8097.358 ±221.166 | 10743.045 ±1498.073 | 4132.175 ±291.671 |
arrayConcat | 7824.854 ±1304.302 | 8114.314 ±59.299 | 4266.308 ±519.280 |
arrayMap | 1556.550 ±555.325 | 1477.043 ±30.384 | 1601.246 ±496.596 |
objectValues | 395.249 ±13.034 | 11097.390 ±115.324 | 2351.786 ±298.991 |
objectAssign | 24.374 ±0.333 | 10540.011 ±99.678 | 1742.264 ±198.06 |
json | 14.893 ±0.190 | 263.623 ±6.714 | 371.344 ±37.337 |
loop | 558.856 ±33.762 | 452.884 ±134.844 | 1003.259 ±186.607 |
For extremely small arrays, the best option appears to be array.slice() across the board. To hopefully nobody's surprise, the most commonly suggested solutions are also among the worst ones: [...array] and JSON.parse(JSON.stringify(array)). Interestingly, Firefox's SpiderMonkey appears to be cheating here a bit and treats several methods almost identically.
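Before reading any further numbers, it's worth remembering that all ten candidates are *shallow* clones. A quick sketch of what that implies:

```javascript
// Shallow cloning copies the top-level slots only; nested objects stay shared.
const original = [{ n: 1 }, { n: 2 }];
const copy = original.slice();

copy.push({ n: 3 });          // growing the copy does not affect the original
console.log(original.length); // 2

copy[0].n = 99;               // but mutating a nested object is visible in both
console.log(original[0].n);   // 99
```

The JSON round-trip is the one exception - it produces a deep copy - which partly explains why it's so much slower than the rest.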
2048 Elements
Test | NodeJS v20.12.2 | Firefox v127.0b2 | Edge 124.0.2478.97 |
---|---|---|---|
spread | 564.068 ±69.269 | 54.662 ±12.295 | 24.842 ±4.052 |
spreadNew | 100.493 ±18.711 | 8.528 ±0.445 | 166.986 ±3.784 |
arraySlice | 1200.385 ±107.918 | 10952.114 ±327.757 | 2132.616 ±419.095 |
arraySlice0 | 1244.755 ±347.808 | 10291.675 ±718.242 | 2360.948 ±313.196 |
arrayConcat | 1195.633 ±202.225 | 7922.923 ±152.032 | 2330.935 ±734.994 |
arrayMap | 230.627 ±11.868 | 203.621 ±10.492 | 380.155 ±8.593 |
objectValues | 49.283 ±8.554 | 11062.764 ±580.898 | 668.146 ±24.536 |
objectAssign | 2.721 ±0.471 | 10545.962 ±159.850 | 567.441 ±7.771 |
json | 1.773 ±0.237 | 41.225 ±1.094 | 61.034 ±0.985 |
loop | 73.709 ±13.677 | 63.957 ±10.470 | 152.831 ±47.908 |
Not much changes in this more typical use case; it only further cements that the spread operator and the JSON round-trip are among the worst options. As before, Firefox's SpiderMonkey is still cheating a lot and doesn't seem to do any actual cloning. I've been unable to make SpiderMonkey behave, so we'll ignore Firefox's numbers from now on.
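I haven't verified this against SpiderMonkey myself, but a common way to discourage an engine from optimizing "dead" benchmark work away is to consume the result so it is observably used. A hedged sketch of that idea (the sink variable and consume function are my own, not part of the benchmark above):

```javascript
// Consuming the clone makes it harder for the engine to prove the work is dead.
let sink = 0;
function consume(arr) {
    // Read an unpredictable element so the result cannot be discarded.
    sink += arr[(Math.random() * arr.length) | 0];
}

const src = Array.from({ length: 1024 }, Math.random);
consume(src.slice());
console.log(Number.isFinite(sink)); // true
```

Whether this actually defeats SpiderMonkey's shortcut here is something I can't confirm; it may also be sharing backing storage copy-on-write, which no amount of sinking would reveal.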
16384 Elements
Test | NodeJS v20.12.2 | Firefox v127.0b2 | Edge 124.0.2478.97 |
---|---|---|---|
spread | 13.180 ±1.022 | 7.436 ±1.110 | 3.321 ±0.239 |
spreadNew | 4.727 ±0.532 | 1.010 ±0.022 | 21.045 ±2.982 |
arraySlice | 12.912 ±2.127 | 11046.737 ±237.575 | 494.359 ±32.726 |
arraySlice0 | 13.192 ±0.477 | 10665.299 ±500.553 | 492.209 ±66.837 |
arrayConcat | 16.590 ±0.656 | 7923.657 ±224.637 | 476.975 ±112.053 |
arrayMap | 6.542 ±0.301 | 32.960 ±3.743 | 52.127 ±3.472 |
objectValues | 4.339 ±0.111 | 10840.392 ±619.567 | 115.369 ±3.217 |
objectAssign | 0.270 ±0.013 | 10471.860 ±202.291 | 83.135 ±3.439 |
json | 0.205 ±0.039 | 4.014 ±1.679 | 7.730 ±0.319 |
loop | 6.138 ±0.287 | 6.727 ±1.296 | 27.691 ±1.217 |
As size increases, cloning slows down significantly, and we start seeing oddities in some engines. In NodeJS, [].concat(array) has suddenly become significantly faster than array.slice() - almost 30% faster. This suggests there is a certain size threshold past which slicing an Array stops being fast in V8.
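Whatever the engine does internally, both calls produce equivalent shallow copies, so picking one or the other based on measured size is safe. A quick sanity check:

```javascript
// [].concat(arr) and arr.slice() yield equivalent shallow copies; which one
// is faster depends on engine and array size, so measure in your own runtime.
const src = Array.from({ length: 16384 }, (_, i) => i);
const viaConcat = [].concat(src);
const viaSlice = src.slice();

console.log(viaConcat.length === src.length);              // true
console.log(viaConcat.every((v, i) => v === viaSlice[i])); // true
console.log(viaConcat !== src && viaSlice !== src);        // true
```

Note that the concat trick only works because the source array is passed as a single argument: concat spreads array arguments one level, which here means copying the elements into the fresh outer array.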
131072 Elements
Test | NodeJS v20.12.2 | Firefox v127.0b2 | Edge 124.0.2478.97 |
---|---|---|---|
spread | 2.152 ±0.062 | 0.787 ±0.029 | 0.305 ±0.040 |
spreadNew | NaN ±Infinity | 0.122 ±0.005 | NaN ±Infinity |
arraySlice | 2.255 ±0.092 | 10978.511 ±434.535 | 4.450 ±0.429 |
arraySlice0 | 2.251 ±0.076 | 10959.391 ±164.369 | 4.496 ±0.319 |
arrayConcat | 2.249 ±0.347 | 7989.046 ±365.379 | 4.316 ±0.462 |
arrayMap | 1.034 ±0.057 | 4.006 ±0.543 | 2.988 ±0.841 |
objectValues | 0.445 ±0.028 | 10516.475 ±918.310 | 3.843 ±0.389 |
objectAssign | 0.031 ±0.002 | 10339.829 ±749.871 | 10.521 ±1.427 |
json | 0.026 ±0.002 | 0.533 ±0.023 | 0.510 ±0.022 |
loop | 0.499 ±0.026 | 0.811 ±0.022 | 1.104 ±0.036 |
At this point you'd normally be using a database already, and it very clearly shows in the numbers too. Everything has practically slowed to a crawl, with one exception: Object.assign([], array) in Edge. There it is more than twice as fast as array.slice(), despite being among the slowest options in the same V8 engine under NodeJS. We also see array.slice() return to being the fastest way in NodeJS - odd, but consistent across multiple runs.
1048576 Elements
Test | NodeJS v20.12.2 | Firefox v127.0b2 | Edge 124.0.2478.97 |
---|---|---|---|
spread | 0.280 ±0.007 | 0.096 ±0.008 | 0.038 ±0.001 |
spreadNew | NaN ±Infinity | NaN ±Infinity | NaN ±Infinity |
arraySlice | 0.371 ±0.006 | 10994.583 ±320.441 | 0.774 ±0.008 |
arraySlice0 | 0.362 ±0.008 | 10947.817 ±152.471 | 0.770 ±0.022 |
arrayConcat | 0.367 ±0.044 | 7990.954 ±173.320 | 0.771 ±0.014 |
arrayMap | 0.018 ±0.002 | 0.536 ±0.008 | 0.283 ±0.002 |
objectValues | 0.017 ±0.000 | 11160.335 ±413.641 | 0.584 ±0.008 |
objectAssign | 0.002 ±0.001 | 10549.829 ±167.641 | 1.362 ±0.028 |
json | 0.003 ±0.000 | 0.058 ±0.002 | 0.048 ±0.002 |
loop | 0.068 ±0.002 | 0.092 ±0.004 | 0.072 ±0.041 |
This is practically insanity, but hey - why not test it anyway? It's effectively a repeat of the previous block, just with even smaller numbers. In NodeJS the fastest way is still array.slice(), in Edge it is Object.assign([], array), and Firefox is still cheating.
What is truly the fastest way to shallow clone an array?
Going by the results I've gotten on my machine, it appears that in modern JavaScript engines array.slice() has become the fastest way overall. It's also supported in even the most ancient browsers, so it's a clear winner in my book. I've already implemented it in the bot, and it sped up processing time quite a lot.
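For completeness, the takeaway boils down to a one-line helper; a minimal sketch:

```javascript
// The overall winner on my machine: slice() as the default shallow clone.
function cloneArray(arr) {
    return arr.slice();
}

const a = [1, 2, 3];
const b = cloneArray(a);
b[0] = 42;
console.log(a[0], b[0]); // 1 42
console.log(a !== b);    // true
```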