# JavaScript code performance optimization – from troubleshooting to resolution

Time: 2019-12-2

Recently, we have been optimizing the performance of our console. This post records the troubleshooting and optimization of code execution time (pure JS only; DOM-operation optimization is not covered). Other optimization points will be shared later.

## Performance troubleshooting and collection

First, we need to find the points that need optimizing. Chrome's DevTools can help locate the performance problems on the site.

It is best to collect the data in incognito mode, to avoid interference from browser extensions.

### Performance

The first option is to collect information with the Performance panel. Expanding the Main section shows how the code runs. However, the Performance panel contains a lot of content (rendering, network, memory and more), so the visual noise is heavy. Although it is very powerful, it is not the best choice for pure-JS performance troubleshooting, so today I will mainly introduce another way.

### JavaScript Profiler

The other option is the JavaScript Profiler panel, which is hidden by default. Open it from the three-dot menu in the top-right corner of DevTools: More tools > JavaScript Profiler.

You can see that the JavaScript Profiler panel is much simpler than the Performance panel. The buttons at the top left start a recording, delete recordings, and collect garbage (presumably forcing a GC, though I am not certain). You can take multiple profiles and compare them.

On the right is the profile display area. The display mode can be switched at the top: Chart, Heavy (Bottom Up), and Tree (Top Down). Chart is recommended here, as it is the most intuitive and easiest to read.

In Chart mode, the strip at the top shows CPU utilization over time on the vertical axis; the peaks are the key areas to investigate. Below it are the slices of code execution time; a long slice causes noticeable jank on the page and needs to be examined.

In the chart, scroll up and down to zoom, scroll left and right to move along the timeline, or drag to select a region. Cmd+F opens search, which is convenient when you want to check the timing of a specific piece of code.

Through the JavaScript Profiler panel, you can locate the code with abnormal performance.

For example, n.bootstrap in the figure has an execution time of 354.3ms, which clearly causes serious jank.

You can also drill down through the time slices to find which step takes longest. Above, L.initState takes 173ms, followed by several forEach calls, so clearly the loops here are expensive. Clicking a time slice jumps to the corresponding code in the Sources panel, which makes troubleshooting very convenient.

With the JavaScript Profiler, we can collect all the long-running, potentially problematic code into a list for further investigation.

### console.time

The Profiler makes it easy to collect the problem code, but it is a bit cumbersome during actual tuning, because every debugging round requires a new recording, and after recording you still have to locate the point you are debugging, which wastes a lot of time. So during actual tuning we choose other approaches, such as computing timestamp differences and logging them. And there is an even more convenient way: console.time.

```javascript
const doSomething = () => {
  return new Array((Math.random() * 100000) | 0).fill(null).map((v, i) => {
    return i * i;
  });
};

// start a timer
console.time('time log name');
doSomething();
// log the elapsed time
console.timeLog('time log name', 1);
doSomething();
// log the elapsed time
console.timeLog('time log name', 2);
doSomething();
// log the elapsed time and stop the timer
console.timeEnd('time log name');
```

Most browsers now support console.time, which makes it easy to print how long a piece of code takes to execute.

- `console.time(label)` takes a label and starts a timer; the label is then used by `console.timeLog` and `console.timeEnd`.
- `console.timeLog(label, ...data)` takes 1–N arguments: the first is the timer label, the rest are optional. It prints the timer's current elapsed time, followed by any optional arguments passed in.
- `console.timeEnd(label)` works like `console.timeLog`, except that it takes no extra arguments and stops the timer after running.
- Two timers with the same label cannot run at the same time.
- After a timer ends, a new timer with the same label can be started.

Through console.time we can see directly how long a piece of code takes; after each change, refresh the page and check the log to see the impact of the change.

## Sorting and optimization of performance problems

With the JavaScript Profiler, several performance optimization points were found in the console. (The times below were collected while debugging locally with DevTools open, so they are higher than in production.)

| Name | Location | Time per call | Calls on first load | Calls on product switch |
| --- | --- | --- | --- | --- |
| initState | route.extend.js:148 | 200ms – 400ms | 1 | 0 |
| initRegionHash | s_region.js:217 | 50ms – 110ms | 1 | 0 |
| initRegion | s_region.js:105, QuickMenuWrapper/index.jsx:72 | 70ms – 200ms | 1 | 0 |
| getProducts | s_globalAction.js:73 | 40ms – 80ms | 1 | 2 |
| getNav | s_userinfo:58 | 40ms – 200ms | 2 | 0 |
| extendProductTrans | s_translateLoader.js:114 | 40ms – 120ms | 1 | 1 |
| filterTopNavShow | EditPanel.jsx:224 | 0 – 20ms | 7 | 3 |

We then worked through the list, eliminating the performance problems one by one. Here are some typical cases.

```javascript
var localeFilesHandle = function (files) {
  var result = [];
  var reg = /[^\/\\:\*\"\<\>\|\?\.]+(?=\.json)/;
  _.each(files, function (file, i) {
    // some code
  });
  return result;
};

var loadFilesHandle = function (files) {
  var result = [];
  var reg = /[^\/\\:\*\"\<\>\|\?\.]+(?=\.json)/;
  _.each(files, function (file, i) {
    // some code
  });
  return result;
};
```
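As an aside, the regex in these helpers extracts a file's base name when it is followed by a `.json` extension. A quick sketch of its behavior (the sample paths are made up for illustration):

```javascript
// Matches a run of characters that are not path/filename separators
// or dots, but only when immediately followed by ".json".
var reg = /[^\/\\:\*\"\<\>\|\?\.]+(?=\.json)/;

console.log(reg.exec('locales/en/menu.json')[0]); // 'menu'
console.log(reg.exec('C:\\locales\\nav.json')[0]); // 'nav'
console.log(reg.exec('readme.txt')); // null: no .json extension
```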

```javascript
self.initState = function (data, common) {
  console.time('initState');
  // some code
  _.each(filterDatas, function (state, name) {
    var route = _.extend({}, common, state);
    var localeFiles = localeFilesHandle(route['files']);

    route['localeFiles'] = localeFiles;
    routes[name] = route;
    $stateProvider.state(name, route);
  });
  // some code
  console.timeEnd('initState');
};
```

In initState, filterDatas is a route map with nearly 1000 keys, and initialization registers each route with the UI-Router. The $stateProvider.state call cannot be omitted, but the two file-handling calls can be deferred until the files are actually pulled, which is when the file list is needed.

```javascript
self.initState = function (data, common) {
  console.time('initState');
  // some code
  _.each(filterDatas, function (state, name) {
    var route = _.extend({}, common, state);
    routes[name] = route;
    $stateProvider.state(name, route);
  });
  // some code
  console.timeEnd('initState');
};

// when loading files
!toState.loadfiles &&
  (toState.loadfiles = _.union(
    toState['common_files'] || [],
    $UStateExtend.loadFilesHandle(toState['files'])
  ));
!toState.localeFiles &&
  (toState.localeFiles = $UStateExtend.localeFilesHandle(toState['files']));
```

After reducing the work done inside the iteration, initState became 30% – 40% faster.

### Clarify the logic

```javascript
var bitMaps = {
  // map info
};
function getUserRights(bits, key) {
  var map = {};
  _.each(bitMaps, function (val, key) {
    map[key.toUpperCase()] = val;
  });
  return map && map[(key || '').toUpperCase()] != null
    ? !!+bits.charAt(map[(key || '').toUpperCase()])
    : false;
}
```

In getUserRights, bitMaps is traversed on every call even though bitMaps itself never changes, so it only needs to be traversed once at initialization (or cached after the first traversal).

```javascript
var _bitMaps = {
  // map info
};
var bitMaps = {};
_.each(_bitMaps, function (value, key) {
  bitMaps[key.toUpperCase()] = value;
});

function getUserRights(bits, key) {
  key = (key || '').toUpperCase();
  return bitMaps[key] != null ? !!+bits.charAt(bitMaps[key]) : false;
}
```

After this change, getUserRights became 90+% faster. Since getUserRights is called many times in several of the hot paths above, this change brings a significant overall improvement.

### Make good use of bitwise operations

```javascript
var buildRegionBitMaps = function (bit, rBit) {
  var result;
  if (!bit || !rBit) {
    return '';
  }
  var zoneBit = (bit + '').split('');
  var regionBit = (rBit + '').split('');
  var forList = zoneBit.length > regionBit.length ? zoneBit : regionBit;
  var diffList = zoneBit.length > regionBit.length ? regionBit : zoneBit;
  var resultList = [];
  _.each(forList, function (v, i) {
    resultList.push(parseInt(v) || parseInt(diffList[i] || 0));
  });
  result = resultList.join('');
  return result;
};

var initRegionsHash = function (data) {
  // some code
  _.each(data, function (o) {
    if (!regionsHash[o['Region']]) {
      regionsHash[o['Region']] = [];
      regionsHash['regionBits'][o['Region']] = o['BitMaps'];
      regionsList.push(o['Region']);
    }
    regionsHash['regionBits'][o['Region']] = buildRegionBitMaps(
      o['BitMaps'],
      regionsHash['regionBits'][o['Region']]
    );
    regionsHash[o['Region']].push(o);
  });
  // some code
};
```

buildRegionBitMaps merges two permission bit strings of around 512 bits each (judging by the current code, the length is not fixed) to compute the effective permissions. The current code splits each binary string into an array and iterates bit by bit, which is inefficient, and buildRegionBitMaps is called many times inside initRegionsHash, which amplifies the problem. Bitwise operations can compute the permissions far more efficiently than array traversal.

```javascript
var buildRegionBitMaps = function (bit, rBit) {
  var result = '';
  if (!bit || !rBit) {
    return '';
  }
  var lBit, sBit, sl;
  if (bit.length > rBit.length) {
    lBit = bit;
    sBit = rBit;
  } else {
    lBit = rBit;
    sBit = bit;
  }
  sl = sBit.length;
  var i = 0;
  var s = 30;
  for (; i < sl; ) {
    var n = i + s;
    result += (
      parseInt('1' + lBit.substring(i, n), 2) |
      parseInt('1' + sBit.substring(i, (i = n)), 2)
    )
      .toString(2)
      .substring(1);
  }
  return result + lBit.slice(sl);
};
```

With this change, initRegionsHash's running time dropped to 2ms – 8ms, a 90+% improvement. Note that bitwise operations in JavaScript work on 32-bit integers, and anything beyond 32 bits overflows, so the strings above are merged in 30-bit chunks.
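To illustrate why the merge is chunked, here is a minimal standalone sketch (not the exact helper above; `orBitStrings` is an illustrative name, and chunks are aligned to the shorter string). JavaScript bitwise operators coerce their operands to 32-bit signed integers, so OR-ing values wider than 31 bits silently loses information, while the leading `'1'` sentinel preserves leading zeros when converting each chunk back to binary text:

```javascript
// Bitwise operators truncate to 32-bit signed integers:
console.log((2 ** 31) | 0); // -2147483648, not 2147483648

// OR-merging two bit strings in 30-bit chunks; a '1' sentinel is
// prepended so leading zeros survive the round-trip through parseInt.
const orBitStrings = (a, b) => {
  const [long, short] = a.length >= b.length ? [a, b] : [b, a];
  let result = '';
  for (let i = 0; i < short.length; i += 30) {
    const end = Math.min(i + 30, short.length);
    const l = parseInt('1' + long.substring(i, end), 2);
    const s = parseInt('1' + short.substring(i, end), 2);
    result += (l | s).toString(2).substring(1);
  }
  // any unmatched tail of the longer string passes through unchanged
  return result + long.slice(short.length);
};

console.log(orBitStrings('001100', '010100')); // '011100'
```

A 30-char chunk plus the sentinel is 31 bits, which stays safely inside the 32-bit signed range.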
### Reduce repetitive tasks

```javascript
function () {
  currentTrans = {};
  angular.forEach(products, function (product, index) {
    setLoaded(product['name'], options.key, true);
    currentTrans = extendProduct(product['name'], options.key, CNlan);
  });
  currentTrans = extendProduct(Loader.cname || 'common', options.key, CNlan);
  if ($rootScope.reviseTrans) {
    currentTrans = Loader.changeTrans($rootScope.reviseNoticeSet, currentTrans);
  }
  deferred.resolve(currentTrans[options.key]);
}
```

The code above merges product language packs. products holds the product names for the current route and may contain duplicates, and the common language pack is large, with more than 10,000 keys, so the merging is very expensive.

```javascript
function () {
  console.time('extendTrans');
  currentTrans = {};
  var productNameList = _.union(_.map(products, product => product.name));
  var cname = Loader.cname || 'common';
  angular.forEach(productNameList, function (productName, index) {
    setLoaded(productName, options.key, true);
    if (productName === cname || productName === 'common') return;
    extendProduct(productName, options.key, CNlan);
  });
  extendProduct('common', options.key, CNlan);
  cname !== 'common' && extendProduct(cname, options.key, CNlan);
  if ($rootScope.reviseTrans) {
    currentTrans = Loader.changeTrans($rootScope.reviseNoticeSet, currentTrans);
  }
  deferred.resolve(currentTrans[options.key]);
  console.timeEnd('extendTrans');
}
```

Here the product names are deduplicated to reduce the number of merges, and the merges for common and cname are moved out of the loop and done last, which both reduces the merge count and keeps the earlier merges smaller. After the change, extendTrans became 70+% faster.

### Quit as early as possible

```javascript
user.getNav = function () {
  var result = [];
  if (_.isEmpty($rootScope.USER)) {
    return result;
  }
  _.each(modules, function (list) {
    var show = true;
    if (list.isAdmin === true) {
      show = $rootScope.USER.Admin == 1;
    }
    var authBitKey = list.bitKey
      ? regionService.getUserRights(list.bitKey.toUpperCase())
      : show;
    var item = _.extend({}, list, {
      show: show,
      authBitKey: authBitKey
    });
    if (item.isUserNav === true) {
      result.push(item);
    }
  });
  return result;
};
```

modules in getNav holds the routes, and as mentioned above there are more than a thousand of them; getUserRights is called inside the loop, causing serious performance loss. Another serious problem is that most of the items are then discarded by the isUserNav filter anyway.

```javascript
user.getNav = function () {
  var result = [];
  if (_.isEmpty($rootScope.USER)) {
    return result;
  }
  console.time('getNav');

  _.each(modules, function (list) {
    if (list.isUserNav !== true) return;

    var show = true;
    if (list.isAdmin === true) {
      show = $rootScope.USER.Admin == 1;
    }
    var authBitKey = list.bitKey
      ? regionService.getUserRights(list.bitKey.toUpperCase())
      : show;
    var item = _.extend({}, list, {
      show: show,
      authBitKey: authBitKey
    });
    result.push(item);
  });
  console.timeEnd('getNav');
  return result;
};
```

By moving the check to the front, ending pointless work as early as possible, and with the getUserRights optimization, getNav became 99% faster.

### Make good use of lazy rendering

```jsx
renderMenuList = () => {
  const { translateLoadingSuccess, topMenu } = this.props;
  if (!translateLoadingSuccess) {
    return null;
  }
  return topMenu
    .filter(item => {
      const filterTopNavShow = this.$filter('filterTopNavShow')(item);
      return filterTopNavShow > 0;
    })
    .map((item = [], i) => {
      const title = `INDEX_TOP_${(item[0] || {}).type}`.toUpperCase();
      return (
        <div className="uc-nav__edit-panel-item" key={i}>
          <div className="uc-nav__edit-panel-item-title">
            {formatMessage({ id: title })}
          </div>
          <div className="uc-nav__edit-panel-item-content">
            <Row gutter={12}>{this.renderMenuProdList(item)}</Row>
          </div>
        </div>
      );
    });
};
```

The code above belongs to the console's menu editing panel, which only appears after the user clicks Edit. With the existing logic, however, the work runs eagerly: on entering the page, filterTopNavShow executes 7 times, and it runs again on each re-render.

```jsx
renderMenuList = () => {
  const { translateLoadingSuccess, topMenu, mode } = this.props;
  if (!translateLoadingSuccess) {
    return null;
  }
  if (mode !== 'edit' && this._lazyRender) return null;
  this._lazyRender = false;
  const menuList = topMenu
    .filter(item => {
      const filterTopNavShow = this.$filter('filterTopNavShow')(item);
      return filterTopNavShow > 0;
    })
    .map((item = [], i) => {
      const title = `INDEX_TOP_${(item[0] || {}).type}`.toUpperCase();
      return (
        <div className="uc-nav__edit-panel-item" key={i}>
          <div className="uc-nav__edit-panel-item-title">
            {formatMessage({ id: title })}
          </div>
          <div className="uc-nav__edit-panel-item-content">
            <Row gutter={12}>{this.renderMenuProdList(item)}</Row>
          </div>
        </div>
      );
    });
  return menuList;
};
```

In this case, simply adding a lazy-render flag delays the rendering and calculation until the first time the panel is opened, avoiding unnecessary work during page initialization.
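The same pattern in isolation looks like this (a minimal sketch, not the console's actual React component; plain JS stands in for the render method, and the names are illustrative):

```javascript
// A panel that skips its expensive body until it is first opened,
// then renders normally on every subsequent call.
class LazyPanel {
  constructor() {
    this._lazyRender = true; // nothing has been rendered yet
  }
  render(mode) {
    if (mode !== 'edit' && this._lazyRender) return null; // never opened: do no work
    this._lazyRender = false; // opened once: keep rendering from now on
    return this.expensiveBody();
  }
  expensiveBody() {
    // stands in for the filter/map work in renderMenuList
    return 'menu list';
  }
}
```

`render('view')` returns null until the panel has been opened in 'edit' mode once; after that, it renders on every call.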

## Achievements

First, a comparison of execution times before and after the optimization:

| Name | Time per call (before) | After optimization |
| --- | --- | --- |
| initState | 200ms – 400ms | 120ms – 300ms, 30% – 40% less |
| initRegionHash | 50ms – 110ms | 2ms – 8ms, 90% less |
| getMenu | 0 – 40ms | 0ms – 8ms, 80% less |
| initRegion | 70ms – 200ms | 3ms – 10ms, 90% less |
| getProducts | 40ms – 80ms | 3ms – 10ms, 90% less |
| getNav | 40ms – 200ms | 0ms – 2ms, 99% less |
| extendProductTrans | 40ms – 120ms | 10ms – 40ms, 70% less |
| filterStorageMenu | 4ms – 10ms | 0ms – 2ms, 80% less |
| filterTopNavShow | 0 – 20ms | no longer runs on first load; runs when the panel is expanded |

The difference is obvious: most of the times are now kept within 10ms.

You can also compare the profiles recorded before and after the changes.

Before transformation:

After transformation:

After optimization, many of the peaks have disappeared (the remaining ones are optimization points that are hard to tackle for now). You can also clearly feel the difference when entering the page and switching products.

## Summary

From the code above, you can see that most of the performance problems are caused by loops: a small inefficiency becomes serious after many iterations. So in everyday code, several habits are worth keeping: exit as early as possible, skip unnecessary work, and cache what can be cached. With good programming habits, your code can run at a good speed even in situations you did not anticipate.

With the JavaScript Profiler and console.time, performance troubleshooting and optimization becomes straightforward: it is easy to locate the problem point and design a targeted optimization.
