Light tracks

Measuring frontend performance

As a frontend developer, I deal with performance issues every single day. Let's see how you can get an initial picture of frontend performance.

Measuring frontend performance can be very difficult, as there are many factors involved. This article does not claim to offer the ultimate solution to such a hard problem; please consider it a starting point for your own studies and insights.

Every time I write a new feature that involves JavaScript or a large amount of CSS, I need to check how much that piece of code impacts performance.

While searching for a tool that could retrieve information about page load timings, I found the great phantomas. This tool can collect a huge amount of data about a specific URL. Bingo!

The workflow

So, here is how I take advantage of git (you are using git, aren't you?) and phantomas to get an idea of how badly my code behaves in terms of performance:

  • I always start from a clean repository
  • I collect backend and frontend timing metrics
  • I write my super fancy feature involving JavaScript or a lot of CSS code
  • With git I can easily see exactly which modifications I am making
  • I collect backend and frontend timing metrics again and compare the difference
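The git part of these steps can be sketched with `git stash`: park the work in progress, measure the baseline, then bring the changes back and measure again. A minimal demonstration in a throwaway repository (the file name and commit are made up for illustration; in a real session you would run phantomas where the echo markers are):

```shell
#!/bin/sh
# Flip between the baseline and the in-progress feature with git stash.
# Everything happens in a disposable repo under a temp directory.
set -e
DIR=$(mktemp -d)
cd "$DIR"
git init -q .
git config user.email "you@example.com"
git config user.name "you"
git commit -q --allow-empty -m "baseline"

echo "console.log('fancy feature');" > feature.js   # the new code

git stash -q -u        # park the feature: the tree is back to baseline
test ! -f feature.js && echo "measure baseline here"

git stash pop -q       # restore the feature
test -f feature.js && echo "measure feature here"
```

The same flip works with any number of modified files, which is why starting from a clean repository matters: the stash boundary is exactly the feature you want to measure.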

Do you think it is too simple? Oh, I love simple things and practices, and you should love them too.

The script

The data phantomas collects can be hard to read, so I wrote a very basic bash script (tested on both Linux and OSX) that graphs backend and frontend loading times.


#!/bin/bash

RUNS=3
URLSET=0
DEFAULT_OPTS="--no-externals --reporter=json --silent --timeout=600"

while getopts ":n:u:" opt; do
    case $opt in
        n)
            RUNS=$OPTARG
            ;;
        u)
            URL=$OPTARG
            URLSET=1
            ;;
        \?)
            echo "Invalid option: -$OPTARG" >&2
            exit 1
            ;;
    esac
done

[[ $URLSET -ne 1 ]] && { echo "[ERROR] no URL set. Usage: $0 -u <url> [-n <runs>]" >&2; exit 1; }

function collect {

    # Crawl the URL and store the raw phantomas JSON
    OPTS="--runs $RUNS"
    phantomas $URL $DEFAULT_OPTS $OPTS > desktop.json

    # Report header: jQuery and Highcharts are what the chart code
    # below expects; point the srcs at the versions you prefer
    cat > $1 << \EOL
<!DOCTYPE html>
<html>
<head>
<script src="https://code.jquery.com/jquery.min.js"></script>
<script src="https://code.highcharts.com/highcharts.js"></script>
<script>
EOL

    # Turn the JSON document into a JavaScript variable the page can read
    cat desktop.json | sed -e 's/^{/var bigjson={/' >> $1

    cat >> $1 << \EOL
$(function () {
    var timeFrontend = [];
    var timeBackend = [];
    var serie = [];
    var categories = [];
    var i = 0;
    for (i = 0; i < bigjson.runs.length; i++) {
        timeFrontend[i] = bigjson.runs[i].metrics['timeFrontend'];
        timeBackend[i] = bigjson.runs[i].metrics['timeBackend'];
        categories[i] = 'Run' + i;
    }
    serie[0] = { name: 'Time to get frontend (ms)', data: timeFrontend };
    serie[1] = { name: 'Time to get backend (ms)', data: timeBackend };

    $('#container').highcharts({
        chart: { type: 'line' },
        title: { text: bigjson.runs[0].url },
        xAxis: { categories: categories },
        series: serie
    });
});
</script>
</head>
<body>
<div id="container" style="width:100%; height:400px;"></div>
</body>
</html>
EOL

    rm desktop.json
    echo "[DONE] Report: $( realpath $1 )"
}

collect report.html
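The one non-obvious step in the script is the sed call: it rewrites the opening brace of the phantomas JSON so the whole document becomes a JavaScript variable assignment, which the report page can then read without any server round-trip. In isolation:

```shell
#!/bin/sh
# The sed trick from the script in isolation: turn a JSON document
# into a global JS assignment by rewriting the opening brace.
JS=$(echo '{"runs":[]}' | sed -e 's/^{/var bigjson={/')
echo "$JS"
# prints: var bigjson={"runs":[]}
```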

Run the script, specifying the URL to crawl and the number of runs. For example:

./ -u -n 50
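If you want numbers instead of a chart, the same JSON can be crunched with standard Unix tools. A sketch using made-up sample data in the shape the script consumes (runs, each with a metrics object):

```shell
#!/bin/sh
# Average the timeFrontend metric out of phantomas-style JSON using only
# grep/cut/awk. The sample data below is invented for illustration.
cat > sample.json << 'EOF'
{"runs":[
  {"metrics":{"timeBackend":120,"timeFrontend":830}},
  {"metrics":{"timeBackend":140,"timeFrontend":910}}
]}
EOF

AVG=$(grep -o '"timeFrontend":[0-9]*' sample.json \
  | cut -d: -f2 \
  | awk '{ sum += $1; n++ } END { printf "%d", sum / n }')
echo "avg timeFrontend: $AVG ms"
# prints: avg timeFrontend: 870 ms

rm sample.json
```

The same pipeline with `timeBackend` gives you the other series, so you can compare before/after averages without opening the HTML report at all.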


Again, you should consider this only a starting point for measuring frontend performance.

Including phantomas in a continuous monitoring system would be a great thing, but it is time consuming and requires a lot of sysadmin knowledge.

My script may be useful for quick access to performance data while you are trying to convince your sysadmin to set up a continuous monitoring server :)