The Cost of Frameworks


This week Paul Lewis (@aerotwist) posted a talk on “The Cost of Frameworks”. In his talk Paul raises the question of whether the cost of MVC frameworks is too high, since their inclusion and execution likely means you’ve blown any respectable performance budget. Paul provides some data and a tool to help analyze this point. I like any conversation that starts with data; it’s a great neutral point we can all look at together.

One illustration in particular sums up my biggest issue with any MVC framework:

"Time to Interactive" Graph

At Paravel we scrapped an Angular project for this reason. Even with PhoneGap, waiting for the app to Get/Execute/Fetch/Execute/Paint (GEFEP, pronounced “Geoffp”) was too painful over a mobile connection, especially on non-ideal devices. I’m hardly able to recommend that to clients as a sustainable strategy.

Paul’s post was followed up by Tom Dale (@tomdale), an Ember creator, with a great rebuttal called “JavaScript Frameworks and Mobile Performance”. I found myself nodding along with nearly all the points Tom was making about managing growing codebases.

The conventions that a framework offers allow individuals or teams to work on large codebases sustainably over a long period of time. Even in my own limited experience I’ve found this to be true. Using CSS grid frameworks like Blueprint and the 960 Grid allowed Paravel to stay afloat in the early days and build literally hundreds of websites. It allowed us to build productively without rethinking everything, despite the plethora of blog posts saying grid frameworks were “not semantic enough” or “soul-less”.

I really enjoy a good ol’ fashioned blog post rebuttal, but there are a few things I’d like to call out:

  • Tom makes a few attempts to discredit Paul and “Devrel folks”, page-count-shames their “demos”, and makes multiple jabs at Chrome for being slow. For me, these rode the line of being ad hominem attacks.
  • “Real apps” sounds a lot like “Real Scotsman” to me. Just when we seemed to overcome “Web Apps vs Websites”, it’s now “Real Apps vs Fake (?!??!) Apps”. I think the point worth salvaging here is that multi-model applications will benefit from framework abstractions more than a single-model application.
  • Moore’s Law (“Computers will always get faster!”) is implied, which most technologists (even Intel) agree has plateaued to some degree.
  • Despite the title, mobile performance of MVC frameworks and the reality of the current device landscape isn’t really addressed. It’s just explained away as “Chrome’s fault” and “This will obviously get better, obvo.” (see above)

I think the interesting discussion to be had from Tom’s post is: Are we trying to make lightweight sites that WORK FAST or maintainable sites that WORK FOR YEARS?

Your answer is probably different and depends on your past experiences.

Users don’t want to wait, so the Quest for Speed is very important. It’s also very alluring! If I do things just right and score 100 on Page Speed Insights, I’m promised that unforetold riches will be deposited into my bank account. It will rain rupees. Google is all-in on this effort: Fast is best because it makes money.

As a community, we talk a lot about performance because it’s easy to measure and we can quickly see who is doing a good job and who is doing a bad job.

However, measuring only what can be measured, page speed, means we have no insight into the reasons a framework was employed or how much money it saved the organization. How big is the team that built the site? One person? Ten? What past organizational failures led to adopting a large framework? Did using a framework win out in a lengthy internal cost-benefit analysis? Is CSAT up due to swanky animations and autocomplete features that took minutes to build? Was code shipped faster? Was it built by one person over a weekend? Did the abstraction allow the developers to actually have fun building, therefore increasing job satisfaction, therefore reducing organizational churn, therefore reducing the cost of the end product for the user?

We don’t know.

There’s so much we don’t know that it’s hard for me to believe any single metric describes the quality of a site. I can build a very fast website that is harder to maintain due to critical path hoops, supporting AMP-HTML, and providing a near perfect offline experience. Just add more code and grunt tasks and you will be rewarded in Valhalla, er, Google search results! Over the long tail, however, the user experience also suffers because updates are slower to roll out due to feature burden.

“Make users happy” is something I believe in. But in client services if I deliver a site that is super fast but impossible to maintain, I have failed at my job. “Developer Ergonomics” is a laughable concept to me, but I think we all walk the line of meeting User Needs and Organizational Needs and we do ourselves a disservice by ignoring that reality.

Introducing LQIP – Low Quality Image Placeholders

On one hand, images account for over 60% of the page weight. This means they play a major role in overall page load time, motivating dev teams to try and make images as small (byte-wise) as possible. On the other hand, new devices boast retina displays and higher resolutions, and designers are eager to leverage these screens and provide beautiful rich graphics. This trend, along with others, led to a 30% growth in the average number of image KB on a page in the last year alone.

This conflict is partly due to what I think of as “Situational Performance”. If you’re on a fiber connection – like most designers – the high quality images won’t slow you down much, and will give you a richer experience. If you’re on a cellular connection, you’ll likely prefer a lower quality image to a painfully slow page.

Fortunately, not all hope is lost.
A few months ago we created a new optimization called Low Quality Image Placeholders, or LQIP (pronounced el-kip) for short. This optimization proved useful in bridging the gap between fast and slow connections, and between designers and IT, so I figured it’s worth sharing.

Core Concept

LQIP’s logic is simple. In a sense, this is like loading progressive JPEGs, except it’s page wide. There are more implementation details below, but it boils down to two main steps:

  • Initially load the page with low quality images
  • Once the page has loaded (e.g. in the onload event), replace them with the full quality images

LQIP gives us the best of both worlds. On a slow connection, the user will load a fully usable page much faster, giving them a significantly better user experience. On a fast connection, the extra download of the low quality images – or the delay of the high quality ones – does not lead to a substantial delay. In fact, even on the fast connection the user will get a usable page faster, and they’ll get the full rich visuals a moment later.

Real World Example

Here’s an example of the Etsy homepage loaded with and without LQIP. Note that Etsy isn’t actually applying this optimization – I made a static copy of their homepage and applied the optimization to it using Akamai FEO.

On a DSL connection, LQIP made the page visually ready about 500ms sooner (~20%), while on FIOS it was only 100ms faster (10%). This acceleration came from the fact that the overall page weight before onload dropped from ~480KB to ~400KB thanks to the lower quality images. All in all, not bad numbers for a single optimization – especially on a highly optimized web page like Etsy’s home page.

DSL Connection, Without LQIP (above) vs. With LQIP (below)


FIOS Connection, Without LQIP (above) vs. With LQIP (below)


The faster visuals aren’t the whole story, though. The LQIP page you see actually uses lower quality images than the other. While the LQIP page weighs 80KB less before onload, it weighs 40KB more by the time the full quality images have been downloaded. However, the page is definitely usable with the low quality images, keeping the user from idly waiting for the bigger download. You can see an example of a regular and low quality image in the table below – I didn’t turn quality down too far.

Image Before LQIP (15.6 KB) | Image After LQIP (5.2 KB)
lqip-sample-before          | lqip-sample-after

It’s just intelligent delivery

LQIP also helps on the political front, by bridging the gap between IT/Dev and the designers.

The designers are happy because their full-quality images are shown to the user, unmodified. Sure, the images are a bit delayed, but the end result usually shows up within a few seconds, and their handiwork remains unharmed.

IT is happy because they deliver a fast page to their users, even on slow connections. The low quality images may just be placeholders, but (assuming quality wasn’t too drastically reduced) the page is fully usable long before the full images arrive.

Implementation Tips

LQIP implementation includes three parts:

  1. Prepare the low quality images (server-side)
  2. Load the low quality images (client-side)
  3. Load the high quality images (client-side)

Step #1 varies greatly by system. You can create the images in your CMS, duplicate them in your build system, or adjust quality in real time using tools like Akamai Edge Image Manipulation.

Step #2 is simple – just load the images. You can do so with simple img tags, CSS, or your favorite scripted image loader. If you use small enough images, you can even inline them (matching Ilya’s recommendations). At Akamai, we use LQIP in combination with loading images on-demand, reducing the number of requests as well.

Step #3 is where a new script probably comes in. A simple flow would be:

  1. Create a JS function that iterates the IMG tags on the page, and for each:
     1. Determines the full quality image URL (using a naming convention or an extra attribute on the IMG tag)
     2. Modifies the SRC attribute to point to this full URL (will reload the image)
  2. Call your JS function in the onload event
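A minimal sketch of that flow, assuming a filename convention where low quality images carry a “-lq” suffix plus an optional data-full-src attribute. Both conventions are invented for this example, not part of any standard:

```javascript
// Derive the full quality URL for a low quality image. We assume the
// naming convention "hero-lq.jpg" -> "hero.jpg"; this suffix is an
// arbitrary choice for the sketch.
function fullQualityUrl(lowQualityUrl) {
  return lowQualityUrl.replace(/-lq(\.[a-z]+)$/i, '$1');
}

// Swap every IMG on the page to its full quality source. An explicit
// data-full-src attribute, if present, wins over the naming convention.
function upgradeImages(doc) {
  var imgs = doc.getElementsByTagName('img');
  for (var i = 0; i < imgs.length; i++) {
    var full = imgs[i].getAttribute('data-full-src') ||
               fullQualityUrl(imgs[i].getAttribute('src'));
    imgs[i].setAttribute('src', full); // triggers the full download
  }
}

// Run after onload so the swap never competes with the initial load.
if (typeof window !== 'undefined') {
  window.addEventListener('load', function () {
    upgradeImages(document);
  });
}
```

The URL derivation is kept as a separate pure function so it can be unit tested without a DOM.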

If you want to get fancy, you can load the high quality image in a hidden IMG tag, and then swap the low quality image with it at the onload event. This will prevent the low quality image from disappearing before the full quality image is fully downloaded, which can hinder the user experience.
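A sketch of that fancier variant. The Image constructor is passed in as a parameter so the logic can be exercised outside a browser; in real usage you would pass the global Image:

```javascript
// Preload the full quality image off-screen, and only swap the SRC once
// it has fully arrived, so the low quality placeholder never gives way
// to a half-painted image. ImageCtor is injectable for testing; in the
// browser, call preloadAndSwap(imgEl, fullSrc, Image).
function preloadAndSwap(imgEl, fullSrc, ImageCtor) {
  var loader = new ImageCtor();
  loader.onload = function () {
    imgEl.src = fullSrc; // the full image is cached now, so the swap is instant
  };
  loader.src = fullSrc;  // starts the hidden download
}
```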

Lastly, if you use CSS to load your images, you can also swap the low quality images for higher quality images by loading/applying a new CSS.


I’m pretty excited about LQIP.

It helps bridge the gap between two conflicting and growing needs, would work on old and new browsers alike, and is (relatively) easy to implement. It’s a “perceived performance” optimization, which is how we should all be thinking – and I believe it’s an optimization everybody should apply.

tree traversal



Fundamentally, the difference between DFS and BFS is that with a DFS you push the children of the current node onto a stack, so they will be popped and processed before everything else, while with BFS you push the children onto the end of a queue, so they will be popped and processed after everything else.
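The recursive DFS being converted below isn't shown in the thread; assuming the same node shape the answer uses (a value plus an optional children array), it would look roughly like this:

```javascript
// Minimal node type matching the answer's assumptions: a value plus an
// optional array of child nodes. The constructor is invented here for
// the sake of a self-contained example.
function Tree(value, children) {
  this.value = value;
  this.children = children;
}

// Recursive DFS: the call stack plays the role of the explicit stack
// used in the iterative version.
Tree.prototype.traverse = function (callback) {
  callback(this.value);                  // visit the current node
  if (!this.children) {
    return;
  }
  for (var i = 0; i < this.children.length; i++) {
    this.children[i].traverse(callback); // then recurse into each child
  }
};
```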

DFS is easy to implement recursively because you can use the call stack as the stack. You can't do that with BFS, because you need a queue. Just to make the similarity clear, let's convert your DFS to an iterative implementation first:

// DFS
Tree.prototype.traverse = function (callback) {
  var stack = [this];
  var n;
  while (stack.length > 0) {
    n = stack.pop();
    callback(n.value);
    if (!n.children) {
      continue;
    }
    for (var i = n.children.length - 1; i >= 0; i--) {
      stack.push(n.children[i]);
    }
  }
};

And now BFS

// BFS
Tree.prototype.traverse = function (callback) {
  var queue = [this];
  var n;
  while (queue.length > 0) {
    n = queue.shift();
    callback(n.value);
    if (!n.children) {
      continue;
    }
    for (var i = 0; i < n.children.length; i++) {
      queue.push(n.children[i]);
    }
  }
};
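To see the two orders diverge, you can run both versions over the same small tree. This sketch gives the methods distinct names so the stack and queue variants can coexist; the Tree constructor is assumed, not from the original answer:

```javascript
// A hypothetical node type: a value plus a children array.
function Tree(value, children) {
  this.value = value;
  this.children = children || [];
}

// Iterative DFS (stack): children pushed in reverse so the leftmost
// child is processed first.
Tree.prototype.traverseDFS = function (callback) {
  var stack = [this], n;
  while (stack.length > 0) {
    n = stack.pop();
    callback(n.value);
    for (var i = n.children.length - 1; i >= 0; i--) {
      stack.push(n.children[i]);
    }
  }
};

// Iterative BFS (queue): children appended, processed level by level.
Tree.prototype.traverseBFS = function (callback) {
  var queue = [this], n;
  while (queue.length > 0) {
    n = queue.shift();
    callback(n.value);
    for (var i = 0; i < n.children.length; i++) {
      queue.push(n.children[i]);
    }
  }
};

//        1
//      /   \
//     2     3
//    / \
//   4   5
var root = new Tree(1, [new Tree(2, [new Tree(4), new Tree(5)]), new Tree(3)]);

var dfs = [], bfs = [];
root.traverseDFS(function (v) { dfs.push(v); }); // depth first: 1, 2, 4, 5, 3
root.traverseBFS(function (v) { bfs.push(v); }); // level order: 1, 2, 3, 4, 5
```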

I don't have a JavaScript implementation, but I would generally do it by keeping a queue of unexplored nodes.

  1. Start with only the root node in the queue.
  2. Pop an item from the front of the queue.
  3. Explore it, adding all of the nodes found during exploration to the back of the queue.
  4. If there are any nodes left in the queue, go back to step 2.
  5. You're done.

Also, there is some pseudocode on the Wikipedia page, as well as some more explanations HERE

Also a quick Google search turned up a similar algorithm that could be bent to your purpose HERE




JavaScript has automatic garbage collection (GC: Garbage Collection), which means the execution environment is responsible for managing the memory used while code runs.





function fn1() {
  var obj = {name: 'hanzichi', age: 10};
}
function fn2() {
  var obj = {name: 'hanzichi', age: 10};
  return obj;
}
var a = fn1();
var b = fn2();

Let's look at how this code executes. First, two functions are defined, fn1 and fn2. When fn1 is called, we enter fn1's environment and a block of memory is allocated for the object {name: 'hanzichi', age: 10}; when the call ends and fn1's environment is left, that memory is automatically freed by the JS engine's garbage collector. During the call to fn2, however, the returned object ends up referenced by the global variable b, so that block of memory is not freed.




function test() {
  var a = 10; // marked: entered the environment
  var b = 20; // marked: entered the environment
}
test(); // after execution, a and b are marked as having left the environment and are collected





function test() {
  var a = {}; // the object's reference count is 1 (referenced by a)
  var b = a;  // reference count incremented to 2
  var c = a;  // reference count incremented to 3
  b = {};     // reference count decremented to 2
}

Netscape Navigator 3 was the first browser to use the reference counting strategy, but it soon ran into a serious problem: circular references. A circular reference means that object A contains a pointer to object B, while object B also contains a reference back to object A.

function fn() {
  var a = {};
  var b = {};
  a.child = b; // the property name is arbitrary; what matters is that
  b.child = a; // the two objects reference each other, so neither count ever drops to 0
}



var element = document.getElementById("some_element");
var myObject = new Object();
myObject.e = element;
element.o = myObject;



window.onload = function outerFunction() {
  var obj = document.getElementById("element");
  obj.onclick = function innerFunction() {};
};



myObject.element = null;
element.o = null;

window.onload = function outerFunction() {
  var obj = document.getElementById("element");
  obj.onclick = function innerFunction() {};
  obj = null; // release the reference so the closure no longer pins the DOM node
};







1) The JavaScript engine's basic GC scheme is simple GC: mark and sweep, i.e.:

  • (1) Traverse all reachable objects.
  • (2) Reclaim the objects that are no longer reachable.
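Those two steps can be modeled on a toy object graph. This is only an illustration of the reachability idea, not how a real engine implements it:

```javascript
// Toy mark-and-sweep over a graph of plain objects. Each node lists the
// nodes it references in a "refs" array; anything not reachable from the
// roots gets "collected".
function markAndSweep(allNodes, roots) {
  var marked = new Set();

  // Step 1: traverse everything reachable from the roots and mark it.
  function mark(node) {
    if (marked.has(node)) return;
    marked.add(node);
    (node.refs || []).forEach(mark);
  }
  roots.forEach(mark);

  // Step 2: sweep, reclaiming whatever was never marked.
  return allNodes.filter(function (node) { return !marked.has(node); });
}

// Example: a and b form a cycle, but neither is reachable from the root,
// so both are collected.
var a = { name: 'a' }, b = { name: 'b' }, root = { name: 'root' };
a.refs = [b];
b.refs = [a];
root.refs = [];

var collected = markAndSweep([root, a, b], [root]);
```

Note how the a/b cycle is collected even though each object still references the other, which is exactly the case that defeats naive reference counting.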





(1) Generational GC (Generation GC)
This follows the same idea as Java's collection strategy. The goal is to distinguish "temporary" objects from "persistent" ones: collect the "temporary" area (young generation) often and the "persistent" area (tenured generation) rarely, reducing the number of objects that must be traversed each time and thereby the time each GC pass takes. As shown in the figure:

One thing worth adding: tenured generation objects carry extra overhead: they must be migrated from the young generation to the tenured generation, and, if they are referenced, those references must be updated to point to the new location.




For example, at a low allocation rate (objects/s), simple GC interrupts execution less often; and if most objects are long-lived, generational collection doesn't offer much of an advantage either.



HTML5 head tags mobile front-end developers need to know (2016 edition)


An HTML document's head carries a great deal of content: headers for SEO as well as headers aimed at mobile devices. Each browser engine, and each domestic Chinese browser vendor, also has tags and elements of its own, with many differences among them. Mobile work has become an ever larger part of front-end development, and beyond everyday project work, the head tags, especially the functional attributes of the meta and link tags, matter a great deal. This is a checklist of the <head> section so you can understand what each tag and attribute means and write a <head> that fits your own needs, which can markedly improve a page's usability.

Note: I compiled a similar list of HTML5 head tags last year; with time and browser vendor upgrades it now looks somewhat dated, so I've reworked it, adding new material and deprecation warnings, along with some notes on desktop browsers.



<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <!-- Mobile pages can omit this; see the Internet Explorer section of this article -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- See the "adding a viewport for mobile devices" section of this article -->
  <!-- The 3 meta tags above *must* come first in the head; any other head content must come *after* them -->
  <title>Page title</title>
  ...
</head>

<meta http-equiv="x-ua-compatible" content="ie=edge">

In desktop development this lets IE render the page in its most recent mode; see the Internet Explorer section of this article for details.

<meta name="viewport" content="width=device-width, initial-scale=1">


DOCTYPE (Document Type): this declaration sits at the very front of the document, before the html tag, and tells the browser which HTML or XHTML specification the document uses.

Use the HTML5 doctype; it is case-insensitive.

<!DOCTYPE html> <!-- Use the HTML5 doctype; case-insensitive -->



<meta charset="utf-8">

Before HTML5, web pages would write it like this:

<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

The two are equivalent; for details see: <meta charset='utf-8'> vs <meta http-equiv='Content-Type'>. The shorter form is recommended as easier to remember.


A more standards-compliant way to write the lang attribute:

<html lang="zh-cmn-Hans"> <!-- more standards-compliant lang attribute for Simplified Chinese -->

<html lang="zh-cmn-Hant"> <!-- more standards-compliant lang attribute for Traditional Chinese -->

<p lang="zh-cmn-Hans">
<strong lang="zh-cmn-Hans-CN">菠萝</strong><strong lang="zh-cmn-Hant-TW">鳳梨</strong>其实是同一种水果。只是大陆和台湾称谓不同,且新加坡、马来西亚一带的称谓也是不同的,称之为<strong lang="zh-cmn-Hans-SG">黄梨</strong>
</p>

(The example sentence says: 菠萝 and 鳳梨 are actually the same fruit; mainland China and Taiwan simply use different names, and around Singapore and Malaysia it goes by yet another name, 黄梨.)

Why lang="zh-cmn-Hans" rather than the usual lang="zh-CN"? See: should the page header declare lang="zh" or lang="zh-cn"?

Meta tags

The meta tag is an auxiliary tag inside the HTML head; it sits between the <head> and <title> markers and provides information that is invisible to the user. Invisible as that information is, it is very powerful: in today's front-end work, setting the right meta tags can greatly improve a page's usability.

In desktop development, meta tags are typically used to define the page's subject for search engine optimization (SEO) and robots, or to define cookies in the user's browser; they can identify the author, set the page format, annotate the content summary and keywords, make the page refresh itself at an interval you define, set the RSAC content rating, and so on.



Based on their attributes, meta tags fall into two broad groups: http-equiv and name.



<!-- Set the document's character encoding -->
<meta charset="utf-8">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- The 3 meta tags above *must* come first in the head; any other head content must come *after* them -->
<!-- Allows control over where resources are loaded from -->
<meta http-equiv="Content-Security-Policy" content="default-src 'self'">
<!-- Place as early in the document as possible -->
<!-- Applies only to content below this tag -->
<!-- Name of the web application (only use when the site is used as an app) -->
...