The Cost of Frameworks

from: http://daverupert.com/2015/11/framework-cost/

This week Paul Lewis (@aerotwist) posted a talk on “The Cost of Frameworks”. In his talk Paul raises the question of whether the cost of MVC frameworks is too high, since their inclusion and execution likely means you’ve blown any respectable performance budget. Paul provides some data and a tool to help analyze this point. I like any conversation that starts with data, as it’s a great neutral point we can all look at together.

One illustration in particular sums up my biggest issue with any MVC framework:

"Time to Interactive" Graph

At Paravel we scrapped an Angular project for this reason. Even with PhoneGap, waiting for the app to Get/Execute/Fetch/Execute/Paint (GEFEP, pronounced “Geoffp”) was too painful over a mobile connection, especially on non-ideal devices. I’m hardly able to recommend that to clients as a sustainable strategy.

Paul’s post was followed up by Tom Dale (@tomdale), an Ember creator, with a great rebuttal called “Javascript Frameworks and Mobile Performance”. I found myself nodding along with nearly all the points Tom was making about managing growing codebases.

The conventions that a framework offers allow individuals or teams to work on large codebases sustainably over a long period of time. Even in my own limited experience I’ve found this to be true. Using CSS grid frameworks like Blueprint and the 960 Grid allowed Paravel to stay afloat in the early days and build literally hundreds of websites. It allowed us to build productively without rethinking everything, despite the plethora of blog posts saying grid frameworks were “not semantic enough” or “soul-less”.

I really enjoy a good ol’ fashioned blog post rebuttal, but there are a few things I’d like to call out:

  • Tom makes a few attempts to discredit Paul and “Devrel folks”, page-count-shames their “demos”, and makes multiple jabs at Chrome for being slow. For me, these rode the line of being ad hominem attacks.
  • “Real apps” sounds a lot like “Real Scotsman” to me. Just when we seemed to overcome “Web Apps vs Websites”, it’s now “Real Apps vs Fake (?!??!) Apps”. I think the point worth salvaging here is that multi-model applications will benefit from framework abstractions more than a single-model application.
  • Moore’s Law (“Computers will always get faster!”) is implied, even though most technologists (even Intel) agree it has plateaued to some degree.
  • Despite the title, the mobile performance of MVC frameworks and the reality of the current device landscape isn’t really addressed. It’s just explained away as “Chrome’s fault” and “This will obviously get better, obvi.” (see above)

I think the interesting discussion to be had from Tom’s post is: are we trying to make lightweight sites that WORK FAST or maintainable sites that WORK FOR YEARS?

Your answer is probably different and depends on your past experiences.

Users don’t want to wait, so the Quest for Speed is very important. It’s also very alluring! If I do things just right and score 100 on Page Speed Insights, I’m promised that unforetold riches will be deposited into my bank account. It will rain rupees. Google is all-in on this effort: Fast is best because it makes money.

As a community, we talk a lot about performance because it’s easy to measure and we can quickly see who is doing a good job and who is doing a bad job.

However, measuring only what is easy to measure, page speed, means we have no insight into the reasons a framework was employed or how much money it saved the organization. How big is the team that built the site? One person? Ten? What past organizational failures led to adopting a large framework? Did using a framework win out in a lengthy internal cost-benefit analysis? Is CSAT up thanks to swanky animations and autocomplete features that took minutes to build? Was code shipped faster? Was it built by one person over a weekend? Did the abstraction let the developers actually have fun building, therefore increasing job satisfaction, therefore reducing organizational churn, therefore reducing the cost of the end product for the user?

We don’t know.

There’s so much we don’t know that it’s hard for me to believe any single metric describes the quality of a site. I can build a very fast website that is harder to maintain because of critical-path hoops, supporting AMP-HTML, and providing a near-perfect offline experience. Just add more code and Grunt tasks and you will be rewarded in Valhalla… er, Google search results! Over the long tail, however, the user experience also suffers, because updates are slower to roll out due to feature burden.

“Make users happy” is something I believe in. But in client services if I deliver a site that is super fast but impossible to maintain, I have failed at my job. “Developer Ergonomics” is a laughable concept to me, but I think we all walk the line of meeting User Needs and Organizational Needs and we do ourselves a disservice by ignoring that reality.

Introducing LQIP – Low Quality Image Placeholders


On one hand, images account for over 60% of page weight, which means they play a major role in overall page load time and motivates dev teams to make images as small (byte-wise) as possible. On the other hand, new devices boast retina displays and higher resolutions, and designers are eager to use these screens to deliver beautiful, rich graphics. This trend, along with others, led to roughly 30% growth in the average image weight per page in the last year alone.

This conflict is partly due to what I think of as “Situational Performance”. If you’re on a fiber connection – like most designers – the high quality images won’t slow you down much, and will give you a richer experience. If you’re on a cellular connection, you’ll likely prefer a lower quality image to a painfully slow page.

Fortunately, not all hope is lost.
A few months ago we created a new optimization called Low Quality Image Placeholders, or LQIP (pronounced el-kip) for short. This optimization proved useful in bridging the gap between fast and slow connections, and between designers and IT, so I figured it’s worth sharing.

Core Concept

LQIP’s logic is simple. In a sense, this is like loading progressive JPEGs, except it’s page wide. There are more implementation details below, but it boils down to two main steps:

  • Initially load the page with low quality images
  • Once the page has loaded (e.g. in the onload event), replace them with the full quality images

LQIP gives us the best of both worlds. On a slow connection, the user will load a fully usable page much faster, giving them a significantly better user experience. On a fast connection, the extra download of the low quality images – or the delay of the high quality ones – does not lead to a substantial delay. In fact, even on the fast connection the user will get a usable page faster, and they’ll get the full rich visuals a moment later.
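For concreteness, here is a minimal sketch of the markup this implies, assuming a hypothetical data-full-src attribute as the naming convention (the original post doesn’t prescribe one): the page initially points at the low quality file, and the full quality URL rides along for the swap step described under Implementation Tips.

<img src="/images/hero-low.jpg" data-full-src="/images/hero.jpg" alt="Hero image">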

Real World Example

Here’s an example of the Etsy homepage loaded with and without LQIP. Note that Etsy isn’t actually applying this optimization – I made a static copy of their homepage and applied the optimization to it using Akamai FEO.

On a DSL connection, LQIP made the page visually complete about 500ms sooner (~20%), while on FIOS it was only 100ms faster (~10%). This acceleration came from the fact that the overall page weight before onload dropped from ~480KB to ~400KB thanks to the lower quality images. All in all, not bad numbers for a single optimization – especially on a highly optimized page like Etsy’s home page.

DSL Connection, Without LQIP (above) vs. With LQIP (below)


FIOS Connection, Without LQIP (above) vs. With LQIP (below)


The faster visual isn’t the whole story, though. The LQIP page you see actually uses lower quality images than the other. While the LQIP page weighs 80KB less before onload, it weighs 40KB more by the time the full quality images have been downloaded. However, the page is definitely usable with the low quality images, keeping the user from idly waiting for the bigger download. You can see an example of a regular and a low quality image below – I didn’t turn the quality down too far.

Image before LQIP: 15.6 KB
Image after LQIP: 5.2 KB

It’s just intelligent delivery

LQIP also helps on the political front, by bridging the gap between IT/Dev and the designers.

The designers are happy because their full-quality images are shown to the user, unmodified. Sure, the images are a bit delayed, but the end result usually shows up within a few seconds, and their handiwork remains unharmed.

IT is happy because they deliver a fast page to their users, even on slow connections. The low quality images may just be placeholders, but (assuming quality wasn’t too drastically reduced) the page is fully usable long before the full images arrive.

Implementation Tips

LQIP implementation includes three parts:

  1. Prepare the low quality images (server-side)
  2. Load the low quality images (client-side)
  3. Load the high quality images (client-side)

Step #1 varies greatly by system. You can create the images in your CMS, generate them in your build system, or adjust quality in real time using tools like Akamai Edge Image Manipulation.
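The original doesn’t prescribe tooling for this step, but as one hedged illustration of the build-system route, here is a small Node.js sketch that uses the sharp image library to write a heavily compressed "-low" sibling next to every JPEG; the directory, suffix, and quality value are all arbitrary choices.

// build-lqip.js: generate low quality siblings for every .jpg in ./images
// Assumes: npm install sharp
const fs = require('fs');
const path = require('path');
const sharp = require('sharp');

const dir = './images';

fs.readdirSync(dir)
  .filter(function (f) { return /\.jpe?g$/i.test(f) && !/-low\./.test(f); })
  .forEach(function (f) {
    const out = f.replace(/(\.jpe?g)$/i, '-low$1');
    // Re-encode at very low JPEG quality; dimensions stay the same
    sharp(path.join(dir, f))
      .jpeg({ quality: 20 })
      .toFile(path.join(dir, out))
      .then(function () { console.log('wrote', out); });
  });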

Step #2 is simple – just load the images. You can do so with simple img tags, CSS, or your favorite scripted image loader. If the images are small enough, you can even inline them (in line with Ilya’s recommendations). At Akamai, we use LQIP in combination with loading images on demand, reducing the number of requests as well.

Step #3 is where a new script probably comes in. A simple flow would be:

  1. Create a JS function that iterates over the IMG tags on the page and, for each one:
     • Determines the full quality image URL (using a naming convention or an extra attribute on the IMG tag)
     • Modifies the SRC attribute to point to this full URL (which reloads the image)
  2. Call your JS function in the onload event (a minimal sketch follows below)
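Here is a minimal sketch of that flow, assuming the hypothetical data-full-src attribute from the markup sketch above:

// Swap every low quality image for its full quality version after onload.
function upgradeImages() {
  var imgs = document.getElementsByTagName('img');
  for (var i = 0; i < imgs.length; i++) {
    var full = imgs[i].getAttribute('data-full-src');
    if (full) {
      // Changing src triggers a new download and repaint of this image
      imgs[i].src = full;
    }
  }
}

window.addEventListener('load', upgradeImages);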

If you want to get fancy, you can load the high quality image in a hidden IMG tag and swap it in only once it has finished loading. This prevents the low quality image from disappearing before the full quality one has fully downloaded, which would hinder the user experience.
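A sketch of that fancier variant (same data-full-src assumption): preload the full quality file into a detached Image object and only touch the visible tag once the download has completed.

// Swap only after the full quality image has finished downloading,
// so the low quality placeholder never goes blank.
function upgradeImage(img) {
  var full = img.getAttribute('data-full-src');
  if (!full) { return; }
  var loader = new Image();   // off-DOM image used only to trigger the download
  loader.onload = function () {
    img.src = full;           // full file is now cached, so the swap is instant
  };
  loader.src = full;
}

window.addEventListener('load', function () {
  var imgs = document.getElementsByTagName('img');
  for (var i = 0; i < imgs.length; i++) {
    upgradeImage(imgs[i]);
  }
});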

Lastly, if you use CSS to load your images, you can also swap the low quality images for higher quality ones by loading or applying a new stylesheet.
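One way to do that, sketched here as an assumption rather than a prescription, is to append a stylesheet of full quality background images once the page has loaded (the /css/images-full.css path is hypothetical):

// After onload, add a stylesheet that overrides the low quality
// background images defined in the base CSS.
window.addEventListener('load', function () {
  var link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/images-full.css'; // hypothetical path to the overrides
  document.head.appendChild(link);
});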

Summary

I’m pretty excited about LQIP.

It helps bridge the gap between two conflicting and growing needs, works on old and new browsers alike, and is (relatively) easy to implement. It’s a “perceived performance” optimization, which is how we should all be thinking – and I believe it’s an optimization everybody should apply.

Tree Traversal



Fundamentally, the difference between DFS and BFS is that with a DFS you push the children of the current node onto a stack, so they will be popped and processed before everything else, while with BFS you push the children onto the end of a queue, so they will be popped and processed after everything else.

DFS is easy to implement recursively because you can use the call stack as the stack. You can't do that with BFS, because you need a queue. Just to make the similarity clear, let's convert your DFS to an iterative implementation first:

// DFS
Tree.prototype.traverse = function (callback) {
  var stack = [this];
  var n;
  while (stack.length > 0) {
    n = stack.pop();
    callback(n.value);
    if (!n.children) {
      continue;
    }
    for (var i = n.children.length - 1; i >= 0; i--) {
      stack.push(n.children[i]);
    }
  }
};

And now BFS

// BFS
Tree.prototype.traverse = function (callback) {
  var queue = [this];
  var n;
  while (queue.length > 0) {
    n = queue.shift();
    callback(n.value);
    if (!n.children) {
      continue;
    }
    for (var i = 0; i < n.children.length; i++) {
      queue.push(n.children[i]);
    }
  }
};
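For completeness, here is a quick usage sketch. The Tree constructor below is an assumed stand-in for the question's own tree (a node holding a value and an array of children); with it, the two traverse implementations above visit the same nodes in different orders.

// Assumed node shape: a value plus an array of child Trees.
function Tree(value, children) {
  this.value = value;
  this.children = children || [];
}

// Attach either traverse implementation from above, then:
var root = new Tree(1, [
  new Tree(2, [new Tree(4), new Tree(5)]),
  new Tree(3)
]);

root.traverse(function (value) {
  console.log(value); // BFS logs 1, 2, 3, 4, 5; DFS logs 1, 2, 4, 5, 3
});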



A second answer from the same thread:

I don't have a JavaScript implementation to hand, but I would generally do it by keeping a queue of unexplored nodes.


  1. Start with only the root node in the queue.
  2. Pop an item from the front of the queue.
  3. Explore it, adding all of the nodes found during exploration to the back of the queue.
  4. If there are any nodes left in the queue, go back to step 2.
  5. You're done.

Also, there is some pseudocode on the Wikipedia page as well as some more explanation HERE.

Also, a quick Google search turned up a similar algorithm that could be bent to your purpose HERE.


from: http://stackoverflow.com/questions/5411270/breadth-first-traversal-of-object

JavaScript's Garbage Collection Mechanism and Memory Management

1. The Garbage Collection Mechanism (GC)

JavaScript has automatic garbage collection (GC: Garbage Collection), which means the execution environment is responsible for managing the memory used while code runs.

The principle: the garbage collector periodically finds variables that are no longer in use and releases their memory.

JavaScript's garbage collection mechanism is simple: find the variables that are no longer used, then release the memory they occupy. The process is not real-time, though, because its overhead is fairly high, so the garbage collector runs periodically at a fixed interval.

A variable that is no longer used is one whose lifecycle has ended, and that can only be a local variable; a global variable's lifecycle doesn't end until the browser unloads the page. Local variables exist only while a function executes: space is allocated for them on the stack or heap to store their values, the function uses them, and when the function ends they can be released. With closures, however, the outer function cannot be considered finished, because an inner function still references its variables.

Let's illustrate with code:

function fn1() {
 var obj = {name: 'hanzichi', age: 10};
}
 
function fn2() {
 var obj = {name:'hanzichi', age: 10};
 return obj;
}
 
var a = fn1();
var b = fn2();

Let's look at how this code executes. First two functions are defined, fn1 and fn2. When fn1 is called, execution enters fn1's environment and a block of memory is allocated to hold the object {name: 'hanzichi', age: 10}; when the call ends and execution leaves fn1's environment, that memory is automatically released by the garbage collector in the JS engine. When fn2 is called, however, the returned object ends up referenced by the global variable b, so its memory is not released.
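The closure case mentioned above deserves its own illustration. The following sketch is mine, not the original article's: the inner function keeps the outer function's local object reachable, so it cannot be collected until the closure itself is released.

function fn3() {
  var obj = {name: 'hanzichi', age: 10};
  // The returned function closes over obj, so obj stays reachable
  // for as long as the returned function is reachable.
  return function () {
    return obj.name;
  };
}

var getName = fn3(); // obj cannot be collected while getName exists
getName = null;      // now the closure, and obj with it, can be collected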

Here the question arises: which variables are actually useless? The garbage collector has to track which variables are no longer useful and mark them so that their memory can be reclaimed later. The strategy used to mark unused variables varies by implementation, but there are generally two approaches: mark-and-sweep and reference counting. Reference counting is not widely used; mark-and-sweep is the common one.

2. Mark-and-Sweep

The most common form of garbage collection in JS is mark-and-sweep. When a variable enters the environment (for example, when it is declared inside a function), it is marked as "in the environment". Logically, memory occupied by in-environment variables can never be released, because as long as execution can enter that environment they may still be used. When a variable leaves the environment, it is marked as "out of the environment".

function test(){
 var a = 10; // marked as "in the environment"
 var b = 20; // marked as "in the environment"
}
test(); // once execution finishes, a and b are marked "out of the environment" and reclaimed

When the garbage collector runs, it marks every variable stored in memory (any marking scheme will do). It then removes the marks from the variables that are in the environment and from the variables referenced by in-environment variables (this is what keeps closures alive). Any variable still marked after that is considered ready for deletion, because the variables in the environment can no longer reach it. Finally, the garbage collector does the sweep: it destroys the marked values and reclaims the memory they occupied.

To date, the JS implementations in IE, Firefox, Opera, Chrome, and Safari all use mark-and-sweep garbage collection (or a similar strategy); they differ only in how often collection runs.

3. Reference Counting

Reference counting means tracking how many times each value is referenced. When a variable is declared and a reference-type value is assigned to it, that value's reference count is 1. If the same value is then assigned to another variable, the count goes up by 1. Conversely, if a variable holding a reference to the value is assigned a different value, the count goes down by 1. When the count reaches 0, the value can no longer be reached, so its memory can be reclaimed; the next time the garbage collector runs, it frees the memory of every value whose reference count is 0.

function test(){
 var a = {}; // the new object's reference count is 1 (referenced by a)
 var b = a;  // reference count goes up to 2
 var c = a;  // reference count goes up to 3
 b = {};     // b now points to a different object, so the count drops back to 2
}

Netscape Navigator 3 was the first browser to use reference counting, but it soon ran into a serious problem: circular references. A circular reference occurs when object A contains a pointer to object B, and object B also contains a reference back to object A.

function fn() {
 var a = {};
 var b = {};
 a.pro = b;
 b.pro = a;
}
 
fn();

In the code above, the two objects each end up with a reference count of 2. After fn() finishes, both objects have left the environment. Under mark-and-sweep this is no problem, but under reference counting their counts never drop to 0, so the garbage collector never reclaims their memory; if fn is called many times, this becomes a memory leak. On IE7 and IE8 you can watch memory climb steadily.

We know that in IE some objects are not native JS objects. For example, the objects in the DOM and BOM are implemented in C++ as COM objects, and the garbage collection mechanism for COM objects is reference counting. So even though IE's JS engine uses mark-and-sweep, any COM object accessed from JS is still governed by reference counting. In other words, as soon as a COM object is involved in IE, circular references become a problem.

var element = document.getElementById("some_element");
var myObject = new Object();
myObject.e = element;
element.o = myObject;

This example creates a circular reference between a DOM element (element) and a native JS object (myObject). The variable myObject has a property, e, that points to the element object, while element has a property, o, that points back to myObject. Because of this circular reference, even if the DOM element is removed from the page, it will never be reclaimed.

Looking at that example you might think it's too contrived: who would do something so pointless? But in fact we do it all the time:

window.onload=function outerFunction(){
 var obj = document.getElementById("element");
 obj.onclick=function innerFunction(){};
};

This code looks harmless, but obj references document.getElementById("element"), and that element's onclick handler references variables from the enclosing scope, including obj. The circular reference is just well hidden.

The fix

The simplest approach is to break the circular reference by hand. For example, the earlier example can be fixed like this:

myObject.e = null;
element.o = null;


And the onload example can be rewritten like this:
window.onload=function outerFunction(){
 var obj = document.getElementById("element");
 obj.onclick=function innerFunction(){};
 obj=null;
};

Setting a variable to null severs the connection between the variable and the value it previously referenced. The next time the garbage collector runs, it will delete these values and reclaim the memory they occupied.

Note that IE9+ no longer suffers from the circular-reference DOM leak; presumably Microsoft optimized it, or the way the DOM is collected has changed.

4. Memory Management

1. When is garbage collection triggered?

The garbage collector runs periodically. If a lot of memory has been allocated, collection becomes expensive, so choosing the collection interval is worth some thought. IE6 triggered garbage collection based on allocation counts: as soon as the environment contained 256 variables, 4096 objects, or 64KB of strings, the collector ran. That sounds scientific: rather than collecting on a timer whether it's needed or not, collect on demand. But if the environment genuinely keeps that many variables alive (quite normal with today's complex scripts), the collector ends up running constantly and the browser becomes unusable.

Microsoft adjusted this in IE7: the trigger thresholds are no longer fixed but adapt dynamically, starting from the same initial values as IE6. If a collection reclaims less than 15% of the allocated memory, most of the memory is still in use and the trigger was too sensitive, so the thresholds are doubled. If a collection reclaims more than 85%, most of the memory should have been cleaned up long ago, so the thresholds are reset to their initial values. This made garbage collection a great deal smarter.

2. Sensible GC schemes

1) The baseline GC scheme in JavaScript engines (simple GC) is mark-and-sweep, i.e.:

  • (1) Traverse all reachable objects.
  • (2) Reclaim the objects that are no longer reachable.

2) The drawback of GC

As in other languages, JavaScript's GC strategy cannot avoid one problem: while GC runs, all other work stops, for safety. A JavaScript GC pause of 100ms or more is fine for ordinary applications, but for JS games and animation, where smoothness matters, it's a problem. This is the point newer engines optimize: avoiding long unresponsive pauses caused by GC.

3) GC optimization strategies

David mainly introduces two optimization schemes, and they are the two most important ones:

(1) Generational GC
The idea is the same as Java's collection strategy: distinguish "temporary" from "long-lived" objects, collect the "temporary object" region (the young generation) often and the "long-lived object" region (the tenured generation) rarely, reducing the set of objects that must be traversed each time and therefore the cost of each GC. (The original article includes a diagram here.)

One thing to add: tenured-generation objects carry extra overhead: they have to be migrated from the young generation to the tenured generation, and if they are referenced, the references have to be updated to point to the new location.

(2) Incremental GC
The idea here is simple: "process a little this time, a little more next time, and so on." (The original article includes a diagram here.)

This scheme keeps each pause short, but it interrupts more often, bringing the problem of frequent context switches.

Since every scheme has its own suitable scenarios and drawbacks, real engines pick a scheme based on the actual situation. For example, when the object-allocation rate (objects/s) is low, simple GC interrupts execution less often; and if most objects live a long time, generational collection doesn't offer much of an advantage.


That's everything on JavaScript's garbage collection and memory management; I hope it helps with your study.


HTML5 head Tags Mobile Front-End Developers Need to Know (2016 Edition)

from: http://www.css88.com/archives/6410

There is a great deal that can go into an HTML head: header information aimed at SEO, header information aimed at mobile devices, and on top of that each browser engine and each domestic (Chinese) browser vendor has tags and attributes of its own, with plenty of differences between them. Mobile work keeps becoming a bigger part of front-end development, and beyond everyday project work the head tags, especially the attributes of meta and link, matter a great deal. This is a checklist of the <head> section, so you can understand what each tag and attribute means and write a <head> that fits your own needs; doing so can noticeably improve a page's usability.

Note: I put together a similar list of HTML5 head tags for mobile front-end work last year. With time and browser upgrades it now looks somewhat dated, so I have reorganized it, added new content and notes about what is obsolete, and added some notes on desktop browsers as well.

Basic HTML head tags

Below are the basic HTML head elements:

  1. <!doctype html>
  2. <html>
  3. <head>
  4. <meta charset="utf-8">
  5. <meta http-equiv="x-ua-compatible" content="ie=edge">
  6. <!-- Mobile pages can skip this one; see the Internet Explorer section of this article -->
  7. <meta name="viewport" content="width=device-width, initial-scale=1">
  8. <!-- See the "adding a viewport for mobile devices" section of this article -->
  9. <!-- The 3 meta tags above *must* come first in the head; any other head content must come *after* them -->
  10. <title>Page title</title>
  11. ...
  12. </head>

Of these,

  1. <meta http-equiv="x-ua-compatible" content="ie=edge">

tells IE, in desktop development, to render the page in its latest mode; see the Internet Explorer section of this article for details.
If your page will definitely only run in desktop browsers, then

  1. <meta name="viewport" content="width=device-width, initial-scale=1">

can also be omitted.

DOCTYPE

DOCTYPE (Document Type) is a declaration that sits at the very top of the document, before the html tag, and tells the browser which HTML or XHTML specification the document uses.

Use the HTML5 doctype; it is case-insensitive.

  1. <!DOCTYPE html> <!-- HTML5 doctype, case-insensitive -->

charset

Declares the character encoding the document uses:

  1. <meta charset="utf-8">

Before HTML5, pages would write it like this:

  1. <meta http-equiv="Content-Type" content="text/html; charset=utf-8">

The two are equivalent (for details see: <meta charset='utf-8'> vs <meta http-equiv='Content-Type'>), so the shorter form is recommended; it's easier to remember.

The lang attribute

A more standards-compliant way to write the lang attribute: http://zhi.hu/XyIa

Simplified Chinese:

  1. <html lang="zh-cmn-Hans"> <!-- more standards-compliant lang value, see http://zhi.hu/XyIa -->

Traditional Chinese:

  1. <html lang="zh-cmn-Hant"> <!-- more standards-compliant lang value, see http://zhi.hu/XyIa -->

A region code is rarely needed; it's usually only added to emphasize differences in Chinese usage between regions, for example:

  1. <p lang="zh-cmn-Hans">
  2. <strong lang="zh-cmn-Hans-CN">菠萝</strong><strong lang="zh-cmn-Hant-TW">鳳梨</strong>其实是同一种水果。只是大陆和台湾称谓不同,且新加坡、马来西亚一带的称谓也是不同的,称之为<strong lang="zh-cmn-Hans-SG">黄梨</strong>
  3. </p>

For why it is lang="zh-cmn-Hans" rather than the lang="zh-CN" we usually write, see: "Should the declaration at the top of the page use lang="zh" or lang="zh-cn"?"

Meta tags

The meta tag is an auxiliary tag in the HTML head, sitting between the <head> and <title> markers, and it carries information the user never sees. Invisible as it is, it is very powerful: in today's front-end work, setting the right meta tags can greatly improve a page's usability.

In desktop development, meta tags are typically used for search engine optimization (SEO) and to define the page for robots, or to define cookies in the user's browser; they can identify the author, set the page format, describe the content and list keywords, make a page refresh itself at a set interval, set an RSAC content rating, and so on.

In mobile development, on top of the desktop uses, meta tags also cover practical settings such as the viewport, add-to-home-screen icons, tab bar colors, and more. Details follow below.

Meta tag categories

Based on their attributes, meta tags fall into two broad groups: http-equiv and name.

http-equiv: behaves like an HTTP response header; it passes useful information back to the browser to help it display the page content correctly.
name: mainly describes the page; it is paired with a content attribute whose value is intended for browsers, search-engine robots, and similar machines.

Recommended meta tags

  1. <!-- Set the character encoding for the document -->
  2. <meta charset="utf-8">
  3. <meta http-equiv="x-ua-compatible" content="ie=edge">
  4. <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  5. <!-- The 3 meta tags above *must* come first in the head; any other head content must come *after* them -->
  6.
  7. <!-- Allows control over where resources are loaded from -->
  8. <meta http-equiv="Content-Security-Policy" content="default-src 'self'">
  9. <!-- Place as early in the document as possible -->
  10. <!-- Only applies to content below this tag -->
  11.
  12. <!-- Name of the web application (only should be used if the website is used as an app) -->