How NativeScript Works

from: http://developer.telerik.com/featured/nativescript-works/

NativeScript is a framework that lets you build native iOS and Android (and eventually Windows Universal) apps using JavaScript code. NativeScript has a lot of cool features, such as two-way data binding between JavaScript objects and native UI components, and a CSS implementation for native apps. But my favorite feature, and the subject of this article, is NativeScript’s mechanism for giving you direct access to native platform APIs.

It’s pretty awesome, but it can mess with your mind a little. For example, check out this code for a NativeScript Android app:

var time = new android.text.format.Time();
time.set( 1, 0, 2015 );
console.log( time.format( "%D" ) );

I’ll give your brain a minute or two to parse this, because yes, this JavaScript code instantiates a Java android.text.format.Time() object, calls its set() method, and then logs the return value of its format() method, which is the string "01/01/15".

[Image: Keanu "whoa" reaction]

I’m with you Keanu, but hold on, because the rabbit hole gets deeper. Here’s one more example before we dive into how this code actually works—this time for iOS:

var alert = new UIAlertView();
alert.message = "Hello world!";
alert.addButtonWithTitle( "OK" );
alert.show();

This JavaScript code instantiates an Objective-C UIAlertView class, sets its message property, and then calls its addButtonWithTitle() and show() methods. When you run a NativeScript iOS app with this code, you’ll see a native alert with the message "Hello world!" and a single "OK" button.

Pretty cool, huh?

One thing I should clarify before we dive into how all of this works: just because you can access native iOS and Android APIs doesn’t mean NativeScript apps contain only JavaScript-ified Objective-C and Java code.

NativeScript includes a number of cross-platform modules for common tasks, such as making HTTP requests, building UI components, and so forth. But that being said, most apps have some need to access native APIs occasionally, and the NativeScript runtime makes that access simple when you need it. Let’s look at how it works.

The NativeScript Runtime

The NativeScript runtime may seem like magic, but believe it or not, the architecture isn’t all that complex. Everything starts with JavaScript virtual machines, as they’re what NativeScript uses to execute JavaScript commands. Specifically, NativeScript uses V8 on Android and JavaScriptCore on iOS. Because NativeScript uses JavaScript VMs, all native-API-accessing code you write, including the code in the examples above, still needs to use JavaScript constructs and syntax that V8 and JavaScriptCore understand.

Generally speaking, NativeScript tries to use the latest stable releases of both V8 and JavaScriptCore; therefore the ECMAScript language support in NativeScript for iOS is nearly identical to the support in desktop Safari, and the support in NativeScript for Android is nearly identical to the support in desktop Chrome. You can get an idea of what specific ES6 features that includes here.
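
For example, ES6 constructs like these run as-is in a NativeScript app, with no transpilation step:

// Template literals, arrow functions, and default parameters all run
// natively in V8 and JavaScriptCore.
let greet = ( name = "world" ) => `Hello, ${name}!`;
console.log( greet() ); // "Hello, world!"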

Knowing that NativeScript uses JavaScript VMs is important, but it’s only the first part of the puzzle. Let’s return to the first line of code in this article:

var time = new android.text.format.Time();

In the NativeScript Android runtime, this code is compiled (JIT compiled, technically) and executed by V8. How this works is pretty easy to understand for simple statements like var x = 1 + 2;, but in this case, the next question becomes… how does V8 know what android.text.format.Time() is?

The next few sections focus on V8 and Android for simplicity, but the same basic architectural patterns apply to JavaScriptCore and iOS. Notable differences will be called out along the way.

How NativeScript Manages JavaScript VMs

V8 knows what android is because the NativeScript runtime injects it. As it turns out, V8 has a whole slew of APIs that let you configure a bunch of things about its JavaScript environment. You can insert custom C++ code to profile JavaScript CPU usage, manage JavaScript garbage collection, change how the environment’s internals work, and a whole lot more:

[Image: V8’s embedder API reference]
V8 has a ton of APIs. Who knew?

Amidst these APIs are a few “Context” classes that let you manipulate the global scope, making it possible for NativeScript to inject a global android object. This is actually the same mechanism Node.js uses to make its global APIs available – e.g. require() – and NativeScript uses it to inject APIs that let you access native code. JavaScriptCore has a similar mechanism that makes the same technique possible for iOS. Cool, right?
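
To get a feel for the idea without touching C++, here is a rough Node.js analogy using the vm module. The android stub below is of course a placeholder; the real runtime backs it with native code:

// Rough analogy only: inject an object into a context's global scope
// before any user code runs, the way NativeScript's C++ layer does.
var vm = require( "vm" );

var sandbox = {
    android: { text: { format: { Time: function() {} } } } // hypothetical stub
};
vm.createContext( sandbox ); // sandbox now acts as the global object

vm.runInContext( "var time = new android.text.format.Time();", sandbox );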

Let’s go back to our code:

var time = new android.text.format.Time();

You now know that this code runs on V8, and that V8 knows what android.text.format.Time() is because NativeScript injected the necessary objects into the global scope. But there are still some big unanswered questions, like, how does NativeScript know which APIs to inject, and how does NativeScript know what to do when the Time() call is actually made? Let’s start with the first of these questions, and look at how NativeScript builds its list of APIs.

Metadata

NativeScript uses reflection to build the list of APIs that are available on the platform it runs on. If you’re a JavaScript developer you may not be familiar with reflection, as the permissive nature of the JavaScript language makes reflection largely unnecessary. In many other languages, most notably Java, reflection is the only technique you can use to examine the runtime itself. For example, in Java the only way to build a list of methods an arbitrary Object can invoke is with reflection.

For NativeScript’s purposes, reflection is what lets NativeScript build a comprehensive list of APIs for each platform, including android.text.format.Time. Because generating this data is non-trivial from a performance perspective, NativeScript does it ahead of time, and embeds the pre-generated metadata during the Android/iOS build step.
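
To make that concrete, here is a hypothetical, heavily simplified sketch of what one metadata entry might describe. The real format is a compact binary blob, not a JavaScript object:

// Hypothetical sketch of a single pre-generated metadata entry.
var metadata = {
    "android.text.format.Time": {
        type: "class",
        methods: {
            set:    { params: [ "int", "int", "int" ], returns: "void" },
            format: { params: [ "java.lang.String" ], returns: "java.lang.String" }
        }
    }
};
console.log( Object.keys( metadata[ "android.text.format.Time" ].methods ) ); // [ 'set', 'format' ]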

With that in mind let’s again return to our line of code:

var time = new android.text.format.Time();

You now know that this code runs on V8, that NativeScript injects the android.text.format.Time JavaScript object, that NativeScript knows each API to inject from a separate metadata process, and that NativeScript embeds that metadata during its Android and iOS builds. On to the next question: how does NativeScript turn a JavaScript Time() call into a native android.text.format.Time() object?

Invoking Native Code

The answer to how NativeScript invokes native code again lies in the JavaScript VM APIs. When we last looked at V8’s APIs, we saw how NativeScript used them to inject global variables. This time we’ll look at a series of callbacks that let you execute C++ code at given points during JavaScript execution.

For example, the code new android.text.format.Time() invokes a JavaScript function, which V8 has a callback for. That is, V8 has a callback that lets NativeScript intercept the function call, take some action with custom C++ code, and provide a new result.
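
JavaScript’s own Proxy object makes for a reasonable analogy here, with the construct trap standing in for V8’s C++ function callback:

// Analogy only: intercept `new Time()` the way the runtime's C++
// callback does, then hand back a substitute result.
var Time = new Proxy( function() {}, {
    construct: function( target, args ) {
        console.log( "intercepted; the runtime would now call into Java" );
        return {}; // the real runtime returns a proxy to the native instance
    }
});
var time = new Time(); // triggers the trap instead of a normal JS constructor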

In the case of Android, the NativeScript runtime’s C++ code cannot directly access Java APIs such as android.text.format.Time. However, Android’s JNI, or Java Native Interface, provides the ability to bridge between C++ and Java, so NativeScript uses JNI to make the jump. On iOS this extra bridge is unnecessary as C++ code can directly invoke Objective-C APIs.

With all of this in mind, let’s return to our line of code:

var time = new android.text.format.Time();

We already know that this code runs on V8; that it knows what android.text.format.Time is because NativeScript injects such an object; and that NativeScript has a metadata-generating process for obtaining these APIs. We now know that when Time() executes, the following things happen:

1) The V8 function callback runs.
2) The NativeScript runtime uses its metadata to know that Time() means it needs to instantiate an android.text.format.Time object.
3) The NativeScript runtime uses the JNI to instantiate an android.text.format.Time object and keeps a reference to it.
4) The NativeScript runtime returns a JavaScript object that proxies the Java Time object.
5) Control returns to JavaScript, where the proxy object gets stored as a local time variable.

The proxy object is how NativeScript maintains a mapping of JavaScript objects to native ones. For example, let’s look at the next line of code from our earlier example:

var time = new android.text.format.Time();
time.set( 1, 0, 2015 );

Because of the generated metadata, NativeScript knows all the methods to put on the proxy object. In this case the code invokes the Time object’s set() method. When this method runs, V8 again invokes its function callback; NativeScript detects that this is a method call; and then NativeScript uses the Android JNI to make the corresponding method call on the Java Time object.
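
Here is a loose sketch of that dispatch. The callNativeMethod() function is a hypothetical stand-in for the runtime’s C++/JNI bridge:

// Loose sketch: property lookups consult the metadata, and method calls
// are forwarded to the retained native instance.
function callNativeMethod( handle, name, args ) {
    console.log( "bridging to native:", name, args ); // hypothetical bridge
}

function makeNativeProxy( nativeHandle, classMetadata ) {
    return new Proxy( {}, {
        get: function( target, name ) {
            if ( classMetadata.methods[ name ] ) {
                return function( ...args ) {
                    return callNativeMethod( nativeHandle, name, args );
                };
            }
        }
    });
}

var time = makeNativeProxy( 42, { methods: { set: {}, format: {} } } );
time.set( 1, 0, 2015 ); // logs: bridging to native: set [ 1, 0, 2015 ]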

And that’s really most of how NativeScript works. Cool, right?

Now, I did leave out some of the really complex parts, because converting Objective-C and Java objects into JavaScript objects can get tricky, especially when you consider the different inheritance models each language uses. If you’re curious, the NativeScript docs have thorough details on these trickier scenarios. Here are the iOS docs; and here are the Android docs.

However, we’re not going to dig into those type conversion details here, because they’re not something you need very often when building a NativeScript app. In fact, even though this article has focused on how native access in NativeScript works, another feature of NativeScript keeps you from having to dive into native code very often: NativeScript modules.

NativeScript Modules

I like to think of NativeScript modules as Node modules that depend on the NativeScript runtime. NativeScript modules follow the same CommonJS conventions as Node modules, so if you already know how require() and the exports object work, then you already know how NativeScript modules work.
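
If you haven’t seen CommonJS before, the entire convention fits in a few lines:

// math.js: a module exposes functionality by assigning to exports...
exports.add = function( a, b ) { return a + b; };

// main.js: ...and a consumer pulls it in with require().
var math = require( "./math" );
console.log( math.add( 1, 2 ) ); // 3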

NativeScript modules allow you to abstract platform-specific code into a platform-agnostic API, and NativeScript itself provides several dozen of these modules for you out of the box. As an example, suppose you need to create a file in your iOS/Android app. You could write the following code for Android:

new java.io.File( path );

As well as the following code on iOS:

var fileManager = NSFileManager.defaultManager();
fileManager.createFileAtPathContentsAttributes( path );

But you’re better off just using the NativeScript file-system module, as it lets you write your code once, without having to worry about the iOS/Android internals:

var fs = require( "file-system" );
var file = new fs.File( path );

The NativeScript modules also support TypeScript as a first-class citizen, so you can write the same code in TypeScript if you prefer:

import {File} from "file-system";
let file = new File( path );

Regardless of whether you use the NativeScript modules in JavaScript or TypeScript, what’s cool is that these modules are written using the same NativeScript runtime conventions discussed in this article—which means that it’s really easy to browse any module’s source code, and that it’s really easy to create your own distributable NativeScript modules. For example, here’s a NativeScript module that retrieves a device’s OS version:

// device.ios.js
module.exports = {
    version: UIDevice.currentDevice().systemVersion
}

// device.android.js
module.exports = {
    version: android.os.Build.VERSION.RELEASE
}

This code only retrieves one property, but it gives you an idea of how much you can accomplish in a small amount of code. Using custom NativeScript modules is also trivial, as you use the same require() call you use to retrieve npm modules. Here’s how you use the device module shown above:

var device = require( "./device" );
console.log( device.version );

NativeScript modules are surprisingly easy to write, distribute, and use, especially if you’re already familiar with npm’s conventions. Personally, as a web developer, native iOS and Android code scares me, but even I can reference the Java/Objective-C API documentation and throw together something functional if you give me a few hours. It’s exciting stuff, and it lowers the barrier for web and Node developers who want to build on native platforms.

Want to Learn More?

NativeScript has a bunch of other components that are out of the scope of this article, but that build on the runtime explained here. For example, the NativeScript layout mechanisms and UI elements are nothing more than NativeScript modules that use the NativeScript runtime. A <Button> is implemented as a button NativeScript module that leverages the android.widget.Button and UIButton APIs under the hood.
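
A rough sketch of how such a module might look, following the same platform-file convention as the device module above (this is illustrative, not NativeScript’s actual source):

// button.android.js: wrap the native Android button class.
module.exports = {
    create: function( context ) {
        return new android.widget.Button( context );
    }
};

// button.ios.js: wrap the native iOS button class.
module.exports = {
    create: function() {
        return new UIButton();
    }
};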

If you want to try NativeScript out, the best place to start is with our JavaScript Getting Started Guide, or our TypeScript & Angular Getting Started Guide. The guides will walk you through building a NativeScript app from scratch, and you’ll get hands-on experience using the native API access that this article discussed. Happy NativeScript-ing!

The (Re)arrival of the Cross-Platform Development Era?

from: https://onevcat.com/2015/03/cross-platform/

This article discusses the recent resurgence of cross-platform mobile development, focusing on introducing and comparing things like Xamarin, NativeScript, and React Native. There won't be any particularly deep technical discussion; treat it as a general-interest overview.

The Beginning of the Story

"Write once, run anywhere" has always been the programmers' utopia. Twenty years ago Java stepped onto the stage carrying exactly that banner and defeated a crowd of competitors. But today, the facts have shown that Java's bulky build and slow evolution can hardly keep up with the quick steps of this era. In the new mobile wave, a polished user experience is all but mandatory for an app to win. Going native certainly helps the user experience, but the reality of mobile is that you must develop separately for each platform (at least iOS and Android). For development this is a genuine liability and an extra burden: we not only have to struggle to keep implementations of the same code in sync across projects in different languages, but also shoulder the maintenance that follows. Restricted to iOS and Android it is just about bearable, but if you then want to expand to platforms such as Windows Phone, the cost and man-hours grow geometrically, which is clearly hard to accept. And so a concept that has been raised on and off but has never held a dominant position has once more entered mobile developers' field of view: cross-platform development.

Local HTML and JavaScript

Since every platform has a browser and a WebView control, we can use HTML, CSS, and JavaScript to bring web content and web experiences into local apps. Doing so lets us unify both the logic and the UI rendering, reducing development and maintenance costs. Apps developed this way are generally called hybrid apps; solutions like PhoneGap or Cordova are typical examples. Besides building pages and interactions with front-end techniques, these frameworks usually also provide interfaces to device capabilities such as the camera and GPS.

[Image: hybrid app architecture]

Although an all-web development strategy and environment brings convenience in code maintenance, it has fatal weaknesses: slow rendering and animation that is hard to tame. Both are deadly and unacceptable for user experience. Since the landmark event three years ago when Facebook rebuilt its mobile app with native code, the web-in-a-shell app, which once held half the market, has been in steady decline. Especially now, when the pursuit of user experience borders on the fastidious, stiff animations and wooden interactions can no longer satisfy people's expectations of a high-quality app.

What Should We Unrepentant Cross-Platform Believers Do?

To solve the user-experience problem we basically have to come back to native development, but that inevitably ties us to a platform. There are always clever people in this world, and they always manage to use computers, which look smart but are actually rather dumb, to do suitably dumb work. One such job is automatically converting code from one platform to another. A small British company is doing exactly this: MyAppConverter wants to automatically convert iOS code to Java. Unfortunately, if you have tried it, you will know that their product is not yet in a usable state.

On another fork of this road one company has gone further: Apportable. They have already achieved a great deal in game conversion; hits like Kingdom Rush and Mega Run used their service to port from iOS to Android, very successfully. It is no exaggeration to say that Apportable is a tempting cross-platform game solution besides directly using Unity or Cocos2d-x. You can essentially develop in Objective-C or Swift on a platform you know, without having to touch a monster like C++ (although in game development you would not meet very hard C++ anyway).

But the good news ends with games, because games do not differ much across platforms in experience and rarely use platform-specific features, so they are comparatively easy to handle. When we want to build a non-game app, things get far more complicated. Apportable has a plan to make app conversion feasible as well, but it will probably be a while before we see it launch.

A New Hope

Xamarin

Actually the biggest problem in cross-platform development is still that UI and experience differ per platform. If we ignore that hardest problem and only share the logic part of the code, things immediately become much simpler. More than ten years ago, when .NET had just been announced and everyone was looking forward to a new era of development, a group of hackers who loved to tinker were already working out how to bring .NET and C# to Linux. That was the origin of Mono. Mono runs .NET intermediate code by implementing, on other platforms, a Common Language Runtime functionally identical to the one on Windows. Today the Mono community is strong enough to support not only Linux but mobile devices as well, and Xamarin, the company behind Mono, duly and timely rolled out a complete cross-platform mobile solution.

Xamarin's approach is relatively simple: use C# for all the platform-independent app logic that every platform shares; then, because UI and interaction differ per platform, use the C# APIs pre-wrapped by Xamarin to access and drive native controls, developing the UI separately for each platform.

[Image: Xamarin architecture]

Although only the logic layer is truly cross-platform and the presentation layer still has to be built separately, this is a good approach that fully preserves the user experience; at the very least the development language is unified. Thanks to the pure C# environment in Xamarin's solution and the deep .NET technical background behind it, the project now also enjoys Microsoft's support and attention.

The fatal problem, however, is that the APIs you can use on a given platform are decided by Xamarin. In other words, once iOS or Android ships a new SDK with new features, you have to wait for Xamarin's engineers to wrap them before you can use them in your own project. That delay can be deadly: these days the App Store's front-page promotion for apps adopting new features usually lasts only a week or two after a new system launches, and if you miss that window your app may never get another chance. And if you want to use third-party frameworks, you will have to package them into binaries yourself and write bindings to expose them to C#, unless someone has already done that for you.

Also, because the UI parts still go their separate ways, multiple code bases continue to live in the project, which limits how much the workload actually shrinks, and the old risks of drift and version mismatch remain during maintenance. On the whole, though, Xamarin is a pretty good way to think about cross-platform development (if you set the price aside).

NativeScript

NativeScript is a freshly announced project from Telerik, a little-known Bulgarian company. Telerik may not be famous, but it has been walking the hybrid-app and cross-platform road for a long time.

With its broad base of users and a language that is easy to learn and use, JavaScript already looks poised to conquer everything. Today's mainstream mobile platforms all have solid JavaScript engines (JavaScriptCore since iOS 7, and the V8 JavaScript engine that ships with Android), so using JavaScript for cross-platform work naturally became an option.

A quick jab here: JavaScript truly is a language rescued by one company and one project. Before V8, who could have imagined JavaScript would have its present standing...

NativeScript's idea is to use the mobile platforms' JavaScript engines for cross-platform development. The logic part needs no further comment; the key is how to use platform features: how can JavaScript call into native? NativeScript's answer is to obtain all the platform APIs through reflection, precompile them, inject those APIs into the JavaScript runtime, and then intercept the JavaScript calls and run native code.

I will not unpack NativeScript's detailed principles here; if you are interested, have a look at the blog post written by a Telerik employee, as well as the launch keynote.

[Image: NativeScript architecture]

The biggest benefit of this approach is that you can freely use the newest platform APIs and any third-party library. Through reflection over, and injection of, the metadata, NativeScript's JavaScript runtime can always find them, trigger the corresponding calls, and ultimately reach iOS or Android platform code. The contents of the latest platform SDK or third-party library can always be obtained and used, with no particular restrictions.

A simple example: to create a file when developing for iOS, you can write this directly in JavaScript:

var fileManager = NSFileManager.defaultManager();
fileManager.createFileAtPathContentsAttributes( path );

And the corresponding Android version might be:

new java.io.File( path );

You do not need to worry about whether things like NSFileManager or java.io exist; you can simply use them!

If that were all there was, it would still be quite inconvenient to use. So, borrowing a node-like package management system, NativeScript wraps this per-platform code into unified modules. The code above, for example, can be replaced by this single form:

var fs = require( "file-system" );
var file = new fs.File( path );

Anyone who has written node will recognize this form at once; the file-system here is NativeScript's platform-unifying wrapper. The current full list of wrappers can be found in this repo. Because they are simple to write, developers can create their own wrappers when needed, and even publish and share them with npm (and, of course, fetch wrappers others have written). Since this relies on an existing, mature package management system, extensibility can be considered assured.

For the UI, NativeScript chose Android-style XML for layout, with CSS controlling the styling of the controls. It is an interesting idea; although the layout flexibility cannot match separate native layouts per platform, it is actually quite close to traditional Android layout. One sample layout file makes the point:

<Page loaded="onPageLoaded">
    <GridLayout rows="auto, *">
        <StackLayout orientation="horizontal" row="0">
            <TextField width="200" text="{{ task }}" hint="Enter a task" id="task" />
            <Button text="Add" tap="add"></Button>
        </StackLayout>
        <ListView items="{{ tasks }}" row="1">
            <ListView.itemTemplate>
                <Label text="{{ name }}" />
            </ListView.itemTemplate>
        </ListView>
    </GridLayout>
</Page>

Readers familiar with Android or Windows Phone development may feel they have found their people. You may also have noticed that, compared with Android's layout approach, NativeScript supports MVVM and data binding out of the box (the {{ }} attributes above), which is very convenient during development (though the performance cost is unknown for now). Controls like Button or ListView are mapped by modules onto each platform's standard system controls, and their styles are specified with CSS, which is not much different from traditional web development.

[Image: NativeScript UI]

The approach NativeScript represents is app development with a heavy dose of web development techniques. It is a direction worth looking forward to, and one that many front-end developers will surely welcome, since both the toolchain and the language are familiar. But the biggest challenge this direction faces is still the UI: for now developers are confined to the predefined UI controls and cannot use HTML5 elements the way a traditional hybrid app can, so building highly customized UI and interaction becomes a problem. Another potential issue is the final app size: since the entire set of metadata has to be injected into the runtime, and a good deal of compilation happens across languages, a larger app size is unavoidable. A final challenge is that for a project like an app, development is considerably harder without type checking and a compiler's help, and debugging may also surface problems never met in traditional app development.

Overall, NativeScript is one of the more hopeful solutions. If it can deliver on its vision, it will be a strong contender for the big cross-platform cake. Of course, NativeScript is still very young and still has plenty of problems. Let us give the project a little more time and see how it performs once the official release lands.

React Native

Facebook announced React Native a few months ago, and today the project has finally shipped to great anticipation.

React Native is, to a degree, similar in concept to NativeScript: both use JavaScript and native UI to build apps (JavaScript really does seem bent on gluing the whole world together... if you still cannot write a few lines of JavaScript, I suggest learning some sooner rather than later). But their starting points differ slightly. React Native states right on its home page that with this library you can:

learn once, write anywhere

Not "run anywhere". What React Native wants to achieve is not really a cross-platform app development solution, but a tool that lets you develop for different platforms with similar methods and the same language. Moreover, React Native's main job is building responsive Views; its strength lies in deciding how a View should present itself based on the application's state, while for various other system platform APIs it is rather powerless. Exactly because of these factors, React Native really is not a good cross-platform choice.

So why bring up React Native at all in an article whose theme is "cross-platform"?

Because although cross-platform was not Facebook's starting point, that cannot stop engineers from wanting to use it that way. In principle, React Native inherits the virtual DOM idea of React.js, except this time it becomes a virtual View. The framework in fact provides a set of natively implemented views (on iOS, a series of classes prefixed with RCT). When we write JavaScript (more precisely, for React Native we write JSX, JavaScript with XML in it), we add and bind virtual Views to the registered modules; on the native side a JavaScript runtime (JavaScriptCore in the case of iOS) executes the compiled and injected JavaScript code, captures its UI calls, intercepts them, and bridges them into native code that renders the corresponding components. Layout, again, is done with CSS.
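
A minimal sketch of what that looks like in practice, assuming the early (2015) React Native API; the <View> and <Text> below are virtual Views rendered by native classes:

// Minimal sketch, assuming the 2015-era API.
var React = require( 'react-native' );
var { AppRegistry, View, Text } = React;

var Hello = React.createClass({
    render: function() {
        return (
            <View>
                <Text>Hello from a native-backed view</Text>
            </View>
        );
    }
});

AppRegistry.registerComponent( 'Hello', function() { return Hello; } );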

The overall process and idea here resemble NativeScript's, but the strategy taken when bridging to native is exactly the opposite. React Native treats the native side as a rendering backend that supplies the concrete Views the unified JavaScript side needs. NativeScript more or less goes the other way: it writes separate intermediate layers in JavaScript to map to each platform.

For the non-View parts on iOS, React Native provides the RCTBridgeModule protocol; by implementing this protocol on the native side we can make functionality callable from JavaScript. Callbacks and event dispatch can likewise be handled with the corresponding native code.
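
On the JavaScript side such a module then looks like a plain object. A hedged sketch, assuming a hypothetical CalendarManager module implemented in Objective-C via RCTBridgeModule:

// CalendarManager is assumed to be a native module that implements
// RCTBridgeModule and exports an addEvent method.
var { NativeModules } = require( 'react-native' );
NativeModules.CalendarManager.addEvent( 'Birthday Party', '4 Privet Drive' );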

To sum up, if you insist on viewing React Native as a cross-platform solution (which, in fact, you should not), the JavaScript side alone will not be enough, because a meaningful app can hardly do entirely without the power of platform APIs. But Facebook stands behind this project, and if Facebook wants to use its influence to found its own school, it will surely keep improving the framework and polishing the toolchain to steer app development toward its own banner. For developers already using React.js, this framework lowers the barrier to entering app development. For those already doing native app development, whether it is worth investing the effort to learn will require watching Facebook's next moves.

Still, React Native's official release is less than 24 hours old; I think we have plenty of time to ponder and inspect such a framework.

Summary

There are of course other solutions, such as Titanium. Cases of apps built with cross-platform solutions are not yet numerous, but whether for project management or maintenance, cross-platform remains a temptation. These solutions all fix some of the problems hybrid apps left behind, yet each still stands in the shadows that haunt non-native apps in general. Whoever finds a good way to solve problems like custom UI, API extensibility, and app size will take the lead, or the victory, in this market and steer the development trends that follow.

But then, who knows who will win in the end? It is also possible that everyone fails together on the cross-platform road once again. Watching and waiting may be a fine choice for developers right now, but my advice is that learning a bit of JavaScript ahead of time can never go wrong.

ES6 modules

link: http://exploringjs.com/es6/ch_modules.html

How To Install and Secure phpMyAdmin with Nginx on an Ubuntu 14.04 Server

from: https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-phpmyadmin-with-nginx-on-an-ubuntu-14-04-server

Introduction

Relational database management systems like MySQL are needed for a significant portion of web sites and applications. However, not all users feel comfortable administering their data from the command line.

To solve this problem, a project called phpMyAdmin was created in order to offer an alternative in the form of a web-based management interface. In this guide, we will demonstrate how to install and secure a phpMyAdmin configuration on an Ubuntu 14.04 server. We will build this setup on top of the Nginx web server, which has a good performance profile and can handle heavy loads better than some other web servers.

Prerequisites

Before we begin, there are a few requirements that need to be settled.

To ensure that you have a solid base to build this system upon, you should run through our initial server setup guide for Ubuntu 14.04. Among other things, this will walk you through setting up a non-root user with sudo access for administrative commands.

The second prerequisite that must be fulfilled in order to start on this guide is to install a LEMP (Linux, Nginx, MySQL, and PHP) stack on your Ubuntu 14.04 server. This is the platform that we will use to serve our phpMyAdmin interface (MySQL is also the database management software that we wish to manage). If you do not yet have a LEMP installation on your server, follow our tutorial on installing LEMP on Ubuntu 14.04.

When your server is in a properly functioning state after following these guides, you can continue on with the rest of this page.

Step One — Install phpMyAdmin

With our LEMP platform already in place, we can begin right away with installing the phpMyAdmin software. This is available within Ubuntu's default repositories, so the installation process is simple.

First, update the server's local package index to make sure it has a fresh set of references to available packages. Then, we can use the apt packaging tools to pull the software down from the repositories and install it on our system:

sudo apt-get update
sudo apt-get install phpmyadmin

During the installation, you will be prompted for some information. It will ask you which web server you would like the software to automatically configure. Since Nginx, the web server we are using, is not one of the available options, you can just hit TAB to bypass this prompt.

The next prompt will ask if you would like dbconfig-common to configure a database for phpmyadmin to use. Select "Yes" to continue.

You will need to enter the database administrative password that you configured during the MySQL installation to allow these changes. Afterward, you will be asked to select and confirm a password for a new database that will hold phpMyAdmin's own data.

The installation will now complete. For the Nginx web server to find and serve the phpMyAdmin files correctly, we just need to create a symbolic link from the installation files to our Nginx document root directory by typing this:

sudo ln -s /usr/share/phpmyadmin /usr/share/nginx/html

A final item that we need to address is enabling the mcrypt PHP module, which phpMyAdmin relies on. This was installed with phpMyAdmin so we just need to toggle it on and restart our PHP processor:

sudo php5enmod mcrypt
sudo service php5-fpm restart

With that, our phpMyAdmin installation is now operational. To access the interface, go to your server's domain name or public IP address followed by /phpmyadmin in your web browser:

http://server_domain_or_IP/phpmyadmin

phpMyAdmin login screen

To sign in, use a username/password pair of a valid MySQL user. The root user with the MySQL administrative password is a good choice to get started. You will then be able to access the administrative interface:

phpMyAdmin admin interface

Click around to get familiar with the interface. In the next section, we will take steps to secure our new interface.

Step Two — Secure your phpMyAdmin Instance

The phpMyAdmin instance installed on our server should be completely usable at this point. However, by installing a web interface, we have exposed our MySQL system to the outside world.

Even with the included authentication screen, this is quite a problem. Because of phpMyAdmin's popularity combined with the large amount of data it provides access to, installations like these are common targets for attackers.

We will implement two simple strategies to lessen the chances of our installation being targeted and compromised. We will change the location of the interface from /phpmyadmin to something else to sidestep some of the automated bot brute-force attempts. We will also create an additional, web server-level authentication gateway that must be passed before even getting to the phpMyAdmin login screen.

Changing the Application's Access Location

In order for our Nginx web server to find and serve our phpMyAdmin files, we created a symbolic link from the phpMyAdmin directory to our document root in an earlier step.

To change the URL where our phpMyAdmin interface can be accessed, we simply need to rename the symbolic link. Move into the Nginx document root directory to get a better idea of what we are doing:

cd /usr/share/nginx/html
ls -l

total 8
-rw-r--r-- 1 root root 537 Mar  4 06:46 50x.html
-rw-r--r-- 1 root root 612 Mar  4 06:46 index.html
lrwxrwxrwx 1 root root  21 Aug  6 10:50 phpmyadmin -> /usr/share/phpmyadmin

As you can see, we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we would like. This will change the location where phpMyAdmin can be accessed from a browser, which can help obscure the access point from hard-coded bots.

Choose a name that does not indicate the purpose of the location. In this guide, we will name our access location /nothingtosee. To accomplish this, we will just rename the link:

sudo mv phpmyadmin nothingtosee
ls -l

total 8
-rw-r--r-- 1 root root 537 Mar  4 06:46 50x.html
-rw-r--r-- 1 root root 612 Mar  4 06:46 index.html
lrwxrwxrwx 1 root root  21 Aug  6 10:50 nothingtosee -> /usr/share/phpmyadmin

Now, if you go to the previous location of your phpMyAdmin installation, you will get a 404 error:

http://server_domain_or_IP/phpmyadmin

phpMyAdmin 404 error

However, your phpMyAdmin interface will be available at the new location we selected:

http://server_domain_or_IP/nothingtosee

phpMyAdmin login screen

Setting up a Web Server Authentication Gate

The next feature we wanted for our installation was an authentication prompt that a user would be required to pass before ever seeing the phpMyAdmin login screen.

Fortunately, most web servers, including Nginx, provide this capability natively. We will just need to modify our Nginx configuration file with the details.

Before we do this, we will create a password file that will store the authentication credentials. Nginx requires that passwords be encrypted using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

To create an encrypted password, type:

openssl passwd

You will be prompted to enter and confirm the password that you wish to use. The utility will then display an encrypted version of the password that will look something like this:

O5az.RSPzd.HE

Copy this value, as you will need to paste it into the authentication file we will be creating.

Now, create an authentication file. We will call this file pma_pass and place it in the Nginx configuration directory:

sudo nano /etc/nginx/pma_pass

Within this file, you simply need to specify the username you would like to use, followed by a colon (:), followed by the encrypted version of your password you received from the openssl passwd utility.

We are going to name our user demo, but you should choose a different username. The file for this guide looks like this:

demo:O5az.RSPzd.HE

Save and close the file when you are finished.

Now, we are ready to modify our Nginx configuration file. Open this file in your text editor to get started:

sudo nano /etc/nginx/sites-available/default

Within this file, we need to add a new location section. This will target the location we chose for our phpMyAdmin interface (we selected /nothingtosee in this guide).

Create this section within the server block, but outside of any other blocks. We will put our new location block below the location / block in our example:

server {
    . . .
    location / {
        try_files $uri $uri/ =404;
    }

    location /nothingtosee {
    }
    . . .
}

Within this block, we need to set the value of a directive called auth_basic to an authentication message that our prompt will display to users. We do not want to indicate to unauthenticated users what we are protecting, so do not give specific details. We will just use "Admin Login" in our example.

We then need to use a directive called auth_basic_user_file to point our web server to the authentication file that we created. Nginx will prompt the user for authentication details and check that the inputted values match what it finds in the specified file.

After we are finished, the file should look like this:

server {
    . . .
    location / {
        try_files $uri $uri/ =404;
    }

    location /nothingtosee {
        auth_basic "Admin Login";
        auth_basic_user_file /etc/nginx/pma_pass;
    }
    . . .
}

Save and close the file when you are finished.

To implement our new authentication gate, we must restart the web server:

sudo service nginx restart

Now, if we visit our phpMyAdmin location in our web browser (you may have to clear your cache or use a different browser session if you have already been using phpMyAdmin), you should be prompted for the username and password you added to the pma_pass file:

http://server_domain_or_IP/nothingtosee

Nginx authentication page

Once you enter your credentials, you will be taken to the normal phpMyAdmin login page. In addition to the security benefit, this added layer of protection will help keep your MySQL logs clean of spurious authentication attempts.

Conclusion

You can now manage your MySQL databases from a reasonably secure web interface. This UI exposes most of the functionality that is available from the MySQL command prompt. You can view databases and schema, execute queries, and create new data sets and structures.

How To Install and Use Docker on Ubuntu 16.04

from: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04

Introduction

Docker is an application that makes it simple and easy to run application processes in a container, which is like a virtual machine, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

There are two methods for installing Docker on Ubuntu 16.04. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

In this tutorial, you'll learn how to install and use it on an existing installation of Ubuntu 16.04.

Prerequisites

To follow this tutorial, you will need a 64-bit Ubuntu 16.04 server and a non-root user with sudo privileges.

Note: Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. The default 64-bit Ubuntu 16.04 server meets these requirements.

All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo. The Initial Setup Guide for Ubuntu 16.04 explains how to add users and give them sudo access.

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu 16.04 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

First, add the GPG key for the official Docker repository to the system:

  • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

  • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the package database with the Docker packages from the newly added repo:

  • sudo apt-get update

Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:

  • apt-cache policy docker-ce

You should see output similar to the following:

Output of apt-cache policy docker-ce
docker-ce:
  Installed: (none)
  Candidate: 17.03.1~ce-0~ubuntu-xenial
  Version table:
     17.03.1~ce-0~ubuntu-xenial 500
        500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
     17.03.0~ce-0~ubuntu-xenial 500
        500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 16.04. The docker-ce version number might be different.

Finally, install Docker:

  • sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:

  • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, running the docker command requires root privileges — that is, you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

  • sudo usermod -aG docker ${USER}

To apply the new group membership, you can log out of the server and back in, or you can type the following:

  • su - ${USER}

You will be prompted to enter your user's password to continue. Afterwards, you can confirm that your user is now added to the docker group by typing:

  • id -nG
Output
sammy sudo docker

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

  • sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo.

Step 3 — Using the Docker Command

With Docker installed and working, now's the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

  • docker [option] [command] [arguments]

To view all available subcommands, type:

  • docker

As of Docker 1.11.1, the complete list of available subcommands includes:

Output
attach     Attach to a running container
build      Build an image from a Dockerfile
commit     Create a new image from a container's changes
cp         Copy files/folders between a container and the local filesystem
create     Create a new container
diff       Inspect changes on a container's filesystem
events     Get real time events from the server
exec       Run a command in a running container
export     Export a container's filesystem as a tar archive
history    Show the history of an image
images     List images
import     Import the contents from a tarball to create a filesystem image
info       Display system-wide information
inspect    Return low-level information on a container or image
kill       Kill a running container
load       Load an image from a tar archive or STDIN
login      Log in to a Docker registry
logout     Log out from a Docker registry
logs       Fetch the logs of a container
network    Manage Docker networks
pause      Pause all processes within a container
port       List port mappings or a specific mapping for the CONTAINER
ps         List containers
pull       Pull an image or a repository from a registry
push       Push an image or a repository to a registry
rename     Rename a container
restart    Restart a container
rm         Remove one or more containers
rmi        Remove one or more images
run        Run a command in a new container
save       Save one or more images to a tar archive
search     Search the Docker Hub for images
start      Start one or more stopped containers
stats      Display a live stream of container(s) resource usage statistics
stop       Stop a running container
tag        Tag an image into a repository
top        Display the running processes of a container
unpause    Unpause all processes within a container
update     Update configuration of one or more containers
version    Show the Docker version information
volume     Manage Docker volumes
wait       Block until a container stops, then print its exit code

To view the switches available to a specific command, type:

  • docker docker-subcommand --help

To view system-wide information about Docker, use:

  • docker info

Step 4 — Working with Docker Images

Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you'll need to run Docker containers have images that are hosted on Docker Hub.

To check whether you can access and download images from Docker Hub, type:

  • docker run hello-world

The output, which should include the following, indicates that Docker is working correctly:

Output
Hello from Docker.
This message shows that your installation appears to be working correctly.
...

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

  • docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

Output
NAME                              DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                            Ubuntu is a Debian-based Linux operating s...   3808    [OK]
ubuntu-upstart                    Upstart is an event-based replacement for ...   61      [OK]
torusware/speedus-ubuntu          Always updated official Ubuntu docker imag...   25                 [OK]
rastasheep/ubuntu-sshd            Dockerized SSH service, built on top of of...   24                 [OK]
ubuntu-debootstrap                debootstrap --variant=minbase --components...   23      [OK]
nickistre/ubuntu-lamp             LAMP server on Ubuntu                           6                  [OK]
nickistre/ubuntu-lamp-wordpress   LAMP on Ubuntu with wp-cli installed            5                  [OK]
nuagebec/ubuntu                   Simple always updated Ubuntu docker images...   4                  [OK]
nimmis/ubuntu                     This is a docker images different LTS vers...   4                  [OK]
maxexcloo/ubuntu                  Docker base image built on Ubuntu with Sup...   2                  [OK]
admiringworm/ubuntu               Base ubuntu images based on the official u...   1                  [OK]
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand, like so:

  • docker pull ubuntu

After an image has been downloaded, you may then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:

  • docker run ubuntu

To see the images that have been downloaded to your computer, type:

  • docker images

The output should look similar to the following:

Output
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
ubuntu        latest   c5f1cf30c96b   7 days ago    120.8 MB
hello-world   latest   94df4f0ce8a4   2 weeks ago   967 B

As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

  • docker run -it ubuntu

Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:

Output
root@d9b100f2f636:/#

Important: Note the container id in the command prompt. In the above example, it is d9b100f2f636.

Now you may run any command inside the container. For example, let's update the package database inside the container. No need to prefix any command with sudo, because you're operating inside the container with root privileges:

  • apt-get update

Then install any application in it. Let's install NodeJS, for example.

  • apt-get install -y nodejs

Step 6 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing nodejs inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it.

To save the state of the container as a new image, first exit from it:

  • exit

Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:

  • docker commit -m "What did you do to the image" -a "Author Name" container-id repository/new_image_name

For example:

  • docker commit -m "added node.js" -a "Sunday Ogwu-Chinuwa" d9b100f2f636 finid/ubuntu-nodejs

Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so that it may be accessed and used by you and others.

After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:

  • docker images

The output should be similar to this:

Output
finid/ubuntu-nodejs   latest   62359544c9ba   50 seconds ago   206.6 MB
ubuntu                latest   c5f1cf30c96b   7 days ago       120.8 MB
hello-world           latest   94df4f0ce8a4   2 weeks ago      967 B

In the above example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image. Images may also be built from what's called a Dockerfile. But that's a very involved process that's well outside the scope of this article.

Step 7 — Listing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

  • docker ps

You will see output similar to the following:

Output
CONTAINER ID   IMAGE    COMMAND       CREATED       STATUS       PORTS   NAMES
f7c79cc556dd   ubuntu   "/bin/bash"   3 hours ago   Up 3 hours           silly_spence

To view all containers — active and inactive, pass it the -a switch:
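
  • docker ps -a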