from:http://zeroturnaround.com/rebellabs/5-command-line-tools-you-should-be-using/

Working on the command line will make you more productive, even on Windows!

There’s an age-old debate between the usability and friendliness of GUI programs and the simplicity and productivity of CLI ones. But this is not a holy war I intend to trigger or fuel. In the past, RebelLabs has discussed built-in JDK tools and received amazing feedback, so I feel an urge to share more non-JDK command line tools which I simply couldn’t live without.

I do firmly believe every developer who’s worth their salt should have at least some notion of how to work with the command line, if only because some tools only exist in CLI variants. Plus, because geek++!

All other nuances that people pour words over, like the choice of operating system (OSX of course, they have beautiful aluminum cases), your favorite shell (really it should be ZSH), or the preference of Vim over Emacs (unless you have more fingers than usual) are much less relevant. OK, that was a little flamewar-like, but I promise that will be the last of it!

So my advice would be that you should learn how to use tools at the command line, as it will have a positive impact on your happiness and productivity at least for half a century!

Anyway, in this post I want to share with you five lesser-known yet pretty awesome command line gems. As an added bonus I will also advise on the proper way to use a shell under Windows, which is a pretty valuable bit of knowledge in itself.

The reason I wanted to write this post in the first place is because I really enjoy using these tools myself, and want to learn about other command line tools that I don’t yet know about. So please, awesome reader, leave me a comment with your favourite CLI tools — that’d be grand! Now, assuming we all have a nice, workable shell, let’s go over some neat command line tools that are worth hearing about.

0. HTTPie

 

The first on my list is a tool called HTTPie. Fear not, this tool has nothing to do with Internet Explorer, fortunately. In essence HTTPie is a friendlier alternative to cURL, the utility that performs HTTP requests from the command line. HTTPie adds nice features like auto-formatting and intelligent colour highlighting to the output, making it much more readable and useful. Additionally, it takes a very human-centric approach, not asking you to remember obscure flags and options. To perform an HTTP GET you simply run http; to POST, you run http POST. What could be easier or more beautiful?
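
For a quick hedged taste, using the public httpbin.org request-echo service (any URL of your own works just as well):

http httpbin.org/get

http POST httpbin.org/post hello=world

The first command defaults to GET; in the second, the hello=world item is serialized into a JSON request body.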

sample httpie output

Almost all command line tools are conveniently packaged for installation. HTTPie is no exception; to install it, run one of the following commands.

  • On OSX use homebrew, the best package manager to be found on OSX: brew install httpie
  • On all other platforms, use Python’s pip: pip install --upgrade httpie

I personally use HTTPie a lot when developing a REST API, as it allows me to very simply query the API, returning nicely structured, legible data. Without doubt this tool saves me serious work and frustration. Luckily the usage does not stop at just REST APIs. Generally speaking, all interactions over HTTP, whether it’s inputting or outputting data, can be done in a very human-readable format.

I’d encourage you to take a look at the website, spend the 10 seconds it takes to install and give it a go yourself. Try to get the source of any website and be amazed by the output.

How unstoppable you can be with proper tools

Protip: Combine the HTTPie greatness with jq for command line JSON manipulation or pup for HTML parsing and you’ll be unstoppable!
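
To make the protip concrete, here are two hedged sketches (the API URL is a placeholder, and the jq and pup selectors assume a particular response shape):

http GET https://api.example.com/users | jq '.[].name'

http GET example.org | pup 'title text{}'

The first pipeline extracts the name field from each element of a JSON array; the second prints the text of the page’s title element.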

1. Icdiff

 

At ZeroTurnaround I am blessed to work with Mercurial, a very nice and easy to use VCS. On OSX the excellent GUI program SourceTree makes working with Mercurial an absolute breeze, even for the more complex stuff. However, I like to keep the number of programs/tabs/windows I have open to an absolute minimum, and since I always have a terminal window open, it makes sense to use the CLI.

All was fine and well apart from one single pitfall in my setup. This was a feature I could barely go without: side-by-side diffs. Introducing icdiff. Of all the tools I use each day, this is the one I most appreciate. Let’s take a look at a screenshot:

example of icdiff at work

By itself, icdiff is an intelligent Python script, smart at detecting which of the differences are modifications, additions or deletions. The excellent colour highlighting in the tool makes it easy to distinguish between the three types of differences mentioned.

To get going with icdiff, do the following:

  • Via homebrew once again: brew install icdiff
  • Manually grab the Python script from the site above and put it in your PATH

When you couple icdiff with a VCS such as Mercurial, you’ll see it really shine. To fully integrate it, you’ll need to complete two more configuration steps, already documented here. The gist of the instructions is to first add a wrapping script that allows the file-by-file diff of icdiff to operate on entire directories. Second, you need to configure your VCS to actually use icdiff. The link above shows the details of configuring it for Mercurial, but porting this to Git shouldn’t be so hard, as sketched below.
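
For an ad-hoc taste on the Git side (assuming icdiff is already on your PATH), git difftool can hand each changed file pair to an external command:

git difftool --extcmd=icdiff HEAD~1

Git will offer every file changed since HEAD~1 in turn and render each pair through icdiff instead of the default viewer.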

2. Pandoc

 

In the spirit of “practice what you preach” I set out to write this entire blogpost via the CLI. Most of the work was done using MacVim, in iTerm2 on OSX. All of the text was written and formatted using standard Markdown syntax. The only issue here is that it’s sometimes pretty difficult to guess how your eventual text will come out.

This is where the next tool comes in: Pandoc. A program so powerful and versatile it’s a wonder it was GPL’d in the first place. Let’s take a look at how we might use it.

pandoc -f markdown -t html blogpost.md > blogpost.html 

Think of a markup format, any markup format. Chances are, Pandoc can convert it into any other. For example, I’m writing this blogpost in Vim and use Pandoc to convert it from Markdown into HTML, to actually see the final result. It’s nice to need only my terminal and a browser, fully standalone and offline, rather than being tied to a particular online platform.

Don’t let yourself be limited by simple formats like Markdown though; give it some docx files, or perhaps some LaTeX. Export into PDF or EPUB, let it handle and format your citations. The possibilities are endless.
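
A few hedged examples of that versatility (the file names are made up; Pandoc infers the formats from the extensions, and the PDF route requires a LaTeX installation):

pandoc report.docx -o report.tex

pandoc book.md -o book.epub

pandoc paper.md -o paper.pdf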

Once again brew install pandoc does the trick. Did I mention I really like Homebrew? Maybe that should have made my tool list! Anyway, you get the gist of what that does!

3. Moreutils

 

The next tool in this post is actually a collection of nifty tools that didn’t make it into coreutils: moreutils. It should be obtainable under moreutils in about any distro you can think of. OSX users can get all this goodness by brewing it like I did throughout this post:

brew install moreutils 

Here is a list of the included programs with short descriptions:

  • chronic: runs a command quietly unless it fails
  • combine: combine the lines in two files using boolean operations
  • ifdata: get network interface info without parsing ifconfig output
  • ifne: run a program if the standard input is not empty
  • isutf8: check if a file or standard input is utf-8
  • lckdo: execute a program with a lock held
  • mispipe: pipe two commands, returning the exit status of the first
  • parallel: run multiple jobs at once
  • pee: tee standard input to pipes
  • sponge: soak up standard input and write to a file
  • ts: timestamp standard input
  • vidir: edit a directory in your text editor
  • vipe: insert a text editor into a pipe
  • zrun: automatically uncompress arguments to command

As the maintainer himself hints, sponge is perhaps the most useful tool, in that you can easily sponge up standard input into a file. However, it is not difficult to see the advantages of some of the other commands such as chronic, parallel and pee.
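
The classic sponge use case is rewriting a file in place, something a plain shell redirect would break by truncating the file before the pipeline reads it (app.log is just an example name):

grep -v 'DEBUG' app.log | sponge app.log

Because sponge soaks up all of its input before opening the output file, this filters the log in place, whereas grep -v 'DEBUG' app.log > app.log would destroy it.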

My personal favourite though, and the ultimate reason to include this collection, is without doubt vipe.

You can literally intercept your data as it moves from command to command through the pipe. Even though this is not a tool for your scripts, it can be extremely helpful when running commands interactively. Instead of giving you a useful example I will leave you with a modified fortune!

sample vipe command
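
If you do want something more utilitarian: vipe opens your $EDITOR on whatever is flowing through the pipe and passes on whatever you save. For instance, with a hypothetical hosts.txt, you could hand-prune a list of hosts before acting on it:

cat hosts.txt | vipe | xargs -n1 ping -c1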

4. Babun

 

These days the Windows OS comes packaged with two different shells: its classic command line, and PowerShell. Let’s completely ignore those and have a look at the proper way of running command line tools under Windows: Babun! This project is amazingly awesome because it brings all the goodness of the *NIX command line into Windows in a completely pre-configured, no-nonsense manner.

Moreover, its default shell is my beloved ZSH, though it can very easily be changed to use Bash, if that’s your cup of tea. With ZSH it also packages the highly popular oh-my-zsh framework, which combines all the benefits of ZSH with no config whatsoever thanks to some very sane defaults and an impressive plugin system.

By default Babun is loaded with more applications than any sane developer may ever need, and is thus a rather solid 728 MBs(!) when expanded. In return you get essentials like Vim pre-installed and ready to go!

screenshot of babun terminal

Under the hood Babun is basically a fancy wrapper around Cygwin. If you already have a Cygwin install you can seamlessly re-use that one. Otherwise it will default to its own packaged Cygwin binaries, and supply you with access to those.

Some more points of interest are that Babun provides its own package manager, which again wraps around Cygwin’s, and an update mechanism both for itself and for oh-my-zsh. The best thing is that no actual installation is required, nor are admin rights, so for those people on a locked-down PC this may be just the thing they need!
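
Going by the project’s documentation, day-to-day package management looks something like this sketch (the package name is just an example):

pact install tmux

babun update

The first command installs a Cygwin package through Babun’s pact wrapper; the second updates Babun itself.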


I hope this small selection of tools gave you at least one new cool toy to play with. As for me, it seems it is time to look at command line browsers before writing a follow-up blogpost, to fully ditch the world of the GUI!

By all means fire up any comments or suggestions that you have, and let’s get some tool-sharing going on. If you just want to chat, ping RebelLabs on Twitter: @ZeroTurnaround; they are pretty chatty, but great, smart people.

http://zeroturnaround.com/rebellabs/monadic-futures-in-java8/

Few people will argue that asynchronous computation is cool and useful. In fact, the whole reactive programming idea is based on asynchronous computations being possible. Well, there’s more than that, but the core idea is to allow data and events to flow through your system and do something with the results when they become available.

So let’s look at an example of an asynchronous function that everyone has seen and many have written themselves.

$("#book").fadeIn("slow",    function() {    console.log(“hurray”);   }); 

This piece of JavaScript code takes a book element and fades it in. When fading is complete a callback function is called and the string “hurray” appears in the console. All is well and good in this trivial case, but once your system grows you can find yourself writing more and more of these nested callbacks.

Callbacks are a common way of dealing with asynchronous or delayed actions. They are not the best option though; the problem with callbacks is that they tend to chain forever, callbacks for callbacks for callbacks, until you find yourself in a complete mess and every change in the code becomes extremely painful and slow.

Maybe there are other ways to organize asynchronous code? In fact, there are: all you need to do is just tweak the perspective a bit. Imagine if you had a type to represent the result of an async computation. It would be awesome, and your code would pass it around like every other value and be flat, fluid and readable.

Well, why don’t we build it!

When we’re done, we’ll have a monadic type Promise written in Java 8 that will make our asynchronous code wonderful. It’s not like it wasn’t ever done before, but I want to lead you through the process and help you understand what’s happening and why. If you are lazy or just prefer starting from code, check out the github repo.

Getting to love monads in 9.5 minutes

Oh, monads! Every programmer worth their morning coffee has written about them. Monads are what functional programming adepts love, use and praise. And there are thousands of tutorials and posts describing the concept.

So if you know everything there is to know about monads and want to get a closer look at more interesting things, scroll down to the code below. Otherwise, bear with me just ten minutes, maybe this will become your go-to explanation about what a monad is.

A monad is a type that represents a context of computation. I bet you’ve heard that before, but have you thought about what it means?

First of all, a monad doesn’t specify what is happening, that’s the responsibility of the computation within the context. A monad says what surrounds the computation that is happening.

Now, if you want an image reference to help you out, you can think of a monad as a bubble. Some people prefer a box, but a box is something concrete so a bubble works better for me.
A lovely bubble with a cute dragon-ish creature inside
These monad-bubbles have two properties:

  • a bubble can surround something
  • a bubble can receive instructions about what it should do with the thing it surrounds

The surrounding part is easy to model in a programming language. Just take something and return a bubble! A constructor or a factory method comes to mind immediately here. Let’s look at how it is formalized. I’m assuming that you have some knowledge of Haskell notation (which you probably should have anyway). So the function that takes something and returns a monad is usually called pure or return:

return :: a -> m a 

Or in Java, assuming we already have some Monad class.

public class Monad<T> {
  public Monad(T t) {
    …
  }
}

See, that was easy. In fact, we’re halfway there. Another thing we must add is the ability to receive instructions for working with this value T eaten by our bubble.

What will help us is a bind function, which takes some form of an action and returns a different monad bubble that wraps this action executed on whatever was previously in the bubble.

For the sake of completeness, here is how it looks in Haskell.

(>>=)  :: m a -> (a -> m b) -> m b 

So this bind function takes a monad over a (that is, m a) and a function from a to m b, and returns a different monad. In Java, we’ll have this definition as follows.

public abstract class Monad<T> {
  public abstract <V> Monad<V> bind(Function<T, Monad<V>> f);
}

That will complete our generic definition of monads so we can proceed with an implementation.

Wait, what? I can have my monads in Java?

First of all, there are many different types of monads. In that sense, a monad is more like an interface in Java terms. There is a List monad, a Maybe monad, an IO monad (for languages that are very pure and cannot allow themselves to have normal IO), etc.

We will focus on creating a specific monad in Java, more specifically in Java 8. There is a good reason why we chose Java 8: we found out above that a monad has to manipulate functions, which is really not that enjoyable in pre-lambda versions of Java. Java 8, however, introduces lambdas and first-class methods, so it will be much more pleasant to work with them.

Your homemade Promise implementation

Here we go, now we’ve established our goal to have a monadic type to represent async computations. We’ve got our tools, namely Java 8, and we are ready to hack.

What we want to have is a Promise class that represents a result of asynchronous computation, either successful or erroneous.

Let’s pretend that we already have some Promise class that accepts callbacks to execute when the main computation is finished. Luckily, we don’t have to pretend very much, there are many implementations of that available: Akka’s Future, Play’s Promise and so forth.

For this post I’m using the one from Play Framework, in which case instances of Promise get redeemed when some thread calls the invoke() or invokeWithException() methods. It also accepts callbacks in the form of Play’s Promise-specific Action class arguments. Obviously, Promise has constructors already, but not only do I want to create new instances of Promise, I also want to mark them completed with a value immediately. Here is how I can do it.

public static <V> Promise<V> pure(final V v) {
  Promise<V> p = new Promise<>();
  p.invoke(v);
  return p;
}

The returned Promise is already redeemed and is ready to provide us with a result of the computation, which is precisely the given value.

The bind implementation will look like something below. It takes a function and adds that as a callback to this instance. A callback will get a result of this computation and apply given function to it. Whatever that function application returns or throws is used to redeem the resulting Promise.

public <R> Promise<R> bind(final Function<V, Promise<R>> function) {
  Promise<R> result = new Promise<>();
  this.onRedeem(callback -> {
    try {
      V v = callback.get();
      Promise<R> applicationResult = function.apply(v);
      applicationResult.onRedeem(applicationCallback -> {
        try {
          R r = applicationCallback.get();
          result.invoke(r);
        } catch (Throwable e) {
          result.invokeWithException(e);
        }
      });
    } catch (Throwable e) {
      result.invokeWithException(e);
    }
  });
  return result;
}

Both applying the given function and getting a result from this instance are wrapped in try-catch blocks, so exceptions are propagated to the resulting instance of Promise, just as one might expect.

With these two constructs, it’s very easy to chain asynchronous computations while avoiding going deeper and deeper into the callback hell. In the following synthetic example, we do exactly that.

public static void example1() throws ExecutionException, InterruptedException {
  Promise<String> promise = Async.submit(() -> {
    String helloWorld = "hello world";
    long n = 500;
    System.out.println("Sleeping " + n + " ms example1");
    Thread.sleep(n);
    return helloWorld;
  });
  Promise<Integer> promise2 = promise.bind(string ->
      Promise.pure(Integer.valueOf(string.hashCode())));
  System.out.println("Main thread example2");
  int hashCode = promise2.get();
  System.out.println("HashCode = " + hashCode);
}

That is basically it. We’ve implemented a monadic type Promise to represent a result of an async action.

Production-ready completable future

For those of you who have borne with me this far, I just want to say some final words about the quality of this implementation. Naturally, the above-mentioned GitHub repository has some tests proving that, in some contexts, this might all work. However, I wouldn’t recommend using those Promises in production.

One reason for that is that Java 8 already contains a class that represents a result of an async computation and is monadic… welcome, CompletableFuture!

It does exactly what we want it to do and features several methods that allow you to bind a function to the result of an existing computation. Moreover, it provides methods to apply a function or a consumer, which is a void function by the way, or a plain old Runnable.

On top of that, methods whose names end in *Async will execute the function asynchronously on the common ForkJoinPool. Alternatively, you can supply an executor of your own liking.
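
To see the parallel with our homemade Promise, here is a minimal hedged sketch (the values are arbitrary; thenCompose is the closest analogue of our bind, while thenApply is a plain map over the result):

import java.util.concurrent.CompletableFuture;

public class CompletableFutureExample {
  public static void main(String[] args) {
    // runs the supplier on the common ForkJoinPool
    CompletableFuture<String> promise =
        CompletableFuture.supplyAsync(() -> "hello world");

    // bind: the function itself returns a future
    CompletableFuture<Integer> hash =
        promise.thenCompose(s -> CompletableFuture.completedFuture(s.hashCode()));

    // consume the result; join() blocks so the JVM doesn't exit early
    hash.thenAccept(h -> System.out.println("HashCode = " + h)).join();
  }
}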

Conclusion

Hopefully, this post shed some light on what a monad is, and next time you are about to write a callback, you might want to consider a different approach.

In the post above we’ve looked at what monads are and how one can implement monadic classes in Java 8. Monads are a great help in organizing data flow through your code, and we’ve shown it with the example of a Promise monad that represents the result of an asynchronous computation. All the code from this blogpost is available for pondering in the GitHub repo.

Stay tuned for my next post, in which I plan to cover how to use the javaflow library to implement asynchronous awaiting for the promise to return a result. So you can get even more reactive :-)


Want to learn more about what rocks in Java 8? Check out Java 8 Revealed: Lambdas, Default methods and Bulk Data Operations by Anton Arhipov


http://www.eecs.berkeley.edu/~rcs/research/interactive_latency.html
 
from:http://mmcgrana.github.io/2010/07/threaded-vs-evented-servers.html

Threaded vs Evented Servers

July 24 2010

Broadly speaking, there are two ways to handle concurrent requests to a server. Threaded servers use multiple concurrently-executing threads that each handle one client request, while evented servers run a single event loop that handles events for all connected clients.

To choose between the threaded and evented approaches you need to consider the load profile of the server. This post describes a simple mathematical model for reasoning about these load profiles and their implications for server design.

Suppose that requests to a server take c CPU milliseconds and w wall clock milliseconds to execute. The CPU time is spent actively computing on behalf of the request, while the wall clock time is the total time including that time spent waiting for calls to external resources. For example, a web application request might take 5 ms of CPU time c and 95 ms waiting for a database call for a total wall time w of 100 ms. Let’s also say that a threaded version of the server can maintain up to t threads before performance degrades because of scheduling and context-switching overhead. Finally, we’ll assume single-core servers.

If a server is CPU bound then it will be able to respond to at most

(/ 1000 c) 

requests per second. For example, if each requests takes 2 ms of CPU time then the CPU can only handle

(/ 1000 2) => 500 

requests per second.

If the server is thread bound then it can handle at most

(* t (/ 1000 w)) 

requests per second. This expression is similar to the one for CPU time, but here we multiply the result by t to account for the t concurrent threads.

The throughput of a threaded server is the minimum of the CPU and thread bounds since it is subject to both constraints. An evented server is not subject to the thread constraint since it only uses one thread; its throughput is given by the CPU bound. We can express this as follows:

(defn max-request-rate [t c w]
  (let [cpu-bound    (/ 1000 c)
        thread-bound (* t (/ 1000 w))]
    {:threaded (min cpu-bound thread-bound)
     :evented  cpu-bound}))

Now we’ll consider some different types of servers and see how they might perform with threaded and evented implementations.

For the examples below I’ll use a t value of 25. This is a modest number of threads that most threading implementations can handle.

Let’s start with a classic example: an HTTP proxy server. These servers require very little CPU time, so say c is 0.1 ms. Suppose that the downstream servers can receive the relay within milliseconds for a wall time w of, say, 10 ms. Then we have

(max-request-rate 25 0.1 10) => {:threaded 2500, :evented 10000} 

In this case we expect a threaded server to be able to handle 2500 requests per second and an evented server 10000 requests per second. The higher performance of the evented server implies that the thread bound is limiting for the threaded server.

Another familiar example is the web application server. Let’s first consider the case where we have a lightweight app that does not access any external resources. In this case the request parsing and response generation might take a few milliseconds; say c is 2 ms. Since no blocking calls are made this is the value of w as well. Then

(max-request-rate 25 2 2) => {:threaded 500, :evented 500} 

Here the threaded server performs as well as the evented server because the workload is CPU bound.

Suppose we have a more heavyweight app that is making calls to external resources like the filesystem and database. In this case the amount of CPU time will be somewhat larger than in the previous case but still modest; say c is 5 ms. But now that we are waiting on external resources we should expect a w value of, say, 100 ms. Then we have

(max-request-rate 25 5 100) => {:threaded 200, :evented 200} 

Even though we are making a lot of blocking calls, the workload is still CPU bound and the threaded and evented servers will therefore perform comparably.

Suppose now that we are implementing a background service such as an RSS feed fetcher that makes high-latency requests to external services and then performs minimal processing of the results. In this case c may be quite low, say 2 ms, but w will be high, say 250 ms. Then

(max-request-rate 25 2 250) => {:threaded 100, :evented 500} 

Here an evented server will perform better. The CPU load is sufficiently low and the external resource latency sufficiently high that the blocking external calls limit the threaded implementation.

Finally, consider the case of long polling clients. Here clients establish a connection to the server and the server responds only when it has a message it wants to send to the client. Suppose that we have a lightweight app such that c is 1 ms, but that response messages are sent to the client after 10 seconds such that the w value is 10000 ms. Then

(max-request-rate 25 1 10000) => {:threaded 2.5, :evented 1000} 

If the server were really limited to 25 threads and each client required its own thread, we could only allow 2.5 new connections per second if we wanted to avoid exceeding the thread allocation. An evented server on the other hand could saturate the CPU by accepting 1000 requests per second.

Even if we increase the maximum number of threads t by an order of magnitude to 250, the evented approach still fares better:

(max-request-rate 250 1 10000) => {:threaded 25, :evented 1000} 

Indeed, a threaded server would need to maintain 10000 threads in order to be able to accept requests at the rate of the evented server.
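
One way to see where that 10000 comes from: each thread is on the CPU for only c out of every w milliseconds, so it takes w/c threads to keep a single core busy. A small helper in the same notation (my addition, not part of the original model):

(defn threads-to-saturate-cpu [c w]
  ;; each thread contributes c ms of CPU per w ms of wall time
  (/ w c))

(threads-to-saturate-cpu 1 10000) => 10000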

Now that we have seen some specific examples of the model we should step back and note the patterns. In general, an evented architecture becomes more favorable as the ratio of wall time w to CPU time c increases, i.e. as proportionally more time is spent waiting on external resources. Also, the viability of a threaded architecture depends on the strength of the underlying threading implementation; the higher the thread threshold t, the more wait time can be tolerated before eventing becomes necessary.

In addition to the quantitative performance implications captured by this model, there are several qualitative factors that influence the suitability of threaded and evented architectures for particular servers.

One factor is the fit of the server architecture to the work that the server is doing internally. For example, proxying is well suited to evented architectures because the work being done is fundamentally evented: upon receiving an input chunk from the client the chunk is relayed to a downstream server. In contrast, the business logic implemented by web applications is more naturally described in a synchronous style. The callbacks required by an evented architecture become unwieldy in complex application code.

Another consideration is memory coordination and consistency. Evented servers executing in a single event loop do not need to worry about the correctness and performance implications of maintaining consistent shared memory, but this may be a problem for threaded servers. Threaded servers therefore attempt to minimize memory shared among threads. This approach works well for the servers that we discussed above - proxies, web applications, background workers, and long poll endpoints - as none of them need to share state internally across client sessions. But fundamentally stateful servers like caches and databases cannot avoid this problem.

The threaded approach can be a non-starter if the underlying platform does not support proper threading. In these cases blocking calls to external resources prevent the process from using the CPU in other threads, even if the blocker is not itself using the CPU. C Ruby falls into this category. In these cases t is effectively 1, making evented architectures relatively more appealing.

In the other extreme, the assumption of t being 25 or even 250 may be too modest for some platforms. These low t values are an artifact of threading implementations and not intrinsic to the threading model itself. More scalable threading implementations make threaded servers viable for higher w to c ratios.

An evented approach can be compromised by a lack of evented libraries for the platform. For evented servers to perform optimally, all external resources must be accessed through nonblocking libraries. Such libraries are not always available, especially on platforms that have typically used threaded/blocking models like the JVM and C Ruby. Fortunately this situation is improving as developers publish more nonblocking libraries in response to the demand from implementors of evented servers. Indeed, the requirement of pervasive evented libraries for optimal performance is one reason that node.js is so compelling for building evented servers.

 

Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle’s code is protocol agnostic, simplifying the implementation of new protocols.

Finagle uses a clean, simple, and safe concurrent programming model, based on Futures. This leads to safe and modular programs that are also simple to reason about.

Finagle clients and servers expose statistics for monitoring and diagnostics. They are also traceable through a mechanism similar to Dapper’s (another Twitter open source project, Zipkin, provides trace aggregation and visualization).

The quickstart has an overview of the most important concepts, walking you through the setup of a simple HTTP server and client.

A section on Futures follows, motivating and explaining the important ideas behind the concurrent programming model used in Finagle. The next section documents Services & Filters, which are the core abstractions used to represent clients and servers and modify their behavior.
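
To give a flavor of the Service abstraction, here is a minimal HTTP server sketch in the spirit of the quickstart (a hedged example, assuming the com.twitter.finagle.http package layout; the port and response body are arbitrary):

import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response, Status}
import com.twitter.util.{Await, Future}

object Server extends App {
  // a Service is just an asynchronous function: Request => Future[Response]
  val service = new Service[Request, Response] {
    def apply(req: Request): Future[Response] = {
      val rep = Response(req.version, Status.Ok)
      rep.contentString = "hello from finagle\n"
      Future.value(rep)
    }
  }

  // bind the service to a port and block until the server exits
  val server = Http.serve(":8080", service)
  Await.ready(server)
}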

Other useful resources include:

 
 
Dubbo is the core framework of Alibaba's internal SOA service governance solution; it supports 3,000,000,000+ requests per day across 2,000+ services and is widely used on the member sites of the Alibaba Group. Since being open-sourced in 2011, Dubbo has been adopted by many companies outside Alibaba.

Project homepage: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm

To give everyone a deeper understanding of the framework, in this issue we interviewed Liang Fei (梁飞), one of the main developers on the Dubbo team.

ITeye is committed to providing a free promotion platform for excellent domestic open source projects. If you and your team would like to introduce your open source project to more developers, or if there are open source projects you would like us to interview, please let us know: send a private message to the ITeye administrators or email webmaster@iteye.com.

First, please introduce yourself!

My name is Liang Fei (梁飞), alias Xuji (虚极). I was previously responsible for the Dubbo service framework and have since moved to Tmall.

My blog: http://javatar.iteye.com

What is Dubbo? What can it do?

Dubbo is a distributed service framework and an SOA governance solution. Its main features include high-performance NIO communication with multi-protocol integration, dynamic service addressing and routing, soft load balancing and fault tolerance, and dependency analysis and service degradation.

See: http://alibaba.github.io/dubbo-doc-static/Home-zh.htm
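
For a flavor of what wiring a service looks like, here is a minimal provider-side sketch in Dubbo's Spring XML style (the interface, implementation class and registry address are hypothetical, and the XML namespace declarations are omitted for brevity):

<!-- identify the application and point it at a shared registry -->
<dubbo:application name="demo-provider" />
<dubbo:registry address="zookeeper://127.0.0.1:2181" />

<!-- expose one service over the dubbo protocol on port 20880 -->
<dubbo:protocol name="dubbo" port="20880" />
<dubbo:service interface="com.example.DemoService" ref="demoService" />
<bean id="demoService" class="com.example.DemoServiceImpl" />

Consumers then look the service up through the registry instead of hard-coding provider addresses.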

Which scenarios is Dubbo suited for?

As a website grows, it inevitably has to be split into services, to improve development efficiency, tune performance, and conserve critical shared resources.

As the number of services keeps increasing, the amount of service URL information explodes, configuration management becomes very difficult, and the single-point pressure on F5 hardware load balancers keeps climbing.

As things develop further, the dependencies between services become tangled and complex; it can even be unclear which application must start before which, and architects can no longer fully describe the application architecture.

Next, as call volumes keep rising, service capacity problems surface: how many machines does this service need? When should machines be added? And so on…

Dubbo can be used to solve all of these problems.

See: Dubbo's background and requirements

What is Dubbo's design approach?

The framework is highly extensible, built on a microkernel + plugin architecture, and it is fully documented, which makes secondary development convenient and the framework extremely adaptable.

See: Developer Guide - Framework Design

What are Dubbo's requirements and dependencies?

Dubbo runs on JDK 1.5 and above. By default it depends on packages such as javassist, netty, and spring, but none of these are mandatory; with the right configuration Dubbo can run without any third-party libraries.

See: User Guide - Dependencies

How does Dubbo perform?

Dubbo reduces handshakes by using long-lived connections, handles messages concurrently over a single connection using NIO and thread pools, and compresses data into binary streams, which makes it faster than short-connection protocols such as plain HTTP. Inside Alibaba it supports more than 2,000 services and over 3 billion requests per day, with the largest single machine handling close to 100 million requests per day.

See: the Dubbo performance test report

Compared with Taobao's HSF, what makes Dubbo distinctive?

1. Dubbo is more lightweight to deploy than HSF. HSF requires designated containers such as JBoss, plus sar package extensions installed into the container, which is intrusive to the user's runtime environment; if you want to run on other containers such as WebLogic or WebSphere, you have to extend the container yourself so that its ClassLoader is compatible with HSF. Dubbo has no such requirements and can run in any Java environment.

2. Dubbo is more extensible than HSF and convenient for secondary development. No framework can cover every requirement, and Dubbo has always held to the principle of treating third parties as equals: every feature can be extended from the outside without modifying Dubbo's core code, and Dubbo's own built-in features are implemented through the same extension mechanism as third-party ones. With HSF, adding a feature or replacing part of the implementation is difficult. For example, Alipay and Taobao run different branches of HSF, because core code was modified to add features and a separate branch had to be maintained; even if HSF were open-sourced at this stage, it would be hard to reuse without rewriting the architecture.

3. HSF depends on many internal systems, such as the configuration center, the notification center, the monitoring center, and single sign-on, so open-sourcing it would require a lot of untangling. Dubbo, by contrast, leaves an extension point for integrating each of these systems, has cleaned up all of its dependencies, and provides alternatives that the open source community can use directly.

4. Dubbo offers more functionality than HSF; apart from ClassLoader isolation, Dubbo is essentially a superset of HSF. Dubbo also supports more protocols and integrates with more registries, to suit a wider range of website architectures.

How does Dubbo address security?

Dubbo is aimed mainly at internal services. For externally facing services, Alibaba has an open platform to handle security and flow control, so Dubbo itself implements relatively little in the way of security; essentially it guards against honest mistakes rather than determined attackers, and mainly prevents accidental calls.

Dubbo uses tokens to stop consumers from bypassing the registry and connecting directly, with authorization managed through the registry. Dubbo also provides service blacklists and whitelists to control which callers a service accepts.

See: Dubbo token validation

How is Dubbo used inside and outside Alibaba?

Inside Alibaba, all of the subsidiaries other than the Taobao family use Dubbo, including the Chinese main site, the international main site, AliExpress, Alibaba Cloud, Ali Finance, Ali Academy, Liangwuxian (良无限), Laiwang (来往), and others.

Since being open-sourced, it has been widely used by companies including Qunar, JD.com, Geely Auto, Founder Securities, Haier, Focus Technology, Zhongrun Sifang, Huaxin Cement, and Hikvision, with new companies joining all the time; community discussion and contribution are active, and the feedback from users has been very positive.

See: Dubbo's known users

What are Dubbo's plans regarding distributed transactions and multi-language support?

Distributed transactions will probably not be supported for now: if all we supported were simple XA/JTA two-phase-commit transactions, the practical value would be limited. Users can implement business-compensation events, or more complex distributed transactions, themselves; Dubbo has many extension points where such things can be integrated.

On the multi-language front, Dubbo has a C++ implementation, but it is used only in a very narrow slice of internal systems and has not been strongly validated; C++ development resources are also tight, so there is no capacity to prepare a C++ open source release.

Which open source license does Dubbo use? What should commercial users keep in mind?

Dubbo uses the Apache License 2.0, a business-friendly license: you may use it for free in closed-source commercial software.

You may also modify it and redistribute it; the only requirements are to retain Alibaba's copyright and to keep the original license notice when you redistribute.

See: Dubbo's open source license

What does the Dubbo development team look like?

Six developers in total take part in Dubbo's development and testing. Every one of them is highly experienced, the team cooperates smoothly, development proceeds at a steady rhythm, and there is a solid quality assurance process. The team consists of:

  • 梁飞 (developer / product management)
  • 刘昊旻 (developer / process management)
  • 刘超 (developer / user support)
  • 李鼎 (developer / user support)
  • 陈雷 (developer / quality assurance)
  • 闾刚 (developer / open source operations)
 
From left to right: 刘超, 梁飞, 闾刚, 陈雷, 刘昊旻, 李鼎

See: Dubbo team members

How can other developers get involved? What kind of work can they do?

Developers can fork the project on GitHub and push their changes over; after we review and test them, they will be merged into the trunk.

GitHub: https://github.com/alibaba/dubbo

Developers can claim small bug fixes on JIRA, or pick up larger feature modules from the developer guide page.

JIRA: http://code.alibabatech.com/jira/browse/DUBBO (temporarily unavailable)

Developer guide: http://alibaba.github.io/dubbo-doc-static/Developer+Guide-zh.htm

What are Dubbo's future development plans?

Dubbo's RPC framework is already largely stable, so the future focus will be on service governance, including architecture analysis, monitoring and statistics, degradation control, process collaboration, and so on.

See: http://alibaba.github.io/dubbo-doc-static/Roadmap-zh.htm