Common application scenarios for Paxos
In the field of distributed algorithms there is an extremely important algorithm called Paxos. How important is it? Google's Chubby paper [1] puts it this way:
all working protocols for asynchronous consensus we have so far encountered have Paxos at their core.
Wikipedia describes the algorithm in more detail: the Chinese article explains the rules for choosing a value [2], while the English article walks through Paxos's three-phase commit flow [3]. The Chinese version was written independently rather than translated from the English one, so the two complement each other nicely. Paxos was invented by Leslie Lamport, who wrote in Paxos Made Simple [4]:
The Paxos algorithm, when presented in plain English, is very simple.
If you have studied Paxos for a long time and still feel a bit confused, that sentence can be discouraging. It is generally acknowledged that the algorithm is intricate, and once you try to pin down every detail with a programmer's rigor, your head fills up with question marks. Lamport himself spent some nine years polishing the theory behind the algorithm.
In practice, ordinary developers do not need to understand every detail of Paxos or how to implement it; it is enough to know that Paxos is a distributed election algorithm. This article surveys the common application scenarios of Paxos, so that one day, when your system grows large enough, you will know there is a technique that can solve some hard architectural problems correctly and elegantly.
1. Database replication, log replication, and the like. For example, BDB's data replication uses a Paxos-compatible algorithm. Paxos's biggest use is keeping data consistent across multiple nodes.
2. Naming service. A large system typically has many internal services calling one another through interfaces.
1) The usual implementation hard-codes each service's ip/hostname in configuration; when a service fails, someone edits the config file or repoints DNS by hand. The drawback is poor maintainability: the more internal units there are, the higher the failure rate.
2) LVS-style dual-machine redundancy, whose drawback is that every unit needs double the resources.
If instead all naming services are managed through a Paxos-based algorithm, clients are guaranteed to be handed an available service with high availability. ZooKeeper goes further with its watch feature: when a watched object changes, a notification is sent automatically, so all clients see a consistent, highly available interface.
3. Configuration management.
1) The usual approach of editing config files by hand is error-prone and needs manual intervention to take effect, so the nodes cannot reach a consistent state at the same time.
2) Large deployments build their own configuration service, for example a centralized config store behind an HTTP web service. Its drawback is that clients do not learn of an update immediately, and the order in which nodes load the new config cannot be guaranteed, so the system runs with configuration in inconsistent states.
4. Membership, user roles, and access control lists. For example, once a user's permission is changed, say demoted from administrator to an ordinary account, the change should take effect immediately on every server and every remote CDN node; otherwise the consequences are unacceptable.
5. ID allocation. The usual simple solutions are database auto-increment IDs, which make database sharding difficult, or program-generated GUIDs, which are usually too long. A more elegant approach is to use Paxos to elect one of several replicas as master and have the master hand out IDs; when the master fails, Paxos elects a new one.
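As a hedged aside, not something the original text describes: a ZooKeeper-style coordination service can hand out unique, ordered IDs even without an explicit master, via sequential znodes. A minimal Java sketch; it assumes an already-connected ZooKeeper handle and a pre-created parent znode /ids:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class IdAllocator {
    private final ZooKeeper zk;

    public IdAllocator(ZooKeeper zk) {
        this.zk = zk;
    }

    public long nextId() throws Exception {
        // PERSISTENT_SEQUENTIAL appends a monotonically increasing,
        // cluster-agreed 10-digit counter to the node name.
        // Assumes the parent znode /ids already exists.
        String path = zk.create("/ids/id-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        return Long.parseLong(path.substring(path.lastIndexOf('-') + 1));
    }
}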
These are some common application scenarios for Paxos. For situations like the above, other solutions either cannot provide automatic high availability or are nowhere near as simple and elegant as a Paxos-based one.
ZooKeeper [5], open-sourced by Yahoo!, is a Paxos-like implementation. Its programming interface looks like a small distributed file system with strong consistency guarantees, and it fits all of the scenarios above. Strictly speaking, though, ZooKeeper does not follow the Paxos protocol; it is built on a two-phase-commit protocol of its own design and optimization, so its theory [6] has not been fully proven. But since ZooKeeper has already been used successfully inside Yahoo! in systems such as HBase, Yahoo! Message Broker, and the Fetch Service of the Yahoo! crawler, it can be adopted with confidence.
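To make the naming and configuration scenarios concrete, here is a minimal sketch against the ZooKeeper Java client API; the connection string localhost:2181 and the znode path /app/config are illustrative assumptions, not part of the original article:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Reads a config znode and re-registers a watch each time,
// so every change triggers a fresh notification.
public class ConfigWatcher implements Watcher {
    private final ZooKeeper zk;

    public ConfigWatcher() throws Exception {
        // 3000 ms session timeout; 'this' also receives connection events
        zk = new ZooKeeper("localhost:2181", 3000, this);
    }

    public byte[] readConfig() throws Exception {
        // watch=true re-registers this watcher; it fires once on the next change
        return zk.getData("/app/config", true, null);
    }

    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                System.out.println("config changed: " + new String(readConfig()));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

The same watch pattern covers the naming-service case: clients watch the znode holding a service's address and fail over as soon as it changes.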
To close, here are some implementation lessons picked from Paxos Made Live [7]:
* There are significant gaps between the description of the Paxos algorithm and the needs of a real-world system. In order to build a real-world system, an expert needs to use numerous ideas scattered in the literature and make several relatively small protocol extensions. The cumulative effort will be substantial and the final system will be based on an unproven protocol.
* Because Chubby had to fill in details the Paxos papers never mention, the final implementation is not a system whose theory has been fully verified.
* The fault-tolerance computing community has not developed the tools to make it easy to implement their algorithms.
* The distributed fault-tolerance field lacks tooling to help implement its algorithms. Compare compilers: that field is just as complex, yet tools such as yacc and ANTLR have driven its difficulty down to a minimum.
* The fault-tolerance computing community has not paid enough attention to testing, a key ingredient for building fault-tolerant systems.
* The distributed fault-tolerance field lacks testing methods.
One piece of background worth adding: proving a distributed fault-tolerant algorithm correct is usually even harder than implementing it. Google cannot prove Chubby is reliable, and Yahoo! does not dare guarantee the theoretical correctness of ZooKeeper. Most such systems run in production for a long time before anyone cautiously declares that no major problems have been found so far.
iBATIS/DBCP database connection problems (Part 1)
(2007-12-20 22:43:33)
I am lazy, so rather than write this up myself I will simply quote two blog posts I found:
Recently the site showed the following symptom: under heavy concurrency, Tomcat or JBoss service threads would hang, and the server would readily report java.net.SocketException: Broken pipe on its database connections. At first glance it looked as if the DB side could not keep up, or the DB was forcibly closing connections once wait_timeout expired. With those two suspicions I gathered material from the web; one claim was that connections dropped by the DB after wait_timeout remain idle in the pool, and when the application later takes one of them and performs a DB operation, the above error occurs.
After reading the DBCP connection pool source and running some tests, however, I found that this explanation is not correct.
1. First, the Broken pipe error is not caused by connection timeouts. It mostly occurs on Linux, under high concurrency when network resources run short: a SIGPIPE signal is delivered, and by default a Linux process exits on it. I have not yet found a proper fix. One suggestion is to set _JAVA_SR_SIGNUM = 12 in the Linux environment, which supposedly solves it, but in my tests it did not. I am still keeping an eye on this problem.
2. With the Broken pipe problem not fully solved, DBCP must at least forcibly reclaim abandoned connections; without such reclamation the pool would eventually end up exhausted (pool exhausted), so this protection is essential. The configuration is as follows:
# whether to log an error when an abandoned connection is reclaimed
dbcp.logAbandoned=true
# whether to automatically reclaim abandoned (timed-out) connections
dbcp.removeAbandoned=true
# abandon timeout, in seconds
dbcp.removeAbandonedTimeout=150
3. Regarding the DB's wait_timeout setting for idle connections: the DB forcibly closes any connection idle beyond that value. My tests show that even after the DB has closed an idle connection, DBCP simply fails to activate it when handing it out, discards it automatically, and takes another idle connection from the pool or creates a new one. Judging from the source code, this activation logic is automatic and requires no configuration; it is DBCP's default behavior. So the widespread claim that a pool timeout mismatched with the DB leads to Broken pipe is wrong, at least for DBCP; perhaps C3P0 behaves that way.
Still, from a pool-tuning standpoint it is not good that connections idling in the pool get forcibly closed by the DB. The following settings, in combination, address this:
Configuration:
# default false: validate connections while idle and drop any that fail,
# provided the idle-object evictor is running
dbcp.testWhileIdle = true
# default -1: interval in milliseconds between idle-object evictor runs;
# a negative value disables the evictor.
# For eviction to happen, keep this below minEvictableIdleTimeMillis.
dbcp.timeBetweenEvictionRunsMillis = 300000
# default 1000*60*30: minimum time in milliseconds a connection may sit idle
# in the pool before the evictor may reclaim it.
# For eviction to be useful, keep this below the DB's wait_timeout.
dbcp.minEvictableIdleTimeMillis = 320000
4. Finally there is DBCP's maxWait parameter, which should not be set too high. When the pool is exhausted, a requesting thread blocks for up to maxWait hoping a connection will free up. Genuine pool exhaustion is rare, and when it happens the cause is usually excessive concurrency; suspending more threads at that point also ties up the server's own worker threads, and once those are all hung, not only DB-bound requests but even plain page requests stop being served. That outcome is even worse.
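For reference, the same knobs can be set programmatically. A minimal sketch, assuming commons-dbcp 1.x's BasicDataSource; the driver, URL, and credentials are placeholders:

import org.apache.commons.dbcp.BasicDataSource;

public class PoolFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@localhost:1521:ORCL");
        ds.setUsername("user");
        ds.setPassword("secret");
        // reclaim connections the application forgot to close (point 2)
        ds.setRemoveAbandoned(true);
        ds.setRemoveAbandonedTimeout(150);      // seconds
        ds.setLogAbandoned(true);
        // evict idle connections before the DB's wait_timeout kills them (point 3)
        ds.setTestWhileIdle(true);
        ds.setTimeBetweenEvictionRunsMillis(300000L);
        ds.setMinEvictableIdleTimeMillis(320000L);
        // fail reasonably fast instead of piling up blocked threads (point 4)
        ds.setMaxWait(10000L);                  // milliseconds
        return ds;
    }
}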
With the settings above, DBCP connection leaks should no longer occur (leaks in application code aside); the Broken pipe problem on Linux under heavy concurrency still wants a thorough fix, though. As for why Tomcat or JBoss service threads hang under heavy concurrency, the root cause is still not pinned down; with DBCP's influence eliminated, my guess is the problem now lies in the mod_jk-to-Tomcat connection, and the ultimate cause may still be the broken pipe. Investigating...
2. Using DBCP with iBATIS
Step 1: create the table (I used Oracle 9.2.0.1):
prompt PL/SQL Developer import file
prompt Created on 2007年5月24日 by Administrator
set feedback off
set define off
prompt Dropping T_ACCOUNT...
drop table T_ACCOUNT cascade constraints;
prompt Creating T_ACCOUNT...
create table T_ACCOUNT
(
ID NUMBER not null,
FIRSTNAME VARCHAR2(2),
LASTNAME VARCHAR2(4),
EMAILADDRESS VARCHAR2(60)
)
;
alter table T_ACCOUNT
add constraint PK_T_ACCOUNT primary key (ID);
prompt Disabling triggers for T_ACCOUNT...
alter table T_ACCOUNT disable all triggers;
prompt Loading T_ACCOUNT...
insert into T_ACCOUNT (ID, FIRSTNAME, LASTNAME, EMAILADDRESS)
values (1, '王', '三旗', 'E_wsq@msn.com');
insert into T_ACCOUNT (ID, FIRSTNAME, LASTNAME, EMAILADDRESS)
values (2, '冷', '宫主', 'E_wsq@msn.com');
commit;
prompt 2 records loaded
prompt Enabling triggers for T_ACCOUNT...
alter table T_ACCOUNT enable all triggers;
set feedback on
set define on
prompt Done.
Step 2: add these jars to the project:
commons-dbcp-1.2.2.jar
commons-pool-1.3.jar
ibatis-common-2.jar
ibatis-dao-2.jar
ibatis-sqlmap-2.jar
Step 3: write the following properties file:
jdbc.properties
# connection settings
driverClassName=oracle.jdbc.driver.OracleDriver
url=jdbc:oracle:thin:@90.0.12.112:1521:ORCL
username=gzfee
password=1
# initial pool size
initialSize=10
# maximum idle connections
maxIdle=20
# minimum idle connections
minIdle=5
# maximum active connections
maxActive=50
# whether to log an error when an abandoned connection is reclaimed
logAbandoned=true
# whether to automatically reclaim abandoned connections
removeAbandoned=true
# abandon timeout, in seconds
removeAbandonedTimeout=180
# maximum wait for a free connection, in milliseconds (1000 ms = 1 second)
maxWait=1000
Step 4: put the properties file on the classpath.
Note: for a test driven by a main class this means the project's classes directory; for a web application it is WEB-INF/classes.
Step 5: write the file that wires iBATIS to DBCP:
DBCPSqlMapConfig.xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE sqlMapConfig
PUBLIC "-//ibatis.apache.org//DTD SQL Map Config 2.0//EN"
"http://ibatis.apache.org/dtd/sql-map-config-2.dtd">
<sqlMapConfig>
<properties resource ="jdbc.properties"/>
<transactionManager type ="JDBC">
<dataSource type ="DBCP">
<property name ="JDBC.Driver" value ="${driverClassName}"/>
<property name ="JDBC.ConnectionURL" value ="${url}" />
<property name ="JDBC.Username" value ="${username}" />
<property name ="JDBC.Password" value ="${password}" />
<property name ="Pool.MaximumWait" value ="30000" />
<property name ="Pool.ValidationQuery" value ="select sysdate from dual" />
<property name ="Pool.LogAbandoned" value ="true" />
<property name ="Pool.RemoveAbandonedTimeout" value ="1800000" />
<property name ="Pool.RemoveAbandoned" value ="true" />
</dataSource>
</transactionManager>
<sqlMap resource="com/mydomain/data/Account.xml"/> <!-- the table mapping file, written in step 6 -->
</sqlMapConfig>
Step 6: write the table mapping file:
Account.xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE sqlMap
PUBLIC "-//ibatis.apache.org//DTD SQL Map 2.0//EN"
"http://ibatis.apache.org/dtd/sql-map-2.dtd">
<sqlMap namespace="Account">
<!-- Use type aliases to avoid typing the full classname every time. -->
<typeAlias alias="Account" type="com.mydomain.domain.Account"/>
<!-- Result maps describe the mapping between the columns returned
from a query, and the class properties. A result map isn't
necessary if the columns (or aliases) match to the properties
exactly. -->
<resultMap id="AccountResult" class="Account">
<result property="id" column="id"/>
<result property="firstName" column="firstName"/>
<result property="lastName" column="lastName"/>
<result property="emailAddress" column="emailAddress"/>
</resultMap>
<!-- Select with no parameters using the result map for Account class. -->
<select id="selectAllAccounts" resultMap="AccountResult">
select * from T_ACCOUNT
</select>
<!-- A simpler select example without the result map. Note the
aliases to match the properties of the target result class. -->
<select id="selectAccountById" parameterClass="int" resultClass="Account">
select
id as id,
firstName as firstName,
lastName as lastName,
emailAddress as emailAddress
from T_ACCOUNT
where id = #id#
</select>
<!-- Insert example, using the Account parameter class -->
<insert id="insertAccount" parameterClass="Account">
insert into T_ACCOUNT (
id,
firstName,
lastName,
emailAddress
) values (
#id#, #firstName#, #lastName#, #emailAddress#
)
</insert>
<!-- Update example, using the Account parameter class -->
<update id="updateAccount" parameterClass="Account">
update T_ACCOUNT set
firstName = #firstName#,
lastName = #lastName#,
emailAddress = #emailAddress#
where
id = #id#
</update>
<!-- Delete example, using an integer as the parameter class -->
<delete id="deleteAccountById" parameterClass="int">
delete from T_ACCOUNT where id = #id#
</delete>
</sqlMap>
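To smoke-test the whole setup, a throwaway main class along these lines can be used. This is a sketch assuming iBATIS 2.x; it loads the DBCPSqlMapConfig.xml from step 5 off the classpath and runs the selectAllAccounts statement:

package com.mydomain.data;

import com.ibatis.common.resources.Resources;
import com.ibatis.sqlmap.client.SqlMapClient;
import com.ibatis.sqlmap.client.SqlMapClientBuilder;
import java.io.Reader;
import java.util.List;

public class AccountTest {
    public static void main(String[] args) throws Exception {
        // build the client from the sqlMapConfig written in step 5
        Reader reader = Resources.getResourceAsReader("DBCPSqlMapConfig.xml");
        SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);
        reader.close();
        // run the mapped statement from Account.xml
        List accounts = sqlMap.queryForList("selectAllAccounts", null);
        System.out.println("rows returned: " + accounts.size());
    }
}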
Java Scripting and JRuby Examples
Author: Martin Kuba
The new JDK 6.0 has a new API for scripting languages, which seems to be a good idea. I decided this was a good opportunity to learn the Ruby language :-) But I could not find a simple example of it using Google, so here it is.
Download and install JDK 6.0; you need version 6, as the scripting support was added in that release. Then you need to put five files, the JRuby and scripting jars, in your classpath.
(If you don't know how to do that, please read the Java tutorial first. Don't forget to include the current directory in the CLASSPATH. On Linux, you can do:
export CLASSPATH=.
for i in *.jar ; do CLASSPATH=$CLASSPATH:$i; done
).
Here is a simple Java program that evaluates a Ruby script defining some functions and then executes them:
package cz.cesnet.meta.jruby;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.ScriptContext;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
public class JRubyExample1 {
public static void main(String[] args) throws ScriptException, FileNotFoundException {
//list all available scripting engines
listScriptingEngines();
//get jruby engine
ScriptEngine jruby = new ScriptEngineManager().getEngineByName("jruby");
//process a ruby file
jruby.eval(new BufferedReader(new FileReader("myruby.rb")));
//call a method defined in the ruby source
jruby.put("number", 6);
jruby.put("title", "My Swing App");
long fact = (Long) jruby.eval("showFactInWindow($title,$number)");
System.out.println("fact: " + fact);
jruby.eval("$myglobalvar = fact($number)");
long myglob = (Long) jruby.getBindings(ScriptContext.ENGINE_SCOPE).get("myglobalvar");
System.out.println("myglob: " + myglob);
}
public static void listScriptingEngines() {
ScriptEngineManager mgr = new ScriptEngineManager();
for (ScriptEngineFactory factory : mgr.getEngineFactories()) {
System.out.println("ScriptEngineFactory Info");
System.out.printf("\tScript Engine: %s (%s)\n", factory.getEngineName(), factory.getEngineVersion());
System.out.printf("\tLanguage: %s (%s)\n", factory.getLanguageName(), factory.getLanguageVersion());
for (String name : factory.getNames()) {
System.out.printf("\tEngine Alias: %s\n", name);
}
}
}
}
And here is the Ruby code in file myruby.rb:
def fact(n)
if n==0
return 1
else
return n*fact(n-1)
end
end
class CloseListener
include java.awt.event.ActionListener
def actionPerformed(event)
puts "CloseListere.actionPerformed() called"
java.lang.System.exit(0)
end
end
def showFactInWindow(title,number)
f = fact(number)
frame = javax.swing.JFrame.new(title)
frame.setLayout(java.awt.FlowLayout.new())
button = javax.swing.JButton.new("Close")
button.addActionListener(CloseListener.new)
frame.contentPane.add(javax.swing.JLabel.new(number.to_s+"! = "+f.to_s))
frame.contentPane.add(button)
frame.defaultCloseOperation=javax.swing.WindowConstants::EXIT_ON_CLOSE
frame.pack()
frame.visible=true
return f
end
The Ruby script defines a function fact(n), which computes the factorial of a given number. Then it defines a (Ruby) class CloseListener, which implements the (Java) interface java.awt.event.ActionListener. And finally it defines a function showFactInWindow, which builds a GUI window displaying a label and a close button, registers the CloseListener class as the listener for the button action, and returns the value of n! :
Please note that Ruby and Java classes can be mixed together.
(To run the example, save the code above into the files cz/cesnet/meta/jruby/JRubyExample1.java and myruby.rb, then compile and run using
javac cz/cesnet/meta/jruby/JRubyExample1.java
java cz.cesnet.meta.jruby.JRubyExample1
)
You can pass any Java object to the script using the put("key", object) method of the ScriptEngine class; the key becomes a global variable in Ruby, so you can access it as $key. The numerical value returned by showFactInWindow is a Ruby Fixnum, which is converted into a java.lang.Long and returned by the eval() method.
Any additional global variable set in the Ruby script can be obtained in Java through getBindings(), as shown by reading the $myglobalvar Ruby global variable.
In JRuby 0.9.8 it was not possible to override or add methods of Java classes in Ruby and call them from Java; in JRuby 1.0 it is. If you have read the previous version of this page, please note that the syntax for extending Java interfaces changed in JRuby 1.0 to use include instead of <.
This is a Java interface MyJavaInterface.java:
package cz.cesnet.meta.jruby;
public interface MyJavaInterface {
String myMethod(Long num);
}
This is a Java class MyJavaClass.java:
package cz.cesnet.meta.jruby;
public class MyJavaClass implements MyJavaInterface {
public String myMethod(Long num) {
return "I am Java method, num="+num;
}
}
This is the Ruby code in example2.rb:
#example2.rb
class MyDerivedClass < Java::cz.cesnet.meta.jruby.MyJavaClass
def myMethod(num)
return "I am Ruby method, num="+num.to_s()
end
end
class MyImplClass
include Java::cz.cesnet.meta.jruby.MyJavaInterface
def myMethod(num)
return "I am Ruby method in interface impl, num="+num.to_s()
end
def mySecondMethod()
return "I am an additonal Ruby method"
end
end
This is the main code, JRubyExample2.java:
package cz.cesnet.meta.jruby;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.lang.reflect.Method;
public class JRubyExample2 {
public static void main(String[] args) throws ScriptException, FileNotFoundException {
//get jruby engine
ScriptEngine jruby = new ScriptEngineManager().getEngineByName("jruby");
//process a ruby file
jruby.eval(new BufferedReader(new FileReader("example2.rb")));
//get a Ruby class extended from Java class
MyJavaClass mjc = (MyJavaClass) jruby.eval("MyDerivedClass.new");
String s = mjc.myMethod(2l);
//WOW! the Ruby method is visible
System.out.println("s: " + s);
//get a Ruby class implementing a Java interface
MyJavaInterface mji = (MyJavaInterface) jruby.eval("MyImplClass.new");
String s2 = mji.myMethod(3l);
//WOW ! the Ruby method is visible
System.out.println("s2: " + s2);
//however the other methods are not visible :-(
for (Method m : mji.getClass().getMethods()) {
System.out.println("m.getName() = " + m.getName());
}
}
}
The output is
s: I am Ruby method, num=2
s2: I am Ruby method in interface impl, num=3
m.getName() = myMethod
m.getName() = hashCode
m.getName() = equals
m.getName() = toString
m.getName() = isProxyClass
m.getName() = getProxyClass
m.getName() = getInvocationHandler
m.getName() = newProxyInstance
m.getName() = getClass
m.getName() = wait
m.getName() = wait
m.getName() = wait
m.getName() = notify
m.getName() = notifyAll
So you see: a Ruby method overriding a method of a Java class, and a Ruby method implementing a method of a Java interface, are both visible from Java! Additional methods, however, are not visible.
A useful application of JRuby in Java is in web applications, when you need to give users the option to write complex user-defined conditions. For example, I needed to let users specify conditions under which other users may access a chat room, based on attributes of those users provided by the Shibboleth authentication system. So I implemented JRuby scripting in a Tomcat-deployed web application. I used Tomcat 6.0, but 5.5 should work the same way. Here are my findings.
First the easy part: just add the jar files needed for JRuby to the WEB-INF/lib/ directory, and it simply works. Great! Now your users can enter any Ruby script, and you can provide it with input data through global variables, execute it, and read its output value, all from a class called by a servlet. In the following example I needed a boolean output, so the usual Ruby rules for truth values are simulated:
public boolean evalRubyScript() {
ScriptEngine jruby = engineManager.getEngineByName("jruby");
jruby.put("attrs", getWhatEverInputDataYouNeedToProvide());
try {
Object retval = jruby.eval(script);
if (retval instanceof Boolean) return ((Boolean) retval);
return retval != null;
} catch (ScriptException e) {
throw new RuntimeException(e);
}
}
However, your users can type anything, and sooner or later somebody will type something harmful, like java.lang.System.exit(1) or File.readlines('/etc/passwd'). You have to limit what users can do. Fortunately, Java has a security framework; it is not enabled by default, but you can enable it by starting Tomcat with the -security option:
$CATALINA_BASE/bin/catalina.sh start -security
That runs Tomcat's JVM with the SecurityManager enabled. But alas, your web application will most likely stop working with security enabled, because your code, or the libraries you use, can no longer read system properties, read files, and so on. You have to grant these permissions. Edit the file $CATALINA_BASE/conf/catalina.policy and add the following code, replacing mywebapp with the name of your web application:
//first allow everything for trusted libraries, add you own
grant codeBase "jar:file:${catalina.base}/webapps/mywebapp/WEB-INF/lib/stripes.jar!/-" {
permission java.security.AllPermission;
};
grant codeBase "jar:file:${catalina.base}/webapps/mywebapp/WEB-INF/lib/log4j-1.2.13.jar!/-" {
permission java.security.AllPermission;
};
//JSP pages don't compile without this
grant codeBase "file:${catalina.base}/work/Catalina/localhost/mywebapp/" {
permission java.lang.RuntimePermission "defineClassInPackage.org.apache.jasper.runtime";
};
//if you need to read or write temporary file, use this
grant codeBase "file:${catalina.base}/webapps/mywebapp/WEB-INF/classes/-" {
permission java.io.FilePermission "${java.io.tmpdir}/file.ser", "read,write" ;
};
// and now, allow only the basic things, as this applies to all code in your webapp including JRuby
grant codeBase "file:${catalina.base}/webapps/mywebapp/-" {
permission java.util.PropertyPermission "*", "read";
permission java.lang.RuntimePermission "accessDeclaredMembers";
permission java.lang.RuntimePermission "createClassLoader";
permission java.lang.RuntimePermission "defineClassInPackage.java.lang";
permission java.lang.RuntimePermission "getenv.*";
permission java.util.PropertyPermission "*", "read,write";
permission java.io.FilePermission "${user.home}/.jruby", "read" ;
permission java.io.FilePermission "file:${catalina.base}/webapps/mywebapp/WEB-INF/lib/jruby.jar!/-", "read" ;
};
Your webapp will probably still not work after you add this code to the policy file, since your code may need permission for other things as well. I have found that the easiest way to discover what is missing is to run Tomcat with security debugging enabled:
$ rm logs/*
$ CATALINA_OPTS=-Djava.security.debug=access,failure bin/catalina.sh run -security
...
reproduce the problem by accessing the webapp
...
$ bin/catalina.sh stop
$ fgrep -v 'access allowed' logs/catalina.out
This will filter out allowed accesses, so what remains are denied accesses:
access: access denied (java.lang.RuntimePermission accessClassInPackage.sun.misc)
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1206)
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:313)
...
access: domain that failed ProtectionDomain (file:/home/joe/tomcat-6.0.13/webapps/mywebapp/WEB-INF/lib/somelib.jar <no signer certificates>)
This means the code in somelib.jar needs that RuntimePermission to run, so you have to add it to the catalina.policy file. Then repeat the steps until your web application runs without problems.
Now the users cannot do dangerous things. If they type java.lang.System.exit(1) into their JRuby code, the VM will not exit; instead they will get a security exception:
java.security.AccessControlException: access denied (java.lang.RuntimePermission exitVM.1)
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
at java.security.AccessController.checkPermission(AccessController.java:546)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
The web application is secured now.
This tutorial explains how to configure your cluster computers to easily start a set of Erlang nodes on every machine through SSH. It shows how to use the slave module to start Erlang nodes that are linked to a main controller.
Configuring SSH servers
An SSH server is generally installed and properly configured by Linux distributions, if you ask for SSH server installation. The SSH server is sometimes called sshd, standing for SSH daemon.
You need to have SSH servers running on all your cluster nodes.
Configuring your SSH client: connection without password
SSH client RSA key authentication
To manage your cluster as a whole, you need to set up SSH access to the cluster nodes so that you can log into them without being prompted for a password or passphrase. Here are the steps needed to configure your SSH client and server to use an RSA key for authentication. You only need to do this procedure once for each client/server pair.
- Generate an SSH RSA key, if you do not already have one:
ssh-keygen -t rsa
- Copy the id_rsa.pub file to the target machine:
scp .ssh/id_rsa.pub userid@ssh2-server:id_rsa.pub
- Connect through SSH to the server:
ssh userid@ssh2-server
- Create a .ssh directory in the user home directory (if necessary):
mkdir -p $HOME/.ssh
- Copy the contents of the id_rsa.pub file to the authorization file for protocol 2 connections:
cat id_rsa.pub >>$HOME/.ssh/authorized_keys
- Remove the id_rsa.pub file:
rm id_rsa.pub
Alternatively, you can use the command ssh-copy-id ssh2-server, if it is available on your computer, to replace steps 2 to 6. ssh-copy-id is available, for example, on the Mandrake and Debian Linux distributions.
Adding your identity to the SSH-agent software
After the previous step, you will be prompted for the passphrase of your RSA key each time you initiate a connection. To avoid typing the passphrase many times, you can add your identity to a program called ssh-agent, which keeps your passphrase for the duration of the work session. Use of the SSH protocol is thus simplified:
- Ensure a program called ssh-agent is running. Type:
ps aux | grep ssh-agent
to check if ssh-agent is running under your userid. Type:
echo $SSH_AUTH_SOCK
to check that ssh-agent is linked to your current window manager session or shell process.
- If ssh-agent is not started, you can create an ssh-agent session in the shell with, for example, the screen program:
ssh-agent screen
After this command, SSH actions typed into the screen console will be handled through the ssh-agent.
- Add your identity to the agent:
ssh-add
Type your passphrase when prompted.
- You can list the identities that have been added into the running ssh-agent:
ssh-add -l
- You can remove an identity from the ssh-agent with:
ssh-add -d
Please consult the ssh-add manual for more options (identity lifetime, agent locking, ...).
Routing to and from the cluster
When setting up a cluster, you can often only access the gateway/load-balancer front computer. To reach the other nodes, you need to route your requests through that gateway machine to the cluster nodes.
As an example, suppose your gateway to the cluster is 80.65.232.137. The controller machine is a computer outside the cluster, from which the operator controls the cluster's behaviour. The cluster's internal addresses form the network 192.0.0.0. On your client computer, launch the command:
route add -net 192.0.0.0 gw 80.65.232.137 netmask 255.255.255.0
Note that this will only work if IP forwarding is activated on the gateway computer.
To ensure proper routing, you can maintain a common /etc/hosts file with entries for all the computers in your cluster. In our example, with a seven-computer cluster, the /etc/hosts file could look like:
10.9.195.12 controler
80.65.232.137 gateway
192.0.0.11 eddieware
192.0.0.21 yaws1
192.0.0.22 yaws2
192.0.0.31 mnesia1
192.0.0.32 mnesia2
You could also add a DNS server, but for a relatively small cluster it is probably easier to manage an /etc/hosts file.
Starting Erlang nodes and setting up the Erlang cluster
Starting a whole Erlang cluster is very easy once you can connect with SSH to every cluster node without being prompted for a password.
Starting the Erlang master node
Erlang needs to be started with the -rsh ssh parameter so that the slave command uses SSH connections to the target nodes instead of rsh. It also needs to be started with networking enabled, via the -sname node parameter.
Here is an example Erlang command to start the Erlang master node:
erl -rsh ssh -sname clustmaster
Be careful: your master node's short name has to be sufficient for routing from the slave nodes back to the master. slave:start times out if the slave cannot connect back to your master node.
Starting the slave nodes (cluster)
The custom function cluster:slaves/1 is a wrapper around the Erlang slave module. It makes it easy to start a set of Erlang nodes on target hosts, all sharing the same cookie.
-module(cluster).
-export([slaves/1]).
%% Argument:
%% Hosts: List of hostname (string)
slaves([]) ->
ok;
slaves([Host|Hosts]) ->
Args = erl_system_args(),
NodeName = "cluster",
{ok, Node} = slave:start_link(Host, NodeName, Args),
io:format("Erlang node started = [~p]~n", [Node]),
slaves(Hosts).
erl_system_args()->
Shared = case init:get_argument(shared) of
error -> " ";
{ok,[[]]} -> " -shared "
end,
lists:append(["-rsh ssh -setcookie",
atom_to_list(erlang:get_cookie()),
Shared, " +Mea r10b "]).
%% Do not forget to start erlang with a command like:
%% erl -rsh ssh -sname clustmaster
Here is a sample session:
mremond@controler:~/cvs/cluster$ erl -rsh ssh -sname demo
Erlang (BEAM) emulator version 5.3 [source] [hipe]
Eshell V5.3 (abort with ^G)
(demo@controler)1> cluster:slaves(["gateway", "yaws1", "yaws2", "mnesia1", "mnesia2", "eddieware"]).
Erlang node started = [cluster@gateway]
Erlang node started = [cluster@yaws1]
Erlang node started = [cluster@yaws2]
Erlang node started = [cluster@mnesia1]
Erlang node started = [cluster@mnesia2]
Erlang node started = [cluster@eddieware]
ok
The order of the nodes in the cluster:slaves/1 list parameter does not matter.
You can check the list of known nodes:
(demo@controler)2> nodes().
[cluster@gateway,
cluster@yaws1,
cluster@yaws2,
cluster@mnesia1,
cluster@mnesia2,
cluster@eddieware]
And you can start executing code on cluster nodes:
(demo@controler)3> rpc:multicall(nodes(), io, format, ["Hello world!~n", []]).
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
Hello world!
{[ok,ok,ok,ok,ok,ok],[]}
Note: if you have trouble starting the slaves, you can uncomment the line
%%io:format("Command: ~s~n", [Cmd])
before the open_port instruction
open_port({spawn, Cmd}, [stream]),
in the slave:wait_for_slave/7 function.
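The following standalone PHP snippet forces a file download: it sets the Content-Disposition and Content-Type headers and then streams the file to the client with readfile().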
<?php
header('Content-disposition:attachment;filename=movie.mpg');
header('Content-type:video/mpeg');
readfile('movie.mpg');
?>
The socket API was originally designed for network communication, but an IPC mechanism later grew out of the socket framework: the UNIX domain socket. Although network sockets can also be used for interprocess communication on the same host (through the loopback address 127.0.0.1), UNIX domain sockets are more efficient for IPC: they bypass the network protocol stack entirely, with no packing and unpacking, no checksums, no sequence numbers, and no acknowledgements; application data is simply copied from one process to the other. This works because an IPC channel is inherently reliable, whereas network protocols are designed for unreliable communication. Like TCP and UDP, UNIX domain sockets offer both a stream-oriented and a datagram-oriented API, but the message-oriented UNIX domain socket is also reliable: messages are neither lost nor delivered out of order.
UNIX domain sockets are full duplex with rich API semantics, which gives them a clear edge over other IPC mechanisms; they have become the most widely used form of IPC. The X Window server and GUI programs, for example, talk to each other over a UNIX domain socket.
Using a UNIX domain socket closely mirrors network socket usage: first call socket() to create a socket file descriptor, with the address family set to AF_UNIX, the type set to either SOCK_DGRAM or SOCK_STREAM, and the protocol argument simply 0.
The most visible difference from network socket programming is the address format, represented by struct sockaddr_un. A network socket address is an IP address plus a port number; a UNIX domain socket address is the path of a socket-type file in the file system. The socket file is created by the bind() call, and if the file already exists when bind() is called, bind() returns an error.
The following program binds a UNIX domain socket to an address.
#include <stdlib.h>
#include <stdio.h>
#include <stddef.h>
#include <string.h>     /* memset, strcpy, strlen */
#include <sys/socket.h>
#include <sys/un.h>
int main(void)
{
int fd, size;
struct sockaddr_un un;
memset(&un, 0, sizeof(un));
un.sun_family = AF_UNIX;
strcpy(un.sun_path, "foo.socket");
if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {
perror("socket error");
exit(1);
}
size = offsetof(struct sockaddr_un, sun_path) + strlen(un.sun_path);
if (bind(fd, (struct sockaddr *)&un, size) < 0) {
perror("bind error");
exit(1);
}
printf("UNIX domain socket bound\n");
exit(0);
}
Note the offsetof macro used in the program; it is defined in the stddef.h header:
#define offsetof(TYPE, MEMBER) ((int)&((TYPE *)0)->MEMBER)
offsetof(struct sockaddr_un, sun_path) computes the offset of the sun_path member within struct sockaddr_un, that is, at which byte of the structure sun_path begins. Think about it: how does this macro accomplish that?
A sample run of the program:
$ ./a.out
UNIX domain socket bound
$ ls -l foo.socket
srwxrwxr-x 1 user 0 Aug 22 12:43 foo.socket
$ ./a.out
bind error: Address already in use
$ rm foo.socket
$ ./a.out
UNIX domain socket bound
Below is the server's listen module. As with network sockets, bind is followed by listen, which tells the kernel that the bound address (i.e. the socket file) is offering a service.
#include <stddef.h>
#include <string.h>     /* memset, strcpy, strlen */
#include <unistd.h>     /* unlink, close */
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
#define QLEN 10
/*
* Create a server endpoint of a connection.
* Returns fd if all OK, <0 on error.
*/
int serv_listen(const char *name)
{
int fd, len, err, rval;
struct sockaddr_un un;
/* create a UNIX domain stream socket */
if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
return(-1);
unlink(name); /* in case it already exists */
/* fill in socket address structure */
memset(&un, 0, sizeof(un));
un.sun_family = AF_UNIX;
strcpy(un.sun_path, name);
len = offsetof(struct sockaddr_un, sun_path) + strlen(name);
/* bind the name to the descriptor */
if (bind(fd, (struct sockaddr *)&un, len) < 0) {
rval = -2;
goto errout;
}
if (listen(fd, QLEN) < 0) { /* tell kernel we're a server */
rval = -3;
goto errout;
}
return(fd);
errout:
err = errno;
close(fd);
errno = err;
return(rval);
}
Below is the server's accept module. The client address obtained from accept should also be a socket file; if it is not, an error code is returned. Once the connection is established the file is no longer needed, so unlink deletes it. The client program's user id is returned through the output parameter uidptr.
#include <stddef.h>
#include <unistd.h>     /* unlink, close */
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
int serv_accept(int listenfd, uid_t *uidptr)
{
int clifd, err, rval;
socklen_t len;          /* accept() requires a socklen_t */
time_t staletime;
struct sockaddr_un un;
struct stat statbuf;
len = sizeof(un);
if ((clifd = accept(listenfd, (struct sockaddr *)&un, &len)) < 0)
return(-1); /* often errno=EINTR, if signal caught */
/* obtain the client's uid from its calling address */
len -= offsetof(struct sockaddr_un, sun_path); /* len of pathname */
un.sun_path[len] = 0; /* null terminate */
if (stat(un.sun_path, &statbuf) < 0) {
rval = -2;
goto errout;
}
if (S_ISSOCK(statbuf.st_mode) == 0) {
rval = -3; /* not a socket */
goto errout;
}
if (uidptr != NULL)
*uidptr = statbuf.st_uid; /* return uid of caller */
unlink(un.sun_path); /* we're done with pathname now */
return(clifd);
errout:
err = errno;
close(clifd);
errno = err;
return(rval);
}
Below is the client's connect module. Unlike in network socket programming, a UNIX domain socket client usually calls bind() explicitly with an address of its own choosing rather than relying on a system-assigned one. Binding to a client-chosen socket file name has the advantage that the name can contain the client's pid, so the server can tell clients apart.
#include <stdio.h>
#include <stddef.h>
#include <string.h>     /* memset, strcpy, strlen */
#include <unistd.h>     /* getpid, unlink, close */
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <errno.h>
#define CLI_PATH "/var/tmp/" /* +5 for pid = 14 chars */
/*
* Create a client endpoint and connect to a server.
* Returns fd if all OK, <0 on error.
*/
int cli_conn(const char *name)
{
int fd, len, err, rval;
struct sockaddr_un un;
/* create a UNIX domain stream socket */
if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0)
return(-1);
/* fill socket address structure with our address */
memset(&un, 0, sizeof(un));
un.sun_family = AF_UNIX;
sprintf(un.sun_path, "%s%05d", CLI_PATH, getpid());
len = offsetof(struct sockaddr_un, sun_path) + strlen(un.sun_path);
unlink(un.sun_path); /* in case it already exists */
if (bind(fd, (struct sockaddr *)&un, len) < 0) {
rval = -2;
goto errout;
}
/* fill socket address structure with server's address */
memset(&un, 0, sizeof(un));
un.sun_family = AF_UNIX;
strcpy(un.sun_path, name);
len = offsetof(struct sockaddr_un, sun_path) + strlen(name);
if (connect(fd, (struct sockaddr *)&un, len) < 0) {
rval = -4;
goto errout;
}
return(fd);
errout:
err = errno;
close(fd);
errno = err;
return(rval);
}
We have seen that nginx, a lightweight high-performance server, is mainly used for two things:
> serving directly as an HTTP server (replacing Apache; for PHP this requires FastCGI processor support, which we will cover later);
> acting as a reverse proxy server for load balancing (below we show with an example how to use nginx for load balancing in practice). Because of nginx's advantage in handling concurrency, this use is very common now. Apache's mod_proxy combined with mod_cache can also reverse-proxy and load-balance multiple app servers, but Apache is still not as good as nginx at handling high concurrency.
An example of nginx as a reverse proxy implementing load balancing:
1) Environment:
a. The local machine runs Windows, with a virtual Linux system installed via VirtualBox.
On the local Windows system we install nginx (listening on port 8080) and Apache (listening on port 80); on the virtual Linux system we install another Apache (listening on port 80).
We thus effectively have one nginx in front as the reverse proxy and two Apaches behind it as application servers (think of it as a small server cluster ;-) );
b. nginx serves as the reverse proxy server, placed in front of the two Apaches as the entry point for user traffic;
nginx handles only static pages; dynamic pages (PHP requests) are all handed over to the two backend Apaches.
In other words, the site's static pages and files live under the nginx root, while the dynamic pages and database access stay on the backend Apache servers.
c. Below are two ways to load-balance the server cluster.
We assume the front nginx (at 127.0.0.1:80) holds just one static page, index.html;
of the two backend Apache servers (localhost:80 and 158.37.70.143:80), one has a phpMyAdmin folder and a test.php (whose test code is print "server1";) in its root, and the other has only a test.php (whose test code is print "server2";).
2) Load balancing by type of request:
a. In the simplest reverse-proxy setup (nginx serves only static content, not dynamic content, which is handed to the backend Apache server), the concrete change in nginx.conf is:
location ~ \.php$ {
proxy_pass http://158.37.70.143:80;
}
> Now when a client requests localhost:8080/index.html, the front nginx answers it directly;
> when a client requests localhost:8080/test.php (there is no such file under the nginx root at all), the location ~ \.php$ rule above (a regular expression matching files ending in .php; see how location is defined and matched at http://wiki.nginx.org/NginxHttpCoreModule) makes nginx pass the request to the Apache server at 158.37.70.143. That server parses its test.php and returns the resulting HTML page to nginx, which then serves it (with the memcached module or squid, nginx could also cache it); the printed output is server2.
This is the simplest example of using nginx as a reverse proxy;
b. Now let's extend the example above to support both backend servers.
We edit the server block of nginx.conf, changing the relevant part to:
location ^~ /phpMyAdmin/ {
proxy_pass http://127.0.0.1:80;
}
location ~ \.php$ {
proxy_pass http://158.37.70.143:80;
}
The first block above, location ^~ /phpMyAdmin/, does not use regular-expression matching: ^~ means direct prefix matching. If the URL the client requests begins with http://localhost:8080/phpMyAdmin/ (the local nginx root has no phpMyAdmin directory at all), nginx passes the request to the Apache server at 127.0.0.1:80; that server renders the pages under its phpMyAdmin directory and returns the result to nginx, which displays it.
If the client requests the URL http://localhost/test.php, the request is passed to the Apache at 158.37.70.143:80 instead.
In sum, we have achieved load balancing based on the type of request.
> If the user requests the static page index.html, the front nginx responds directly;
> if the user requests the test.php page, the Apache at 158.37.70.143:80 responds;
> if the user requests a page under the phpMyAdmin directory, the Apache at 127.0.0.1:80 responds;
3) Load balancing for the same page:
That is, when users request the very same page, http://localhost:8080/test.php, we balance the load across the two servers. (In reality the data on the two servers would need to be kept consistent; here we have them print server1 and server2 simply so we can tell them apart.)
a. Our current setup: on Windows, nginx listens on localhost port 8080;
there are two Apaches: 127.0.0.1:80 (its test.php prints server1) and the virtual machine's 158.37.70.143:80 (its test.php prints server2).
b. So we reconfigure nginx.conf:
> First, in the http block of nginx.conf, add the definition of the server cluster (two machines in our case):
upstream myCluster {
server 127.0.0.1:80 ;
server 158.37.70.143:80 ;
}
This declares a server cluster containing two servers.
> Then define the load balancing in the server block:
location ~ \.php$ {
proxy_pass http://myCluster; # the name must match the upstream cluster defined above
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Now, when http://localhost:8080/test.php is requested, the nginx root has no such file, but nginx automatically passes the request to the server cluster defined by myCluster, where it is handled by either 127.0.0.1:80 or 158.37.70.143:80.
In the upstream definition above no weight is set for either server, meaning they share the load evenly; if you want one server to take more requests, for example:
upstream myCluster {
server 127.0.0.1:80 weight=5;
server 158.37.70.143:80 ;
}
With this, the first server is hit with probability 5/6 and the second with 1/6. You can also set parameters such as max_fails and fail_timeout; see:
http://wiki.nginx.org/NginxHttpUpstreamModule
====================
To sum up, we used nginx's reverse proxy server capability and deployed it in front of multiple Apache servers.
nginx only serves static page responses and passes dynamic requests on; the backend Apache servers act as app servers, process the dynamic pages passed from the front, and return them to nginx.
With this architecture we achieve load balancing across a cluster made of nginx and multiple Apaches.
Two kinds of balancing:
1) nginx can route requests for different content to different backend servers, as in the example above: requests for the phpMyAdmin directory are proxied to the first server, requests for test.php to the second;
2) nginx can balance requests for the same page across different backend servers (with weights, if the servers' capacities differ), as in the example above where requests for test.php are proxied evenly to server1 or server2.
In real applications, server1 and server2 hold the same application code and data, so their data must be kept in sync.
A long time ago I saw Heroku introduced on InfoQ, before the site had even launched. Going through my bookmarks today, I found that Heroku has been live for a while and is already developing fast as a cloud computing platform.
Heroku is the simplest deployment platform for Rails applications: just put the code in, then start and run; nobody can fail to manage that. Heroku handles everything, from version control to auto-scaling coordination (built on top of Amazon's EC2), and provides a complete set of tools for developing and managing applications, whether through the web interface or the new extension API.
Heroku's architecture is mostly built from open source pieces. :) For building a cloud platform, the open source world has really solved everything already, hasn't it? Below is Heroku's architecture diagram, which is very pretty:
1. Reverse proxy: Nginx
Nginx is an open source, high-performance web server and reverse proxy with IMAP/POP3 proxy support. Rather than using multiple threads to handle high concurrency, Nginx uses a scalable event-driven (asynchronous) network model, which solves the famous C10K problem.
Here Nginx handles the HTTP-level concerns, including SSL processing, HTTP request forwarding, gzip transfer compression, and so on; several front-end Nginx servers are also used to solve DNS and load balancing.
2. HTTP cache: Varnish
Varnish is a state-of-the-art, high-performance HTTP accelerator. It uses the advanced features in Linux 2.6, FreeBSD 6/7 and Solaris 10 to achieve its high performance.
Varnish is mainly used here for static resources: statically generated pages, images, CSS, and so on; whatever cannot be served from the cache is fetched through the routing mesh one layer down. The usual alternative is Squid, but in recent years Varnish has been adopted by more and more large sites.
3. The dynamic routing layer, implemented in Erlang by the team itself. Erlang offers the means to build highly reliable, stable server-side systems (in fact, we could use it the same way). This layer solves routing and addressing: it distributes incoming dynamic requests sensibly, tracks request load, and allocates an available app service in the next layer. It gives the business apps scalability and fault tolerance, choosing routes according to the load capacity of the services below; in principle it is a distributed routing pool for dynamic HTTP requests.
4. The dynamic grid layer, where users' deployed apps run; it can be seen as a server cluster, only with much finer granularity.
5. The database layer; no elaboration needed.
6. Memory cache:
Also needs no introduction: most internet companies use it nowadays, and many good connectors have been built on top of it. Our company uses it too, alongside a home-grown distributed memory system, formerly called TTC Server and apparently now WorkBench.