Leo's Blog


April 29, 2006

    Yesterday I read an article on the design and performance improvements in JDBC 4.0. Since my English translation skills are limited, I won't embarrass myself with a translation here; instead, I'm reposting the original for everyone to read:

Reposted from: http://www.javaworld.com/javaworld/jw-05-2006/jw-0501-jdbc.html

Design and performance improvements with JDBC 4.0

Effectively utilize JDBC'S features to get desired results with less code

Summary
Java Database Connectivity (JDBC) 4.0 is ready for release by mid 2006 as a part of Java Standard Edition 6.0. How can you leverage the new specification to improve the design and performance of database access and interactions in Java applications? This article discusses the new features of JDBC 4.0, illustrates its solutions to some existing problems, and presents its improvements in design and performance through examples. (2,100 words; May 1, 2006)
By Shashank Tiwari


Java Database Connectivity (JDBC), which has existed from the first public version of the core Java language, has evolved significantly over the last 10 years. In its current version, 4.0, which will be packaged with Java Standard Edition 6.0 (Java SE is Sun's new name for J2SE), it shows significant improvements in design and provides a richer API, with focus on ease of development and improvement in productivity.

This article discusses some of the important changes in the JDBC specification that either improve the design or facilitate better performance. The article does not list or survey every single change incorporated as a part of Java Specification Request 221, the JDBC 4.0 initiative.

After reading this article, you should be ready to leverage the new features in your next set of applications.

Annotations and the generic DataSet
I assume you are already aware of annotations and generics, which were introduced in Java with J2SE 5.0. JDBC 4.0 introduces annotations and the generic DataSet. This change aims to simplify execution of SQL queries (in scenarios that return a single result set) and SQL DML (data manipulation language) statements (that return either a row count or nothing).

The new API defines a set of Query and DataSet interfaces. The Query interface defines a set of methods decorated with the JDBC annotations. These decorated methods describe the SQL select and update statements, and specify how the result set should be bound to a DataSet. The DataSet interface is a parameterized type, as defined by generics. The DataSet interface provides a type-safe definition for the result set data.

All Query interfaces inherit from the BaseQuery interface. A concrete implementation of the interface can be instantiated using either the Connection.createQueryObject() or DataSource.createQueryObject() methods and passing a Query interface type as its parameter.

A DataSet interface inherits from java.util.List. A data class describing the columns of the result set data, returned by an annotated method of the Query interface, is its parameter type. A DataSet can be manipulated and operated upon both in a connected and disconnected mode. Thus, the DataSet is implemented either as a ResultSet or a CachedRowSet, depending on its operating mode: connected or disconnected. DataSet, being a sub-interface of the java.util.List, allows access of its data rows with the Iterator pattern, using the java.util.Iterator interface.

The data class or the user-defined class, which is a parameter type of the DataSet interface, can be specified in two ways: as a structure or as a JavaBeans object. Either method achieves the goal of binding result set data columns to user-defined class definitions, but the JavaBeans component model is more elegant and facilitates object definition reuse within other frameworks that support the JavaBeans model.

Listing 1 illustrates code snippets for a simple example to show how the new API is used to create and run SQL queries, define result set data using a user-defined class, and bind the returned result set to the user-defined specifications.

Listing 1. Employee user-defined type and employeeQueries

public class Employee {
   private int employeeId;
   private String firstName;
   private String lastName;

   public int getEmployeeId() {
      return employeeId;
   }

   public void setEmployeeId(int employeeId) {
      this.employeeId = employeeId;
   }

   public String getFirstName() {
      return firstName;
   }

   public void setFirstName(String firstName) {
      this.firstName = firstName;
   }

   public String getLastName() {
      return lastName;
   }

   public void setLastName(String lastName) {
      this.lastName = lastName;
   }
}


interface EmployeeQueries extends BaseQuery {
   @Select (sql="SELECT employeeId, firstName, lastName FROM employee")
   DataSet<Employee> getAllEmployees ();

   @Update (sql="delete from employee")
   int deleteAllEmployees ();
}


Connection con = ...

EmployeeQueries empQueries = con.createQueryObject (EmployeeQueries.class);

DataSet<Employee> empData = empQueries.getAllEmployees ();
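
Because DataSet inherits from java.util.List, the rows returned in Listing 1 can be walked with an ordinary for-each loop. A minimal sketch continuing Listing 1, assuming the Query/DataSet API behaves as described above:

for (Employee emp : empData) {
   // each row of the result set is bound to an Employee instance
   System.out.println(emp.getEmployeeId() + ": " + emp.getFirstName() + " " + emp.getLastName());
}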

Exception-handling enhancements
The exception-handling functionality in the JDBC API prior to version 4.0 is limited and often insufficient. SQLException is thrown for all types of errors. There is no classification of exceptions, and no hierarchy defines them. The only way to get some meaningful information is to retrieve and analyze the SQLState value. SQLState values and their corresponding meanings change from datasource to datasource; hence, getting to the root of the problem and efficiently handling exceptions proves to be a tedious task.

JDBC 4.0 has enhanced exception-handling capability and alleviates some of the mentioned problems. The key changes are as follows:

  • Classification of SQLException into transient and non-transient types
  • Support for chained exceptions
  • Implementation of the Iterable interface

The SQLTransientException is thrown where a previously failed operation may succeed on retrial. The SQLNonTransientException is thrown where retrial will not lead to a successful operation unless the cause of the SQLException is corrected.

Figure 1 illustrates the subclasses of SQLTransientException and SQLNonTransientException.


Figure 1. SQL exception classes: Transient and non-transient

Support for chained exceptions is now included. New constructors add extra parameters to capture possible causes for the exception. Multiple SQLExceptions can be iterated over in a loop, and getCause() can be called to determine each exception's possible cause. The getCause() method can return non-SQLExceptions if they are the underlying cause of the exceptions.

The SQLException class now implements the Iterable interface and supports the J2SE 5.0 for each loop for easier and more elegant looping.

Listing 2 depicts the usage of the new for-each-loop construct:

Listing 2. For each loop

catch(SQLException ex) {
   for(Throwable t : ex) {
      System.out.println("exception:" + t);
   }
}


SQL/XML
A large amount of data now exists in the XML format. Databases have extended support for XML by defining a standard XML type in the SQL 2003 specification. Most database vendors have an implementation of the XML data type in their new releases. With the inclusion of such a type, an XML dataset or document can be one of the fields or column values in a row of a database table. Prior to JDBC 4.0, perhaps the best way to manipulate such data within the JDBC framework was to use proprietary extensions from the driver vendors or to access it as a CLOB type.

JDBC 4.0 now defines SQLXML as the Java data type that maps to the database SQL XML type. The API supports processing an XML type as a string or as a StAX stream. The Streaming API for XML (StAX), adopted in Java via JSR 173, is based on the Iterator pattern, as opposed to the Simple API for XML Processing (SAX), which is based on the Observer pattern.

Invoking the Connection object's createSQLXML() method can create a SQLXML object. This is an empty object, so the data can be attached to it by either using the setString() method or by associating an XML stream using the createXMLStreamWriter() method with the object. Similarly, XML data can be retrieved from a SQLXML object using getString() or associating an XML stream using createXMLStreamReader() with the object.

The ResultSet, the PreparedStatement, and the CallableStatement interfaces have getSQLXML() methods for retrieving a SQLXML data type. PreparedStatement and CallableStatement also have setSQLXML() methods to add SQLXML objects as parameters.

The SQLXML resources can be released by calling their free() methods, which might prove pertinent where the objects are valid in long-running transactions. DatabaseMetaData's getTypeInfo() method can be called on a datasource to check if the database supports the SQLXML data type, since this method returns all the data types it supports.
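
A minimal sketch of the round trip described above, assuming a JDBC 4.0 driver that supports SQLXML and a hypothetical table xml_docs with an XML column named doc (the table, column, and XML content are assumptions, not part of the article's example):

// Write an XML value as a statement parameter.
SQLXML xml = con.createSQLXML();
xml.setString("<note><to>QA</to><body>Build is ready.</body></note>");
PreparedStatement ps = con.prepareStatement("INSERT INTO xml_docs (doc) VALUES (?)");
ps.setSQLXML(1, xml);
ps.executeUpdate();
xml.free();   // release the SQLXML resource once it is no longer needed

// Read the XML back as a string.
ResultSet rs = con.createStatement().executeQuery("SELECT doc FROM xml_docs");
while (rs.next()) {
    SQLXML doc = rs.getSQLXML("doc");
    System.out.println(doc.getString());
    doc.free();
}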

Connections and Statements
The Connection interface has been enhanced with methods for checking connection state and for supplying client-usage information, both of which help improve efficiency.

Sometimes database connections are unusable though they may not necessarily be closed and garbage collected. In such situations, the database appears slow and unresponsive. In most of these circumstances, reinitializing the connections is perhaps the only way to resolve the problem. When using the JDBC API prior to version 4.0, there is no way to distinguish between a stale connection and a closed connection. The new API adds an isValid() method to the Connection interface to query if the connection is still valid.

Also, database connections are often shared among clients, and sometimes some clients tend to use more resources than others, which can lead to starvation-like situations. The Connection interface defines a setClientInfo() method to define client-specific properties, which could be utilized to analyze and monitor resource utilization by the clients.
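
A minimal sketch of both checks, reusing a Connection con from the examples above; the client-info property names accepted vary by driver, so "ApplicationName" and the value shown here are assumptions:

// Check that the connection is still usable before handing it out (5-second timeout).
if (!con.isValid(5)) {
    // reinitialize or replace the connection here
}

// Tag the connection so resource usage can be attributed to a client.
// Property names are driver-specific; "ApplicationName" is an assumption.
con.setClientInfo("ApplicationName", "order-import-batch");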

RowId
In many databases, the RowId is a unique way to identify a row in a table. Queries that use RowId in the search criteria are often the fastest way to retrieve data, especially in Oracle and DB2. Since java.sql.RowId is now a built-in type in Java, you can take advantage of the performance benefits associated with its use. RowIds are most useful for identifying unique, specific rows when duplicate data exists and some rows are identical. However, it is important to understand that RowIds are often unique only within a table, not across the entire database; they may change, and they are not supported by all databases. RowIds are typically not portable across datasources and thus should be used with caution when working with multiple datasources.

A RowId is valid for the lifetime defined by the datasource and as long as the row is not deleted. The DatabaseMetadata.getRowIdLifetime() method is called to determine the RowId's lifetime. The return type is an enumeration type as summarized in the table below.

RowIdLifetime enum value: Definition
ROWID_UNSUPPORTED: the datasource does not support the RowId type
ROWID_VALID_OTHER: implementation-dependent lifetime
ROWID_VALID_TRANSACTION: the lifetime is at least the containing transaction
ROWID_VALID_SESSION: the lifetime is at least the containing session
ROWID_VALID_FOREVER: unlimited lifetime

The ROWID_VALID_TRANSACTION, ROWID_VALID_SESSION, and ROWID_VALID_FOREVER definitions hold only as long as the row is not deleted. It is important to understand that a new RowId is assigned if a row is deleted and reinserted, which can sometimes happen transparently at the datasource. As an example, in Oracle, if the "enable row movement" clause is set on a partitioned table and an update of the partition key causes the row to move from one partition to another, the RowId will change. Even without the "enable row movement" flag, an "alter table table_name move" statement can change the RowId.

Both the ResultSet and CallableStatement interfaces have been updated to include a getRowId() method, which returns a java.sql.RowId type.

Listing 3 shows how RowId could be retrieved from a ResultSet and from a CallableStatement.

Listing 3. Get RowId

//The method signatures to retrieve a RowId from a ResultSet are as follows:
   RowId getRowId (int columnIndex)
   RowId getRowId (String columnName)
...

Statement stmt = con.createStatement ();
ResultSet rs = stmt.executeQuery (…);
while (rs.next ()) {
   ...
   java.sql.RowId rid = rs.getRowId (1);
   ...
}

//The method signatures to retrieve a RowId from a CallableStatement are as follows:
   RowId getRowId (int parameterIndex)
   RowId getRowId (String parameterName)

Connection con;
...
CallableStatement cstmt = con.prepareCall (…);
...
cstmt.registerOutParameter (2, Types.ROWID);
...
cstmt.executeUpdate ();
...
java.sql.RowId rid = cstmt.getRowId (2);

The RowId can be used to refer uniquely to a row and thus can be used to retrieve the rows or update the row data. When fetching or updating using RowId references, it is important to know the validity of the RowId's lifetime to assure consistent results. It is also advisable to simultaneously use another reference, such as the primary key, to avoid inconsistent results in circumstances where the RowId could change transparently.

The RowId values can also be set or updated. In the case of an updateable ResultSet, the updateRowId() method could be used to update the RowId for a particular row in a table.

Both the PreparedStatement and the CallableStatement interfaces support a setRowId() method, with different signatures, to set the RowId as a parameter value. This value could be used to refer to data rows or to update the RowId value for a particular row in a table.

The facility to set or update the RowId provides the flexibility to control the unique row identifiers and could be used to make such identifiers unique across the tables used. Perhaps, portability of RowId across supporting datasources could also be achieved by explicitly setting consistent values across them. However, because system-generated RowIds are often efficient, and transparent tasks could alter RowIds, they are best used by an application as a read-only attribute.
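
If you do need to bind a RowId yourself, the following is a minimal sketch, assuming an employee table, a RowId value rid obtained as in Listing 3, and a database (such as Oracle) that accepts a rowid in a WHERE clause; the table, column, and WHERE syntax are all assumptions rather than part of the article's example:

// Locate and update a row by its previously retrieved RowId.
PreparedStatement ps = con.prepareStatement("UPDATE employee SET lastName = ? WHERE rowid = ?");
ps.setString(1, "Smith");
ps.setRowId(2, rid);      // bind the RowId as a parameter value
ps.executeUpdate();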

Leverage nonstandard vendor implemented resources
The new JDBC API defines a java.sql.Wrapper interface. This interface provides the ability to access datasource-vendor-specific resources by retrieving the delegate instance using the corresponding wrapped proxy instance.

This Wrapper interface has 17 sub-interfaces in the current specification, including Connection, ResultSet, Statement, CallableStatement, PreparedStatement, DataSource, DatabaseMetaData, and ResultSetMetaData, among others. This is an excellent design, as it facilitates datasource-vendor-specific resource implementations at almost all stages of the query-creation and result-set-retrieval lifecycles.

The unwrap() method returns the object that implements the given interface, allowing access to vendor-specific methods. The isWrapperFor() method returns a boolean: true if the object implements the interface, or if it directly or indirectly wraps an object that does.

As an example, when using Oracle, Oracle JDBC drivers provide update batching extensions that are better performing and more efficient as compared to the standard JDBC batch-updating mechanisms. For earlier JDBC versions, this implies using the Oracle-specific definitions, such as OraclePreparedStatement, in the code. This compromises code portability. With the new API, many such efficient implementations can be wrapped and exposed within the standard JDBC definitions.
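
A minimal sketch of the unwrap pattern, using oracle.jdbc.OraclePreparedStatement purely for illustration; the Oracle interface, its setExecuteBatch() extension, and the SQL text are assumptions and require the Oracle driver on the classpath:

PreparedStatement ps = con.prepareStatement("INSERT INTO employee (employeeId) VALUES (?)");

// Only unwrap when the statement actually is, or wraps, the vendor interface.
if (ps.isWrapperFor(oracle.jdbc.OraclePreparedStatement.class)) {
    oracle.jdbc.OraclePreparedStatement ops = ps.unwrap(oracle.jdbc.OraclePreparedStatement.class);
    ops.setExecuteBatch(50);   // Oracle-specific update batching (assumed extension method)
}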

Service provider mechanism for driver loading
In a nonmanaged or standalone program scenario, prior to JDBC 4.0, you would have to load the JDBC driver class explicitly by invoking the Class.forName method, as shown in Listing 4:

Listing 4. Class.forName

Class.forName ("com.driverprovider.jdbc.jdbcDriverImpl");

With JDBC 4.0, if the JDBC driver vendors package their drivers as services, defined under the service provider mechanism as per the JAR specification, the DriverManager code implicitly loads the driver by searching for it in the classpath. The benefit of this mechanism is that the developer does not need to know the specific driver class and can write a little less code when using JDBC. Also, since the driver class name is no longer in the code, a name change does not require recompilation. If multiple drivers are specified in the classpath, DriverManager tries to establish a connection using the first driver it encounters in the classpath and iterates further if required.
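
A minimal sketch of the difference; the JDBC URL and credentials are placeholders:

// JDBC 4.0: no Class.forName() call is needed. DriverManager searches the classpath
// for a service-registered driver that accepts this URL.
Connection con = DriverManager.getConnection(
      "jdbc:vendor://localhost:1234/sampledb", "user", "password");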

Conclusion
In this article, I have discussed some of the new and improved features of JDBC 4.0. Many of these new features enhance a developer's productivity and facilitate development. Also, the specification does not eradicate the possible use of extra JDBC frameworks to provide templating facilities and advanced exception-handling capabilities. However, there is some criticism as well. Some believe that annotations effectively lead to hard-coding in code, which causes problems in code maintainability.

posted @ 2006-05-09 11:17 Leo | Views (887) | Comments (2)

ANT Installation and Configuration

Summary:
Ant is a Java-based automation scripting engine whose scripts are written in XML. Besides Java compilation-related tasks, Ant can invoke many other applications through plugins.


Basic concepts of ANT
Installing ANT: unpack and set the path
Using ANT: the best way to learn is to start from a simple, practical example...
Basic concepts of ANT: Java's Makefile
Once a code project grows large, every recompile, package, and test cycle becomes complicated and repetitive, which is why C projects use make scripts to batch this work. Java applications are platform independent and naturally don't use platform-specific make scripts for such batch tasks. Ant is exactly this kind of process-scripting engine: it automates invoking programs to compile, package, and test a project. Besides being platform independent thanks to Java, its scripts are XML-based, which makes them easier to maintain than make scripts.


Each Ant script (build.xml by default) defines a series of targets. For a typical project, for example, the following targets might be needed.

Task 1: usage -- prints this script's help information (the default)
Task 2: clean <-- init -- cleans up the initialized environment
Task 3: javadoc <-- build <-- init -- generates the Javadoc
Task 4: jar <-- build <-- init -- builds the JAR
Task 5: all <-- jar + javadoc <-- build <-- init -- performs all of the above: jar and javadoc
These targets often depend on one another: for example, packaging the whole application (jar) depends on the compile target (build), which in turn depends on the environment initialization target (init), and so on.

Note: in most projects' Ant scripts the target names are largely the same. Compilation is usually called build or compile; packaging is jar or war; documentation generation is javadoc or javadocs; and all runs everything. Within each target, Ant invokes external applications with the appropriate parameters according to the configuration. Although Ant can invoke a very wide range of external applications, in practice only two or three are commonly used, such as javac, javadoc, and jar.
Installing ANT
After unpacking, just add the path to Ant's bin directory to the system's executable path. For example, on GNU/Linux you can add the following to /etc/profile:
export ANT_HOME=/home/ant
export JAVA_HOME=/usr/java/j2sdk1.4.1
export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin

With this in place, when you run ant without specifying a build file, it looks for build.xml by default and executes targets according to it. The default target can point to the most commonly used task, such as build, or to a help target such as usage that tells the user which options the script offers.


Using ANT

The best way to learn is to read and understand the build.xml scripts of open source projects and then simplify them to suit your own needs. Many of the projects under Ant and Apache are very pragmatic: simple, easy to use, and highly adaptable, because they grew directly out of developers' day-to-day needs.
Below is an example from a WebLucene application, adapted from JDOM's build.xml:

<project default="usage" basedir=".">

<!-- =================================================================== -->
<!-- Initialization target -->
<!-- =================================================================== -->
<target name="init">
<tstamp/>
<property file="${basedir}/build.properties" />
<property name="Name" value="ProjectFullName"/>
<property name="name" value="project_name"/>
<property name="version" value="0.2"/>
<property name="year" value="2003"/>

<echo message="----------- ${Name} ${version} [${year}] ------------"/>

<property name="debug" value="off"/>
<property name="optimize" value="on"/>
<property name="deprecation" value="on"/>

<property name="src.dir" value="./src/WEB-INF/src"/>
<property name="lib.dir" value="./src/WEB-INF/lib"/>
<property name="packages" value="com.chedong.*,org.apache.lucene.*"/>

<property name="build.src" value="./src/WEB-INF/build"/>
<property name="build.dest" value="./src/WEB-INF/classes"/>
<property name="build.javadocs" value="./src/doc"/>

<path id="classpath">
<pathelement path="${jsdk_jar}"/>
<fileset dir="${lib.dir}">
<include name="**/*.jar"/>
</fileset>
</path>

<filter token="year" value="${year}"/>
<filter token="version" value="${version}"/>
<filter token="date" value="${TODAY}"/>
<filter token="log" value="true"/>
<filter token="verbose" value="true"/>
</target>

<!-- =================================================================== -->
<!-- Help on usage -->
<!-- =================================================================== -->
<target name="usage" depends="init">
<echo message="${Name} Build file"/>
<echo message="-------------------------------------------------------------"/>
<echo message=""/>
<echo message=" available targets are:"/>
<echo message=""/>
<echo message=" jar --> generates the ${name}.jar file"/>
<echo message=" build --> compiles the source code"/>
<echo message=" javadoc --> generates the API documentation"/>
<echo message=" clean --> cleans up the directory"/>
<echo message=""/>
<echo message=" Please rename build.properties.default to build.properties"/>
<echo message=" and edit build.properties to specify JSDK 2.3 classpath."/>
<echo message=""/>
<echo message=" See the comments inside the build.xml file for more details."/>
<echo message="-------------------------------------------------------------"/>
<echo message=""/>
<echo message=""/>
</target>

<!-- =================================================================== -->
<!-- Prepares the source code -->
<!-- =================================================================== -->
<target name="prepare-src" depends="init">
<!-- create directories -->
<mkdir dir="${build.src}"/>
<mkdir dir="${build.dest}"/>

<!-- copy src files -->
<copy todir="${build.src}">
<fileset dir="${src.dir}"/>
</copy>
</target>

<!-- =================================================================== -->
<!-- Compiles the source directory -->
<!-- =================================================================== -->
<target name="build" depends="prepare-src">
<javac srcdir="${build.src}"
destdir="${build.dest}"
debug="${debug}"
optimize="${optimize}">
<classpath refid="classpath"/>
</javac>
</target>

<!-- =================================================================== -->
<!-- Creates the class package -->
<!-- =================================================================== -->
<target name="jar" depends="build">
<jar jarfile="${lib.dir}/${name}.jar"
basedir="${build.dest}"
includes="**"/>
</target>

<!-- =================================================================== -->
<!-- Creates the API documentation -->
<!-- =================================================================== -->
<target name="javadoc" depends="build">
<mkdir dir="${build.javadocs}"/>
<javadoc packagenames="${packages}"
sourcepath="${build.src}"
destdir="${build.javadocs}"
author="true"
version="true"
use="true"
splitindex="true"
windowtitle="${Name} API"
doctitle="${Name}">
<classpath refid="classpath"/>
</javadoc>
</target>

<!-- =================================================================== -->
<!-- Clean targets -->
<!-- =================================================================== -->
<target name="clean" depends="init">
<delete dir="${build.src}"/>
<delete dir="${build.dest}/org"/>
<delete dir="${build.dest}/com"/>
<delete>
<fileset dir="${build.dest}" includes="**/*.class"/>
</delete>
</target>
</project>
<!-- End of file -->

Default target: usage prints the help text and lists the available targets: build, jar, javadoc, and clean.

Initializing environment variables: init
Every target builds on a set of basic environment properties initialized here; init is the foundation of all the other targets. Two things make configuration convenient during initialization:

1. Besides the default property settings for the Java source and output paths, settings are also read from an external build.properties file:
<property file="${basedir}/build.properties" />
This way, for most simple configuration, users only need to understand build.properties; after all, XML is less readable than a key-value properties file. build.properties also frees other users from the details of the build.

2. The CLASSPATH setup uses the following:
<path id="classpath">
<pathelement path="${jsdk_jar}"/>
<fileset dir="${lib.dir}">
<include name="**/*.jar"/>
</fileset>
</path>
which is equivalent to setting: CLASSPATH=/path/to/resin/lib/jsdk23.jar;/path/to/project/lib/*.jar;

Copying files: prepare-src
Creates the temporary source directory and the output directory.
<!-- =================================================================== -->
<!-- Prepares the source code -->
<!-- =================================================================== -->
<target name="prepare-src" depends="init">
<!-- create directories -->
<mkdir dir="${build.src}"/>
<mkdir dir="${build.dest}"/>

<!-- copy src files -->
<copy todir="${build.src}">
<fileset dir="${src.dir}"/>
</copy>
</target>

Compile target: build
At compile time, the CLASSPATH is resolved by referencing the path object defined earlier:
<classpath refid="classpath"/>

Packaging target: jar
Packages the application into a .jar file named after the project.
<!-- =================================================================== -->
<!-- Creates the class package -->
<!-- =================================================================== -->
<target name="jar" depends="build">
<jar jarfile="${lib.dir}/${name}.jar"
basedir="${build.dest}"
includes="**"/>
</target>

Javadoc generation target: javadoc
<!-- =================================================================== -->
<!-- Creates the API documentation -->
<!-- =================================================================== -->
<target name="javadoc" depends="build">
<mkdir dir="${build.javadocs}"/>
<javadoc packagenames="${packages}"
sourcepath="${build.src}"
destdir="${build.javadocs}"
author="true"
version="true"
use="true"
splitindex="true"
windowtitle="${Name} API"
doctitle="${Name}">
<classpath refid="classpath"/>
</javadoc>
</target>

Cleaning up temporary build files: clean
<!-- =================================================================== -->
<!-- Clean targets -->
<!-- =================================================================== -->
<target name="clean" depends="init">
<delete dir="${build.src}"/>
<delete dir="${build.dest}/org"/>
<delete dir="${build.dest}/com"/>
<delete>
<fileset dir="${build.dest}" includes="**/*.class"/>
</delete>
</target>

TODO:
More tasks/extensions (examples):

Test task: JUnit tests
Code style check tasks: CheckStyle, Jalopy, etc.
Mail alert task: send the warning output of the tasks above to a designated list of users; this task can be scheduled to run automatically every day.

References:

Jakarta ANT:
http://ant.apache.org




posted @ 2006-05-08 10:54 Leo | Views (650) | Comments (3)

Lately I rarely use JDBC directly when writing programs; I've been using iBATIS, Hibernate, and the like, since these integration frameworks are now very stable. Still, direct use of JDBC can't be ignored, and the many new features JDBC 3.0 provides are worth getting familiar with. When I first learned JDBC I just studied demos and introductions found online, used it in a very narrow way, and overlooked many of JDBC 3.0's new features, such as the following example:

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM user WHERE username='aa'");
stmt.executeUpdate("UPDATE user SET lastdatetime=now() where username='aa'");

This is code that is often used at user login: first look up the user's details by the username aa, then update that user's last login time (lastdatetime). Here we used two SQL statements, which is the approach I had always taken. But with the convenience JDBC 2.0 provides, a single SQL statement is enough and JDBC handles the rest. See the code below:

Statement stmt2 = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs2 = stmt2.executeQuery("SELECT * FROM user WHERE username='aa'");
rs2.next();
rs2.updateDate("lastdatetime", new Date(Calendar.getInstance().getTimeInMillis()));
rs2.updateRow();

The key features here are ResultSet.TYPE_SCROLL_SENSITIVE and ResultSet.CONCUR_UPDATABLE: by passing different parameters when initializing the Statement, you can place different restrictions on what may be done with the ResultSet. There are three forms of con.createStatement() that can be called:

1. createStatement();
2. createStatement(int resultSetType, int resultSetConcurrency)
3. createStatement(int resultSetType, int resultSetConcurrency, int resultSetHoldability)

where resultSetType can be:
   1. ResultSet.TYPE_FORWARD_ONLY: the cursor can only move forward through the ResultSet
   2. ResultSet.TYPE_SCROLL_INSENSITIVE: the cursor can move freely forward and backward through the ResultSet
   3. ResultSet.TYPE_SCROLL_SENSITIVE: the cursor can move freely forward and backward, and when the underlying data changes the ResultSet sees the latest values
where resultSetConcurrency can be:
   1. ResultSet.CONCUR_READ_ONLY: the rows in the ResultSet are read-only and cannot be modified
   2. ResultSet.CONCUR_UPDATABLE: the rows in the ResultSet can be modified and the changes written back to the database
where resultSetHoldability can be:
   1. ResultSet.HOLD_CURSORS_OVER_COMMIT: the ResultSet's cursor is not closed when changes are committed
   2. ResultSet.CLOSE_CURSORS_AT_COMMIT: the ResultSet's cursor is closed when changes are committed

For queries, the first form, createStatement(), is equivalent to the second form, createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY), and to the third form, createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY, ResultSet.CLOSE_CURSORS_AT_COMMIT).

Below is some demo code that exercises a number of these feature methods; it is only for reference and to illustrate the flexibility.

 Statement stmt2 = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
 ResultSet rs2 = stmt2.executeQuery("SELECT * FROM user");
 rs2.next();
 rs2.updateDate("lastdatetime", new Date(Calendar.getInstance().getTimeInMillis()));
 rs2.updateRow();
 rs2.afterLast();
 while (rs2.previous()) { /**....*/ }
 rs2.beforeFirst();
 while (rs2.next()) { /**....*/ }
 rs2.last();
 rs2.first();
 rs2.absolute(5);    // move the cursor to row 5
 rs2.absolute(-1);   // move the cursor to the last row
 rs2.relative(-5);   // move the cursor back 5 rows
 rs2.relative(2);    // move the cursor forward 2 rows
 rs2.deleteRow();    // delete the current row
 rs2.last();         // move the cursor to the last row
 rs2.updateString("summary", "This is ..."); // set the column value to update
 rs2.cancelRowUpdates();  // cancel the updates just entered
 rs2.getRow();            // get the current row number
 rs2.moveToInsertRow();   // move the cursor to the insert row
 rs2.updateInt("id", 1);
 rs2.updateString(2, "my name");
 rs2.insertRow();         // insert the new row


Another feature JDBC 2.0 provides is batch operations against the database:

  con.setAutoCommit(false);
  Statement stmt3 = con.createStatement();
  stmt3.addBatch("insert .....");
  stmt3.addBatch("insert .....");
  int[] rows = stmt3.executeBatch();
  con.commit();

Note, however, that stmt3.executeBatch() does not roll back the data operations for you: if you have five UPDATE statements and the third one fails, the effects of the first two cannot be rolled back automatically, so you must manage the transaction yourself.
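
Since a failed batch leaves the earlier statements' effects in place, the usual pattern is to wrap the batch in an explicit transaction and roll it back on failure. A minimal sketch of that pattern, reusing the Connection con from above (the SQL text is a placeholder):

  con.setAutoCommit(false);
  try {
      Statement stmt3 = con.createStatement();
      stmt3.addBatch("insert .....");
      stmt3.addBatch("insert .....");
      int[] rows = stmt3.executeBatch();
      con.commit();                    // commit only if every statement succeeded
  } catch (BatchUpdateException e) {
      con.rollback();                  // undo the statements that did run
      throw e;
  }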

Using Savepoints in your transactions
The most exciting addition in JDBC 3.0 is the Savepoint. Sometimes what you need is finer control over a transaction, not simply rolling back every change in the current transaction. Under JDBC 3.0 you get that control through Savepoints. The Savepoint interface lets you split a transaction into logical breakpoints, controlling how much of the transaction gets rolled back. See the code below:

conn.setAutoCommit(false);
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
Statement stmt = conn.createStatement();
int rows = stmt.executeUpdate("INSERT INTO authors (first_name, last_name) VALUES ('Lewis', 'Carroll')");
Savepoint svpt = conn.setSavepoint("NewAuthor");
try {
   rows = stmt.executeUpdate("UPDATE authors SET type = 'fiction' WHERE last_name = 'Carroll'");
} catch (Exception e) {
   conn.rollback(svpt);
   rows = stmt.executeUpdate(" update .......... other sql ");
}
conn.commit();

The code above shows that when the UPDATE of authors fails, the transaction rolls back the effect of that UPDATE statement, while the INSERT INTO authors statement remains in effect.


Retrieving auto-generated keys
To address the need to obtain the values of automatically generated or auto-increment keys, JDBC 3.0 makes retrieving them easy. To find the value of any generated key, simply pass an optional flag to the statement's execute method: Statement.RETURN_GENERATED_KEYS or Statement.NO_GENERATED_KEYS. After the statement has executed, the generated key values can be retrieved from the ResultSet returned by the Statement's getGeneratedKeys() instance method. The ResultSet contains one column for each generated key. See the code below:

Statement stmt = conn.createStatement();
stmt.executeUpdate("INSERT INTO authors (first_name, last_name) VALUES ('George', 'Orwell')", Statement.RETURN_GENERATED_KEYS);
ResultSet rs = stmt.getGeneratedKeys();
if (rs.next()) {
   int key = rs.getInt(1);
}

Reference: http://java.sun.com/j2se/1.5.0/docs/api/java/sql/package-summary.html

posted @ 2006-05-08 10:49 Leo | Views (1527) | Comments (2)

<script language="javascript" type="text/javascript">

function Hashtable()
{
    this._hash = new Object();

    this.add = function(key, value) {
        if (typeof(key) != "undefined") {
            if (this.contains(key) == false) {
                this._hash[key] = typeof(value) == "undefined" ? null : value;
                return true;
            } else {
                return false;
            }
        } else {
            return false;
        }
    }

    this.remove   = function(key) { delete this._hash[key]; }
    this.count    = function() { var i = 0; for (var k in this._hash) { i++; } return i; }
    this.items    = function(key) { return this._hash[key]; }
    this.contains = function(key) { return typeof(this._hash[key]) != "undefined"; }
    this.clear    = function() { for (var k in this._hash) { delete this._hash[k]; } }
}

var a = new Hashtable();

a.add("aa");
a.add("bb", 2342);
a.add("bb", 2342);

a.remove("aa");

alert(a.count());

alert(a.contains("bb"));

alert(a.contains("aa"));

alert(a.items("bb"));

</script>
posted @ 2006-04-29 20:59 Leo | Views (455) | Comments (2)

     Summary: <HTML> <HEAD> <TITLE>print</TITLE> <meta http-equiv=...  Read full article
posted @ 2006-04-29 20:57 Leo | Views (1324) | Comments (1)

     Summary: Event source object: event.srcElement.tagName, event.srcElement.type; capture and release: event.srcElement.setCapture(), event.srcElement.releaseCapture(); key events ...  Read full article
posted @ 2006-04-29 20:50 Leo | Views (520) | Comments (2)