scomp -out weather.jar weather_latlong.xsd myconfig.xsdconfig
Compiles a schema into XML Bean classes and metadata.
Usage: scomp [opts] [dirs]* [schema.xsd]* [service.wsdl]* [config.xsdconfig]*
Options include:
-cp [a;b;c] - classpath
-d [dir] - target binary directory for .class and .xsb files
-src [dir] - target directory for generated .java files
-srconly - do not compile .java files or jar the output.
-out [xmltypes.jar] - the name of the output jar
-dl - permit network downloads for imports and includes (default is off)
-noupa - do not enforce the unique particle attribution rule
-nopvr - do not enforce the particle valid (restriction) rule
-noann - ignore annotations
-novdoc - do not validate contents of <documentation>
-compiler - path to external java compiler
-javasource [version] - generate java source compatible for a Java version (1.4 or 1.5)
-ms - initial memory for external java compiler (default '8m')
-mx - maximum memory for external java compiler (default '256m')
-debug - compile with debug symbols
-quiet - print fewer informational messages
-verbose - print more informational messages
-version - prints version information
-license - prints license information
-allowmdef "[ns] [ns] [ns]" - ignores multiple defs in given namespaces (use ##local for no-namespace)
-catalog [file] - catalog file for org.apache.xml.resolver.tools.CatalogResolver. (Note: needs resolver.jar from
http://xml.apache.org/commons/components/resolver/index.html)
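A minimal sketch of using the compiled types from Java, assuming the weather.jar produced above and the XMLBeans runtime (xbean.jar) are on the classpath. The generic XmlObject API is shown because the exact generated type names depend on the schema, and the instance file name is made up:
import java.io.File;
import java.util.ArrayList;
import org.apache.xmlbeans.XmlError;
import org.apache.xmlbeans.XmlObject;
import org.apache.xmlbeans.XmlOptions;

public class ParseWeather {
    public static void main(String[] args) throws Exception {
        // Parse an instance document against the compiled schema types.
        XmlObject doc = XmlObject.Factory.parse(new File("weather-instance.xml"));

        // Validate the document and collect any validation errors.
        ArrayList errors = new ArrayList();
        XmlOptions opts = new XmlOptions().setErrorListener(errors);
        if (!doc.validate(opts)) {
            for (int i = 0; i < errors.size(); i++) {
                System.out.println(((XmlError) errors.get(i)).getMessage());
            }
        }
    }
}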
if (raMgrService == null) {
    try {
        // Create a typed proxy for the MBean registered as mycomp:service=AccessCenter.
        return raMgrService = (RemoteAccessManager) MBeanProxyExt.create(
                RemoteAccessManager.class, "mycomp:service=AccessCenter");
    } catch (Exception e) {
        log.error("Failed to find mycomp:service=AccessCenter", e);
    }
}
return raMgrService;
mycomp:service=AccessCenter is the name of the JBoss service MBean. A service MBean of this kind is declared in a *-service.xml deployment descriptor, for example:
<?xml version='1.0' encoding='UTF-8' ?>
<server>
  <mbean code="com.jhalo.security.AccessCenterService"
         name="jhalo:service=AccessCenter">
    <!-- To allow remote invocation, export the interface implemented by the service class: -->
    <attribute name="ExportInterfaces">com.jhalo.security.RemoteAccessManager</attribute>
  </mbean>
</server>
WARN:
afterTransactionCompletion() was never called
unclosed connection, forgot to call close() on your session?
Cause: tx = session.beginTransaction(); was executed, but the transaction was never finished.
End the transaction explicitly with tx.commit() or tx.rollback().
This problem typically shows up with plain queries, where it is easy to forget to commit.
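The fix, as a minimal sketch against the Hibernate 2.x API (the Region entity and the sessionFactory setup are assumed from the surrounding notes):
import java.util.List;
import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Session;
import net.sf.hibernate.SessionFactory;
import net.sf.hibernate.Transaction;

public class QueryExample {
    // sessionFactory is assumed to be configured elsewhere (e.g. from hibernate.cfg.xml).
    public static List findRegions(SessionFactory sessionFactory) throws HibernateException {
        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            List result = session.find("from Region"); // read-only work still runs inside the transaction
            tx.commit();                               // always end the transaction...
            return result;
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();             // ...or roll it back on failure
            throw e;
        } finally {
            session.close();                           // release the JDBC connection
        }
    }
}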
<discriminator type="java.lang.String" column="REGION_TYPE"
length="10" force="false" insert="true"/>
Region.hbm.xml(15)
org.xml.sax.SAXParseException: Attribute "insert" must be declared for element type "discriminator".
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.error(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.dtd.XMLDTDValidator.addDTDDefaultAttrsAndValidate(Unknown Source)
at org.apache.xerces.impl.dtd.XMLDTDValidator.handleStartElement(Unknown Source)
at org.apache.xerces.impl.dtd.XMLDTDValidator.emptyElement(Unknown Source)
at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.DTDConfiguration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.jdom.input.SAXBuilder.build(Unknown Source)
at org.jdom.input.SAXBuilder.build(Unknown Source)
at org.jdom.input.SAXBuilder.build(Unknown Source)
at net.sf.hibernate.tool.hbm2java.CodeGenerator.main(CodeGenerator.java:100)
at net.sf.hibernate.tool.hbm2java.Hbm2JavaTask.processFile(Hbm2JavaTask.java:149)
at net.sf.hibernate.tool.hbm2java.Hbm2JavaTask.execute(Hbm2JavaTask.java:97)
at org.apache.tools.ant.Task.perform(Task.java:341)
at org.apache.commons.jelly.tags.ant.AntTag.doTag(AntTag.java:185)
at org.apache.commons.jelly.impl.TagScript.run(TagScript.java:233)
at org.apache.commons.jelly.impl.ScriptBlock.run(ScriptBlock.java:89)
at org.apache.maven.jelly.tags.werkz.MavenGoalTag.runBodyTag(MavenGoalTag.java:79)
at org.apache.maven.jelly.tags.werkz.MavenGoalTag$MavenGoalAction.performAction(MavenGoalTag.java:110)
at com.werken.werkz.Goal.fire(Goal.java:639)
at com.werken.werkz.Goal.attain(Goal.java:575)
at com.werken.werkz.Goal.attainPrecursors(Goal.java:488)
at com.werken.werkz.Goal.attain(Goal.java:573)
at org.apache.maven.plugin.PluginManager.attainGoals(PluginManager.java:671)
at org.apache.maven.MavenSession.attainGoals(MavenSession.java:263)
at org.apache.maven.cli.App.doMain(App.java:488)
at org.apache.maven.cli.App.main(App.java:1239)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:324)
at com.werken.forehead.Forehead.run(Forehead.java:551)
at com.werken.forehead.Forehead.main(Forehead.java:581)
build:
----------------------------------
Solution: upgrade Hibernate to version 2.1.8.
----------------------------------
Update the dependency in the Maven project.xml:
<dependency>
<groupId>hibernate</groupId>
<artifactId>hibernate</artifactId>
<version>2.1.8</version>
<properties>
<ejb.manifest.classpath>true</ejb.manifest.classpath>
</properties>
</dependency>
Alternatively, writing the discriminator without the insert attribute also works:
<discriminator type="java.lang.String" column="REGION_TYPE" length="10"/>
jasperreports-0.6.8\lib ships with Apache POI version poi-2.0-final-20040126.jar.
That POI version does not yet support working with images.
POI 3.0 alpha1 has since been released (2005-07-04), and it adds support for inserting images.
Download the source:
http://apache.freelamp.com/jakarta/poi/dev/src/ poi-src-3.0-alpha1-20050704.zip and see the sample section for details:
src/examples/src
org.apache.poi.hssf.usermodel.examples
OfficeDrawing
A new example method, drawSheet5, was added on 2005-05-01:
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.poi.hssf.usermodel.HSSFClientAnchor;
import org.apache.poi.hssf.usermodel.HSSFPatriarch;
import org.apache.poi.hssf.usermodel.HSSFPicture;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class OfficeDrawing
{
    public static void main(String[] args) throws IOException
    {
        // Create the workbook and sheets.
        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet5 = wb.createSheet("fifth sheet");
        drawSheet5( sheet5, wb );

        // Write the file out.
        FileOutputStream fileOut = new FileOutputStream("workbook.xls");
        wb.write(fileOut);
        fileOut.close();
    }

    private static void drawSheet5( HSSFSheet sheet5, HSSFWorkbook wb ) throws IOException
    {
        // Create the drawing patriarch. This is the top level container for
        // all shapes. This will clear out any existing shapes for that sheet.
        HSSFPatriarch patriarch = sheet5.createDrawingPatriarch();

        HSSFClientAnchor anchor;
        anchor = new HSSFClientAnchor(0,0,0,255,(short)2,2,(short)4,7);
        anchor.setAnchorType( 2 );
        patriarch.createPicture(anchor, loadPicture( "src/resources/logos/logoKarmokar4.png", wb ));

        anchor = new HSSFClientAnchor(0,0,0,255,(short)4,2,(short)5,7);
        anchor.setAnchorType( 2 );
        patriarch.createPicture(anchor, loadPicture( "src/resources/logos/logoKarmokar4edited.png", wb ));

        anchor = new HSSFClientAnchor(0,0,1023,255,(short)6,2,(short)8,7);
        anchor.setAnchorType( 2 );
        HSSFPicture picture = patriarch.createPicture(anchor, loadPicture( "src/resources/logos/logoKarmokar4s.png", wb ));
        picture.setLineStyle( picture.LINESTYLE_DASHDOTGEL );
    }

    private static int loadPicture( String path, HSSFWorkbook wb ) throws IOException
    {
        // Read the image bytes and register them with the workbook; the returned
        // index is what createPicture() uses to reference the image.
        int pictureIndex;
        FileInputStream fis = null;
        ByteArrayOutputStream bos = null;
        try
        {
            fis = new FileInputStream( path );
            bos = new ByteArrayOutputStream( );
            int c;
            while ( (c = fis.read()) != -1)
                bos.write( c );
            pictureIndex = wb.addPicture( bos.toByteArray(), HSSFWorkbook.PICTURE_TYPE_PNG );
        }
        finally
        {
            if (fis != null)
                fis.close();
            if (bos != null)
                bos.close();
        }
        return pictureIndex;
    }
}
public interface TestEngine {
    public void scheduleTest(String[] systemIds, ScheduleData data) throws Exception;
    ........
}

public interface TestEngineServiceMBean extends TestEngine, com.hygensoft.common.service.Service, org.jboss.system.ServiceMBean {
    ......
}

String[] systemIds = {"HZ", "FZ"};
server.invoke(TestEngineServiceMBean.OBJECT_NAME, "scheduleTest",
        new Object[]{ systemIds, data },
        new String[]{ systemIds.getClass().getName(), ScheduleData.class.getName() });
Note that the signature entry for the array parameter is systemIds.getClass().getName(), i.e. "[Ljava.lang.String;", not "java.lang.String[]".
jndi.properties
# DO NOT EDIT THIS FILE UNLESS YOU KNOW WHAT YOU ARE DOING
#
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.naming.provider.url=jnp://localhost:1099
Place the jndi.properties file in the classes directory.
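A sketch of how a standalone client could obtain the server handle used in the invoke() call above, assuming the standard JBoss RMI adaptor bound at jmx/invoker/RMIAdaptor; the TestEngine object name and the ScheduleData construction shown here are hypothetical:
import javax.management.ObjectName;
import javax.naming.InitialContext;
import org.jboss.jmx.adaptor.rmi.RMIAdaptor;

public class TestEngineClient {
    public static void main(String[] args) throws Exception {
        // Picks up jndi.properties from the classpath (see above).
        InitialContext ctx = new InitialContext();
        // Standard JNDI name of the JBoss JMX RMI adaptor.
        RMIAdaptor server = (RMIAdaptor) ctx.lookup("jmx/invoker/RMIAdaptor");

        String[] systemIds = { "HZ", "FZ" };
        ScheduleData data = new ScheduleData(); // hypothetical: build ScheduleData however your code does

        // Hypothetical object name; use the OBJECT_NAME constant your MBean actually registers under.
        ObjectName name = new ObjectName("hygensoft:service=TestEngine");
        server.invoke(name, "scheduleTest",
                new Object[] { systemIds, data },
                new String[] { systemIds.getClass().getName(), ScheduleData.class.getName() });
    }
}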
Solution: use an external TrueType font.
Steps:
1. Copy simhei.ttf into the fonts directory under the iReport installation directory, add that fonts directory to the classpath, and restart iReport; the external font can then be used.
Note: classpath = %IREPORT_HOME%\iReport-0.5.0-src\fonts
2. For "PDF font name" choose "External TTF font...".
Under "TrueType font" select "SimHei (simhei.ttf)" or "SimSun (simsun.ttc,0)".
Note: do not check "PDF embedded"; set "Pdf encoding" to Identity-H.
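For reference, those iReport settings roughly correspond to the following iText call (a sketch, assuming the com.lowagie iText library that JasperReports bundles; the font path and output file name are made up):
import java.io.FileOutputStream;
import com.lowagie.text.Document;
import com.lowagie.text.Font;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.BaseFont;
import com.lowagie.text.pdf.PdfWriter;

public class ChineseFontSketch {
    public static void main(String[] args) throws Exception {
        // Identity-H encoding, font not embedded -- matching the iReport settings above.
        BaseFont simhei = BaseFont.createFont("fonts/simhei.ttf",
                BaseFont.IDENTITY_H, BaseFont.NOT_EMBEDDED);

        Document doc = new Document();
        PdfWriter.getInstance(doc, new FileOutputStream("chinese-font-test.pdf"));
        doc.open();
        doc.add(new Paragraph("中文测试", new Font(simhei, 12)));
        doc.close();
    }
}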
you had a table "miners" that looked like this
create table miners (
id BIGINT NOT NULL AUTO_INCREMENT,
first_name VARCHAR(255),
last_name VARCHAR(255),
primary key (id)
)
The Hibernate class (Miner.java) specifies the fields, getters/setters, and XDoclet tags, like so:
package deadwood;

/**
 * @hibernate.class table="miners"
 */
public class Miner {
    private Long id;
    private String firstName;
    private String lastName;

    /**
     * @hibernate.id generator-class="native"
     */
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    /**
     * @hibernate.property column="first_name"
     */
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    /**
     * @hibernate.property column="last_name"
     */
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}
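To show what the mapping buys you, here is a minimal usage sketch against the Hibernate 2.x Session API (the sessionFactory configuration is assumed; the last-name value is made up, and "Elma" is reused from the HQL examples later in the article):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Miner miner = new Miner();
miner.setFirstName("Elma");   // sample value reused from the HQL examples below
miner.setLastName("Smith");   // hypothetical value
session.save(miner);          // roughly: insert into miners (first_name, last_name) values (...)

tx.commit();
session.close();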
Associations
The Miner class we looked at was single-table oriented, mapping to a single miners table. ORM solutions also support ways to map associated tables to in-memory objects:
- Many to One/One to one - belongs_to/has_one
- One to Many (set) - has_many
- Many to Many (set) - has_and_belongs_to_many
- Single Table Inheritance
- Components (mapping > 1 object per table)
As a comparative example, let's look at the many-to-one relationship. We are going to expand our Deadwood example from part I. We add to the Miner a many-to-one association with a GoldClaim object. This means there is a foreign key, gold_claim_id, in the miners table, which links it to a row in the gold_claims table.
(Java)
public class Miner {
    // Other fields/methods omitted
    private GoldClaim goldClaim;

    /**
     * @hibernate.many-to-one column="gold_claim_id"
     *     cascade="save"
     */
    public GoldClaim getGoldClaim() { return goldClaim; }
    public void setGoldClaim(GoldClaim goldClaim) {
        this.goldClaim = goldClaim;
    }
}
Hibernate uses explicit mapping to specify the foreign key column, as well as the cascade behavior, which we will talk about next. Saving a Miner will save its associated GoldClaim, but updates and deletes to it won't affect the associated object.
Transitive Persistence
It's important for an ORM solution to provide a way to detect and cascade changes from in-memory objects to the database, without the need to manually save() each one. Hibernate features a flexible and powerful version of this via declarative cascading persistence.
Hibernate offers a number of different cascading behaviors for all association types, giving it a high degree of flexibility. For example, setting cascade="all" will make the GoldClaim save, update, and delete along with its parent Miner, like so...
Miner miner = new Miner();
miner.setGoldClaim(new GoldClaim());
session.save(miner);   // Saves the Miner and GoldClaim objects.
session.delete(miner); // Deletes both of them.
By using cascade="save-update", you could get this behavior on any association, regardless of which table the foreign key lives in. Hibernate doesn't base the transitive persistence behavior on the relationship type, but rather on the cascade style, which is much more fine-grained and powerful.
Query Languages
Hibernate has its own object-oriented query language (Hibernate Query Language, or HQL), which is deliberately very similar to SQL. Where it differs is that it lets developers express their queries in terms of objects and fields instead of tables and columns. Hibernate translates the query into SQL optimized for your particular database. Obviously, inventing a new query language is a very substantial task, but its expressiveness and power are among Hibernate's selling points.
Querying for Objects with HQL
When you have to start navigating across objects with SQL, HQL can be a very convenient alternative. Let's take a look at our sample queries in HQL.
// Find the first Miner by name
Query q = session.createQuery("from Miner m where m.firstName = :name");
q.setParameter("name", "Elma");
Miner m = (Miner) q.setMaxResults(1).uniqueResult();

// Find up to 10 miners older than 30, ordered by age.
Integer age = new Integer(30);
q = session.createQuery("from Miner m where m.age > :age order by age asc");
List miners = q.setParameter("age", age).setMaxResults(10).list();

// Similar to the join query above, but no need to join manually
q = session.createQuery("from Miner m where m.goldClaim.squareArea = :area");
List minersWithSqA = q.setParameter("area", new Integer(1000)).list();
Having covered some of the basics of fetching objects, let's turn our attention to how we can make fetching objects fast. The next section covers the means by which we can tune performance.
Performance Tuning
Beyond just mapping objects to tables, robust ORM solutions need to provide ways to tune the performance of queries. One of the risks of working with ORMs is that you often pull back too much data from the database. This tends to happen because it is very easy to pull back several thousand rows, across multiple SQL queries, with a simple statement like "from Miner". Common ORM strategies for dealing with this include lazy fetching, outer join fetching, and caching.
Lazy Fetching
What I mean by lazy is that when you fetch an object, the ORM tool doesn't fetch data from associated tables until you actually request the association. This prevents loading too much unneeded data. Hibernate allows you to choose which associations are lazy. This leads us to one of the great fallacies of ORM: that lazy loading is always good. In reality, lazy loading is only good if you didn't need the data. Otherwise, you are doing with 2 to 1000+ queries what you could have done with one. This is the dreaded N+1 select problem, where getting all the objects requires N selects plus the 1 original select. The problem gets much worse when you deal with collections.
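As a rough illustration, assuming goldClaim is mapped as a lazy association and using the Miner/GoldClaim classes from above (getSquareArea() is inferred from the HQL examples), this is how the N+1 pattern shows up in code:
// 1 select for the list of miners...
List miners = session.find("from Miner");
for (Iterator it = miners.iterator(); it.hasNext();) {
    Miner miner = (Miner) it.next();
    // ...plus one additional select per Miner the first time its lazy
    // goldClaim association is touched: N + 1 queries in total.
    System.out.println(miner.getGoldClaim().getSquareArea());
}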
Outer Joins and Explicit Fetching
Generally, one of the best ways to improve performance is to limit the number of trips to the database: better one big query than several small ones. Hibernate has a number of ways it handles the N+1 issue. Associations can be explicitly flagged for outer join fetching (via outer-join="true"), and you can add outer join fetching to HQL statements. For example...
/**
 * @hibernate.many-to-one column="gold_claim_id"
 *     cascade="save-update" outer-join="true"
 */
public GoldClaim getGoldClaim() { return goldClaim; }

// This does one select and fetches both the Miner and GoldClaim
// and maps them correctly.
Miner m = (Miner) session.load(Miner.class, new Long(1));
In addition, when selecting lists or dealing with collection associations, you can use an explicit outer join fetch, like so...
// Issues a single select, instead of 1 + N (where N is the number of miners)
List list = session.find("from Miner m left join fetch m.goldClaim");
The performance savings from this can be very significant.
Caching
While object caching isn't always going to be helpful or a performance silver bullet, Hibernate has a huge potential advantage here. It provides several levels of caching, including a session (UnitOfWork) level cache as well as an optional second-level cache. You always use the first-level cache, as it prevents circular references and multiple trips to the database for the same object. Using a second-level cache can allow much of the database state to stay resident in memory, which is especially useful for frequently read data and reference data.
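A small sketch of the first-level (session) cache in action, using the Miner class from above; within one Session, repeated loads of the same identifier return the same in-memory instance without extra selects:
Session session = sessionFactory.openSession();

Miner first = (Miner) session.load(Miner.class, new Long(1));  // hits the database
Miner second = (Miner) session.load(Miner.class, new Long(1)); // served from the session cache

System.out.println(first == second); // true: same instance within this Session

session.close();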
New features in Shark 1.0:
* Added new functionality for handling deadlines.
Shark now has a defined client API, and an implementation of it, for handling activity deadlines.
This API is meant to be used by a Shark client to periodically ask Shark to check deadlines.
Shark can be set up either to re-evaluate deadlines every time a deadline check is performed,
or to calculate the deadline times once initially and store them in the DB; when asked to
check deadlines, the deadline limit is then retrieved from the DB.
Shark comes with example XPDL processes, contained in the deadlineexamples.xpdl file,
that show ASYNC and SYNC deadline handling.
In Shark deadline expressions, along with all process variables, you can use the following special variables:
1. PROCESS_STARTED_TIME - the time when the process is started
2. ACTIVITY_ACTIVATED_TIME - the time when process flow comes to the activity and
assignments for the activity are created
3. ACTIVITY_ACCEPTED_TIME - the time when the first assignment for the activity is accepted
NOTE: If an activity is rejected after being accepted, or it is not accepted at all,
ACTIVITY_ACCEPTED_TIME is set to some maximum value in the future.
IMPORTANT:
- There should not be process variables (DataField or FormalParameter entities from XPDL)
that have the same Id as one of the variables listed above.
- The Java type of these variables is java.util.Date.
- A deadline expression result must be a java.util.Date.
- If Shark is set up not to re-evaluate deadlines, but to evaluate the deadline
limit times only once initially, ACTIVITY_ACCEPTED_TIME should not be used in expressions,
because it will contain some maximum time in the future.
When starting the Shark CORBA server, it can be configured whether it opens a thread for checking deadlines.