JavaServer Faces (JSF) has a reputation for having poor performance.
Some claim that this "runtime tax" is simply the cost of using a
component-based abstraction layer. After focused research, I have
determined that by following a handful of best practices, you can get
your JSF data tables to perform almost as well as hand-crafted HTML
while still retaining the benefits of developing with an
event-driven programming model. I begin by identifying some performance
problems that occur when using JSF UI components, Seam components, and
the EL carelessly, and then present, through a series of four lessons,
ways to eliminate these problems one by one, culminating in a
remarkable two-orders-of-magnitude improvement.
All the test results reported in this article were gathered on a
Lenovo R60 with 2.5GB RAM and a Dual Core T2300 @ 1.66GHz processor running
Ubuntu Linux 7.10. The application was built using Seam 2.0.3.CR1, Sun
JSF 1.2_04-b16-p02, and RichFaces 3.2.2.GA. It was deployed to JBoss
4.2.2.GA running on Sun JVM 1.6.0_03. The timing results are shown in
six progressive phases. Each result shows the total request time and
the time to render a data table with 50 records. All metrics were
captured using the Firebug extension for Firefox.
Introduction
Developing a data management application is just a matter of getting
data up on the screen in tabular format, correct? Oh, right, and being
able to filter the data. Ah, and also allowing the data to be changed.
Unfortunately, once those challenges are behind us, we tend to wash our
hands of the application and move on. But the principal goal of most
web applications is to enable users to perform their work more
efficiently than they did before we introduced our "solution." In fact,
none of those fancy features you add have any value at all if you can't
improve the user's productivity. That's why, before you step away, you
have to make sure that you have addressed the issue of performance.
My colleagues and I recently completed the first stage of an open
source data management application based on JSF, Seam, and RichFaces in
which we addressed this very concern. The application, named EDAS2, was developed for a group of scientists to manage water-quality data (stored in the WQX
database schema). Now, you have to understand that these scientists,
they like their data. Hordes of data. And they like to view it all at
once. So much, in fact, that it tends to cause the browser to crash.
Naturally, we had to temper the scientists' expectations somewhat, since
browsers have limits. But regardless, we were going to be dealing with
large data sets. Our goal was to make sure that working with those data
sets was not painful.
This article documents the bottlenecks that we discovered and a set
of best practices for eliminating them. But we went beyond merely
removing performance obstacles. We tuned the application to the
point where paginating, sorting, and filtering the data is actually
faster than any desktop application our scientists had ever used. Find
that hard to believe? Read on.
About the EDAS2 application
The intent of the EDAS2 application is to house and analyze
water-quality measurement results. The results are taken from a
location, known as a monitoring location, during a given visit, known
as an activity. There are various types of results, depending on what
is being measured. In this article, we will be focusing on the benthic
measurement result, which in layman's terms is a sampling of mud with
bugs in it. That data is recorded on site and later entered into the
database and analyzed using the EDAS2 interface.
There isn't anything revolutionary about the interface of the EDAS2
application. Rather, the emphasis is on efficiency. We want to provide
the experience of the MS Access database (which our scientists currently
use to manage this data) in a web application.
The application has two types of views. The first is a list view,
which displays a paginated table of records for the currently selected
parent entity, such as monitoring location, activity, or result. You
will learn shortly that what makes this interface efficient is that it
offers in-place editing of each row (it also has a floating popup
dialog for detailed editing of the row).
The editable data table
The key feature of this application is that the data rendered in
each table can be modified in place. To implement this functionality,
we decided against using an off-the-shelf grid editor from a JSF
component library. Instead, we took the RichFaces step-wise approach by
building a composite, Ajax-enabled component using the partial page
rendering technology that the Ajax4jsf core provides.
Ajax4jsf provides a set of tag libraries that can tie a JSF-generated
event to the rerendering of one or more regions of the user interface.
Those regions are identified by their JSF client IDs.
When the JSF event is sent to the server, instead of the JSF servlet
returning an entire page response, it returns only fragments of the
HTML. Ajax4jsf then cuts out the old branches from the live view and
stitches in the replacements returned from the server. The result is
that the user observes the page updating without any noticeable
refresh, in real time, so to speak. And Ajax4jsf's declarative approach
lets us fine-tune this behavior.
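To make that concrete, here is a minimal sketch of the declarative style (the panel ID and the resultCount property are illustrative, not taken from EDAS2):

<a:commandLink value="Refresh" action="#{benthicMsmntEditor.prepareResults}"
    reRender="resultsPanel"/>
<a:outputPanel id="resultsPanel">
  <!-- Only this branch is re-encoded on the server and stitched into the live page -->
  <h:outputText value="#{benthicMsmntEditor.resultCount} results found"/>
</a:outputPanel>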
Figure 1 provides a view of the editable data table with one of the rows in edit mode.
Figure 1. The data table demonstrating its single-line editing capabilities.
When the page is first rendered, all rows have an edit and delete
button. Clicking on an edit button puts the corresponding row in edit
mode, at which point the outputs in the selected row become inputs.
From edit mode, the user can make changes to the visible data and
either save or cancel the update, which returns the table to read-only
mode.
The strategy we use to deliver this row-level editing functionality
is to have two UI components in each column of a standard JSF data
table: one output component (e.g., <h:outputText>) and one input
component (e.g., <h:inputText>). We then use the JSF rendered
attribute on each component to control which one is displayed.
A backend supported by Seam
To support our editable data table, we put together a small
hierarchy of classes that are instantiated as Seam components to manage
the query (filtering, pagination) and the editing process (select row
for editing, update, delete, cancel). We chose to put the components in
Seam's conversation scope. The conversation scope is a slice of the
HTTP session that is managed by Seam and associated with a sequence of
pages through the use of a special request token (known as the
conversation id). The conversation provides the developer the
convenience of using the HTTP session without the memory leakage
concerns, since the conversation has a much shorter lifetime and is
kept isolated from other parallel conversations in the same session.
We chose to leverage the conversation for three reasons:
- to reduce the number of times the database needs to be queried
- to ensure the record remains managed by the persistence context while being edited
- to maintain the search criteria and pagination offset
Using the conversation has the added bonus of making previously
viewed result pages load faster since the records are already sitting
in the persistence context (i.e., the first-level cache).
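To give a feel for the shape of these classes, here is a heavily trimmed sketch of what a conversation-scoped editor component might look like (the entity and property names are illustrative rather than the actual EDAS2 code, and the long-running conversation itself would typically be begun in the page descriptor or with @Begin):

import java.util.List;
import javax.persistence.EntityManager;
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

@Name("benthicMsmntEditor")
@Scope(ScopeType.CONVERSATION)
public class BenthicMeasurementEditor {

    // Seam-managed persistence context; it stays open for the life of the
    // conversation, so records fetched here remain managed while being edited
    @In private EntityManager entityManager;

    // Search criteria and pagination offset survive across requests
    private String filter;
    private int firstResult;

    private List<BenthicMeasurement> results;

    @SuppressWarnings("unchecked")
    public void prepareResults() {
        results = entityManager
            .createQuery("select m from BenthicMeasurement m order by m.id")
            .setFirstResult(firstResult)
            .setMaxResults(50)
            .getResultList();
    }

    public List<BenthicMeasurement> getResults() {
        return results;
    }
}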
Given all the benefits the conversations provide, you may be
wondering where the performance problem is. Let's take a look at where
things began to go wrong and what we did about it.
The performance roadblock
Development was going smoothly until my colleague noticed something
peculiar about the performance. Five or ten records on the page took a
reasonable amount of time to render, but when that number went up to 50
or 100 records, the performance of the page plummeted. It turns out
that the degradation was linear, but the slope was very steep. The page
with 100 records was taking over 15 seconds to render. Obviously, that
just wasn't going to fly. And so our optimization work began. Could we
find the bottleneck and how low could we go?
As we optimized, we first looked at the most basic page and
established a performance baseline. Then we added additional components
to the page and tried to identify which ones were contributing to the
major slowdown. As it turns out, in a Seam application, the first area
to place your focus is on page actions, which are methods that execute
before the page begins to render.
Page actions and component initialization
When a page is taking 15 seconds to render, there is likely a single
culprit that is chewing up the bulk of that time. To establish a
baseline, and to make sure I was focusing on the right problem, I first
stripped everything from the page and requested it. The response took a
couple of seconds to come back. This had me puzzled for a moment. I
soon realized that a Seam page action was registered with the page
(i.e., view ID). A page action is a method-binding expression that is
assigned to a JSF view ID (or a group of view IDs if a wildcard is
used) in a Seam page descriptor and is evaluated by Seam just before
the view ID is rendered by the JSF view handler. Here's the expression
that was registered with the view ID of the results page.
<action execute="#{benthicMsmntEditor.prepareResults}" />
The page action is there to eagerly fetch the results that are to be
displayed on the page. However, the query in that method was executing
in about a tenth of a second. So that wasn't the problem. After
studying the code a bit longer, I recognized that the problem was not
in the page action method, but rather the @Create method of the
component being invoked. The @Create method is Seam's counterpart to the
standard @PostConstruct annotation in Java EE and marks a method to be
invoked immediately after a component is instantiated.
Inside the @Create method was a handful of queries that retrieved
more than 10,000 records of reference data. This data is used by select
menus in various forms on the page, but those forms are all being
conditionally rendered. So basically, we were charging the user a toll
to enter the page even though that reference data might never be
referenced. That brings us to lesson #1.
Lesson #1: Don't make the user pay an upfront fee to view a page. Defer logic where possible.
Since the forms are rendered conditionally, and some via Ajax, the
reference data can be retrieved at the same time the forms are
activated. If you must display a form unconditionally, think about the
most efficient way to prepare the data (perhaps using a cache). It's
also preferable to use Ajax-based autocomplete rather than select menus
with a large list of options, since making this switch can drastically
reduce the initial rendering time of the form. The user will
likely be more patient when working on the field with autocomplete, and
you can even keep the number of options delivered to a minimum as the
user types.
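One way to implement this deferral in Seam is with a @Factory method, so the reference data is only fetched the first time the view actually resolves the context variable (for instance, when the select menu shown later in this article references #{taxonValues}). This is a sketch that assumes a hypothetical Taxon entity; it is not the actual EDAS2 code:

import java.util.List;
import javax.persistence.EntityManager;
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Factory;
import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;

@Name("referenceData")
public class ReferenceData {

    @In private EntityManager entityManager;

    // Runs only when #{taxonValues} is first resolved; the result is then
    // outjected to the conversation and reused on subsequent requests
    @Factory(value = "taxonValues", scope = ScopeType.CONVERSATION)
    @SuppressWarnings("unchecked")
    public List<Taxon> loadTaxonValues() {
        return entityManager
            .createQuery("select t from Taxon t order by t.name")
            .getResultList();
    }
}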
With the toll skimmed off the top, we could get back to the
performance of the elements on the page. Bringing back the page
piece-by-piece, I determined that the next big time hog was in fact the
data table. Again, I stripped out elements in the data table until I
pinned down what was causing the problem. As it turns out, it was the
expressions in the rendered attributes that I was using to hide or show
various components in the table.
The cost of conditional rendering
In each row there are 6 "editable" columns, each containing an
output and an input component and 4 icons for controlling editing
(edit, delete, approve, cancel). In total, there are 16 uses of the
rendered attribute appearing in each row. (Initially I had a couple of
columns with multiple input components, which I realized I needed to
group within a panel group [i.e., <h:panelGroup>] so that the
rendered attribute was only applied once.)
As you know, logic that occurs in a single row is multiplied by the
number of rows in the table. In a table with 100 rows, there are 1600
uses of the rendered attribute! But wait, there's even more! The
rendered attribute is a nasty beast in JSF because it's often evaluated
multiple times as the UI component tree is assembled. In fact, during
the render response phase, it's resolved 3 or 4 times per component
in a data table. What that means is that for our data table, the
conditional rendering logic we are using is executed 5200 times for 100
rows! Whatever that logic is, it had better be darn efficient, or else it
will have a huge impact on performance.
Warning: After hearing the bad news about how
the rendered expression is abused by JSF, you might be inclined to use
the <c:if> tag from Facelets. This tag emulates the behavior of
the equivalently named tag from the JSTL tag library. You have to be
careful with this tag, though, because it's not a true JSF component.
It's processed by the Facelets compiler prior to building the UI
component tree and can thus exclude a region of the markup from
contributing to the UI component tree for that view. The benefit of
this tag is that it can reduce the size of the component tree when you
know that certain parts of the page aren't needed. However, the
conditional rendering that you expect to happen on a postback does not,
because at that point, the tree is already built and Facelets does not
reprocess the <c:if> statements.
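For completeness, here is roughly what such a build-time condition looks like in a Facelets template (the test expression and resultCount property are illustrative); just remember that it is evaluated only when the component tree is built, not on postback:

<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:c="http://java.sun.com/jstl/core"
    xmlns:h="http://java.sun.com/jsf/html">

  <!-- Evaluated by the Facelets compiler while the tree is built; the excluded
       branch never becomes part of the component tree for this view -->
  <c:if test="#{benthicMsmntEditor.resultCount gt 0}">
    <h:outputText value="#{benthicMsmntEditor.resultCount} measurements found"/>
  </c:if>

</ui:composition>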
As it turns out, we were not being very efficient. Let's take a look
at one of the columns of the data table in the
/BenthicMeasurementList.xhtml view template:
<h:column>
  <f:facet name="header">Taxonomic Unit</f:facet>
  <h:outputText value="#{_item.taxon.name}"
      rendered="#{!benthicMsmntEditor.editing(_item)}"/>
  <h:selectOneMenu id="taxonomicUnit"
      rendered="#{benthicMsmntEditor.editing(_item)}"
      defaultLabel="No value set" value="#{_item.taxon}">
    <s:selectItems value="#{taxonValues}" var="_taxon"
        label="#{_taxon.name}" noSelectionLabel="No value set"/>
    <s:convertEntity/>
  </h:selectOneMenu>
</h:column>
As you can see here, I am calling the editing() method on a Seam
component named benthicMsmntEditor to test whether the current row is
in edit mode. We can pass the iteration variable, _item, to the method
because Seam integrates with the JBoss EL, which introduces
parameterized method calls to the Unified EL. The editing() method
performs an identity check between the row data and the selected row.
public boolean editing(T itemInRow) {
    return itemInEditMode == itemInRow;
}
Here we are only allowing one row to be in edit mode at a time, but
this logic could easily be enhanced to support editing multiple rows
simultaneously.
So where's the bottleneck? Initially, you may be inclined to point
the finger at the EL or Java reflection. I did some testing and
determined that the EL is surprisingly fast and Java reflection is
equally optimized. And if you are inclined to believe that the slowness
is caused by the parameterized method call, I'll inform you that
comparing the current item using an EL operator to the item in edit
mode retrieved using the JavaBean style accessor yields the same timing
results:
rendered="#{_item == benthicMsmntEditor.itemInEditMode}"
The culprit is that the editing() method resides on a Seam component
and each method call to a Seam component passes through a stack of
interceptors, unless otherwise skipped by the presence of the
@BypassInterceptors annotation at the component or method level.
When you call an intercepted method once, you would never notice the
impact of the interceptors. However, when you call the method 5200
times, the time spent in the interceptors adds up. How much of a
difference does it make and what other options do we have?
To determine the impact, I timed both the rendering of the entire
page and the rendering of the data table region, as described in the
introduction. The data table region consists of the data table and the
pagination controls and summary information for the table. A test of 4
requests on 50 rows (2600 intercepted calls per request) produced these timing results:
Stage 1 timing results (50 rows)
Request | Elapsed time of request (ms) | Time to render table (ms)
1       | 6330                         | 6090
2       | 6340                         | 6096
3       | 6400                         | 5883
4       | 6100                         | 5850
avg     | 6292.4                       | 5979.8
Not many people are going to stick around for a page that takes 6
seconds to render (in the best case scenario), and that doubles for 100
rows. The trick is to outject the selected row so that the comparison
can be done without having to invoke a Seam component. Outjection is a
mechanism in Seam that takes the value of a JavaBean property on a
component and assigns it directly to the name of a variable in the
specified scope (such as the conversation scope). You outject a property
by annotating it with the @Out annotation, as shown here:
@Out(required = false) private T itemInEditMode;
(For readers with Seam experience, there is a reason why you cannot
simply add @BypassInterceptors to the editing() method, which I will
provide in a moment.)
Now we can check if the row is in edit mode by comparing the
iteration variable in the data table to the outjected property using
the following EL expression in the view:
"#{item == itemInEditMode}"
Here's how things improve after making this change:
Stage 2 timing results (50 rows)
Request | Elapsed time of request (ms) | Time to render table (ms)
1       | 904                          | 663
2       | 807                          | 608
3       | 813                          | 569
4       | 823                          | 592
avg     | 836.8                        | 608
Less than one second is certainly a nice place to be. We can do
better, but let's first focus on the 5-second discrepancy because it is
a cause for concern regarding Seam's performance.
The truth is, interceptors come with a cost. Again, this cost only
adds up when you are pounding the component, like the rendered
attribute does. Unfortunately, that is more of a limitation (and a fact
of life) in the way that the data table component in JSF was designed.
On the other hand, that is why Seam provides the @BypassInterceptors
annotation. This annotation is intended to be used on methods that read
the state of a component, as opposed to a method with behavior. After
all, Seam is a stateful framework and espouses using objects as they
were intended, to have behavior and state.
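As a point of reference, here is the kind of state-reading method the annotation is meant for (a sketch with made-up names, not EDAS2 code); the paragraphs that follow explain why we could not take this shortcut for editing():

import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.intercept.BypassInterceptors;

@Name("paginationState")
public class PaginationState {

    private int firstResult;
    private int pageSize = 50;
    private long resultCount;

    // Pure state reader: skipping the interceptor stack avoids the per-call
    // overhead when this is evaluated repeatedly during rendering
    @BypassInterceptors
    public boolean isNextAvailable() {
        return firstResult + pageSize < resultCount;
    }

    public void setResultCount(long resultCount) {
        this.resultCount = resultCount;
    }
}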
So why not just add @BypassInterceptors to the editing() method to
reduce the overhead of invoking it? Theoretically that would work. The
only problem is that Seam relies on interceptors to restore the state
of conversation-scoped components, at least in Seam 2.0. In Seam 2.1,
this behavior is disabled by default, so you could just add
@BypassInterceptors to the method. However, if you plan to use stateful
session beans (SFSBs) in your application or run the application in a
cluster, you will need to enable the behavior I am about to describe,
so it's important to understand why interceptors on conversation-scoped
components are important.
Seam's managed entity interceptor
Seam 2.0 uses an interceptor that aids with maintaining the object
identity of persistent objects (managed entities) across passivation of
a SFSB, or when components jump nodes in a cluster. This interceptor
has good intentions, but can have bad consequences. At the end of a
method call on a conversation-scoped component, the interceptor
transfers the values of all fields holding references to entity
instances directly into the conversation context, and then nullifies
the values of those fields. It reverses the process at the beginning of
the next call. The consequence is that without interceptors, your
conversation-scoped object is missing state. Eeek! What's worse is that
the interceptor naturally becomes expensive if your component happens to
be storing large data sets, because it takes time for the interceptor to
transfer that state back and forth. It's an unfortunate sacrifice made
in order to achieve transparent replication.
Rather than worrying about whether to use this interceptor, or
locking the design of your application into its absence, it's best just
to avoid needing to bypass interceptors in the first place. Besides, there
are still other interceptors that may need to be enabled on a component's
methods (it's all or nothing when you disable them), and working with
outjected data is the fastest approach anyway. You can feel comfortable calling
methods on Seam components in other areas of your page, but you should
avoid doing so inside of a data table, which brings us to our second
lesson.
Lesson #2: Don't call intercepted methods inside a data table (or in excess)
The question is, have we done all that we can do to optimize? Not
even close. There is another important lesson to learn, and this one
has to do with the EL, or more specifically, the EL resolver mechanism.
Resolving variables efficiently
In JSF, the view is bound to server-side components using a syntax
known as the Unified Expression Language (EL). The root of an EL string
(e.g., #{itemInEditMode}) is presumed to be a variable in one of the
available web application scopes, or the name of a component that must
be created (such as a JSF managed bean or Seam component). The name
resolution is handled by the EL resolver chain, which is a collection
of objects that know how to locate or create objects that map to a
name. All resolvers in the chain are consulted until the end of the
chain is reached or a value is found. This lookup happens a tremendous
number of times while rendering a JSF view, especially while rendering
a data table. Thus, it's a potential source of performance problems.
As the EL resolver chain seeks out a variable, it becomes increasingly
aggressive. The standard EL resolver looks in the familiar places:
the request, session, and application scope. It then turns the task
over to the Seam resolver, which is where things start to slow down.
Seam has lots of different places to look to resolve a variable: a
component, a factory, a Seam namespace, a Seam context, and the list
goes on. Thus, not finding a variable comes at a high cost.
So the solution is simply to avoid referencing a missing variable,
right? Well, what happens when a null value for a variable is
meaningful in your application, as is the case with our editable data
grid? A null value for the itemInEditMode variable means the row is not
in edit mode. Unfortunately, the EL resolver chain doesn't know that a
null value means something, and will keep working through its crib
sheet until it has tried all possible combinations. Thus, we need to
find some way to tell Seam exactly where to look rather than allowing
Seam to send out its search party, so to speak.
Again, taking advantage of the flexibility afforded to us by the JBoss
EL, we can reach directly into the conversation context to look for the
row in edit mode:
rendered="#{_item == conversationContext.get('itemInEditMode')}"
Here's the reward we get for telling Seam exactly where to look:
Stage 3 timing results (50 rows)
Request | Elapsed time of request (ms) | Time to render table (ms)
1       | 491                          | 207
2       | 495                          | 224
3       | 493                          | 201
4       | 444                          | 211
avg     | 480.8                        | 210.8
We roughly doubled our performance and now have 100 rows coming in
under a second, with an order of magnitude improvement over the first
run. But there is still a slight bottleneck. The variable _item is
stored in request scope by the data table and is resolved quickly, but
the variable conversationContext is a pseudo-variable that Seam
interprets after looking in all the usual places for a real variable
named conversationContext. Not only that, conversationContext is an
imported context variable, the qualified name being
org.jboss.seam.context.conversationContext. It turns out that
referencing a context variable in an imported namespace has a
measurable cost associated with it. A better choice would be to pull
the result of this lookup somewhere closer so that Seam doesn't have to
keep searching for it. We can set that up using an alias (an
event-scoped factory) named conversationScope in the Seam component
descriptor, to match the requestScope, sessionScope, and applicationScope
variables provided by the standard EL resolver:
<factory name="conversationScope" value="#{conversationContext}"/>
We now reference this name in our rendered logic:
rendered="#{_item == conversationScope.get('itemInEditMode')}"
Here's how the timing results improve:
Stage 4 timing results (50 rows)
Request | Elapsed time of request (ms) | Time to render table (ms)
1       | 399                          | 161
2       | 373                          | 150
3       | 458                          | 207
4       | 560                          | 163
avg     | 447.5                        | 170.25
Those are the kinds of numbers we want to see! Just as I mentioned
at the start of this article, there was likely a single culprit that
was squandering a majority of the rendering time. It turns out to have
been the logic in the rendered attribute of components within a data
table. But really any logic inside of a data table has to be optimized
because it's going to be compounded by the number of rows being
rendered. For instance, you might be conditionally rendering columns
based on the user's preferences. That brings us to the third lesson.
Lesson #3: Be extremely frugal with the logic you use within a data table
Incidentally, I thought about using an action listener to toggle the
rendered state on components in a row when the user clicks on the edit
button, since that's the "object-oriented" way of doing things.
Unfortunately, the design of the data table is extremely naive and does
not support this usage pattern. A data table doesn't have any concept
of rows, only columns. The rows are introduced dynamically based on the
data fed to the table (they are not represented in the state of the
component tree). Thus, if you change the rendered attribute on a
component in one of the rows, you end up affecting every row in the
table. The dynamic nature of the data table leads to many other
problems in JSF, including the "ghost click," which I discuss in my
book, Seam in Action.
If you are committed to squeezing as much performance as possible out
of your page, then there is one more way you can optimize the speed of
the rendered logic: don't use it. I'm not suggesting that we throw out
the editable grid functionality. If you think about it, that logic only
needs to be performed once the user has selected a row. Before that
time, you know that you only need to display the table in read-only
mode (and you know which controls to provide in that case). Thus, the
best thing to do is split the table into two, one that has the rendered
logic in the columns and one that does not, then toggle the rendering
of the entire table. That way, the person just browsing the data does
not have to pay the tax of checking for the row selected for editing.
While this does increase the amount of code to maintain, it introduces
the possibility of having different columns displayed when the user is
editing than when they are just viewing (or even having the table look
different in some way). You can move common code into templates to
prevent duplication. Of course, the performance is now going to
increase noticeably.
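Here is a rough sketch of the two-table arrangement (the value binding, panel, and table IDs are illustrative; the column bodies follow the earlier snippets). Because the tables are swapped as a whole, the edit and save controls would likely need to rerender the surrounding container rather than a single table:

<a:outputPanel id="dataTableContainer">
  <!-- Read-only table: no per-row rendered checks inside the columns -->
  <h:dataTable id="readOnlyTable" value="#{benthicMsmntEditor.results}" var="_item"
      rendered="#{conversationScope.get('itemInEditMode') == null}">
    <h:column>
      <f:facet name="header">Taxonomic Unit</f:facet>
      <h:outputText value="#{_item.taxon.name}"/>
    </h:column>
    <!-- remaining read-only columns -->
  </h:dataTable>
  <!-- Editable table: only rendered once a row has been selected for editing -->
  <h:dataTable id="editableTable" value="#{benthicMsmntEditor.results}" var="_item"
      rendered="#{conversationScope.get('itemInEditMode') != null}">
    <!-- columns containing both the output and input components, as shown earlier -->
  </h:dataTable>
</a:outputPanel>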
Stage 5 timing results (50 rows)
Request | Elapsed time of request (ms) | Time to render table (ms)
1       | 537                          | 174
2       | 355                          | 127
3       | 372                          | 127
4       | 374                          | 127
avg     | 409.5                        | 138.8
You are probably feeling pretty happy with the progress so far.
Where to next? In each of the performance results, I have provided two
columns of data for a reason: to emphasize that we are paying yet
another tax to render the remainder of the page. That is the focus of
the next round of optimizations. Obviously, the size of this tax is
going to depend on what else you have on your screen and won't
necessarily amount to the ~270ms appearing in these test results.
Regardless, the amount of this tax now exceeds the cost of the product
and we need to do something about it. That's where Ajax comes in.
Cutting costs with Ajax
The bottleneck in any decently performing web application is the
process of the browser requesting and ultimately rendering a new page.
What makes the process worse is that it happens synchronously, forcing
the user to wait until it finishes. It's extremely disruptive and it
makes the application feel slow. A far better approach is to have the
browser only replace the portions of the page that need to be changed
and to insert those changes into the page when they arrive (i.e.,
partial page rendering), which doesn't interrupt what the user is
currently doing (or at least keeps the disruption localized).
The exchange just described is achieved using Ajax. Fortunately, the
RichFaces component library for JSF makes adding Ajax interactions to a
page extremely straightforward. In the next part of this article,
you'll learn to use RichFaces' partial-page rendering to update only
the data table when the user changes its state, such as by selecting a
row for editing or paginating the table, thus eliminating the tax that
comes with rerendering the entire page. Once this change is made, the
two orders of magnitude performance boost will be realized.
Speed up your Data-Driven JSF/Seam Application by Two Orders of Magnitude – Part 2
by Dan Allen
27 Mar 2009 01:30 EDT
In the second
installment of this two-part article, Dan Allen continues his
discussion of some common performance problems you may encounter when
using JSF components, Seam components, and the EL. You'll learn about
the set of best practices for eliminating them that led to an
improvement of two orders of magnitude in the performance of his
application.
In the first part of this article, I began briefing you on
optimizations I made to maximize the responsiveness of a JSF
application that I developed out in the field. I cited performance
problems caused by casually accessing components from a JSF view, then
presented a set of best practices to eliminate this unnecessary
overhead. Despite the progress made by the end of the first part, you
had not yet witnessed the two orders of magnitude in performance
improvement that was promised.
In this part, the additional gains will be achieved by
leveraging partial page rendering, provided by the RichFaces JSF
component library, and by slimming the response. Partial page rendering
cuts out the overhead of rerendering the entire page after each user
interaction, which turns out to be the real bottleneck in most
traditional web applications, and instead redraws only the areas of the
page that have changed. Naturally, you want the replacement HTML source
to be as condensed as possible. These optimizations allow the
responsiveness of a web application to measure up to its desktop
counterpart.
Tapping into Ajax
When the user performs an
operation on the screen, such as selecting a row for editing or
paginating a result set, we only want to rerender the area of the page
that is affected. Using the data-driven application presented in the
first part, that means redrawing the data table and its pagination
controls. (Note that it's possible with Ajax4jsf to rerender a single
row, when applicable, but I have found it to be more trouble than it's
worth).
Putting numbers aside, using Ajax is going to make the application feel
far more responsive because the browser does not have to bootstrap a
whole new page and all the assets that come along with it. Research
has shown that creating HTTP connections is more costly than rendering
large pages. Partial page rendering accounts for these findings by
treating the static areas of the page and its associated assets as
completed work, and focusing solely on retrieving updates. This section
will support my recommendation that you should always consider using
Ajax in your application, as it truly does eliminate a lot of overhead.
Accessible Ajax with Ajax4jsf
Turning regular JSF postbacks into Ajax requests is pretty simple
with Ajax4jsf. However, if used inappropriately, you won't get all the
performance gains you are looking for. We'll get to that in a second.
First, begin by changing your <h:commandLink> and
<h:commandButton> components to the ones from Ajax4jsf:
<a:commandLink> and <a:commandButton>. Next, select what
you want to rerender. You reference the areas of the page to update
using a comma-separated list of the regions' component IDs in the
reRender attribute of the command component. If you are paginating, you
need to rerender the whole table and the pagination controls
(dataTableContainer). If you are transitioning into edit mode, there's
no need to requery or update the pagination, so you only have to
rerender the table itself. Here's the code for the pagination controls:
<a:commandLink id="previous" action="#{benthicMsmntEditor.previous}"
rendered="#{benthicMsmntEditor.previousAvailable}"
reRender="dataTableContainer"
ajaxSingle="true">
<h:graphicImage value="/img/previous.png" alt="Previous" title="Previous"/>
</a:commandLink>
<a:commandLink id="next" action="#{benthicMsmntEditor.next}"
rendered="#{benthicMsmntEditor.nextAvailable}"
reRender="dataTableContainer"
ajaxSingle="true">
<h:graphicImage value="/img/next.png" alt="Next" title="Next"/>
</a:commandLink>
The code for the edit control buttons in each row is similar. Here's how the edit button is defined:
<a:commandLink id="edit" action="#{benthicMsmntEditor.editItem}"
rendered="#{_item != conversationScope.get('itemInEditMode')}"
reRender="dataTable" ajaxSingle="true">
<h:graphicImage value="/img/edit.png" alt="Edit" title="Edit"/>
</a:commandLink>
What's important about using an Ajax request is to keep it simple.
You don't want to perform a lot of processing on the server because
then the users aren't going to get the "instant" feedback they are
expecting. You can drastically reduce the portion of the component tree
that is processed by JSF by using either the <a:region> tag or
the ajaxSingle attribute on an Ajax4jsf component. Let's focus on
ajaxSingle.
Keeping the Ajax call brief
By default, JSF processes the whole component tree on a postback
because it doesn't know where events may originate or which input
fields within the submitted form contain new data. What the ajaxSingle
attribute does is tell JSF to advance directly to the component that
was activated, process events from that component, and re-encode it as
if it were the only component on the page. The result is a drastic
speed increase in the processing, independent of the size of the
component tree. In fact, the only time you would forgo using ajaxSingle
is when you need to capture input data from a form (a classic form
submit).
When deciding whether or not to use this attribute, ask yourself if you
are capturing form data or whether JSF is simply using the form submit
to perform server-side work (a contrived form submit). In JSF, it's
most often the latter.
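For example, the save control for a row must capture the edited input values, so unlike the edit button shown above it performs a real form submit and omits ajaxSingle (the action name and image path here are illustrative):

<a:commandLink id="save" action="#{benthicMsmntEditor.saveItem}"
    rendered="#{_item == conversationScope.get('itemInEditMode')}"
    reRender="dataTable">
  <h:graphicImage value="/img/save.png" alt="Save" title="Save"/>
</a:commandLink>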
Note: Interestingly enough, ajaxSingle is also a drop-in
replacement for immediate, which is excellent since immediate is so
poorly understood. When ajaxSingle is placed on a command component
(such as a button), the form data is not processed, and hence no
validation/conversion failures can occur, thus eliminating the need for
immediate.
Having shifted all interactions to the Ajax bridge, it's
now time to look at the performance gains. Of course, when using Ajax,
the tax of rendering the portions of the page outside of the data table
is gone. The performance on the server is also no longer a primary
concern. Now what matters is the size of the response. Unfortunately,
giving timing results here would be arbitrary because it's highly
dependent on the speed of the network (and I'm testing against my own
box). We have to focus on what we can control.
Trimming down the response
So what affects response size? The answer is, every character in the
view. Every character that you type that is encoded into the response,
as well as the markup that the JSF components generate, affects the
size of the response. That includes:
- Component IDs
- View IDs
- The path of images, external JavaScript files, and CSS
- The application context path
- Embedded JavaScript
- Inline styles and names of style classes
- Extraneous markup that is encoded into the response
As you can see, there's lots of room for improvement in this category.
I want to focus on component IDs first, since they are the biggest
culprit.
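To see why, consider that every component that writes its client ID into the markup repeats that ID once per row. With short naming-container IDs the difference per row is small, but multiplied by 50 or 100 rows it adds up quickly (the form, table, and value binding names below are made up for illustration):

<h:form id="f">
  <h:dataTable id="t" value="#{benthicMsmntEditor.results}" var="_item">
    <h:column>
      <!-- Encodes client IDs such as f:t:0:edit, f:t:1:edit, ... rather than,
           say, benthicMeasurementListForm:benthicMeasurementTable:0:edit -->
      <a:commandLink id="edit" action="#{benthicMsmntEditor.editItem}"
          reRender="t" ajaxSingle="true"/>
    </h:column>
  </h:dataTable>
</h:form>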
About the author
Dan Allen is a Senior Software Engineer at Red Hat and author of Seam
in Action. He has over eight years of development experience using
technologies that include Java frameworks (Seam, JSF, EJB3, Hibernate,
Spring, Struts), testing frameworks (JUnit, TestNG), JavaScript and
DOM scripting, CSS and page layouts, Maven 2, Ant, Groovy, and many
others.
After graduating from Cornell University with a degree in Materials
Science and Engineering in 2000, Dan became captivated by the world
of free and open source software, which gave him his debut in
software development. He soon discovered the combination of Linux and
the Java EE platform to be the ideal blend on which to build his
professional career, with his interests equally divided between the
two platforms.
Dan is a member of the Seam project, a dedicated open source
advocate, and a Java blogger. He lives with his extremely supportive
wife in Laurel, MD. You can keep up with Dan's development
experiences by subscribing to his blog at Mojavelinux.